Feed aggregator

Today, the Supreme Court ruled 7-2 that Trinity Lutheran Church can’t be denied a state playground refurbishment subsidy simply because it’s a religious institution.

As I predicted after argument, the Court saw this as an easy case in which the government improperly denied a public benefit because of religious status. This doesn’t mean that taxpayer funds can now be used to fund religious instruction, or that any of the other parade of horribles raised by Trinity Lutheran’s opponents will come to pass.

Simply put, people and entities can’t be excluded from a government program simply because they’re religious. This is no different than the situation where police or fire protection is provided to houses of worship and other religious institutions.

It’s telling that Chief Justice Roberts’s attempt, via a curious footnote 3, to narrow the scope of his ruling to the facts of this case (to playgrounds?) didn’t command a majority. Justice Breyer only concurs in the judgment—he’s a pragmatist anyway—while Justices Thomas and Gorsuch specifically disclaim the disputed language. Meanwhile, Justice Sotomayor’s dissenting opinion, joined by Justice Ginsburg, seems to think that the ruling dissolves the separation of church and state altogether, footnote or no footnote.

Finally, I should note that the case doesn’t touch issues of taxpayer standing to challenge government grants or exemptions for businesses from generally applicable laws. (On the latter, stay tuned next term when the Court takes up the Masterpiece Cakeshop wedding-vendor case where a bakery declined on religious and free-speech grounds to make a cake for a same-sex ceremony.)

As our Policy Report noted last year, Cato is the only organization in the country that has gone to court to defend both one’s right to marry a person of the same sex and one’s right as a businessperson to join or not join as one chooses in assisting in celebrating a same-sex wedding. We’ll be hearing a lot more about that second issue over the coming year, because this morning the Supreme Court agreed to hear the case of Masterpiece Cakeshop v. Colorado Civil Rights Commission. The case presents the issue “whether applying Colorado’s public accommodations law to compel the petitioner to create expression that violates his sincerely held religious beliefs about marriage violates the free speech or free exercise clauses of the First Amendment.”

Cato scholars and commentators have written about this set of issues for years, including, to name a few, David Boaz (“The solution to injustice is never to reverse the injustice”), Roger Pilon (history of free association and public accommodations laws), Ilya Shapiro (“private individuals should be able to make their own decisions on whom to do business with and how—on religious or any other grounds”), Robert Levy (“Forcing private parties to serve gay weddings is a higher order of coercion than forcing private hotels and restaurants to provide rooms and food to black—or gay—travelers”), Jason Kuznicki (“The market doesn’t care, and that’s a wonderful thing”), Emily Ekins on the polling data on a divided public, and David Lampo (different legal issues at stake than in same-sex marriage cases). Cato filed an amicus brief in the parallel (alas unsuccessful) Arlene’s Flowers case involving Washington florist Barronelle Stutzman. I’ve written about the cake and flowers cases many times at Overlawyered (as well as about other vendor cases involving meeting halls and so forth), and have delved into the collateral damage to civil liberties seen in enforcement actions like that of Oregon in the Melissa and Aaron Klein (Sweet Cakes by Melissa) case. 

A second ruling this morning, while likely to get less attention than the Masterpiece Cakeshop certiorari grant, offers clues on the wholly separate issue of how the holdings of Obergefell and Windsor are faring at the Court. The answer seems to be just fine. In Pavan v. Smith, the justices summarily reversed the Arkansas Supreme Court, which had declined to order an amended birth certificate issued to a lesbian couple on the same terms on which the state would issue such a certificate for a child born via donor reproduction to an opposite-sex couple. Chief Justice Roberts joined the five justices who had been in the Obergefell majority, while Neil Gorsuch, joined by Justices Thomas and Alito, wanted the case argued. My quick take: while Gorsuch et al. offered reasonable-sounding grounds for slowing down and taking a look at the details of the Arkansas dispute, the Court is determined to disallow what it sees as any defiance of Obergefell or attempt to chip away at it, and read the Arkansas high court as having tried that.

Notably, Gorsuch in his dissent took a legal technician’s cool tone that diverged sharply from what one might have expected from the late Justice Scalia: he refrained from zingers at the majority’s expense, stayed far away from culture-war implications, and emphasized that the dispute that might have been aired was over how best to implement Obergefell, not whether to retreat from it. Some voices on the traditionalist sidelines have urged the Court’s conservative wing to wage rhetorical war against Obergefell and Windsor so as to set up an eventual overruling of those decisions. But not a single justice took that approach today. A new Pew survey, incidentally, confirms that opposition to legal recognition of same-sex marriage has extended its historic decline, and is now in a minority even among Republicans. 

Lost in all the commotion over the U.S. Supreme Court’s several decisions today is another important decision with ramifications for school choice. The Georgia Supreme Court unanimously ruled in Gaddy v. Georgia Department of Revenue that plaintiffs had no standing to challenge the state’s tax-credit scholarship program because the scholarship funds are private funds, not a government expenditure:

We also reject the assertion that plaintiffs have standing because these tax credits actually amount to unconstitutional expenditures of tax revenues or public funds. The statutes that govern the Program demonstrate that only private funds, and not public revenue, are used.

The program allows donors to receive tax credits in return for contributions to qualified nonprofit scholarship organizations that help families send their children to the schools of their choice. Plaintiffs asserted that the program violated Georgia’s Blaine Amendment, which prohibits the state from giving public funds to religious schools. However, as we explained in our amicus brief, no public funds are involved. “Taxpayers choose to donate voluntarily using their own private funds and receive a tax credit for the amount of the donation; no money ever enters or leaves the treasury.” Neither does the state direct where the funds are used. “The state exercises no control over which scholarship organizations donors choose to support, which students receive scholarships, or at which schools parents choose to use the scholarships.” The Georgia Supreme Court agreed:

Individuals and corporations chose the [scholarship organizations] to which they wish to direct contributions; these private [scholarship organizations] select the student recipients of the scholarships they award; and the students and their parents decide whether to use their scholarships at religious or other private schools. The State controls none of these decisions. Nor does it control the contributed funds or the educational entities that ultimately receive the funds.

“Today’s victory has secured Georgia parents’ right to continue choosing the best education for their children,” stated Erica Smith, an attorney for the Institute for Justice, which represented scholarship parents in the Gaddy case. “This Court correctly recognized that government should promote educational opportunity and choice, not limit it as the plaintiffs proposed.” 

The decision should also have implications outside of the Peach State. More than 250,000 students are using tax-credit scholarships in 17 states, and more states are likely to adopt similar programs in the years to come. However, opponents of school choice have failed to persuade any high court to block the tax credits by adopting “tax expenditure analysis,” a method of accounting that treats tax credits, deductions, and exemptions as government expenditures. Indeed, the Georgia Supreme Court joins a unanimous chorus of decisions by the U.S. Supreme Court and numerous state supreme courts, including in Arizona and Alabama, holding that tax credits are not public funds. Additionally, the Florida Supreme Court declined to hear an appeal from a lower court decision that similarly ruled that “all funds received by private schools under the [Florida Tax Credit Scholarship Program] come from private, voluntary contributions” to scholarship organizations. Likewise, the Illinois Supreme Court declined to hear a challenge to a lower court decision that tax credits do not constitute public funds. Finally, the New Hampshire Supreme Court also unanimously rejected a challenge to its tax-credit scholarship program, though it did not explicitly rule on the question of public funding. No high court has ever ruled that tax-credit scholarships constitute government expenditures. 

The Institute for Justice also recently won a case against Montana’s Department of Revenue for unconstitutionally preventing families from using tax-credit scholarships at religious schools. Though the legislature had included no such limitation, the department claimed allowing families to use the scholarships at religious schools would violate the state’s Blaine Amendment. A trial court disagreed, holding that (you guessed it) private donations given in exchange for tax credits are not public expenditures.

No doubt opponents of educational choice will continue to devise creative arguments as to why the courts should halt choice programs, but it appears that the “tax expenditure” argument against tax-credit scholarship programs has run its course.

If the death of Justice Antonin Scalia overshadowed the 2015-16 Supreme Court term, the extended absence of his successor and the subsequent battle (including the elimination of the judicial filibuster) over the appointment of Neil Gorsuch dominated Court news for 2016. Indeed, Scalia’s absence was felt more in the lower quality and quantity of cases that the Court took up: The justices ended up deciding 62 cases after argument, the fewest ever, none of which would’ve made it into the “greatest hits” of recent years, given the six or seven consecutive “terms of the century.” And recall that the Trump Department of Education withdrew a 2015 guidance letter construing Title IX to require schools to treat transgender students consistent with their expressed gender identity, removing the most politically charged case from the Court’s already muted docket. 

In any event, Justice Gorsuch took his seat on the bench in April, and his initial opinions showcase his promised readable style and principled textualist approach to statutory interpretation. With Justice Anthony Kennedy refraining from announcing his rumored retirement (though we could get a telegram from Salzburg this summer), Court-watchers will likely have to wait another year for the first nomination in a “post-nuclear” world.

Cato still filed in 13 merits cases on important issues ranging from the separation of powers to free speech (both commercial and disparaging) to property rights. Improving on a 4-4 performance in last year’s unusual term (where we still beat the government handily), Cato achieved a strong 9-4 showing, besting the combined Obama-Trump effort of 8-12. Cato also effectively drew votes from across the judicial spectrum, winning 10 votes each from Chief Justice John Roberts and Justice Elena Kagan, 9 votes from Justice Stephen Breyer, and 8 votes each from Justices Kennedy, Samuel Alito, and Ruth Bader Ginsburg.

Here’s the breakdown, in the order the opinions arrived:

Winning side (9): NLRB v. SW General, Inc.; Expressions Hair Design v. Schneiderman; Nelson v. Colorado; Bank of America Corp. v. Miami; Kokesh v. SEC; Packingham v. North Carolina; Matal v. Tam; Lee v. United States; Trinity Lutheran Church v. Comer.

Losing side (4): Bravo-Fernandez v. United States; Salman v. United States; Turner v. United States; Murr v. Wisconsin.

Donald Trump’s inauguration also marked the official end of the Obama era at the Supreme Court. A pair of unanimous losses brought the Obama administration’s total to 48, more than a quarter of all the cases it argued and roughly 50% more losses than either the Bush or Clinton teams. Its overall winning percentage of under 47% was also significantly lower than those of its predecessors, which finished at 60% and 63%, respectively. Of course, the Trump administration is off to an even less auspicious start, with a 1-9 record and 5 unanimous losses in just half a term. (The apportionment of cases on either side of the inauguration may be somewhat artificial, given that most or all of these relatively low-profile Supreme Court arguments were handled by career lawyers, not political appointees, and the government’s position didn’t change with the change of administration.)

This fall promises another blockbuster term, with the travel ban, Fourth Amendment protection of cellphone location data, same-sex wedding vendors, and likely the fate of mandatory union dues headlining the docket.

I’m sure I’ll have more to say on this in future commentary, but if you’d like to learn more about all these cases/trends and the views of Cato-friendly scholars and lawyers, register for our 16th Annual Constitution Day Symposium, which will be held September 18. That’s also when we’ll be releasing the latest volume of the Cato Supreme Court Review, the editing of which will consume much of my summer.

Last week, Thomas Firey blogged here on how proponents of higher minimum wages tend to be selective when reviewing the academic literature. In particular, they mischaracterize it as showing that minimum wage hikes do not affect employment.

Right on cue, an important new NBER paper from academics at the University of Washington examining Seattle’s two minimum wage hikes since 2014 suggests significant job and hour losses as the pay floor rose.

Seattle raised its minimum wage from $9.47 to $11 per hour in 2015 and then again to $13 per hour in 2016. The report concludes:

Using a variety of methods to analyze employment in all sectors paying below a specified real hourly rate, we conclude that the second wage increase to $13 reduced hours worked in low-wage jobs by around 9 percent, while hourly wages in such jobs increased by around 3 percent. Consequently, total payroll fell for such jobs, implying that the minimum wage ordinance lowered low-wage employees’ earnings by an average of $125 per month in 2016. Evidence attributes more modest effects to the first wage increase.
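
To make the arithmetic in that passage concrete, here is a minimal sketch in Python of how a roughly 3 percent wage increase combined with a roughly 9 percent reduction in hours translates into lower total payroll. The $2,000 baseline monthly earnings figure is purely illustrative (my assumption, not a number from the paper); it simply shows how a decline of this size lines up with the reported $125-per-month average loss.

    # Back-of-the-envelope check of the Seattle finding:
    # hourly wages in low-wage jobs rose about 3%, hours worked fell about 9%.
    wage_change = 0.03     # +3% hourly wage
    hours_change = -0.09   # -9% hours worked

    # Payroll scales with (wage x hours), so the combined effect is multiplicative.
    payroll_change = (1 + wage_change) * (1 + hours_change) - 1
    print(f"Change in total payroll: {payroll_change:.1%}")   # roughly -6.3%

    # Illustrative baseline (an assumption, not from the paper): at about
    # $2,000 per month in low-wage earnings, a 6.3% decline is roughly $125.
    baseline_monthly_earnings = 2000
    monthly_loss = -payroll_change * baseline_monthly_earnings
    print(f"Implied monthly earnings loss: about ${monthly_loss:.0f}")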

The paper seems significant both as a result in its own right and for helping to explain why results can differ so much depending on methodology.

It suggests that the disemployment effects caused by reductions in demand for low-wage labor rise disproportionately as the wage floor increases. This is intuitive, but it bears repeating, given how often the effects of past, smaller increases in wage floors are cited as evidence for much higher minimum wages (see the “Fight for 15,” for example).

Perhaps more importantly, whereas much of the previous literature examines low-wage industries (usually restaurants or retail) or teenagers as proxies for those affected by minimum wage changes, this paper uses a comprehensive data set that allows it to assess employment effects for all categories of low-wage employees across industries and demographics. This suggests that previous research focused on individual industries such as restaurants may have significantly underestimated the negative effects on hours worked for low-wage employees.

Of course, as with all studies, there will be disputes about methodology. The paper itself notes that it does not include companies with multiple locations, such as fast-food chains, which theory suggests could bias the results in either direction. The authors also acknowledge that some of the “jobs lost” may have been replaced by jobs in the broader area, which would make their results an overestimate. Overall, though, this is further evidence, adding to an already large literature, that raising the minimum wage to high levels carries a very high cost indeed and that the competitive model of the labor market holds.

Today’s Trinity Lutheran ruling strikes a blow against patently unequal treatment of religious Americans under state laws, an inequality felt nowhere more acutely than in education. But it does not yet get us to where we need to be.

The huge impact of today’s ruling is that it says religious institutions cannot be barred from participating in government programs simply because they are religious. The Trinity Lutheran Church could not be ruled ineligible to participate in a grant program to improve playgrounds simply because it is a religious entity. This should have been a simple decision: It is clearly unequal treatment of religious Americans under the law to say “the reason you are ineligible for this benefit for which anyone else is eligible is that you are religious.”

This is crucial, but it is not sufficient to throw open the doors to full freedom and equality in education.

First, as Justices Thomas and Gorsuch note in their concurring opinions, the Trinity decision keeps in place the ruling in Locke v. Davey (2004) that a state could deny a student a scholarship otherwise available to him because he planned to study to become a minister. Trinity leaves intact that rationale for denying funding to someone who would learn to propagate religion. But why should someone be barred from accessing otherwise generally available funding only because the profession he wished to follow was religious? From a school choice perspective, if a goal of sending your child to a religious school with a voucher is that he or she will learn to evangelize, that precedent still stands in your way.

Second, Trinity says that religious institutions cannot be excluded from funding otherwise available to other groups. It does not state that it is unconstitutional to require people to fund a single government institution—in education, de facto atheist or agnostic public schooling systems—then pay a second time for institutions that are consistent with their beliefs and values. That would be crucial to truly treat religious people equally, and to totally clear the path for school vouchers. (Tax credits, as we see again in Georgia today, are a different, more liberated story!)

Today’s ruling is a welcome move in a decidedly right direction, but it is not sufficient to achieve full equality in education.

Property owners have long suffered under the Supreme Court’s erratic rulings. It got worse today. In Murr v. Wisconsin, the Court ruled against the owners, 5-3, with Justice Kennedy writing for the majority, Chief Justice Roberts writing a dissent joined by Justices Thomas and Alito, Thomas writing a separate dissent, and Justice Gorsuch taking no part. The problem isn’t simply with the majority’s holding and opinion; it’s with the dissent as well. Only Thomas points in the right direction.

This was a regulatory takings case arising under the Fifth Amendment’s Takings Clause, which prohibits government from taking private property for public use without just compensation. In separate conveyances in 1994 and 1995, the Murrs, four siblings, inherited two contiguous lots on the St. Croix River that their parents had purchased in 1960 and 1963. The parents had built an ancestral home on the first lot. They bought the second for investment purposes.

The trouble began in 2004 when the Murrs sought to sell the second lot, valued at $410,000, and use the proceeds to upgrade the ancestral home. But they were blocked by a 1975 local zoning ordinance that treated the two lots as one, even though they had long been deeded and taxed separately. Under the ordinance they had to sell the lots together or not at all. Out $410,000, the Murrs sued, claiming that the ordinance had deprived them of their right to sell their property.

Here it gets complicated. In a 1992 decision, Lucas v. South Carolina Coastal Council, a 5-4 Court held that David Lucas was entitled to compensation after an ordinance prohibiting him from building on his property effectively wiped out all of its value. The problem with this “wipeout” rule, of course, is that most regulations leave at least some value in the property. When Justice Stevens called the rule “arbitrary” since “the landowner whose property is diminished in value 95% recovers nothing,” Justice Scalia, writing for the Court, responded tersely, “Takings law is full of these ‘all or nothing’ situations.”

In so writing, Scalia was citing a 1978 decision, Penn Central v. New York, which gave us a balancing test that nobody understands, least of all Justice Brennan, who crafted it. There the Court held that its test must be applied to “the parcel as a whole,” not to some portion of it. Combined with Lucas, that makes all the difference in the world for the Murrs. If their lots are treated separately, as they have always been except for this ordinance, virtually all value in the second has been wiped out and the Murrs, under Lucas, are entitled to compensation for the taking. But with the two lots combined as one, value remains, so the state can escape paying the Murrs any compensation. Thus, the question before the Court was whether the state could do that simply by treating the two lots as one.

Thomas joined the dissent because, as he wrote, “it correctly applies this Court’s regulatory takings precedents, which no party has asked us to reconsider.” But he went on to say that “it would be desirable for us to take a fresh look at our regulatory takings jurisprudence, to see whether it can be grounded in the original public meaning of the Takings Clause of the Fifth Amendment or the Privileges or Immunities Clause of the Fourteenth Amendment.” Why take a fresh look? Because the Court “has never purported to ground [its] precedents in the Constitution as it was originally understood.”

Justice Kennedy begins his opinion for the Court with Justice Holmes’s famous 1922 remark, that if a regulation goes “too far” it constitutes a taking—and the opinion goes downhill from there, a mass of confusions. Roberts does a tolerable job of dissecting it, concluding that “today’s decision knocks the definition of ‘private property’ loose from its foundation on stable state law rules and throws it into the maelstrom of multiple factors” for determining when a taking occurs. Correct, but Roberts himself does little better. In fact, he writes that the Court’s holding “that the regulation does not constitute a taking that requires compensation … does not trouble him.” (emphasis added) It’s only the Court’s reasoning that’s troubling (and rightly so). Roberts would have vacated the judgment below and remanded for the court to identify the relevant property using ordinary principles of Wisconsin property law.

But there, precisely, is the problem. State law defined the property. There were two lots, deeded and taxed separately, and that continued to the present. But then state law redefined the property. It was the later local ordinance that combined the lots, effectively taking one of the most basic rights an owner has, the right to dispose of (sell) that distinct second lot, bought for investment purposes. That was when the taking occurred, even though it wasn’t realized until the Murrs tried to sell the lot. The rest of the analysis coming from Penn Central’s multi-factor balancing test—like whether the Murrs retained value in “the parcel as a whole”—is just so much distraction from the core issue. And even if that were the question, it takes us back to Lucas’s error. Roberts invokes the metaphor that treats property like a “bundle of sticks,” signifying all the rights that go with property. Lucas held, wrongly, that compensation is due only after the last stick is taken—the wipeout rule. No, a taking occurs with the first stick taken. The stick the Murrs lost was the right to sell that lot. It’s no more complicated than that—unless the decision turns on a long line of mistaken precedents. One can only hope that Justice Thomas will one day have an opportunity to write the opinion that sets this sorry record straight.

The Supreme Court today came down with opinions in two cases in which Cato filed a brief. First, in Murr v. Wisconsin, it unfortunately ruled against property owners in an important regulatory-takings case. Then, in Lee v. United States, it correctly found that a criminal defendant who had virtually no chance to win at trial—absent jury nullification, which was our focus—was still prejudiced by (and entitled to a new trial due to) his counsel’s wrong advice that he wouldn’t be deported if he pled guilty.

Murr: Whenever you see a court invoke a “multifactor balancing test,” you know it’s just making stuff up. Alas that’s what happened in Murr v. Wisconsin, where a family was deprived of significant use of its property—not to mention economic benefits—because of an unfortunate operation of local law. The Supreme Court compounded that harm by essentially deferring to state determinations of property owners’ rights, and did so by applying that “multifactor” standard that allows it to reach whatever result it wants. This ruling shows that in the grander scheme, as Justice Thomas noted in his dissent, the Supreme Court needs to reevaluate its regulatory-takings jurisprudence altogether. (For more, see Cato’s amicus brief.)

Lee: The Court was correct to give even seemingly hopeless criminal defendants the right to adequate legal representation. Jae Lee only took a plea deal because his lawyer repeatedly assured him that he wouldn’t face deportation. The fact that going to trial, where he had no legal leg to stand on, would’ve almost certainly resulted in a longer prison sentence is immaterial. It’s clear that for Lee, who was brought to the United States from South Korea as a child, the risk of being forced to leave the only country he knows was much more important than a longer prison sentence. Lurking under this case was the controversial doctrine-that-must-not-be-named of jury nullification, which was essentially Lee’s only chance for acquittal. (For more, see Cato’s amicus brief.)

Stay tuned Monday for the Supreme Court’s final opinions of the term (especially Trinity Lutheran), as well as decisions on whether to take up the travel-ban case, Masterpiece Cakeshop (vendors for same-sex weddings), and Peruta (Second Amendment right to carry). And maybe, just maybe, Justice Anthony Kennedy will announce his retirement—though if I had to bet, I’d say he sticks around another year.

Yesterday, I posted “Five Questions I Will Use to Evaluate the Phantom Senate Health Care Bill.” The phantom bill took corporeal form today when Senate Republicans released the text of the “Better Care Reconciliation Act.”

So how does the Senate bill fare with regard to my five questions?

1. Would it repeal the parts of ObamaCare—specifically, community rating—that preclude secure access to health care by causing coverage to become worse for the sick and the Exchanges to collapse?

No. The Senate bill would preserve ObamaCare’s community-rating price controls. To be fair, it would modify them. ObamaCare forbids premiums for 64-year-olds to be more than three times premiums for 18-year-olds. The Senate bill would allow premiums for the older cohort to be up to five times those for the younger cohort. But these “age rating” restrictions are the least binding part of ObamaCare’s community-rating price controls. Those price controls would therefore continue to wreak havoc in the individual market. The Senate bill would also preserve nearly all of ObamaCare’s other insurance regulations. 

2. Would it make health care more affordable, or just throw subsidies at unaffordable care?

The Senate bill, like ObamaCare, would simply throw taxpayer dollars at unaffordable care, rather than make health care more affordable.

Making health care more affordable means driving down health care prices. Recent experiments have shown that cost-conscious consumers do indeed push providers to cut prices.

If you want to see that level of price reductions, you need something along the lines of “large” health savings accounts.

The Senate bill would make only minor adjustments to tax-free HSAs that would not deliver lower prices. 

3. Would it actually sunset the Medicaid expansion, or keep the expansion alive long enough for a future Democratic Congress to rescue it?

The bill would keep it alive so ObamaCare supporters can rescind the repeal.

To be fair, the Senate bill would forbid the 19 states that haven’t implemented ObamaCare’s Medicaid expansion from doing so. As explained below, however, the bill would expand a different entitlement–ObamaCare’s Exchange subsidies–to that population. 

The bill would also repeal the Medicaid expansion in 2024. Yet three new Congresses would take their seats between passage of the bill and the date the expansion would be repealed. We may even have a new president by then. It is almost guaranteed that one of those Congresses (if not all three) will be more supportive of the Medicaid expansion than the current Congress. Such a Congress could rescind the repeal before it ever takes effect, as if it had never happened.

Senate Republicans rigged this Medicaid-expansion repeal never to take effect.

4. Tax cuts are almost irrelevant—how much of ObamaCare’s spending would it repeal?

This one is hard to answer without an official score from the Congressional Budget Office–or even with one. Senate Republicans played budget games that hide how much of ObamaCare’s spending they are keeping. 

Senate Republicans required the CBO to compare the cost of the bill to projections of Exchange enrollment and spending that everyone agrees are inflated. So the forthcoming CBO score will make it look like the Senate bill increases the uninsured more than it actually does. Put differently, the CBO score will count some people as losing coverage under the Senate bill even though they weren’t going to have coverage anyway. By the same token, this gimmick will make the Senate bill look like it cuts ObamaCare spending more than it does. It requires the CBO to score the Senate bill as eliminating ObamaCare outlays that were never going to happen. The sneaky part is that this budget gimmick then allows Senate Republicans to apply those phantom cuts either to new spending or deficit reduction.

One way the Senate bill applies those phantom cuts to new spending is by expanding ObamaCare to an additional 2.6 million Americans. Thirty-one states and D.C. have implemented ObamaCare’s Medicaid expansion. The Kaiser Family Foundation estimates that in the 19 states that have not expanded Medicaid, there are 2.6 million able-bodied adults who earn too much to qualify for Medicaid but less than 100 percent of the federal poverty level, and thus not enough to receive a “premium assistance tax credit” toward the purchase of an Exchange plan. (We call them tax credits, but they are mostly outlays.) In 2020, the Senate bill would open eligibility for the tax credits to everyone below 100 percent of the federal poverty level in states that do not implement the expansion. This would expand ObamaCare to another 2.6 million people. In effect, it is Medicaid expansion by another means–and it effectively snubs GOP officials in the 19 states that did the right thing (reduced federal deficits, etc.) by not expanding Medicaid.

The Senate bill would also fund ObamaCare’s “cost-sharing” subsidies, something the law’s Democratic authors never did. That, too, would expand ObamaCare beyond what a Democratic Congress created.

So even if the bill’s spending cuts were real (they’re not; see above), we still wouldn’t really know how much the Senate bill reduces actual federal outlays. All we know for sure is that Senate Republicans want to hide how much ObamaCare spending they are preserving, and that the CBO score will likely overstate the bill’s deficit reduction.

5. If it leaves major elements of ObamaCare in place, would it lead voters to blame the ongoing failure of those provisions on (supposed) free-market reforms?

Yes.

Supporters of the Senate bill are calling it a huge win for conservative governance. 

Finished reading the Senate HC bill. Put simply: If it passes, it’ll be the greatest policy achievement by a GOP Congress in my lifetime.

— Avik Roy (@Avik) June 22, 2017

Yet the bill does almost nothing to address the fundamental flaws and instability in ObamaCare’s architecture. Community rating and other provisions of the law will continue to increase premiums, degrade the quality of coverage, and destabilize insurance markets. ObamaCare supporters, including those who also support a single-payer system, will be quick to blame ObamaCare’s failures on the conservative, free-market ideology that supposedly animates the Senate bill. Such claims will be nonsense. But the narrative will be difficult to combat. The Senate bill could therefore set back the cause of free-market health care reform by decades–yet another feature it shares with ObamaCare. 

—–

The Senate bill is not even a step in the right direction. If this is the choice facing congressional Republicans, it would be better if they did nothing. Consumers would continue to struggle under ObamaCare’s regulations, but those costs would focus attention on their source. The lines of accountability would be clearer if Republicans signed off on legislation that seems designed to rescue ObamaCare rather than repeal and replace it.

One of the liberties protected by the Constitution is the right to do business in other states, on the same terms as companies based in those states. That right is enshrined in the Privileges and Immunities Clause of Article IV, section 2, one of the handful of individual rights that the Framers saw fit to safeguard even before the Bill of Rights was enacted. In fact, ensuring the opportunity to do business out-of-state on equal terms with a state’s residents was one of the principal motivations for holding the Constitutional Convention in the first place. But the U.S. Court of Appeals for the Ninth Circuit has condoned California’s violation of that right.

California enacted a set of commercial-fishing license fees that require nonresidents to pay several times more than residents. The system is explicitly discriminatory, harshly regressive, and intentionally protectionist. The Supreme Court and the Fourth Circuit, in substantively identical circumstances, have ruled these kinds of provisions to be impermissible: States must charge license fees equally to residents and nonresidents alike, or else bear the burden of justifying their discrimination (which California has made little real effort to do). But an en banc majority of the Ninth Circuit quite literally imposed the opposite rule. Not only did it uphold California’s discrimination, but it supported its holding with guesstimates of tax payments and rough calculations of economic costs that the state itself had never supplied. The result is conflict between two federal circuits and an open door for new methods of discrimination that the Constitution has always forbidden.

Now, a group of fishermen, with amicus support from Cato, is asking the Supreme Court to hear their case and strike down California’s differential commercial fishing license fees. Under the Ninth Circuit’s reasoning, everything California spends on fishery regulation is considered a “subsidy” to that industry—a subsidy paid by resident taxpayers for which the state must be compensated. This framing ignores the fact that nonresident fishermen also pay California sales tax and California income tax on income derived from in-state activities (when their income is enough to qualify for taxation, which it often isn’t), and it directly contradicts controlling Supreme Court precedent. This dangerous rationale could be applied to any number of the nearly one-third of U.S. occupations currently regulated by the states, and if left unchecked it could contribute significantly to creating just the sort of balkanized national economy that the Constitution was intended to prevent.

The fact of the matter is that California is attempting to protect local business interests at the expense of nonresidents and dress up its blatantly protectionist violation of the Privileges and Immunities Clause in reasonable-sounding language about fairness. The Supreme Court should grant certiorari and remind the Ninth Circuit that this sort of behavior is constitutionally unacceptable.

One of the original arguments for educating children in traditional public schools is that they are necessary for a stable democratic society. Indeed, an English parliamentary spokesman, W.A. Roebuck, argued that mass government education would improve national stability through a reduction in crime.

Public education advocates, such as Stand for Children’s Jonah Edelman and the American Federation of Teachers’ Randi Weingarten, still insist that children must be forced to attend government schools in order to preserve democratic values.

Theory

In principle, if families make schooling selections based purely on self-interest, they may harm others in society. For instance, parents may send their children to schools that only shape academic skills. As a result, children could miss out on imperative moral education and harm others in society through a higher proclivity for committing crimes in the future.

However, since families value the character of their children, they are likely to make schooling decisions based on institutions’ abilities to shape socially desirable skills such as morality and citizenship. Further, since school choice programs increase competitive pressures, we should expect the quality of character education to increase in the market for schooling. An increase in the quality of character education decreases the likelihood of criminal activity and therefore improves social order.

Evidence

There are only three studies causally linking school choice programs to criminal activity. Two examine the impacts of charter schools and one looks at the private school voucher program in Milwaukee. Each study finds that access to a school choice program substantially reduces the likelihood that a student will commit crimes later in life.

Notably, Dobbie & Fryer (2015) find that winning a random lottery to attend a charter school in Harlem completely eliminates the likelihood of incarceration for males. In addition, they find that female charter school lottery winners are less than half as likely to report having a teen pregnancy.

According to the only causal studies that we have on the subject, school choice programs improve social order through substantial crime reduction. If public education advocates want to continue to cling to the idea that traditional public schools are necessary for democracy, they ought to explain why the scientific evidence suggests the opposite.

Of course, these impacts play a significant role in shaping the lives of individual children. Perhaps more importantly, these findings indicate that voluntary schooling selections can create noteworthy benefits for third parties as well. If we truly wish to live in a safe and stable democratic society, we ought to allow parents to select the schooling institutions that best shape the citizenship skills of their own children.

On Monday, the Supreme Court ruled that a North Carolina law preventing sex offenders from accessing social media and other websites – without any attempt to tailor the restriction to potential contact with minors – violated the First Amendment. But restrictions on the freedom of speech aren’t the only unconstitutional deprivations sex offenders face.

In 1994, Minnesota passed what has become arguably the most aggressive and restrictive sex-offender civil-commitment statute in the country. The Minnesota Sex Offender Program (MSOP) provides for the indefinite civil commitment of “sexually dangerous” individuals, over and beyond whatever criminal sentence they may have already completed.

And while there is technically a system in place whereby committed individuals can petition for release or a loosening of their restrictions, in the more than 20 years that the MSOP has existed, only one person has ever been fully discharged (someone in the program for offenses committed as a minor, and he was only discharged after a court challenge). As Craig Bolte, one person committed in the MSOP, has testified, there is a distinct feeling that “the only way to get out is to die.”

The Supreme Court has held that states have the authority to commit individuals against their will outside the traditional criminal justice context, but only for the purpose of keeping genuinely dangerous people off the streets while undergoing rehabilitative treatment. Punishment and deterrence are legitimate goals exclusively of the criminal justice system, so any deprivation of liberty for either of those two purposes must follow only from that system, with all the procedural protections our Constitution requires.

What sets Minnesota’s program apart from other schemes that have been upheld is that it doesn’t provide for any sort of periodic assessment to determine who does or doesn’t meet the requirements for discharge. By the state’s own admission, hundreds of civilly committed individuals have never received an assessment of their risk to the public, and hundreds more have received assessments only sporadically.

The MSOP is aware that at least some of the people in its custody satisfy statutory-discharge criteria, yet has taken no steps to determine who they are, let alone begin discharge proceedings. For these reasons, Kevin Karsjens and other similarly committed individuals have brought a federal class action challenging the MSOP as an irrational violation of their right to freedom from bodily restriction. They prevailed in the trial court, but the U.S. Court of Appeals for the Eighth Circuit reversed, stating that the plaintiffs have no liberty interest in freedom from physical restraint—not that their liberty interest must be balanced against the state’s interest in protecting the public from violence, but that for sex offenders, that liberty interest simply does not exist.

The plaintiffs now seek Supreme Court review. Cato, joined by the Reason Foundation, has filed an amicus brief in support of the committed individuals. The lack of periodic risk assessment and the punitive nature of the state’s policies represent an unconstitutional attempt to exact effectively criminal penalties on individuals who have not been provided the full procedural protections of criminal law.

The high court should intervene and repair the damage done by the unfettered confinement of sex offenders and restore the appropriate level of constitutional scrutiny to serious deprivations of liberty.

The Supreme Court will decide whether to take up Karsjens v. Piper when it returns from its summer recess.

Leftists don’t have many reasons to be cheerful.

Global economic developments keep demonstrating (over and over again) that big government and high taxes are not a recipe for prosperity. That can’t be very encouraging for them.

They also can’t be very happy about the Obama presidency. Yes, he was one of them, and he was able to impose a lot of his agenda in his first two years. But that experiment with bigger government produced very dismal results. And it also was a political disaster for the left since Republicans won landslide elections in 2010 and 2014 (you could also argue that Trump’s election in 2016 was a repudiation of Obama and the left, though I think it was more a rejection of the status quo).

But there is one piece of good news for my statist friends. The tax cuts in Kansas have been partially repealed. The New York Times is overjoyed by this development.

The Republican Legislature and much of Kansas has finally turned on Gov. Sam Brownback in his disastrous five-year experiment to prove the Republicans’ “trickle down” fantasy can work in real life — that huge tax cuts magically result in economic growth and more, not less, revenue. …state lawmakers who once abetted the Brownback budgeting folly passed a two-year, $1.2 billion tax increase this week to begin repairing the damage. …It will take years for Kansas to recover.

And you won’t be surprised to learn that Paul Krugman also is pleased.

Here’s some of what he wrote in his NYT column.

…there was an idea, a theory, behind the Kansas tax cuts: the claim that cutting taxes on the wealthy would produce explosive economic growth. It was a foolish theory, belied by decades of experience: remember the economic collapse that was supposed to follow the Clinton tax hikes, or the boom that was supposed to follow the Bush tax cuts? …eventually the theory’s failure was too much even for Republican legislators.

Another New York Times columnist did a victory dance as well.

The most momentous political news of the past week…was the Kansas Legislature’s decision to defy the governor and raise income taxes… Kansas, under Gov. Sam Brownback, has come as close as we’ve ever gotten in the United States to conducting a perfect experiment in supply-side economics. The conservative governor, working with a conservative State Legislature, in the home state of the conservative Koch brothers, took office in 2011 vowing sharp cuts in taxes and state spending, except for education — and promising that those policies would unleash boundless growth. The taxes were cut, and by a lot.

Brownback’s supply-side experiment was a flop, the author argues.

The cuts came. But the growth never did. As the rest of the country was growing at rates of just above 2 percent, Kansas grew at considerably slower rates, finally hitting just 0.2 percent in 2016. Revenues crashed. Spending was slashed, even on education… The experiment has been a disaster. …the Republican Kansas Legislature faced reality. Earlier this year it passed tax increases, which the governor vetoed. Last Tuesday, the legislators overrode the veto. Not only is it a tax increase — it’s even a progressive tax increase! …More than half of the Republicans in both houses voted for the increases.

If you read the articles, columns, and editorials in the New York Times, you’ll notice there isn’t a lot of detail about what actually happened in the Sunflower State: plenty of rhetoric, but few specifics.

So let’s go to the Tax Foundation, which has a thorough review including this very helpful chart showing tax rates before the cuts, during the cuts, and what will now happen in future years (the article also notes that the new legislation repeals the exemption for small-business income).

We know that folks on the left are happy about tax cuts being reversed in Kansas. So what are conservatives and libertarians saying?

The Wall Street Journal opined on what really happened in the state.

…national progressives are giddy. Their spin is that because the vote reverses Mr. Brownback’s tax cuts in a Republican state that Donald Trump carried by more than 20 points, Republicans everywhere should stop cutting taxes. The reality is more prosaic—and politically cynical. …At bottom the Kansas tax vote was as much about unions getting even with the Governor over his education reforms, which included making it easier to fire bad teachers.

And the editorial also explains why there wasn’t much of an economic bounce when Brownback’s tax cuts were implemented, but suggests there was a bit of good news.

Mr. Brownback was unlucky in his timing, given the hits to the agricultural and energy industries that count for much of the state economy. But unemployment is still low at 3.7%, and the state has had considerable small-business formation every year since the tax cuts were enacted. The tax competition across the Kansas-Missouri border around Kansas City is one reason Missouri cut its top individual tax rate in 2014.

I concur. When I examined the data a few years ago, I also found some positive signs.

In any event, the WSJ is not overly optimistic about what this means for the state.

The upshot is that supposedly conservative Kansas will now have a higher top marginal individual income-tax rate (5.7%) than Massachusetts (5.1%). And the unions will be back for another increase as spending rises to meet the new greater revenues. This is the eternal lesson of tax increases, as Illinois and Connecticut prove.

And Reason published an article by Ben Haller with similar conclusions.

What went wrong? First, the legislature failed to eliminate politically popular exemptions and deductions, making the initial revenue drop more severe than the governor planned. The legislature and the governor could have reduced government spending to offset the decrease in revenue, but they also failed on that front. Government spending per capita remained relatively stable in the years following the recession to the present, despite the constant fiscal crises. In fact, state expenditure reports from the National Association of State Budget Officers show that total state expenditures in Kansas increased every year except 2013, where expenditures decreased a modest 3 percent from 2012. It should then not come as a surprise that the state faced large budget gaps year after year. …tax cuts do not necessarily pay for themselves. Fiscal conservatives, libertarians, …may have the right idea when it comes to lowering rates to spur economic growth, but lower taxes by themselves are not a cure-all for a state’s woes. Excessive regulation, budget insolvency, corruption, older demographics, and a whole host of other issues can slow down economic growth even in the presence of a low-tax environment.

Since Haller mentioned spending, here’s another Tax Foundation chart showing inflation-adjusted state spending in Kansas. Keep in mind that Brownback was elected in 2010. The left argued that he “slashed” spending, but that assertion obviously is empty demagoguery.

Now time for my two cents.

Looking at what happened, there are three lessons from Kansas.

  1. A long-run win for tax cutters. If this is a defeat, I hope there are similar losses all over the country. If you peruse the first chart in this column, you’ll see that tax rates in 2017 and 2018 will still be significantly lower than they were when Brownback took office. In other words, the net result of his tenure will be a permanent reduction in the tax burden, just like with the Bush tax cuts. Not as much as Brownback wanted, to be sure, but leftists are grading on a very strange curve if they think they’ve won any sort of long-run victory.
  2. Be realistic and prudent. It’s a good idea to under-promise and over-deliver. That’s true for substance and rhetoric.
    1. Don’t claim that tax cuts pay for themselves. That only happens in rare circumstances, usually involving taxpayers who have considerable control over the timing, level, and composition of their income. In the vast majority of cases, tax cuts reduce revenue, though generally not as much as projected once “supply-side” responses are added to the equation.
    2. Big tax cuts require some spending restraint. Since tax cuts generally will lead to less revenue, they probably won’t be durable unless there’s eventually some spending restraint (which is one of the reasons why the Bush tax cuts were partially repealed and why I’m not overly optimistic about the Trump tax plan).
    3. Tax policy matters, but so does everything else. Lower tax rates are wonderful, but there are many factors that determine a jurisdiction’s long-run prosperity. As just mentioned, spending restraint is important. But state lawmakers also should pay attention to many other issues, such as licensing, regulation, and pension reform.
  3. Many Republicans are pro-tax big spenders. Most fiscal fights are really battles over the trend line of spending. Advocates of lower tax rates generally are fighting to reduce the growth of government, preferably so it expands slower than the private sector. Advocates of tax hikes, by contrast, want to enable a larger burden of government spending. What happened in Kansas shows that it’s hard to starve the beast if you’re not willing to put government on a diet.

By the way, all three points are why the GOP is having trouble in Washington.

The moral of the story? As I noted when writing about Belgium, it’s hard to have good tax policy if you don’t have good spending policy.

Recent terrorist attacks in Europe have increased death tolls and boosted fears on both sides of the Atlantic. Last year, I used common risk analysis methods to measure the annual chance of being murdered in an attack committed on U.S. soil by foreign-born terrorists. This blog post is a back-of-the-envelope estimate of the annual chance of being murdered in a terrorist attack in Belgium, France, Germany, Sweden, and the United Kingdom. The annual chance of being murdered in a terrorist attack in the United States from 2001 to 2017 is about 1 in 1.6 million per year. Over the same period, the chances are much lower in these European countries.

Methods and Sources

Belgium, France, and the United Kingdom are included because they have suffered some of the largest terrorist attacks in Europe in recent years. Sweden and Germany are included because they have each allowed in large numbers of refugees and asylum seekers who could theoretically be terrorism risks.

The main sources of data are the Global Terrorism Database at the University of Maryland for the years 1975 to 2015, with the exception of 1993; I used the RAND Database of Worldwide Terrorism to fill in that year. I have not compiled the identities of the attackers, any other information about them, or the number of convictions for planning attacks in Europe. The perpetrators are excluded from the fatality counts where possible. Those databases do not yet include the years 2016 and 2017, so I relied on Bloomberg and Wikipedia to supply a rough estimate of the number of fatalities in terrorist attacks in each country in those two years through June 20, 2017. The United Nations Population Division provided the population estimates for each country per year.
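
Since the underlying calculation is simple division, here is a minimal sketch in Python of the back-of-the-envelope method: total fatalities divided by total person-years at risk over the period, expressed as “1 in N per year.” The average-population input below is a rough assumption for illustration, not the exact United Nations series used for the tables, so the result only approximates the figure reported for the United Kingdom.

    # Annual chance of dying in a terrorist attack, expressed as "1 in N per year":
    # N = (average population x number of years) / total fatalities.
    def one_in_n_per_year(fatalities, avg_population, years):
        person_years = avg_population * years
        return person_years / fatalities

    # Illustrative inputs (assumptions, not the exact figures behind Table 1):
    uk_fatalities = 2632        # UK terrorism deaths, 1975 through June 20, 2017
    uk_avg_population = 60e6    # rough average UK population over the period
    period_years = 42.5         # 1975 through mid-2017

    n = one_in_n_per_year(uk_fatalities, uk_avg_population, period_years)
    print(f"Annual chance of dying in a terrorist attack: about 1 in {n:,.0f}")
    # Prints roughly 1 in 969,000 -- close to, but not exactly, the
    # 1 in 964,531 reported in Table 1, because the population input is rounded.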

Terrorism Fatality Risk for Each Country

This section displays the number of terrorist fatalities and the annual chance of a resident of each country being murdered. The results in this section answer three important questions: What is the annual chance of having been killed in a terrorist attack from 1975 through 2017 in each European country? Has the annual chance of being killed in a terrorist attack gone up since the 9/11 attacks? How does the risk in Europe compare to the risk in the United States?

European Terrorism from 1975 through June 20th, 2017

Residents of the United Kingdom have suffered the most from terrorism. Almost 78 percent of the European fatalities reported in Table 1 were residents of the United Kingdom and about 95 percent of those British fatalities occurred before 2001.

They also faced the highest annual chance of dying in such an attack, at 1 in 964,531 per year (Table 1).

Table 1: Fatalities and Annual Chance of Dying in a Terrorist Attack, 1975–June 20th, 2017

Country            Fatalities    Annual Chance of Dying
United Kingdom     2,632         1 in 964,531
Belgium            64            1 in 6,936,545
France             506           1 in 4,984,301
Sweden             20            1 in 19,001,835
Germany            148           1 in 23,234,378
United States      3,568         1 in 3,241,363

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia, author’s calculations.
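
As a quick consistency check, the roughly 78 percent UK share of European fatalities mentioned above follows directly from the counts in Table 1. A minimal sketch, using only the numbers in the table:

    # Share of the five-country European fatality total borne by the United Kingdom,
    # computed from the Table 1 counts.
    fatalities = {
        "United Kingdom": 2632,
        "Belgium": 64,
        "France": 506,
        "Sweden": 20,
        "Germany": 148,
    }
    european_total = sum(fatalities.values())                  # 3,370
    uk_share = fatalities["United Kingdom"] / european_total
    print(f"UK share of European fatalities: {uk_share:.1%}")  # about 78.1%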

The deadliest terrorist attack across these five European countries was the 1988 bombing of Pan Am 103 over Lockerbie, Scotland, which killed 270. An additional 110 residents of these five countries were murdered in that year. The next deadliest year was 1976 with 354 victims. The third deadliest year was 1975, when there were 1,252 murders in terrorist attacks (Figure 1). The number of fatalities in European terrorist attacks increased to 172 in 2015 and fell to 133 in 2016. Every death in a terrorist attack is a tragedy, but Europeans should feel comforted by the fact that their chances of dying in such an attack are minuscule.

Figure 1: Terrorism Fatalities in Belgium, France, Germany, Sweden, and the United Kingdom, 1975–2017

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia.

Terrorism Risk in Europe versus the United States

The annual chance of being murdered in any terrorist attack in the United States from 2001 to 2017 is about 1 in 1.6 million per year (Table 2). The annual chances were much lower in every European country over the same period. Table 2 also includes the United States without the fatalities from the 9/11 attacks, since those attacks were such extremely deadly outliers that they are unlikely to be repeated; excluding them allows a potentially better cross-country comparison of annual fatality chances. Strikingly, the annual chance of an American being murdered in a terrorist attack is almost identical across the two periods when 9/11 is excluded – evidence that those attacks were outliers that punctuated an otherwise steady trend.

Prior to 2001, the annual chance of dying in a terrorist attack in every country in Europe was higher than in the United States, with the sole exception of Sweden. When 9/11 occurred, the relative risk to residents in these countries flipped and the United States became more dangerous.

Table 2: Annual Chance of Dying in a Terrorist Attack by Period

Country                      1975–2000           2001–2017
United States                1 in 19,767,153     1 in 1,602,021
France                       1 in 6,059,061      1 in 4,006,878
Belgium                      1 in 9,611,873      1 in 4,373,511
United Kingdom               1 in 590,389        1 in 8,796,562
Sweden                       1 in 22,145,655     1 in 15,858,016
United States (exc. 9/11)    1 in 19,767,153     1 in 19,772,468
Germany                      1 in 17,338,091     1 in 47,429,484

Sources: Global Terrorism Database, RAND Corporation, United Nations Population Division, Bloomberg, Wikipedia, author’s calculations. Through June 20th, 2017.

Terrorism Risk Since 9/11

Many think that Islamic terrorism since 2001 is deadlier than past terrorism. This is certainly true in the United States, where at least 3,246 people were killed on U.S. soil in all terror attacks from 2001 through 2017, compared to only 322 from 1975 through 2000. Those differences are reflected in the greater, but still small, annual chance of an American dying from terrorism in the later period (Table 2). The chances of being murdered in a terrorist attack are also higher in France, Belgium, and Sweden in the later period, but they are still tiny. Residents of the United Kingdom and Germany were less likely to die, per year, in a terrorist attack from 2001 through 2017.

The largest decline in risk was in the United Kingdom, where the annual chance of being killed by terrorists went from 1 in 590,389 per year prior to 2001 to 1 in 8,796,562 per year from 2001 through June 20th, 2017. For 2016 and 2017 (so far), the chance of a British resident dying in a terrorist attack is about 1 in 3.5 million per year. The chance of a British resident being murdered in a non-terrorist homicide in 2013 was about 133 times as great as his or her chance of being murdered in a terrorist attack that year.

Conclusion

The chance of an American being murdered in a terrorist attack from 2001 through June 20th, 2017, is greater than that for a resident of any of these five European countries. Future terrorist attacks are unlikely to be as deadly as 9/11, even though the risk is fat-tailed. When the unprecedented deadliness of 9/11 is excluded, the comparison reverses: residents of every European country here except Germany had a greater chance of being murdered in a terrorist attack than an American on U.S. soil.

The number of deaths from terrorism is so small that the addition or subtraction of a few murders can drastically change the annual chance of being murdered, which is itself evidence of how manageable the threat from terrorism actually is. If terrorism were as common or as deadly as people erroneously believe it to be, then another attack or two would not make a big difference in the annual chances.

A total of 3,370 residents of Belgium, France, Germany, Sweden, and the United Kingdom were murdered by terrorists from 1975 to June 20th, 2017. About 231 million people lived in those five countries in 2015. If they were combined into a single country, the annual chance of dying would be about 1 in 2.8 million per year over that period. The annual chance of being killed in a terrorist attack was a mere 1 in 8.3 million per year if those five European countries were judged as one state from 2001 through June 20th, 2017. That is a lower risk than the 1 in 1.6 million per year chance of an American being murdered in a terrorist attack on U.S. soil from 2001 through 2017. Even in Europe, terrorism is a relatively small and manageable threat.

There has been debate this week about how many libertarians there are. The answer is: it depends on how you measure it and how you define libertarian. The overwhelming body of literature, however, using a variety of methods and definitions, suggests that libertarians comprise about 10-20% of the population, with estimates ranging from 7% to 22%.

Furthermore, if one imposes the same level of ideological consistency on liberals, conservatives, and communitarians/populists that many do on libertarians, these groups too comprise similar shares of the population.

In this post I provide a brief overview of different methods academics have used to identify libertarians and what they found. Most methods start from the premise that libertarians are economically conservative and socially liberal. Despite this, different studies find fairly different results. What accounts for the difference?

1) People use different definitions of libertarian.

2) They use different questions to identify libertarians.

3) They use very different statistical methods.

Let’s start with a few questions: How do you define a libertarian? Is there one concrete libertarian position on every policy issue?

What is the “libertarian position” on abortion? Is there one? What is the “libertarian position” on Social Security? Must a libertarian support abolishing the program, or might a libertarian support private accounts, or means testing, or sending it to the states instead? A researcher will find fewer libertarians in the electorate if they demand that libertarians support abolishing Social Security rather than means testing or privatizing it. 

Further, why are libertarians expected to conform to an ideological litmus test but conservatives and liberals are not? For instance, what is the “conservative position” on Social Security? Is there one? When researchers use rigid ideological definitions of liberals and conservatives, they too make up similar shares of the population as libertarians. Thus, as political scientist Jason Weeden has noted, researchers have to make fairly arbitrary decisions about where the cut-off points should be for the “libertarian,” “liberal,” or “conservative” position. This pre-judgement strongly determines how many libertarians researchers will find.

Next, did researchers simply ask people if they identify as libertarian, or did they ask them public policy questions (a better method)? If the latter, how many issue questions did they ask? Then, what questions did they ask?

For instance, what questions are used to determine whether someone is “liberal on social issues”? Did the researcher ask survey takers about legalizing marijuana, or about affirmative action for women in the workplace instead? Libertarians will answer these questions very differently, and that will affect the number of libertarians researchers find.

While there is no perfect method, the fact that academics using a variety of questions, definitions, and statistical techniques still find that the number falls somewhere between 7% and 22% gives us some confidence that the number of libertarians is considerably larger than zero.

Next, I give a brief overview of the scholarly research on the estimated shares of libertarians, conservatives, liberals, and communitarians in the American electorate. I organize the findings by the methods used, starting with the most empirically rigorous:

Ask people to answer a series of questions on a variety of policy topics and input their responses into a statistical algorithm

In these studies, researchers ask survey respondents a variety of issue questions on economic and social/cultural issues. They then feed people’s answers into a statistical clustering technique and allow an algorithm to find the number of libertarians. This is arguably the strongest method for identifying libertarians.
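As a rough illustration of the workflow, the sketch below clusters made-up survey answers with k-means; the study below relies on the more sophisticated latent class analysis, so treat this only as a schematic of the general approach:

    # Schematic only: clusters made-up survey responses with k-means, a
    # simple stand-in for the latent class analysis used in the study below.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # 1,000 hypothetical respondents answering six issue questions on 1-5
    # scales, coded so that higher values mean more conservative positions.
    responses = rng.integers(1, 6, size=(1000, 6)).astype(float)

    # Ask the algorithm for six groups, then interpret each cluster by its
    # average answers (economically conservative plus socially liberal
    # reads as libertarian, and so on).
    model = KMeans(n_clusters=6, n_init=10, random_state=0).fit(responses)
    for label, center in enumerate(model.cluster_centers_):
        econ, social = center[:3].mean(), center[3:].mean()
        print(f"cluster {label}: economic={econ:.2f}, social={social:.2f}")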

  1. Political scientists Stanley Feldman and Christopher Johnston use a sophisticated statistical method (latent class analysis) to find ideological groups in the electorate. They identify six groups based on answers to a variety of questions on economic and social issues. Their results indicate that about:
  • 15% are likely libertarians (conservative on economics and liberal on social issues)
  • 23% are likely liberals
  • 17% are likely conservatives
  • 8% are communitarians/populists (liberal on economics and very conservative on social issues)
  • 13% are economic centrists but social liberals
  • 24% are economic centrists but lean socially conservative.   

Ask people to answer a series of questions on a variety of policy topics and plot their average responses on a 2-dimensional plot

In these studies, researchers 1) average responses to multiple questions on economics and then 2) average responses to multiple questions on social/cultural/identity/lifestyle issues. They then take the two averaged scores to plot respondents on a 2-dimensional graph (Economic Issues by Social Issues).
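Here is a minimal sketch of that two-dimensional approach, using made-up responses and a hypothetical cut-off at the scale midpoint (each study below chooses its own questions and cut-offs):

    # Schematic only: classify respondents by averaged economic and social
    # scores; the data and the midpoint cut-off are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    econ_answers = rng.integers(1, 6, size=(1000, 3)).astype(float)    # higher = more conservative
    social_answers = rng.integers(1, 6, size=(1000, 3)).astype(float)  # higher = more conservative

    def classify(econ, social, cutoff=3.0):
        if econ > cutoff and social < cutoff:
            return "libertarian"        # economically conservative, socially liberal
        if econ > cutoff and social > cutoff:
            return "conservative"
        if econ < cutoff and social < cutoff:
            return "liberal"
        if econ < cutoff and social > cutoff:
            return "communitarian/populist"
        return "mixed/centrist"

    labels = [classify(e, s) for e, s in zip(econ_answers.mean(axis=1),
                                             social_answers.mean(axis=1))]
    for group in sorted(set(labels)):
        print(f"{group}: {labels.count(group) / len(labels):.0%}")

As the rest of this post argues, which questions feed each average and where the cut-offs sit largely determine the shares these methods report.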

  1. Political scientist Jason Weeden averages people’s responses to questions on economics (income redistribution and government assistance to the poor) and on social issues (abortion, marijuana legalization, the morality of premarital sex) found in the General Social Survey. He finds:
  • 11% of Americans are libertarian
  • 11% are conservative
  • 14% are liberal
  • 9% are communitarian/populist
  • Remaining people are roughly evenly distributed between these groups

  2. Political scientists William Clagget, Par Jason Engle, and Byron Shafer use answers to a variety of questions on economic and “culture” issues from the American National Election Studies from 1992-2008 to determine that:
  • 10% of the population is libertarian
  • 11% is populist
  • 30% is conservative
  • 30% is liberal
  • (Their methods are unclear and their “culture” index may include questions about spending on crime and support for affirmative action for women.)
  3. Political scientists William Maddox and Stuart Lilie average responses to three questions on government economic intervention and three questions about personal freedom from the American National Election Studies and find that:
  • 18% of the population is libertarian
  • 24% is liberal
  • 17% is conservative
  • 26% is communitarian 
  4. The Public Religion Research Institute added (rather than averaged) responses to 9 questions on social and economic issues and decided that cumulative scores of 9-25 would be coded as libertarian. Doing this, they find that:
  • 7% of Americans are libertarian
  • 15% lean libertarian
  • 17% lean communalist
  • 7% are communalist
  • 54% have mixed attitudes
  5. For a previous Cato blog post, I conducted a similar analysis and created three separate estimates. Each used averaged responses to economic questions, plotted alongside average answers to either 1) social issues questions, 2) race/identity questions, or 3) criminal justice and racial equality questions.
  • Using economic and social issues, I find:
    • 19% Libertarian
    • 20% Communitarian
    • 31% Conservative
    • 30% Liberal
  • Using economic and race issues, I find:
    • 19% Libertarian
    • 15% Communitarian
    • 33% Conservative
    • 33% Liberal
  • Using economic and criminal justice issue positions I find:
    • 24% Libertarian
    • 15% Communitarian
    • 28% Conservative
    • 33% Liberal

Ask people to answer a question about economic policy and a question about social policy

While not as rigorous as asking people multiple questions, this is another quick way to observe the diversity of ideological opinion in surveys.

  1. Nate Silver of FiveThirtyEight uses two questions from the General Social Survey (support for same-sex marriage, and whether government ought to reduce income inequality with high taxes on the rich and income assistance to the poor) and finds:
  • 22% are libertarian
  • 25% conservative
  • 34% liberal
  • 20% communitarian 

  2. David Kirby and David Boaz use answers to 3 survey questions and find that 15% of the population are libertarians (agreeing that less government is better, that free markets can better solve economic problems, and that we should be tolerant of different lifestyles).

Ask people if they identify as libertarian and know what the word means

The Pew Research Center found that 11% of Americans agree that the term “libertarian describes me well” and know that libertarians “emphasize individual freedom by limiting the role of government.”

Ask people if they identify as socially liberal and fiscally conservative, an oft-used definition of libertarianism

A 2011 Reason-Rupe poll found that 8% of Americans said they were “conservative” on economic issues and “liberal” on social issues. The same method found that 9% identified as “liberal” on both social and economic issues, 2% identified as liberal on economic issues and conservative on social issues, and 31% identified as conservative on both. The remainder were somewhere in the middle. These results are consistent with polls from Rasmussen and Gallup, which find a public preference for the word “conservative” over “liberal.” This means many people who endorse liberal policies are inclined to self-identify as moderate or conservative.

Conclusions

In sum, the overwhelming body of empirical evidence suggests that libertarians’ share of the electorate is likely somewhere between 10% and 20%, and that the conservative and liberal shares aren’t much greater. Libertarians exist, and in considerable numbers, but you have to know what you’re looking for.

Rumor has it that tomorrow is the day Senate Republican leaders will unveil the health care bill they have been busily assembling behind closed doors. So few details have emerged that President Trump could learn something from Senate Majority Leader Mitch McConnell about how to prevent leaks. Even GOP senators are complaining they haven’t been allowed to see the bill.

Here are five questions I will be asking about the Senate health care bill if and when it sees the light of day.

  1. Would it repeal the parts of ObamaCare—specifically, community rating—that preclude secure access to health care for the sick by causing coverage to become worse for the sick and the Exchanges to collapse?
  2. Would it make health care more affordable, or just throw subsidies at unaffordable care?
  3. Would it actually sunset the Medicaid expansion, or keep the expansion alive long enough for a future Democratic Congress to rescue it?
  4. Tax cuts are almost irrelevant—how much of ObamaCare’s spending would it repeal?
  5. If it leaves major elements of ObamaCare in place, would it lead voters to blame the ongoing failure of those provisions on (supposed) free-market reforms?

Depending on how Senate Republicans—or at least, the select few who get to write major legislation—answer those questions, the bill could be a step in the right direction. Or it could be ObamaCare-lite.

The Trump administration’s recent proposal on infrastructure stressed federalism. It said that the “federal government now acts as a complicated, costly middleman between the collection of revenue and the expenditure of those funds by states and localities. Put simply, the administration will be exploring whether this arrangement still makes sense, or whether transferring additional [infrastructure] responsibilities to the states is appropriate.”

Indeed, the federal-middleman arrangement does not make sense. With regard to highways, federal funds go not just to the 47,000-mile interstate highway system (IHS), but also to the vast 3.9 million-mile “federal-aid highway system.” Yet there are few advantages to federal funding over state funding for most of the nation’s highways, which are owned by the states and mainly serve state and local needs.

As such, there have been many proposals to devolve at least the non-IHS activities to the states. In such “turnback” proposals, the federal government would cut its highway spending and its gas tax, and allow states to fill the void.

The turnback idea has been around awhile. A major 1987 study by the Advisory Commission on Intergovernmental Relations (ACIR) proposed devolving highway funding except for IHS funding to the states. The ACIR was led by a bipartisan mix of federal, state, and local elected officials, and was known for its top-notch staff experts.

Thirty years later, the ACIR report contains sound advice for today’s policymakers. Here are some excerpts:

The Commission concludes that a devolution of non-Interstate highway responsibilities and revenue sources to the states is a worthwhile goal and an appropriate step toward restoring a better balance of authority and accountability in the federal system (page 2).

It is the sense of the Commission that the Congress should move toward the goal of repealing all highway and bridge programs that are financed from the federal Highway Trust Fund, except for: (1) the Interstate highway system, (2) the portion of the bridge program that serves the Interstate system, (3) the emergency relief highway program, and (4) the federal lands highway program. The Commission urges that the Congress simultaneously relinquish an adequate share of the federal excise tax on gasoline—about 7 cents of the federal tax on motor fuel plus an additional 1 cent for a grant based on lane mileage—to finance the above programs (page 2). [Note: the federal gas tax at the time was just 9.1 cents per gallon].

With state and local governments freed from federal requirements, some of which are unsuitable and expensive, turnbacks offer the possibility of more flexible, more efficient, and more responsive financing of those roads that are of predominantly state or local concern. Investment in highways could be matched more closely to travel demand and to the benefits received by the communities served by those roads (page 3).

Highway turnbacks potentially can add both certainty and flexibility—as well as efficiency and accountability—to the financing of the nation’s transportation infrastructure as well as to the design and operation of both new and modernized roads (page 4).

In time, federal requirements and sanctions have accumulated, which have limited state and local governments’ flexibility in road construction and operation, have restricted these governments’ ability to address specific transportation needs, and have probably increased the cost and time needed for road improvements … The design standards required for receiving federal road grants may often be higher than those actually employed for roads built with state or local funds alone. The result can be that some federally subsidized highways are “gold-plated,” that is, built more lavishly than would be the case if state and local governments made the tradeoffs involved in highway plans and financed their choices by taxes levied on their own constituents (page 11).

[Federal highway regulations] may intrude the most broadly upon the choices of state-local governments and citizens. Examples include the rule that federally aided projects be preceded by an environmental analysis and the Davis-Bacon requirement to pay union wage rates, or the equivalent. The Federal Highway Administration has estimated that the Davis-Bacon requirement added between $293 and $586 million to road costs in FY 1986 (page 12).

The federal restriction on state and local road choices occurs not solely because federal standards are high, but because they tend to be inflexible, inappropriate to circumstances that vary from place to place, and more responsive to national interest groups than to the users of specific highways (page 13).

There is “fiscal equivalence” when the same political community—the same jurisdiction—finances a governmental program, is responsible for its operation, and receives the benefits of that program … The tie between taxing and spending promotes efficiency and careful choices, whether spending levels are high or low. Because various areas’ highway needs and preferences are so different, a nationally uniform program cannot tailor taxing and spending to each other, as state and local programs can (page 22).

With the Interstate system used for long-distance travel, most of the benefits of other federally aided roads are contained within state boundaries. These non-Interstate, federally aided roads should be considered for turnback. Absent federal funding, there is reason to believe that state-local responsibility for the devolved highways would not impair nationwide mobility or interstate commerce. Devolution would move toward “fiscal equivalence.” The same jurisdiction that finances a set of roads will benefit from them. Thus highway spending and highway services would be more closely linked than is presently the case. Efficiency would be enhanced as would political, fiscal, and program accountability (page 48).

The diverse goals and constituencies served by the federal highway program has led to a complex operation and has engendered controversy over the program’s procedures and allocation formulas … Devolution … would sharpen goals and priorities (page 48).

The ACIR report (“Devolving Selected Federal-Aid Highway Programs and Revenue Bases: A Critical Appraisal”) is here.

The federal-government-managed National Flood Insurance Program (NFIP) is $25 billion in debt, stokes moral hazard, and entails a regressive wealth transfer that favors coastal areas. The NFIP is set to expire at the end of September, offering policymakers an important chance to rethink the program. The House Financial Services Committee is considering the Flood Insurance Market Parity and Modernization Act on Wednesday; the current version of the bill takes important steps toward moving the U.S. to a private flood insurance market. Private insurance would improve on the NFIP by ending transfers from general taxpayers to the wealthy and the coasts and by limiting moral hazard.

Private insurance functions as a market-driven regulator of risk. Private insurers devise premiums that accurately reflect risk, forcing economic agents to internalize the risk they choose to assume. For instance, auto insurance premiums depend both on a driver’s record and on other factors that correlate with risk, such as age or region of the country.

The enactment of the NFIP in 1968 reflected a belief that a centrally planned insurance program could better fulfill the regulatory function of insurance than the private market. Government-managed insurance could, it was held at the time, “limit future flood damages without hampering future economic development” and “prompt an adjustment in land use to reduce individual and public losses from floods,” reported a Housing and Urban Development study integral to the program’s design.

However, the NFIP’s fifty-year record shows why the reasoning behind the program’s creation was misguided. The NFIP is beset by design flaws, especially in how premiums are priced. About 20% of all NFIP policies are explicitly subsidized and receive a 60-65% discount off the NFIP’s typical rate. These subsidies are not targeted at poor homeowners; they depend instead on the age of a property, and they turn out to be wildly regressive.

Even the 80% of NFIP properties charged so-called “full risk” rates are not priced accurately. For instance, despite their name, the full-risk rates do not include a loading charge to cover losses in especially bad years, so even these policies are money-losers in the long run.

Moreover, the NFIP’s rates are not set on a property-by-property basis. Instead, they reflect average historical losses within a property’s risk-based categories. As a result, while the subsidies and lack of loading charge mean that the NFIP generally undercharges risk, in some instances premiums are actually overpriced.
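To make the pricing mechanics concrete, here is a minimal sketch of how a risk-based premium can be built from an expected annual loss plus a loading charge; the inputs are hypothetical, not NFIP or industry figures:

    # Hypothetical illustration of risk-based premium pricing; the inputs
    # are made-up numbers, not NFIP or industry figures.

    def risk_based_premium(annual_flood_prob, expected_claim, loading_factor=0.3):
        """Premium = expected annual loss plus a loading charge covering
        especially bad years, expenses, and the cost of capital."""
        expected_loss = annual_flood_prob * expected_claim
        return expected_loss * (1 + loading_factor)

    # A property with a 1-in-100 annual flood chance and a $150,000 expected claim:
    print(f"${risk_based_premium(0.01, 150_000):,.0f} per year")   # roughly $1,950

    # The same property priced off a category average that understates its
    # true risk, the kind of mismatch described above:
    print(f"${risk_based_premium(0.004, 150_000):,.0f} per year")  # roughly $780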

Debt is not the only consequence of the NFIP’s misguided premiums. The systemic underpricing of insurance causes moral hazard, by masking the cost of flood risk and encouraging overdevelopment in flood-prone areas. Because the average home in the NFIP is much more valuable than an average American home, the program is regressive on the whole. And since a disproportionate number of properties in the NFIP are on the southeastern coast, wealth is transferred from the rest of the country to homeowners near the coast in those states.

Congress could, theoretically, fix some of these design problems, but past attempts to reform the NFIP to more closely resemble a private insurance company failed miserably, and exemplify why in practice government rarely succeeds in competently managing what should be private business. For instance, in 2012 Congress passed the Biggert-Waters Flood Insurance Reform Act, which required the NFIP to end subsidies and to begin including a catastrophe loading surcharge. However, due to interest group pressure Congress reversed itself just two years later, halting some reforms and getting rid of others outright. The quick backtrack was a classic example of government failing to act in the public interest due to concentrated benefits and diffused costs.

However, one positive aspect of the 2012 reforms has persisted. The Biggert-Waters law ended the NFIP’s de facto monopoly by allowing property owners to meet mandatory purchase requirements with private market insurance. Private insurers have since returned to the market, successfully competing with the NFIP.

Recent innovations in catastrophic modeling and catastrophic risk hedging mean that private market flood insurance is more viable than ever. Insurance industry experts suggest that private insurers can cover most properties in the NFIP and note that U.S. flood risk is the largest growth area for world-wide private reinsurers.

A forthcoming Cato Policy Analysis discusses technological innovations in the private flood insurance industry and the social benefits of moving to private flood insurance and terminating the NFIP. If that is politically impossible, it suggests that any reauthorization of the NFIP should at least include measures that level the playing field between the NFIP and private alternatives.

Measures to encourage private competition include allowing a more flexible array of private coverage terms to meet mandatory purchase requirements, mandating that FEMA release property-level flood data to private insurers, and allowing firms that contract with the NFIP to also issue their own insurance plans. The Flood Insurance Market Parity and Modernization Act contains many of these measures, and would represent an excellent step towards ending a system that subsidizes wealthy coastal homeowners to take imprudent risks.

Special thanks to Ari Blask, who co-authored the forthcoming report and provided copious assistance on this blog post as well. 

It comes as no surprise that the Supreme Court has agreed to hear the case of Gill v. Whitford, in which a district court struck down the Wisconsin legislature’s partisan gerrymander. Conservative justices want to hear the case as a way to correct an error, while liberals see it as their last, best chance to tee up a landmark constitutional ruling on redistricting while Anthony Kennedy is still on the Court. Within hours, however, the grant of review was followed by a kicker – an order staying the court order below, over dissents from the four liberals – that calls into question whether the momentum is really with those hoping to change Kennedy’s mind.

Last time around, in 2004’s Vieth v. Jubelirer, the Court foreshadowed this day. Four Justices led by Scalia declared that for all the evils of political gamesmanship in drawing district lines – a practice already familiar before the American revolution – there was and is no appropriately “justiciable” way for the Court to correct things; it would be pulled into a morass of subjective and manipulable standards that could not be applied in a practical and consistent way and would cost it dearly in political legitimacy. Justice Anthony Kennedy, in a separate concurrence, agreed in dismissing the Pennsylvania case at hand, and said the Court was “correct to refrain from directing this substantial intrusion into the Nation’s political life” that would “commit federal and state courts to unprecedented intervention in the American political process.” But he left the door open to some future method of judicial relief “if some limited and precise rationale were found to correct an established violation of the Constitution.”

That set up a target for litigators and scholars to shoot for: can a formula be found that is “limited and precise” enough, and based on an “established” enough constitutional rationale, to convince Justice Kennedy? After all, the Court’s 1962 Baker v. Carr one-person-one-vote decision on districting had been an unprecedented intervention in the American political process, but also one that could be implemented by a simple formula yielding consistent outcomes and little need for ongoing supervision (take the number of people in a state and divide by the number of districts).  

Plaintiffs in the Wisconsin case are hoping that a newly devised index they call the “efficiency gap” can serve as an adequately objective measure of whether partisan gerrymandering has taken place, given the presence of evidence of such motivation. Even if courts accept this, it is another big jump to the confidence that they can provide consistent and predictable remedies unaffected by judges’ own political prejudices. 
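As commonly defined in the academic literature, the efficiency gap compares the two parties’ “wasted” votes: every vote cast for a losing candidate, plus each winner’s votes beyond the roughly 50 percent needed to carry the district, summed statewide and divided by total votes cast. Here is a minimal sketch of that calculation, with hypothetical district totals and a common simplification of the winning threshold:

    # Hypothetical illustration of the "efficiency gap"; district vote totals
    # are made up, and the half-of-total threshold is a common simplification.

    def efficiency_gap(districts):
        """districts: list of (votes_a, votes_b) tuples, one per district.
        Returns (wasted A votes - wasted B votes) / total votes; a negative
        value here means the plan wastes more of party B's votes."""
        wasted_a = wasted_b = total = 0.0
        for votes_a, votes_b in districts:
            district_total = votes_a + votes_b
            threshold = district_total / 2          # votes needed to win
            if votes_a > votes_b:
                wasted_a += votes_a - threshold     # winner's surplus votes
                wasted_b += votes_b                 # all of the loser's votes
            else:
                wasted_b += votes_b - threshold
                wasted_a += votes_a
            total += district_total
        return (wasted_a - wasted_b) / total

    # Party A wins four districts narrowly while B's voters are packed into
    # one, the signature of a pro-A map: the gap comes out around -34%.
    plan = [(55, 45), (55, 45), (55, 45), (55, 45), (20, 80)]
    print(f"{efficiency_gap(plan):+.1%}")

Whether a single summary number like this is “limited and precise” enough to satisfy Justice Kennedy is, of course, precisely what the Court will now consider.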

The decision to stay or not stay a lower court order often provides a peek at which side the Justices expect to prevail. And the five-member majority to stay the Wisconsin order – a majority including, unsurprisingly, Gorsuch, but more significantly Kennedy – suggests that at this point it is the conservative side’s case to lose.

Whatever the Court’s disposition of the Wisconsin case, gerrymandering remains a distinctive political evil, an aid to incumbency that promotes the interests of a permanent political class, and a worthy target for efforts at reform. I’ve written more on that here and here.

 

The federal government runs more than 2,300 subsidy programs, and they are all susceptible to fraud and other types of improper payments. The EITC program, for example, throws about $18 billion down the drain each year in such payments.

Perhaps the program that generates the most outrageous rip-offs is the $150 billion Social Security Disability Insurance (SSDI) program. From the Washington Post today:

Eric Conn, the fugitive attorney who pleaded guilty to orchestrating a scheme to defraud the federal government of $600 million, remains at large since he cut off his court-ordered GPS monitoring bracelet on June 2…Conn in March entered guilty pleas to defrauding the Social Security Administration via bribes he paid to a doctor and a judge to process and approve his clients’ disability claims. 

From 2006 to 2016, Conn processed 1,700 client applications for Social Security benefits with a potential of $550 million in lifetime benefits. Since the revelation of the allegations, the Social Security Administration has contacted many of Conn’s former clients with claims they owe as much as $100,000 for disability payments going back 10 years unless they can prove they have been disabled the entire time…

Conn’s fraud scheme was fueled by television advertisements that included a 3-D television ad from 2010 and one from 2009 in which Conn hired YouTube star “Obama Girl” and Bluegrass music legend Ralph Stanley to sing a version of “Man of Constant Sorrows” with new lyrics that refer to Conn as a “superhero without a cape” and to brag that Conn had “learned Spanish off of a tape.” In a rap video, Conn billed himself as Hispanic-friendly: “Even if you’re Latino, no need to worry cuz this gringo speaks the lingo.”

One greedy lawyer, a corrupt doctor and judge, some catchy jingles, and our government gets ransacked for $600 million. That’s not very comforting to taxpayers, is it?

In his study of SSDI for DownsizingGovernment.org, Tad DeHaven said, “SSDI is a classic example of a well-intentioned effort to provide modest support to truly needy people that has exploded into a massive entitlement that is driving up the federal deficit.”

DeHaven proposed these SSDI reforms: 

  • Cut the program’s average benefit levels.
  • Impose stricter eligibility standards to discourage claims from people who should be working.
  • Create a longer delay for the initial receipt of benefits to discourage frivolous applications.
  • Reduce the large number of appeals for people initially denied benefits.
  • Ensure greater quality control and consistency of decisions by officials and judges.
  • Create a “taxpayer advocate” in the administrative law process to challenge dubious claims made by applicants and their lawyers.
  • Apply continuous disability reviews of people receiving benefits in a more vigorous manner.

His study is here.
