Cato Op-Eds

Individual Liberty, Free Markets, and Peace

This is a very interesting development—one that’s been coming for a long time: Your car is a computer, some cars can be hacked, and now we know they can be hacked in dangerous ways.

The correct public policy response is implicit in this very good Wired article describing the whole thing. “Automakers need to be held accountable for their vehicles’ digital security,” writer Andy Greenberg says, quoting auto hacker Charlie Miller thus: “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers.”

That’s two very important consumer protection systems in a couple of brief sentences: In one, carmakers suffer lost sales if their cars are hackable or perceived as such. The market feedback system—including the article itself—causes automakers to work to make their cars less hackable.

In the other, carmakers suffer monetary damages if their cars are actually hacked in ways that cause injury. The common law tort system causes automakers to work to make cars less hackable. (I don’t know if this is what Greenberg had in mind for accountability, but it’s the legal accountability that’s already in place.)

Yes, these systems cause carmakers to seek to control perceptions of hackability and to deny responsibility when a harmful hack occurs. But on the whole they promote good behavior on the part of automakers, and safety for drivers.

Speaking of the common law, we are on the threshold of a sea change in how liability for software defects is apportioned by contract. Software has typically been sold or licensed without any guarantee of its fitness, letting the risk of software failures fall entirely on the purchaser. That model can’t apply where failures are dangerous, such as in driving controls and many implanted medical devices. There, software sellers are liable for failure.

As software grows more secure, and in applications where successful functioning is important, liability for flaws will shift to sellers. That should generally happen at the pace buyers demand, based on their willingness to pay.

As is typical, it is not the market processes and common law already husbanding automakers’ behavior that get the attention in Greenberg’s article. He writes of new legislation that would “set new digital security standards for cars and trucks.” Senators Markey (D-MA) and Blumenthal (D-CT) undoubtedly want drivers to be protected. What is open to question is whether any group of politicians in Congress and lawyers in federal agencies can set standards better than the myriad actors in the marketplace, allocating risks according to their desires and needs, under common obligations to protect others from harm.

My article in this week’s Washington Examiner magazine argues that because U.S. wars seem so cheap, they tempt us into making war too casually. I explain that while this tendency isn’t new, recent technology breakthroughs, which allowed the development of drones, have made it worse. We now make war almost like people buy movies or songs online, where low prices and convenience encourage purchase without much debate or consideration of value. I label the phenomenon one-click wars.

If we take occasional drone strikes as a minimum standard, the United States is at war in six countries: Pakistan, Somalia, Yemen, Syria, Afghanistan, and Iraq, with Libya likely to rejoin the list. In the first three, U.S. military action is exclusively the work of drones. Regular U.S. ground forces are present only in Iraq, where they avoid direct combat, and Afghanistan, where they mostly avoid it.

There’s something remarkable in that combination of militarism and restraint. How can we be so willing to make war but so reluctant to take risks in making it?

My explanation starts with power. Wealth, technological prowess, and military might give the United States unique ability to make war around the world. But labor scarcity, liberal values, and an isolated geography that makes the stakes remote limit our tolerance for sacrificing lives, even foreign ones, in war. This reluctance to bear the human costs of war leads to reliance on long-range technology, especially airpower.

Airpower, despite its historical tendency to fail without help from ground forces, always offers hope that we are only a few bombs away from enemy capitulation. The promise of cheap, clean wars is always alluring: they would let us escape the choice between the bloody sacrifices war entails and the liberal values it offends.

Recent developments added to our proclivity to go for the quick military fix. Innovations in surveillance and targeting greatly enhanced airstrikes’ accuracy and paved the way for armed drones. Jihadists spread out among complex Islamist insurgencies. Meanwhile, the wars in Iraq and Afghanistan restored the U.S. public’s aversion to casualties, which the September 11 attacks had suppressed.

The belief that we can fight riskless war is a good problem to have. Its causes are good things: wealth, power, and safety. But it is still a problem, for a couple of reasons.

One is a tendency to corrode democratic government and encourage dumb decisions. Our government’s division of war powers follows from the theory that conflict and debate about policy tend to improve it. That requires Congress to jealously guard its war powers. Unfortunately, it has tended to abdicate them where low costs keep the public uninterested. An engaged Congress is no antidote to dumb wars, but wars started by unchecked presidents are more likely to rely on dubious rationales and thus to be foolish.

The other problem is that wars are rarely as cheap as they initially seem. That’s especially true of drone strikes, I argue, because their costs are hard to see:

They initially either occur downrange, in the form of dead people whose families can’t vote, or in the future, as abstractions like resentment. Because these costs are slow to arrive and obscure, while the benefits are relatively concrete and immediate, drone strikes have a specious attraction. That makes them especially resistant to judicious debate.

I agree with those who argue that one of those risks is blowback, meaning delayed violence or diplomatic consequences. I also discuss the less-appreciated danger of escalation, where the strikes, by getting us involved in conflicts without winning them, create pressure for more costly measures.

My admittedly partial solutions to this problem of feckless war-making involve efforts to capture war costs up front to heighten debate about rationales. The piece mentions several ways to do so. It ends with the suggestion that because bombing people tends to produce unanticipated trouble, “those unwilling to pay much for wars should probably avoid them.”

Former Obama administration economist Jared Bernstein argues for higher taxes in a New York Times op-ed published yesterday. His piece begins:

Like it or not, the campaign season is upon us, and that almost certainly means somebody is going to try to buy your vote with a tax cut — even though average federal tax rates are already low in historical terms, our tax code remains tilted in favor of the wealthy, and our children, neighborhoods and infrastructure desperately need public investment.

I tried to use my imagination and think of how a thoughtful and intelligent liberal like Bernstein might conceive of tax policy. But I could not come up with any scenario under which this statement might be considered true: “our tax code remains tilted in favor of the wealthy.”   

The plain fact of the matter is that the federal tax system is highly graduated, or what liberals call “progressive.” Lower-income households pay much smaller shares of their income in taxes than do higher-income households.

In his article, Bernstein uses data from the respected Tax Policy Center (TPC), as I do here. The first table shows TPC estimates of average federal tax rates (total taxes divided by income) for U.S. households (specifically, “tax units”) in five income groups.

Average Federal Tax Rates, 2015

Income Group    Income Tax    Payroll Tax    Other Taxes    Total Taxes
Lowest             -5.0%          6.4%           2.2%           3.6%
Second             -1.9%          7.6%           2.1%           7.8%
Middle              2.9%          7.9%           2.3%          13.1%
Fourth              6.1%          8.4%           2.5%          17.0%
Highest            15.6%          6.0%           4.1%          25.7%

Source: Tax Policy Center estimates.

The average household in the highest group will pay 25.7 percent of its income toward taxes in 2015, which compares to 3.6 percent in the lowest group. The average household in the middle group will pay a rate about half that of the highest group. I don’t see how this data can be reconciled with Bernstein’s claim.
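For readers who want to check that comparison, here is a minimal sketch in Python using the total-tax rates from the TPC table above; the dictionary and helper function are ours, purely for illustration.

```python
# Average federal tax rates for 2015, from the Tax Policy Center
# estimates quoted above (total taxes as a percent of income).
TPC_TOTAL_RATES = {
    "Lowest": 3.6,
    "Second": 7.8,
    "Middle": 13.1,
    "Fourth": 17.0,
    "Highest": 25.7,
}

def rate_ratio(group_a: str, group_b: str) -> float:
    """How many times group_a's average rate exceeds group_b's."""
    return TPC_TOTAL_RATES[group_a] / TPC_TOTAL_RATES[group_b]

print(f"Highest vs. middle: {rate_ratio('Highest', 'Middle'):.1f}x")  # ~2.0x
print(f"Highest vs. lowest: {rate_ratio('Highest', 'Lowest'):.1f}x")  # ~7.1x
```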

Data from other sources show the same tilt in tax burdens toward high earners. Actually, “piling on” high earners is more accurate than “tilt.” The following screenshot is from Table A-6 in this Joint Committee on Taxation report. I’ve circled the key column. Average tax rates rise rapidly as income rises. The highest earners in 2015 will pay an average federal tax rate of 33.1 percent, which is about twice the rate of those with middling incomes, and many times the rate of people at the bottom.

[Screenshot: Joint Committee on Taxation report, Table A-6, showing average federal tax rates by income group.]

Perhaps Bernstein meant “tilted in favor of the wealthy” compared to other countries. But we have pretty solid data showing that is not correct either. The Tax Foundation summarizes OECD data here showing that the U.S. has the most graduated, or progressive, tax system among the high-income nations.

Bernstein is right that the “campaign season is upon us.” But that doesn’t give him license to tilt tax data upside down to fit his policy narrative.

Criticizing my recent post-mortem on King v. Burwell, Scott Lemieux kindly calls me “ObamaCare’s fiercest critic” for my role in that ObamaCare case. Other words he associates with my role include “defiant,” “ludicrous,” “farcical,” “dumber,” “snake oil,” “ludicrous” (again), “irrational,” “aggressive,” “comically transparent,” and “dishonest.”

Somewhere amid the deluge, Lemieux reaches his main claim, which is that (somehow) I admitted: “the King lawsuit wasn’t designed to uphold the statute passed by Congress in 2010. It was intended to ‘enfranchise’ the people who voted against the bill.” I’m not quite sure what Lemieux means. But perhaps Lemieux doesn’t understand my point about how the Supreme Court helped President Obama disenfranchise his political opponents.

As all nine Supreme Court justices acknowledged in King, “the most natural reading of the pertinent statutory phrase” is that Congress authorized the Affordable Care Act’s premium subsidies, employer mandate, and (to a large extent) individual mandate only in states that agreed to establish a health-insurance “Exchange.” That is, all nine justices agreed that the plain meaning of the operative statutory language allows states to veto key provisions of the ACA—sort of like the Medicaid veto that has existed for 50 years and lets states destroy health insurance for millions of poor Americans. The Exchange veto includes the power to shield millions of state residents from the ACA’s least-popular provisions: the individual mandate and the employer mandate.

When 34 states exercised those vetoes by refusing to establish Exchanges, it was a repudiation of the ACA, driven by voters who elected ACA opponents to statewide offices in 2009, 2010, 2011, and 2012. In the 2010 elections, the first held after President Obama signed the ACA, Republicans netted six governorships and 22 state legislative chambers, and took outright control of more state legislatures than they had at any time since 1952. During 2012, when states had to make the crucial decision about whether to establish an Exchange, Republicans controlled 29 governorships, both legislative chambers (and thus the entire legislature) in 26 states, and one legislative chamber in a further five states. That doesn’t even include Nebraska’s legislature, which is unicameral, non-partisan, and also refused to establish an Exchange. Public opposition to the ACA was a major—if not the major—factor in these gains, as well as in GOP gains in Congress. This should not be surprising, given the ACA’s instant and enduring unpopularity.

So when President Obama chose to implement those subsidies and mandates in states that refused to establish Exchanges, he wasn’t just exceeding the powers Congress granted him under the ACA. He wasn’t just imposing taxes and spending money directly contrary to “the most natural reading of the pertinent statutory phrase.” He was actively disempowering his political opponents in the states by stripping state officials of a veto that “the most natural reading of the pertinent statutory phrase” granted them. He disenfranchised Republican and independent voters—individual Americans who put those state officials into office—by taking away the effect of their votes.

And when the Supreme Court strained to reach its counter-textual King ruling, it did not merely ratify these never-authorized taxes and entitlements. It also ratified the president’s sweeping attempt to disenfranchise his political opponents. Nixon had nothing on these guys.

In the face of the disenfranchisement of millions of voters for partisan advantage, Lemieux responds with hand-waving: “Allegedly, single special elections in 2009 and 2011 are supposed to be decisive repudiations of the Affordable Care Act.” I honestly don’t know what he’s talking about. The relevant 2009 and 2011 elections were for state offices in New Jersey and Virginia, and were not special elections. The only special election that really bears on any of this was Scott Brown’s special election to the U.S. Senate from Massachusetts—also a repudiation of the ACA—and that was in January 2010.

Lemieux claims the 2012 presidential race is a more accurate reflection of the demos than pesky state and congressional elections anyway, because “the man who signed the ACA [got] nearly five million more votes than his opponent.” That’s true, but it hardly means what Lemieux implies. Since the president’s opponent had supported an identical law while governor of Massachusetts, but then distanced himself from it, one could as credibly claim the voters would have preferred an ObamaCare opponent, but barring that they preferred sincerity to opportunism. That interpretation would be fortified by the fact that those same voters have consistently opposed the ACA, and elected a Congress (it’s a law-making body) devoted to repealing it.

As the high concentration of colorful descriptors in Lemieux’s blog post suggests, it could stand even more debunking. But I’ll leave off here to see if he and I can at least agree on where we disagree.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

In case you missed it, the House Natural Resources Committee held a hearing this week examining the Administration’s determination of the social cost of carbon—that is, how much future damage (out to the year 2300) the Administration deems is caused by the climate change that results from each emitted (metric) ton of carbon dioxide.

As you may imagine from this description, determining a value of the social cost of carbon is an extremely contentious issue, made more so by the fact that the Obama Administration requires that the social cost of carbon, or SCC, be included in the cost/benefit analysis of all federal actions (under the National Environmental Policy Act, NEPA) and proposed regulations.

Years ago, we warned about how powerful a tool the SCC was in the Administration’s hands, and we have worked to raise the level of public awareness. To summarize our concerns:

The administration’s SCC is a devious tool designed to justify more and more expensive rules and regulations impacting virtually every aspect of our lives, and it is developed by violating federal guidelines and ignoring the best science.

The more people know about this, the better.

Our participation in the Natural Resources Committee hearing helped further our goal.

That the hearing was informative, contentious, and well-attended by both committee members and the general public is a testament to the fact that we have been at least partly successful in elevating the SCC from an esoteric, “wonky” subject to one that is, thankfully, starting to get the attention it deserves.

In this edition of You Ought to Have a Look, we highlight excerpts from the hearing witnesses, who, along with our own Dr. Patrick Michaels, included Dr. Kevin Dayaratna (from The Heritage Foundation), Scott Segal (from the Policy Resolution Group) and Dr. Michael Dorsey (from US Climate Plan). The full written submissions by the witnesses are available here.

For what it’s worth, Michaels, Dayaratna, and Segal were majority witnesses, while Dorsey was the witness selected by the minority. During the hearing, US Climate Plan (the organization that Dorsey co-founded and for which he serves as Vice President of Strategy) sent out this tweet:

So in the name of fair play, we’ll focus on the testimony of the other three witnesses.

Scott Segal’s testimony detailed why the SCC was a misguided concept that is inappropriate as a basis for federal rulemaking and administrative action, and further, violates the Administrative Procedures Act. Basically, Segal thinks the current SCC determination procedure should be dispensed with. Here’s a tidbit:

[A]s the President’s Climate Action Plan comes further into focus, more and more regulations claiming to reduce carbon emissions as a primary or secondary benefit will use SCC to appear cost-beneficial when the truth might be otherwise. When actual environmental benefits fail to satisfy a skeptical audience, SCC should not be used as Hamburger Helper to make the dish look larger than it really is.

In his testimony, Kevin Dayaratna focused on the models used by the Administration to calculate the social cost of carbon and how sensitive they are to changes in the initial assumptions. Using a different set of assumptions, completely in keeping with other federal guidelines and mainstream science, Dayaratna reported:

Interestingly, under a reasonable set of assumptions, the SCC is overwhelmingly likely to be negative, which would suggest the government should, in fact, subsidize (not limit) carbon dioxide emissions. I do not use these results to suggest that the government should actually subsidize carbon dioxide emissions, but rather to illustrate the extreme sensitivity of these models to reasonable changes to assumptions.

He went on to reasonably conclude:

Our results clearly illustrate that the models used to estimate the SCC are far too sensitive to reasonable changes in assumptions to be useful tools for policymaking.
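To see why such sensitivity arises, consider a toy discounting exercise of our own (not one of Dayaratna’s model runs). The SCC is, at bottom, a discounted sum of projected marginal damages out to the year 2300, so the discount rate alone can move the estimate severalfold. The flat $1-per-year damage stream below is a hypothetical chosen purely for illustration.

```python
# Present value of a constant, hypothetical damage stream of $1 per year
# per ton of CO2, out to the year 2300, under several discount rates.
# The point is the spread across rates, not any particular dollar figure.

def present_value(damage_per_year: float, years: int, rate: float) -> float:
    """Discounted sum of a constant annual damage stream."""
    return sum(damage_per_year / (1 + rate) ** t for t in range(1, years + 1))

HORIZON = 2300 - 2015  # damage horizon described in the text

for rate in (0.025, 0.03, 0.05, 0.07):
    pv = present_value(1.0, HORIZON, rate)
    print(f"discount rate {rate:.1%}: present value = ${pv:.1f} per ton")
```

Running this shows the present value falling from roughly $40 per ton at a 2.5 percent discount rate to about $14 at 7 percent, a nearly threefold swing from one assumption alone.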

And finally, Pat’s testimony focused on the scientific shortcoming of the Administration’s SCC procedures. Here’s a juicy section from his written testimony:

Here, I address why this decision [not to alter the SCC in light of new science] was based on a set of flimsy, internally inconsistent excuses and amounts to a continuation of the [Administration’s] exclusion of the most relevant science—an exclusion which assures that low, or even negative values of the social cost of carbon (which would imply a net benefit of increased atmospheric carbon dioxide levels), do not find their way into cost/benefit analyses of proposed federal actions. If, in fact, the social cost of carbon were near zero, it would eliminate the justification for any federal action (greenhouse gas emissions regulations, ethanol mandates, miles per gallon standards, solar/wind subsidies, DoE efficiency regulations, etc.) geared towards reducing carbon dioxide emissions.

A video of Pat’s oral testimony is also available.

No matter how you look at it, whether from the science, the economics, the modeling, or the procedures involved, the Administration’s determination of the social cost of carbon is simply not up to snuff. Rather than exerting a major influence on virtually all federal actions, it ought simply to be discarded, as it is largely unfixable at this point.

Nearly a month ago Greek voters rejected more economic austerity as a condition of another European bailout. Today Athens is implementing an even more severe austerity program.

Few expect Greece to pay back the hundreds of billions of dollars it owes. Which means another economic crisis is inevitable, with possible Greek exit (“Grexit”) from the Eurozone.

Blame for the ongoing crisis is widely shared. Greece has created one of Europe’s most sclerotic economies. And the Eurocrats, an elite including politicians, journalists, businessmen, and academics, were determined to create a United States of Europe irrespective of the wishes of European peoples.

European leaders welcomed Athens into the Eurozone in 2001 even though everyone knew the Greek authorities were lying about the health of their economy. Economics was secondary.

Unfortunately, equalizing exchange rates cemented Greece’s lack of international competitiveness. Enjoying an inflated credit rating, Greece borrowed wildly and spent equally promiscuously on consumption.

Greece could have simply defaulted on its debts. However, Paris and Berlin, in particular, wanted to rescue their improvident banks which held Athens’ debt.

Thus, in return for tough loan conditions most of the Greek debt was shifted onto European taxpayers through two bail-outs costing roughly $265 billion. Greece’s economy has suffered, and the leftwing coalition party Syriza won Greece’s January election. Impasse resulted at the end of June as the second bailout expired.

Athens denounced its creditors for insisting on repayment. Prime Minister Alexis Tsipras criticized “ultimatums, blackmail and fearmongering.”

But writing off Greek debt would require European governments to confess their financial folly to their taxpayers. Restructuring Greek debt also would set off similar demands from other heavily indebted states.

Moreover, Eurocrats committed to a consolidated continental government refused to consider a Grexit. For decades European elites have simply rolled over any opposition.

So now what?

Tsipras encouraged his people to reject their creditors’ best offer, but then almost immediately announced that he was forced to request a third bailout.

After bitter debate Euro leaders offered some $96 billion. But they insisted on even tougher conditions than before.

Alas, assuming a new deal is formalized, there is little chance that it will work. The European Commission admitted that success required “very strong ownership of the Greek authorities” of reforms. But that has never been the case.

While past Athens governments reduced easily measured outlays, they failed at more fundamental restructuring. In pressing parliament to approve the latest program, Tsipras announced: “The government does not believe in these measures.”

Understandably, European distrust of Athens is deep. Moreover, the IMF, a party to the first two bail-outs, proclaimed that the latest agreement is not viable.

Germany suggested “reprofiling” the debt, that is, further lengthening maturities and reducing interest payments. However, Euro leaders insisted that any relaxation of Greece’s debt burden could only follow full implementation of the reform program. And even “a very substantial re-profiling,” noted the European Commission, “would still leave Greece with very high debt-to-GDP levels for an extended period.”

Perhaps even more significant, as I noted in Forbes online, “instead of advancing continental consolidation the common currency has become an obstacle to European political union. Populist parties are rising across Europe. Some oppose austerity and others criticize bailouts, but all appeal to people who feel ignored and victimized by the Eurocrats and other elites.”

Indeed, the latest plan is dividing long-time allies. Le Figaro reported on “extremely hard, even extremely violent” discussions among the European governments. German Finance Minister Schaeuble publicly challenged Chancellor Merkel.

Syriza could break apart. Tsipras won initial parliamentary approval for the new deal only with opposition support. On a second vote he lost 32 Syriza deputies, more than half of the party executive committee, and three cabinet members. New elections are likely this fall.

Europe’s leaders want to believe that they have solved the latest Greek crisis. However, the third bailout likely will not be the final word. For the first time in decades, the European Project is in serious doubt.

Like certain weeds and infectious diseases, some myths about banking seem beyond human powers of eradication.

I was reminded of this recently by a Facebook correspondent’s reply to my recent post on “Hayek and Free Banking.” “We had free banking in the US from 1830 until 1862,” he wrote. “It didn’t work out too well.” “During the Wildcat Era,” he added, “banks were unregulated and failed by the hundreds.”

Imagine the effect my critic must have anticipated — the crushing blow his revelations would surely deal to my cherished beliefs. Upon reading his words, my eyes widen; my jaw goes slack. Can this really be so?, I ask myself? I read the ominous sentences again, more slowly, sub-vocalizing. Beads of sweat gather across my brow. Then, pursing my lips, my eyes downcast, I turn my head, first left, then right, then left again. If only I had known! All these years…no one ever…I mean, how was I supposed…it never occurred to me… DARNITALL! Why didn’t I think of looking at the U.S. experience before shooting my mouth off about free banking?

Well, that isn’t what happened. “What cheek this fellow has!” was more like it. (OK, it wasn’t exactly that, either.) Of course I’ve looked into the U.S. record. So has Larry White. And Kevin Dowd. And every other dues-paying member of the Modern Free Banking School. We’ve looked into it, and we’ve found nothing there to change our minds concerning the advantages of freedom in banking.

So what about all those “unregulated” wildcats? First of all, there’s never been a time in U.S. history when banking was truly unregulated, or anything close. Up until 1837, just getting permission to open a bank was a hard slog, when it wasn’t altogether impossible. Here’s Richard Hildreth’s tongue-in-cheek description of how one went about becoming a banker back in 1837:

The first thing is, to get a charter. One from the General Government, with exclusive privileges, and a clause prohibiting the grant of any other bank, is esteemed best of all. But such a charter is a non-such not easy to be got.[1]

Next best is a State Bank, in which the state government takes a portion of the stock, with a clause, if possible, prohibiting the grant of any other bank within the state. But if such a bank is not to be had, a bare charter, without any exclusive privileges, should be thankfully accepted.

It is very desirable however, that no other bank should be permitted in the county, city, town or village, in which the new bank is established; and all existing banks, are to join together upon all occasions, in a solemn protest against the creation of any new banks, declaring with one voice, that the multiplication of small banks, — which, by way of emphasis, may be denounced, as “little peddling shaving shops,” — is ruinous to the country, produces a scarcity of money, &c. &c. &c.

In order to obtain a charter, it is necessary to be on good terms with the legislature applied to. Obstinate opposers may be silenced by the promise of a certain number of shares in the stock, — which shares, if very obstinate, they must be allowed to keep without paying for.

This being properly prepared, a petition is to be presented to the legislature, representing that in the town of ——–, the public good requires the establishment of a bank. … The bank is to be asked for, solely on public grounds; not a whisper about the profits the petitioners expect to make by it.

If the petition is coolly received, it may be well to revise the private list of stock-holders, and to add the names of several of the legislators. …

If nothing better can be done, employ some influential politician to procure a charter for you, and buy him out at a premium.[2]

When Hildreth wrote, around 600 U.S. banks were in business. That may seem like plenty. But the fact that the vast majority of these were in the northeast, and that hardly any had branches, meant that most U.S. communities still had no banks at all. In most territories and states west of the Mississippi, becoming a banker wasn’t just difficult: it was illegal.

1837 was also, however, the year in which Michigan passed a “free banking” law, becoming the first of thirteen states that would pass similar laws over the course of the next two decades. The laws provided for something akin to a general incorporation procedure for banks, making it unnecessary for state legislators to vote on specific bank bills, and to that extent improved upon the former bank-by-bank charter or “spoils” system. But despite the name, which suggested, if not completely unregulated banking, at least the sort of lightly-regulated banking for which Scotland was then famous, the laws didn’t even come close to allowing American banks the freedoms that their Scottish counterparts enjoyed. Indeed, the restrictions imposed on U.S. “free” banks proved so onerous that the laws don’t even appear to have achieved a substantial overall easing of entry into the banking business.[3]

Two rules, common to all U.S. free banking laws, were to have especially important consequences. The first denied U.S. “free” banks the right to establish branches—something their Scottish counterparts were famous for doing, and that even some chartered U.S. banks could and did do. The other required them to secure their notes using specific securities, which were to be lodged for safekeeping with state banking authorities. U.S. “free” banks were not free, in other words, to decide how to employ the funds represented by their notes, which were in those days a more important source of bank funding than bank deposits. Such “bond deposit” requirements were also unknown in the Scottish system.

So U.S. “free” banks were hardly “unregulated.” They did, however, “fail by the hundreds”– 2.42 hundred, to be precise, which was no small portion of the total. The question is, why did so many American “free” banks fail? Was it because they weren’t regulated enough? No sir: it was because they were over-regulated: the free banking laws of several states forced banks to invest in very risky securities — and especially in risky state government bonds — while the rule against branching limited their ability to diversify around this risk, especially by relying more on deposits than on notes. It was owing to these restrictive components of U.S.-style free banking that scads of American free banks ended up going bust.

And that’s not just one kooky free banker’s opinion: it’s the opinion of every competent monetary historian who has looked into the matter.[4] According to Matt Jaremski, whose 2010 Vanderbilt U. dissertation is the most careful study to date, the bond-deposit requirements of antebellum free-banking laws “seem to be the underlying cause of the free banking system’s [sic] high failure rate relative to the charter banking system. While bond price declines were significantly correlated with free bank failures, they were not correlated with the failure rate of charter banks.” Moreover, it wasn’t the general level of bond prices that mattered, but only the prices of specific securities that banks were legally obliged to purchase.

And “wildcat” banking? It’s no coincidence that that expression appears to have first gained currency, so to speak, in Michigan in the 1830s, where it was used to refer to some of the more disreputable banks established under that state’s original free banking law.[5] That law proved such a fiasco that it was repealed just two years later, after inflicting heavy losses on innocent note holders.[6] The law appears to have encouraged more than a few bankers to throw large quantities of their notes onto the market, while situating their banks as remotely as possible, the better to avoid pesky redemption requests. But here, as with U.S. free bank failures generally, regulations were to blame. It just so happened that the securities banks were encouraged to hold under Michigan’s law were especially lousy, consisting as they did “either of bonds and mortgages upon real estate within this state or in bonds executed by resident freeholders of the state.”[7] Call it the Wild West version of Community Reinvestment.

Notwithstanding what happened in Michigan, and all the attention it received, “wildcat” banking, understood to mean banking of the fly-by-night sort, was actually quite rare. In Wisconsin, Indiana, and Illinois, whose free banking laws also proved disastrous, it was unimportant, if not altogether unknown; even in Michigan itself it doesn’t seem to have survived the first free-banking law.[8] Indeed, the all-around record of U.S.-style free banking improved significantly as the Civil War approached. Even banknote discounts — another consequence of unit banking that has been wrongly treated as a necessary consequence of having multiple banks of issue — had become almost trivial by the early 1860s. According to my own research, someone who, in October 1863, was foolish enough to purchase every non-Confederate banknote in the country for its full face value, in order to sell the notes to a broker in either Chicago or New York, would have suffered a loss on that transaction of less than one percent of his or her investment.[9] That’s less than the cost merchants incur today when they accept credit cards, or what people typically pay to withdraw cash from an ATM that doesn’t belong to their own bank.
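As a rough sketch of that back-of-the-envelope calculation: buy every note at face value, sell the lot to a broker at the quoted discounts, and measure the loss. The face values and discounts below are invented placeholders, not the actual October 1863 figures.

```python
# Hypothetical banknote portfolio: (face value purchased, broker discount).
# A discount of 0.005 means the broker pays 99.5 cents on the dollar.
notes = [
    (1_000_000, 0.005),
    (2_500_000, 0.002),
    (500_000, 0.015),
]

total_face = sum(face for face, _ in notes)
proceeds = sum(face * (1 - discount) for face, discount in notes)
loss_pct = 100 * (total_face - proceeds) / total_face

print(f"Loss on the round trip: {loss_pct:.2f}% of the investment")  # ~0.44%
```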

The best reason I can think of for the persistence of the myth of rampant wildcat banking is simply that stories about it made for more titillating reading than ones about the mass of less colorful, if no less unfortunate, free-bank failures. Wildcat banking is to the history of banking what the O.K. Corral and Wild Bill Hickok are to the history of the far west.

Somewhat harder to account for is the fact that, in America at least, “free banking” has come to refer exclusively to the antebellum U.S. episodes (as well as to a similar — and mercifully short-lived — Canadian experiment). The expression was, after all, appropriated by U.S. state legislators for the sake of its appealing connotations, after having been in use for some time overseas, where it and its equivalents (“la liberté des banques,” “bankfreiheit,” etc.) continued to stand for genuinely unregulated banking, or something close to it. Sheer parochialism is, I’m afraid, partly to blame: many authorities on American banking, whether economists, historians, or economic historians, appear to be unfamiliar with European writings on free banking, or with the banking systems those writings regard as exemplary.

The limited interest that even some of the more painstaking authorities on U.S. style “free” banking have shown in free banking of the other sort seems to me a shame. After all, what could be more informative than to compare, say, Michigan’s experience with Scotland’s, so as to gain a better understanding of the consequences of laissez-faire banking on the one hand and of certain departures from laissez faire on the other? By failing, not only to make such comparisons, but (in some cases) to even recognize non-U.S.-style free banking and the literature concerning it, such experts have unwittingly encouraged people to confuse U.S.-style “free banking” with the real McCoy.

_________________

[1] Thanks to Andrew Jackson’s efforts, the Charter of the 2nd Bank of the United States had been allowed to expire the year before.

[2] Richard Hildreth, The History of Banks (Boston: Hilliard, Gray & Company, 1837), pp. 97-8.

[3] See Kenneth Ng, “Free Banking Laws and Barriers to Entry in Banking, 1838-1860.” Journal of Economic History 48 (4) (December 1988). Since the Scottish system was itself essentially a “charter” system, entry into it was also strictly limited. Limited entry was, indeed, the most important of several departures of pre-1845 Scottish banking from genuine laissez faire.

[4] See, among other works, Hugh Rockoff, The Free Banking Era: A Reexamination (New York: Arno Press, 1975); Arthur J. Rolnick and Warren E. Weber, “The Causes of Free Bank Failures: A Detailed Examination,” Journal of Monetary Economics 14 (3) (November 1984); Gerald P. Dwyer, “Wildcat Banking, Banking Panics, and Free Banking in the United States,” Federal Reserve Bank of Atlanta Economic Review, December 1996; Howard Bodenhorn, State Banking in Early America: A New Economic History (New York: Oxford University Press, 2003); and Matthew S. Jaremski, “Free Banking: A Reassessment Using Bank-Level Data” (PhD Dissertation, Vanderbilt University, August 2010).

[5] Dwyer, p. 1.

[6] Michigan took another, more successful stab at free banking in 1857.

[7] Dwyer, p. 6.

[8] Ibid., pp. 9-10, and the studies mentioned therein.

[9] See my article, “The Suppression of State Banknotes.” Economic Inquiry 38 (4) (October 2000).

[Cross-posted from Alt-M.org]

Today, the Justice Department indicted Dylann Roof on 33 federal hate crime charges for the killings of nine people at Emanuel A.M.E. church in Charleston last month. This indictment is entirely unnecessary.

Hard as it may be for some to imagine now, there was a long time in this country when racially and politically motivated violence against blacks was not prosecuted by state and local authorities. Or sometimes, as in the case of Emmett Till—the young boy from Chicago who was lynched in Mississippi for allegedly being too forward with a white woman—prosecution was a farce and the perpetrators were acquitted.

But in the present case, South Carolina authorities moved quickly and effectively to catch Roof and did not hesitate to charge him with nine counts of murder. This was South Carolina’s duty, and its law enforcement officers appear to have performed professionally and competently.

The Department of Justice should be more judicious with its funds and resources. The opportunity costs of a duplicative prosecution take resources away from crimes that fall more appropriately within the federal purview, such as interstate criminal enterprises and government corruption. Today’s indictment is federal meddling in a case the state already has under control.

Even if some wholly unlikely chain of events leads to Roof’s acquittal, the DOJ could push forward with its prosecution at that time. But, in reality, that isn’t going to happen, and no one at DOJ thinks it will. By declining to wait for the outcome of the state’s prosecution, the DOJ strongly suggests that it wants to assume jurisdiction over Roof’s prosecution. This indictment is thus an unabashed political move.

While the murders were rightly condemned as a national tragedy, they were above all a tremendous blow to the community of Charleston and to the state as a whole. As such, the primary responsibility for prosecuting Dylann Roof belongs to South Carolina. Neither national grief nor DOJ politics should stand in the way of South Carolina’s prerogative to deliver justice on its own terms.

UPDATE: Shortly after this post went live, U.S. Attorney General Loretta Lynch released a statement on the indictment. Notably, she referred to the state and federal cases as “parallel prosecutions.” But Roof cannot be in two courtrooms at the same time and so one proceeding will have to take place before the other.

It is hard to identify any justice interest served by federal prosecution. Rather, this appears to be for the institutional interests of the Justice Department. 

Karl Marx was wrong about many things but right about one thing: the revolutionary way capitalism attacks and destroys feudalism. As I explain in a new study,  in India, the rise of capitalism since the economic reforms of 1991 has also attacked and eroded casteism, a social hierarchy that placed four castes on top with a fifth caste—dalits—like dirt beneath the feet of others. Dalits, once called untouchables, were traditionally denied any livelihood save virtual serfdom to landowners and the filthiest, most disease-ridden tasks, such as cleaning toilets and handling dead humans and animals. Remarkably, the opening up of the Indian economy has enabled dalits to break out of their traditional low occupations and start businesses. The Dalit Indian Chamber of Commerce and Industry (DICCI) now boasts over 3,000 millionaire members. This revolution is still in its early stages, but is now unstoppable.

Milind Kamble, head of DICCI, says capitalism has been the key to breaking down the old caste system. During the socialist days of India’s command economy, the lucky few with industrial licenses ran virtual monopolies and placed orders for supplies and logistics entirely with members of their own caste. But after the 1991 reforms opened the floodgates of competition, businesses soon discovered that to survive, they had to find the most competitive inputs. What mattered was the price of your supplier, not his caste.

Many tasks earlier done in-house were contracted out for efficiency, and this opened new spaces that could be filled by new entrepreneurs, including dalits. DICCI members had a turnover of half a billion dollars in 2014 and aim to double it within five years. Kamble says dalits have ceased to be objects of pity and are becoming objects of envy. They are no longer just job-seekers; they are now job creators.

Even in rural areas, dalits have increasingly moved up the income and social ladders in the last two decades.  One survey in the state of Uttar Pradesh shows the proportion of dalits owning brick houses is up from 38 percent to 94 percent, the proportion running their own businesses is up from 6 percent to 36.7 percent, and the proportion owning cell phones is up from zero to one-third. Some former serfs have now become bosses. A rising proportion have become land-owners, and sometimes hire upper-caste workers. Even more revolutionary, say dalits, is the change in their social status. Once they were virtually bonded laborers, and could not eat or drink with the upper castes. Today the bonded labor system is almost gone, and dalits operate restaurants at which upper castes eat and drink. They remain relatively poor and discriminated against, but economic reform since 1991 has revolutionized their social and economic status.

Here we introduce a new feature from the Center for the Study of Science, “On the Bright Side.” OBS will highlight the beneficial impacts of human activities on the state of our world, including improvements to human health and welfare, as well as the natural environment. Our emphasis will typically focus on the oft-neglected positive externalities of carbon dioxide emissions and associated climate change. Far too often, the media, environmental organizations, governmental panels and policymakers concentrate their efforts on the putative negative impacts of potential CO2-induced global warming. We hope to counter that pessimism with a heavy dose of positive reporting on the considerable good humans are doing for themselves and for the planet.

According to Piao et al. (2015), the reliable detection and attribution of changes in vegetation growth are essential prerequisites for “the development of successful strategies for the sustainable management of ecosystems.” And indeed they are, especially in today’s world in which so many scientists and policy makers are concerned with what to do (or not do) about the potential impacts of CO2-induced climate change. However, detecting vegetative change, let alone determining its cause, can be an extraordinarily difficult task to accomplish. Nevertheless, that is exactly what Piao et al. set out to do in their recent study.

More specifically, the team of sixteen Chinese, Australian and American researchers set out to investigate trends in vegetational change across China over the past three decades (1982-2009), quantifying the contributions from different factors including (1) climate change, (2) rising atmospheric CO2 concentrations, (3) nitrogen deposition and (4) afforestation. To do so, they used three different satellite-derived Leaf Area Index (LAI) datasets (GLOBMAP, GLASS, and GIMMS) to detect spatial and temporal changes in vegetation during the growing season (GS, defined as April to October), and five process-based ecosystem models (CABLE, CLM4, ORCHIDEE, LPJ and VEGAS) to determine the attribution.
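As a rough illustration of the detection step, here is a minimal sketch of the kind of per-pixel trend test such studies imply: fit a least-squares line to a growing-season LAI series and flag greening that is significant at the 95 percent level. The data below are synthetic, and Piao et al.’s actual processing of the three datasets is considerably more involved.

```python
# Least-squares trend test on a synthetic growing-season LAI series
# covering the 1982-2009 study period.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
years = np.arange(1982, 2010)
# Synthetic series: slight upward (greening) trend plus noise.
lai = 2.0 + 0.01 * (years - 1982) + rng.normal(0.0, 0.05, years.size)

fit = linregress(years, lai)
greening = fit.slope > 0 and fit.pvalue < 0.05  # significant at 95% level
print(f"trend = {fit.slope:.4f} LAI/yr, p = {fit.pvalue:.3g}, "
      f"significant greening: {greening}")
```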

With respect to detection, this work revealed that most regions of China experienced a greening trend indicative of enhanced growth across the time period studied (see Figure 1). Overall, 56 percent of the area studied experienced a significant increase in greening (at the 95% level) when using the GLOBMAP dataset, compared with 54 and 31 percent using the GLASS and GIMMS datasets, respectively. The regions with the largest greening trends include southwest China and part of the North China Plain.

Figure 1. Spatial distribution of the trend in LAIGS over the period 1982–2009 as calculated by the GIMMS dataset (a), GLOBMAP dataset (b) and the GLASS dataset (c). The frequency distribution of the significance level (P value) of the trends calculated for the three LAIGS datasets is shown in panel d.

With respect to attribution, Piao et al. report that “the combined effect of CO2 fertilization and climate change with the effect of nitrogen deposition, leads to the conclusion that these three factors are responsible for almost all of the average increasing trend of LAIGS observed from the satellites” (see Figure 2). They also report that “at the country scale, the average trend of LAIGS attributed to rising CO2 concentration is estimated to be … about 85% of the average LAIGS trend estimated by satellite datasets,” while noting secondarily that the enhanced nitrogen deposition driven by fossil fuel combustion and agricultural fertilization is likely the source of the remaining portion of China’s enhanced vegetation growth, citing the findings of Reay et al. (2008), Thomas et al. (2009), Fleischer et al. (2013) and Yu et al. (2014).

Figure 2. Trend in China’s LAIGS over the period 1982–2009 at the country scale for the three satellite remote sensing datasets and five process models described in the text above. Significance levels of 95 and 99 percent are denoted with one and two asterisks, respectively. See the authors’ original text (Piao et al., 2015) for additional explanation of this figure.

In considering the researchers’ several findings, it is clear that the fossil fuel combustion that has resulted in the rise in atmospheric CO2 and enhanced nitrogen deposition over the past three decades has provided a great benefit to Chinese vegetation. As illustrated in Figure 2, led primarily by the increase in CO2, that benefit has been more than sufficient to compensate for the negative effects of climate change that also occurred over that time period. Thus, it would seem far more prudent to celebrate CO2 instead of demonizing it, like so many people incorrectly do these days; for atmospheric CO2 is truly the elixir of life!


References

Fleischer, K., Rebel, K.T., Molen, M.K., Erisman, J.W., Wassen, M.J., van Loon, E.E., Montagnani, L., Gough, C.M., Herbst, M., Janssens, I.A., Gianelle, D. and Dolman, A.J. 2013. The contribution of nitrogen deposition to the photosynthetic capacity of forests. Global Biogeochemical Cycles 27: 187-199.

Piao, S., Yin, G., Tan, J., Cheng, L., Huang, M., Li, Y., Liu, R., Mao, J., Myneni, R.B., Peng, S., Poulter, B., Shi, X., Xiao, Z., Zeng, N., Zeng, Z. and Wang, Y. 2015. Detection and attribution of vegetation greening trend in China over the last 30 years. Global Change Biology 21: 1601-1609.

Reay, D.S., Dentener, F., Smith, P., Grace, J. and Feely, R.A. 2008. Global nitrogen deposition and carbon sinks. Nature Geoscience 1: 430-437.

Thomas, R.Q., Canham, C.D., Weathers, K.C. and Goodale, C.L. 2009. Increased tree carbon storage in response to nitrogen deposition in the U.S. Nature Geoscience 3: 13-17.

Yu, G., Chen, Z., Piao, S., Peng, C., Ciais, P., Wang, Q., Li, X., and Zhu, X. 2014. High carbon dioxide uptake by subtropical forest ecosystems in the East Asian monsoon region. Proceedings of the National Academy of Sciences USA 111: 4910-4915.

A recent AP/GfK poll finds a slim majority (51%) of Americans say businesses with religious objections should be required to serve same-sex couples. Forty-six percent (46%) say these business owners should be allowed to refuse service. However, for business owners specifically offering wedding-related services, Americans say these particular businesses “should be allowed to refuse service” by a margin of 59% to 39%. 

These results correspond with the nuanced argument Cato scholar Roger Pilon recently made in the Wall Street Journal. Pilon explains that businesses open to the public ought to serve everyone; however, business owners with religious objections (who are not a monopoly) should not be forced to participate in the creative act of planning and participating in the wedding of a same-sex couple. Referring to two couples who recently were heavily fined for declining to provide services for same-sex weddings, he writes:

“Because they represent their businesses as open to the public, the Kleins and Giffords shouldn’t be able to deny entrance and normal service to gay customers… But it is a step further—and an important one—to force religious business owners to participate in a same-sex wedding, to force them to engage in the creative act of planning the event, baking a special-order cake for it, photographing it, and so on.”

Americans seem inclined to support this nuanced view that businesses should serve all customers regardless of sexual orientation, religion, race, gender, income, national origin, etc.—but that wedding-related businesses requiring owners’ direct participation in the wedding should not be forced to provide service against the owners’ religious beliefs.

Just because the public thinks wedding-related business owners ought not be forced to serve same-sex couples doesn’t mean the public opposes same-sex marriage. Polls show (here, here) that in advance of June’s Supreme Court ruling, roughly 6 in 10 Americans supported legalizing marriage for same-sex couples.

Taken together, these polls suggest that a sizeable share of Americans think the state should not prohibit same-sex couples from getting married and that the state should not require wedding-related business owners with religious objections to provide services against their will.

In blogs over the last several months, I have revisited the fiscal records of the eight Republican presidential candidates who have gubernatorial experience. As the 2016 race heats up, the candidates will begin making many promises on tax and spending issues, but will we be able to believe them?

The records show that some governors worked hard to limit the size and scope of government. Others grew government with more spending and higher taxes. The candidates fit into three categories: the “A’s,” the falling grades, and the consistent “B’s”.

Three of the former governors earned at least one “A” during their tenure: George Pataki of New York, Jeb Bush of Florida, and Bobby Jindal of Louisiana. Pataki earned high marks for slashing state spending by $2 billion and cutting the personal income tax. Bush passed a billion-dollar property tax cut coupled with a large business tax cut. Jindal dramatically cut spending; spending is down 9 percent in Louisiana since fiscal year 2009.

Pataki’s and Bush’s grades didn’t stay in the upper tier. Pataki fell to a depressing “D” by the end of his tenure because of his support for large spending and tax increases. Spending grew at twice the rate of population growth and inflation during his tenure, he backed several tax hikes, and New York issued billions in new debt. Florida’s budget exploded during Bush’s second term, lowering his final grade to a “C.”

Other governors had precipitous falls too. John Kasich of Ohio received a “B” in his first report card. In 2014 his grade fell to a “D,” tying him for the worst fiscal-policy grade of any Republican governor in the country. Kasich supported huge spending hikes: spending increased 18 percent from 2012 to 2015, according to the National Association of State Budget Officers, and he expanded Medicaid over objections from the legislature. He requested another 11 percent increase for fiscal year 2016. Mike Huckabee of Arkansas fell from a “B” to an “F” as he embraced a number of net new tax increases totaling $500 million. He also doubled total state spending.

The final group includes governors who earned consistent, strong grades over their tenures. These governors didn’t set the curve, but they did demonstrate a record of tax-and-spending restraint. Rick Perry of Texas earned a “B” on five of his six report cards and a “C” on the sixth. Spending did grow while Perry was governor, but at least its growth was limited to population growth and inflation. Scott Walker of Wisconsin passed a large tax cut and large-scale union reforms, but spending grew a bit more quickly than the national average. Chris Christie of New Jersey continues to propose tax cuts and fiscal reforms, but his progress has been stymied by the Democratic-controlled legislature.

Candidates will offer bold promises of action if they are elected. Many will pledge to cut taxes and federal spending.  The records of past governors provide some evidence to determine whether a candidate’s promises should be trusted or if the promises are just bluster to garner support.

Note: This piece discusses the eight former or current Republican governors running for the presidency. Two former Democratic governors are also running. Below is a summary of all ten candidates’ fiscal records while governor:

This is what happens in a world without markets for water, as Eli Saslow reports in the Washington Post:

Their two peach trees had turned brittle in the heat, their neighborhood pond had vanished into cracked dirt and now their stainless-steel faucet was spitting out hot air. “That’s it. We’re dry,” Miguel Gamboa said during the second week of July, and so he went off to look for water….

For a few days now, they had been without running water in the fifth year of a California drought that had finally come to them. First it had devastated the orchards where Gamboa and his wife had once picked grapes. Then it drained the rivers where they had fished and the shallow wells in rural migrant communities. All the while, Gamboa and his wife had donated a little of their hourly earnings to relief efforts in the San Joaquin Valley and offered to share their own water supply with friends who had run out, not imagining the worst consequences of a drought could reach them here, down the road from a Starbucks, in a remodeled house surrounded by gurgling birdbaths and towering oaks.

The article reads like science fiction. And it’s so tragic, because markets could go a long way toward allocating California’s water to its highest-valued uses, as Peter Van Doren and Gary Libecap discussed recently. More Cato studies on water markets here.

John Kasich, the Governor of Ohio, makes his presidential announcement today. He becomes the 16th person to join the Republican field and the 8th current or former governor.

Kasich is a fiscal policy expert. He has made a federal Balanced Budget Amendment a key talking point in his speeches and appearances so far, and was known for being a budget cutter while in Congress. His record in Ohio tells a very different story. Spending has risen rapidly  during Kasich’s tenure in Columbus.

Data from the National Association of State Budget Officers illustrate the rapid growth of general fund spending. From fiscal year 2012, Kasich’s first full fiscal year, to fiscal year 2015, general fund spending in Ohio increased by 18 percent. Nationally, state general fund spending increased by 12 percent during that period. Kasich’s proposed budget for fiscal year 2016 increased spending further, with a year-over-year increase of 11 percent. The average governor proposed a spending increase of 3 percent from fiscal year 2015 to fiscal year 2016.

Much of the increase is due to Kasich’s support of Medicaid expansion. In 2013 the Ohio House of Representatives opposed Medicaid expansion and inserted a provision in the state budget forbidding the Kasich administration from expanding the program without legislative approval. Kasich stripped the provision from the budget and then proceeded to expand the program without that approval.

Just 18 months after the expansion took effect, costs have exploded. According to a recent report from the state’s Legislative Service Commission, costs are 63 percent, or $1.4 billion, over budget. The report attributes the overage to “higher than expected caseloads and per person costs.” The expansion population reached 600,000 in June of 2015, compared to an estimate of 366,000. Medicaid expenditures were 9.5 percent higher in fiscal year 2015 than in fiscal year 2014.

This spending growth largely explains Kasich’s grades on Cato’s Governors Report Card. Kasich received a respectable “B” in 2012, but his score plummeted to a “D” in 2014 as the spending increases mounted. He scored the worst of any governor—Republican or Democrat—on spending in the 2014 report card, and he tied for the worst overall grade of any Republican governor.

Kasich has made some progress on tax reform. The personal income tax was cut in 2013, 2014, and 2015, and Kasich supported a plan to exclude a portion of small-business income from taxation in 2013 and to expand that provision in 2015. But he has also supported several tax increases, including higher taxes on shale oil and gas, cigarettes, and the state’s gross receipts tax.

Overall, Kasich’s legacy in Columbus is disappointing. He has presided over large spending increases, and his proposed fiscal year 2016 spending increase was more than three times the national average among the states. While he has made some meaningful tax reforms, his support for bigger government in numerous areas is troubling.

Last month, when Justice Anthony Kennedy found that same-sex marriage was a “fundamental right,” did he and the four other justices for whom he wrote find a “new” constitutional right? Or is it rather, as some of us have long argued, that the Constitution protected that right for nearly a century and a half, like the right to same-sex sodomy (Lawrence v. Texas), to sell and use contraceptives (Griswold v. Connecticut), to educate one’s child in a parochial school (Pierce v. Society of Sisters), and, dare I say, to freedom of contract in employment (Lochner v. New York)?

I address those questions in a piece in today’s National Law Journal, defending Kennedy’s conclusion in Obergefell v. Hodges but taking exception to his reasoning. The fundamental right to same-sex marriage rests mainly, he argued, on the liberty interest protected under the Fourteenth Amendment’s Due Process Clause. Not so, said Justice Clarence Thomas in his dissent. Drawing extensively on John Locke’s state-of-nature approach to political legitimacy, which the Founders and Framers also embraced, Thomas argued that the Obergefell plaintiffs were not denied the right to marry: they were perfectly free to go to any willing clergyman who would marry them, and the state would not have interfered with their liberty to do so. What they wanted, he saw, was a state license—the state’s positive recognition of the marriage—and the legal benefits that go with it.

Kennedy’s conclusion as against the state, therefore, belongs properly not under the Fourteenth Amendment’s Due Process Clause but under its Equal Protection Clause. The state denied same-sex couples the same benefits it granted opposite-sex couples and thus discriminated against them. The right to marry someone of the same sex may be a natural right that anyone would enjoy in the state of nature; but once we leave that state of nature, if the actual state we live in grants the privileges of marriage, the right to those privileges is entailed by the Fourteenth Amendment’s Privileges or Immunities Clause, and its denial is properly litigated against the state under the Equal Protection Clause. Unfortunately, Thomas never developed those points—nor could he have without coming out for same-sex marriage—and Kennedy’s brief and gauzy discussion of equal protection did not get to the heart of the matter either.

But to return to the questions with which I began above, the most disquieting aspect of the five opinions the case generated—the four dissenters wrote separate opinions—was found in Chief Justice John Roberts’ dissent. Focusing on Kennedy’s mistaken Due Process argument, Roberts took the occasion to launch a sustained attack against the Court’s discovery of unenumerated rights under the Due Process Clause. However off-point in Obergefell, his attack—citing the “discredited” Lochner decision no fewer than 16 times—included such Holmesian gems as “the Fourteenth Amendment does not enact John Stuart Mill’s On Liberty.”

Only Kennedy, for all his fuzziness, came close to putting his finger on the core of the matter before us—the constitutional status of the unenumerated rights at issue in cases like those mentioned above. The Obergefell plaintiffs, he wrote, almost in passing, “pose no risk of harm to themselves or third parties.” Indeed! That could have come straight out of John Stuart Mill. But the source is irrelevant. Rather, the constitutional point is that if the state is going to restrict your liberty, it has to have a very good reason—that’s what the Constitution, at bottom, is all about. And from Lawrence all the way back to Lochner, the state didn’t have a good reason for limiting the liberty of the plaintiffs, which means that their right was always there to be protected—even though, as in Lawrence, the Court had failed to recognize it when it ruled otherwise 17 years earlier in Bowers v. Hardwick, the case Lawrence overturned.

In Obergefell, Kennedy seemed to notice that when he wrote, quoting his opinion in Lawrence, that Bowers was “not correct when it was decided.” It would have been good if he had then drawn expressly the implicit conclusion, that the right at issue was always protected. It didn’t take “new insights and societal understandings” to discover that, as he averred in Obergefell. It takes simply a plain understanding of the theory of our Constitution of liberty.


Congress faces a deadline at the end of July to extend federal highway funding. Policymakers are likely to cobble together a short-term fix for the funding gap in the Highway Trust Fund (HTF), rather than enacting a permanent solution.

Annual HTF spending is projected to be $53 billion and rising in coming years, while HTF revenues will be $40 billion. That leaves an annual funding gap of at least $13 billion. A good permanent fix would be to cut federal spending by $13 billion to match the revenues. State governments could fill the gap with their own funding, efficiency improvements, or privatization.

That straightforward decentralization solution is not popular with highway lobby groups, and it is usually not mentioned as an option by reporters. A recent Washington Post Wonkblog column is typical. It examined road quality and potholes in the states, and then concluded that more federal money was needed.

The Post published my letter in response to Wonkblog on Saturday:

The article noted that some states (such as California) have many car-damaging potholes, while others (such as Florida) have very few. It said that “we haven’t been putting enough money into the Highway Trust Fund.”

Actually, the data reveal that California ought to be learning lessons from Florida on how to spend existing funds more efficiently. The fact that some states have much better highways than others shows that states can solve their own highway problems — without the top-down federal actions suggested in the article.

Here are some of the details from the original Wonkblog story:

… 28 percent of the nation’s major roadways—interstates, freeways, and major arterial roadways in urban areas—are in “poor” condition.

[Other than D.C.] … the worst roads are in California where 51 percent of the highways are rated poor. Rhode Island, New Jersey and Michigan all have “poor” ratings of 40 percent or more. Dang.

And while everybody loves to make fun of Florida, the Sunshine State actually has the smallest percentage of bad roads in the nation—only 7 percent. Nevada, Missouri, Minnesota and Arkansas round out the top 5.

Note that Florida is warm and sunny, while Minnesota is cold and snowy, yet both have very good roads. Meanwhile, California is warm and sunny, while Michigan is cold and snowy, yet both have very poor roads. Wonkblog correctly notes, “I might have expected weather and latitude to play a big role in road quality, but that doesn’t seem to be the case here.”

So far so good, but then Wonkblog jumps to his predetermined solution, and completely ignores the implication of the data he had just presented. He says, “One main reason why our roads are in such bad shape is that we haven’t been putting enough money into the Highway Trust Fund to keep up with infrastructure needs.”

According to Wonkblog’s own chart, only 10 percent of the roads in Minnesota are in “poor” condition, while 51 percent in California are. Bad roads are thus clearly a state-level failing. Wonkblog immediately grabs for the magic wand of more federal funding, but he might have asked what states like Minnesota are doing right with the existing funding.

The other weird thing about Wonkblog’s conclusion is that he says, “we haven’t been putting enough money into” the HTF. But, of course, it is spending that might affect road quality, not the revenues “into” the fund. If you look at HTF spending, it has remained high in recent years because Congress has filled it with general fund revenues, as I chart here.

In sum, there are dramatic differences in road quality between the states. Those differences may stem from differences in state funding, state efficiency, and state competence, but apparently not from climate conditions. All the states have the ability to provide themselves with high-quality roads, but it appears some states have important road-investment lessons to learn from the others.

Twelve states, as well as the District of Columbia and Puerto Rico, currently grant (or will soon grant) drivers’ licenses to unauthorized immigrants. An additional two—Arizona and Nebraska—explicitly grant licenses to immigrants brought to the United States as small children (“Dreamers”). This is a favorable trend, both for public safety and for liberty.

If you want an illustration of the public safety benefits of using drivers’ licenses solely for driving administration, give a read to this Voice of America article, which illustrates clearly that illegal immigrants drive even when licensing is unavailable to them. Now that licensing is available, a California applicant who is not legally in this country must first prove residence. “He must also take an eye test to show he can see well, and a written test on driving rules. He must also take a driving test to show he can operate a motor vehicle.” Bringing all drivers up to such minimum standards undoubtedly improves safety outcomes.

For liberty, though, the shift back toward using driver licensing for driving is especially welcome. In 2005, amid a wave of anti-immigrant sentiment stoked by terror fears, Congress passed the REAL ID Act, which requires states to get proof of legal presence if their licenses and IDs are to be accepted by federal agencies. It appeared for a time as though states kowtowing to the federal government would help turn their driver’s licenses into an all-purpose federal tracking and control instrument, a national ID.

It has become increasingly clear that the Department of Homeland Security’s Transportation Security Administration will never follow through on the feds’ threat to turn away air travelers from states that don’t comply with REAL ID (though many are still taken in by DHS talking points). Some states are declining to implement REAL ID at all. Others are producing easy-to-acquire licenses that are labeled “not for federal purposes,” which REAL ID permits.

The states giving licenses to unauthorized immigrants today run the gamut from “liberal” to “conservative”: California, Colorado, Connecticut, Delaware (effective December 2015), Hawaii (effective January 2016), Illinois, Maryland, New Mexico, Nevada, Utah, Vermont, and Washington. For varying reasons—and with varying levels of controversy—they’re re-asserting state authority over a state prerogative: driver licensing policy.

That’s good federalism. It’s good for road safety. And it’s especially good for keeping motor vehicle bureaucrats from being TSA agents and vice versa.

A new Cato Institute/YouGov poll finds a solid majority—58%—of Americans supports the main components of the Iran nuclear deal, in which the United States and other countries would ease oil and economic sanctions on Iran for 10-15 years in return for Iran agreeing to stop its nuclear program over that period. Forty percent (40%) oppose such a deal.

Americans also prefer Congress to allow such a deal to go forward (53%) rather than block the agreement (46%). Support declines slightly when the deal is described as an agreement between the “Obama administration and Iran.”

Full poll results can be found here.

Despite support for the deal, Americans remain skeptical it will stop Iran’s nuclear program. Fifty-two percent (52%) of Americans say the agreement is “unlikely” to “stop Iran from developing nuclear weapons,” including 32% who say it’s “extremely unlikely.” Conversely, 46% believe the deal is likely to achieve its primary goal.

However, Americans are more optimistic the deal will delay Iran from developing nuclear weapons. The poll found 51% of Americans think the deal will likely “delay” Iran’s nuclear development while 47% disagree.

The survey also offered Americans an opportunity to select which one of several policy options would be “most effective” in reducing the likelihood that Iran develops nuclear weapons. A plurality (40%) think the Iran nuclear agreement would be more effective than taking military action against Iran’s nuclear facilities (23%), imposing new economic sanctions against Iran (23%), or continuing existing sanctions against Iran (12%).

Ultimately, 63% of Americans say it would be a “disaster” if Iran developed nuclear capabilities while 32% say the problem could be managed and 5% say it wouldn’t be a problem. Nevertheless, Americans tend to have confidence that the Iran nuclear agreement may be the next best step toward reducing that possibility.

Partisanship and Religion Polarize; Youth Gives Confidence

Partisanship polarizes both support for the Iran nuclear agreement and perceptions of the threat.

Fully 80% of Democrats support the deal while 62% of Republicans oppose it. Independents side with Democrats, with 55% in favor. Democrats are also far more likely to believe the deal will “stop” Iran from developing nuclear weapons (71%) compared to only 22% of Republicans. Even so, Democrats are 30 points less likely than Republicans to say Iran obtaining a nuclear weapon would be a disaster (49% vs. 79%).

Young Americans are also far more supportive of the deal, more likely to believe in its efficacy and less likely to fear Iran obtaining a nuclear weapon compared to older Americans.

Fully 68% of Americans 18-29 support the Iran nuclear deal compared to 50% of those over 65. Furthermore, 6 in 10 millennials say the agreement will stop Iran’s nuclear development compared to only 3 in 10 seniors.

Ultimately, young Americans are simply less concerned if Iran gains nuclear capabilities. While 76% of seniors say such an outcome would be a disaster only 48% of 18-29 year olds agree. Instead, 51% of millennials say Iranian nuclear capabilities would be a “problem that can be managed” (39%) or “not a problem at all” (12%). These results comport with findings from Trevor Thrall and Eric Goepner’s recent study of millennial attitudes on foreign policy.

Religiosity plays a role as well: the more often someone attends religious services, the more likely they are to believe that economic sanctions or military intervention will best prevent Iran from obtaining nuclear capabilities, and the less likely they are to believe that the Iran nuclear agreement will work.

For instance, among those who often attend religious services, 40% think economic sanctions will best reduce Iranian nuclear development followed by 30% who say the nuclear agreement and 28% who say military intervention. Conversely among those who never attend religious services 47% think the Iran nuclear agreement will be most effective, followed by 30% who think economic sanctions and 19% who think military intervention will work best.

Similarly, Protestants are about 20 points less likely than the religiously unaffiliated to think the Iranian nuclear agreement (28% v. 49%) will work better than economic sanctions or military intervention in reducing Iranian nuclear capabilities.

Sign up for future releases of Cato public opinion studies and insights here.

The Cato Institute/YouGov Poll of 1,004 adults was conducted July 14-16, 2015, using a sample drawn from YouGov’s probability-based online panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.3 percentage points. Topline (.pdf) results can be found here; crosstabs (.xls) can be found here.
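As a rough check on that figure: the textbook 95 percent margin of error for a proportion is 1.96 × sqrt(p(1−p)/n), which for a simple random sample of 1,004 works out to about 3.1 points at p = 0.5. The reported 4.3 points is consistent with a design-effect adjustment of roughly 1.9, of the sort commonly applied to weighted online panels; that multiplier is my inference, not a figure from the poll. A minimal sketch of the arithmetic:

import math

def moe(n, p=0.5, z=1.96, design_effect=1.0):
    # 95 percent margin of error for a proportion; design_effect > 1
    # widens the interval to account for weighting, as in online panels.
    # The 1.9 used below is an inferred assumption, not a poll figure.
    return z * math.sqrt(design_effect * p * (1 - p) / n)

print(round(moe(1004), 3))                      # ~0.031, i.e., 3.1 points
print(round(moe(1004, design_effect=1.9), 3))   # ~0.043, i.e., 4.3 points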

I owe a heckuva lot to Friedrich Hayek. Had it not been for him, I might never have heard of “free banking,” meaning the genuine article rather than the phony antebellum U.S. version. Certainly I would never have found myself writing about it. Nor, perhaps, would any other modern economist.

It was two pamphlets that Hayek published in the 1970s — first Choice in Currency (1976) and then Denationalisation of Money (1978) — that caused the scales to fall from my eyes, and from those of some other economists, encouraging us to reconsider the merits of private and competitive currency systems. That reconsideration in turn led to a revival of interest in former free banking episodes, including those of Scotland and Canada, which monetary economists had previously neglected or overlooked. In short, were it not for Hayek, there’d be no such thing as a Modern Free Banking School.

Yet Hayek himself was no free banker. For starters, his own vision of “choice in currency” had little if anything in common with historical free banking arrangements. In those arrangements, banks dealt in established, precious-metal monetary units, like the British pound and the American dollar, receiving deposits of metallic money, or claims to such, and offering in exchange their own readily redeemable liabilities, including circulating banknotes. In Hayek’s scheme, by contrast, competing firms issue irredeemable paper notes, with each brand representing a distinct monetary unit. Far from resembling ordinary commercial banks, Hayek’s “banks” resemble so many modern central banks in that they issue a sort of “fiat” money. But they differ from actual central banks in enjoying neither monopoly privileges nor the power to compel anyone to accept their products.1

Competition, Hayek claimed, would force private issuers of irredeemable currencies to maintain those currencies’ purchasing power, or else go out of business. An overexpanding free bank, in contrast, is disciplined, not by an eventual loss of reputation, but by the more immediate prospect of running out of cash reserves. Hayek’s claims have always been controversial, even among persons (myself among them) who are inclined to favor competitive currency arrangements over monopolistic ones. It isn’t clear that a Hayekian money issuer would ever manage to get its paper accepted, or that it would resist the temptation to hyperinflate if it did.2

But Hayek didn’t merely differ from free bankers in proposing a form of currency competition distinct from free banking. He expressly opposed free banking. Asked, during a 1945 radio interview, whether he considered the Federal Reserve System a step along “the road to serfdom,” he unhesitatingly replied, “No. That the monetary system must be under central control has never, to my mind, been denied by any sensible person.”3 And although by the 1970s he had come to believe it both possible and desirable to have a currency stock consisting of the irredeemable paper of numerous private firms, he also continued to maintain that, so long as government authorities supplied a nation’s standard money, private firms should not be able to issue circulating paper claims denominated and redeemable in that money.

The most explicit later statement of Hayek’s views on free banking occurs in a lecture he gave at a conference in New Orleans in 1977, just as Denationalisation of Money was in press:

We have indeed given government, and for fairly good reasons, the exclusive right to issue gold coins. And after we had given the government that right, I think it was equally understandable that we also gave the government the control over any money or any claims, paper claims, for coins or money of that definition. That people other than the government are not allowed to issue dollars if the government issues dollars is a perfectly reasonable arrangement, even if it has not turned out to be completely beneficial. And I am not suggesting that other people should be entitled to issue dollars. All the discussion in the past about free banking was really about the idea that not only the government or government institutions but others should also be able to issue dollar notes. That, of course, would not work.4

Actually, governments monopolized the coining of gold and other metals, not for any good reasons, but because doing so gave them the opportunity to manipulate precious-metal standards in pursuit of narrow fiscal ends. But it is the last sentence of this quote that’s most surprising, for what Hayek declares “unworkable” is an arrangement that worked quite successfully in many places, including Canada, where it survived well into Hayek’s own lifetime. Canada’s banking and currency system had in fact been remarkably stable, altogether avoiding the crises by which the U.S. was afflicted in the decades leading to the Fed’s establishment, and weathering the Great Depression far better than the U.S. system did despite having lacked a central bank until after that episode’s nadir.5

That Hayek should have written as if he were quite unaware of the Canadian experience — or, for that matter, of the still more famous Scottish free banking episode — is extremely puzzling. It was Hayek, after all, who supervised, and signed off on, Vera Smith’s 1935 doctoral dissertation, “The Rationale of Central Banking” (subsequently published by P.S. King & Son, and reprinted by LibertyPress), in which she discusses both the Canadian and the Scottish episodes, as well as several other free banking episodes, in unmistakably favorable terms.

Had Hayek forgotten his own PhD student’s work, if not some of his own early research? Had he simply changed his mind, reverting to conventional wisdom after a brief interval during which he had entertained a more favorable view of free banking? Or had he never accepted Free Banking School arguments?

Larry White, who drew attention to Hayek’s anti-free-banking stance some years ago in a History of Political Economy article entitled “Why Didn’t Hayek Favor Laissez Faire in Banking?,” favors the third hypothesis, tracing Hayek’s position to his unwavering belief that a free banking system would manage the stock of bank money in a procyclical manner. Whereas for Mises, who did favor free banking,6 a cyclical boom was most likely to be set off by a central bank, for Hayek it was the competitive commercial banks that were most likely to overissue. Unlike Mises, Hayek subscribed to the popular view that banks can expand credit without limit so long as they expand in unison, and that they would in fact be inclined to overexpand, while allowing their reserve ratios to decline, in response to cyclical increases in the demand for loans.

But Hayek was mistaken. The popular view, according to which banks can expand credit all they like so long as they expand it in unison, incorrectly equates a bank’s demand for reserves with its net demand for such — that is, with its need for reserves to cover expected or deterministic outflows. This overlooks banks’ need for “precautionary” reserves, which protect against an undue risk of stochastic, or random, reserve losses. Even a well-coordinated, industry-wide expansion of bank credit will involve some increase in banks’ collective demand for precautionary reserves, because the volume of random interbank clearings grows with the volume of payments. For that reason such a coordinated expansion isn’t sustainable unless it’s accompanied by an increase in the nominal quantity of bank reserves. That is why, if one examines the record of so-called bank lending “manias,” one finds that they typically involve not a substantial decline in bank reserve ratios but a substantial increase in the nominal quantity of bank reserves.
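That argument can be made concrete with a toy Monte Carlo simulation (my own illustrative sketch with made-up parameters, not a model from the free banking literature). Even when all banks expand in unison, so that each bank’s expected net clearings are zero, the random routing of payments leaves each bank exposed to clearing losses whose spread grows with payment volume, and the reserves needed to cover those losses at a given confidence level grow roughly with the square root of that volume:

import numpy as np

rng = np.random.default_rng(0)

def required_reserves(num_banks, num_payments, trials=4000, quantile=0.99):
    # Precautionary reserves bank 0 needs to cover its net clearing
    # losses with 99 percent confidence, when equal-sized payments are
    # routed between randomly chosen payer and payee banks.
    losses = np.empty(trials)
    for t in range(trials):
        payers = rng.integers(0, num_banks, num_payments)
        payees = rng.integers(0, num_banks, num_payments)
        net = np.sum(payees == 0) - np.sum(payers == 0)  # bank 0's net clearings
        losses[t] = max(-net, 0)
    return np.quantile(losses, quantile)

base = required_reserves(num_banks=10, num_payments=10_000)
doubled = required_reserves(num_banks=10, num_payments=20_000)  # in-unison expansion
print(base, doubled, round(doubled / base, 2))  # ratio near 1.4 (sqrt 2), not 1.0

Doubling payment volume multiplies the required cushion by about the square root of two rather than leaving it unchanged, which is why a coordinated expansion still collides with a reserve constraint unless total reserves grow too.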

Whether or not Hayek was right to reject free banking, and tempting as it may be for fans of free banking to claim him as one of their own, doing so would hardly do justice to that great economist. We may credit him with inspiring us all; but we mustn’t otherwise associate him with opinions that, rightly or wrongly, he flatly rejected.

______________________________________

[1] The modern cybercurrency market, consisting of Bitcoin and its many less well-known rivals (“altcoins”), offers something close to a real-world counterpart of Hayek’s scheme.

[2] For criticisms of Hayek’s scheme, and others like it, see George Selgin and Lawrence H. White, “How Would the Invisible Hand Handle Money?,” Journal of Economic Literature 32 (4) (December 1994), and Lawrence H. White, The Theory of Monetary Institutions, Part XII, “Competitive Supply of Fiat-Type Money” (New York: Blackwell, 1999).

[3] Hayek on Hayek: An Autobiographical Dialogue, ed. Stephen Kresge and Leif Wenar (Chicago: University of Chicago Press, 1994), p. 116; quoted in White, “Why Didn’t Hayek Favor Laissez Faire in Banking?,” p. 763, n. 12.

[4] F. A. Hayek, “Toward a Free Market Monetary System,” Journal of Libertarian Studies 5 (1) (Fall 1979). Whether Hayek, like Friedman before him, imagined that private banks’ circulating paper dollars would be indistinguishable from the fiat dollars issued by the central authority is unclear from this passage. If so, he committed the crude error of equating free banking with counterfeiting.

[5] On Canada’s decision, despite its monetary system’s good record, to establish a central bank in 1935, see Bordo and Redish.

[6] See Lawrence H. White, “Mises on Free Banking and Fractional Reserves,” in John W. Robbins and Mark Spangler, eds., A Man of Principle: Essays in Honor of Hans F. Sennholz (Grove City: Grove City College Press).

Without government interference, insurance markets will naturally charge higher premiums for riskier individuals. For example, life insurance premiums vary considerably based on factors that increase the likelihood of death, such as age, gender, smoking status, and health.

Under Obamacare, many factors that influence healthcare expenditures are excluded from premiums. For example, premiums make no distinction for obesity, likelihood of having a baby, alcoholism, or pre-existing conditions. One notable exception is for smokers, whose premiums may be up to 50 percent higher than those for non-smokers. I have collected data on premiums for smokers and non-smokers in 35 states, and the data show large variation in the extent to which smokers are charged more for their choice.

Smokers are certainly a riskier group than non-smokers, so one would expect some actuarial adjustment to premiums. But given the variation across states, it is clear that premiums vary not only with a smoker’s greater risk but with other factors as well. At least part of the markup for smokers should therefore be viewed as a “smoker’s tax” rather than an actuarial adjustment.

One expects the detrimental effects of smoking to build over time, so you wouldn’t expect large risk adjustments for young individuals. Let’s consider a 27-year-old who doesn’t receive subsidies but is mandated to purchase health insurance. A non-smoker living in Cheyenne, WY, could purchase the Blue Cross Blue Shield of Wyoming - BlueSelect Silver ValueTwo Plus Dental plan for $334 per month. This plan has a $3,000 deductible and an out-of-pocket maximum of $6,600. If the 27-year-old smoked, the same plan would cost $417 per month, or 24.9% more. For a pack-a-day smoker, this represents a $2.72 per-pack increase in expenditure due to Obamacare.

Smoker’s Premium in Wyoming - $2.72 per pack for a pack-a-day 27-year-old smoker


However, not all of the $2.72 per pack is a tax; smokers really are more expensive to insure. Consider a non-smoker in Marquette, MI, who selects Blue Cross Blue Shield of Michigan’s Blue Cross Silver Extra with Dental and Vision, a Multi-State Plan. That person pays $335 per month, nearly identical to the premium for the non-smoker in Wyoming. The plan has a $2,000 deductible and an out-of-pocket maximum of $5,500. If the 27-year-old smoked, the same plan would cost $351 per month (4.8 percent higher), or $0.53 per pack of cigarettes. If 53 cents per pack approximates the actuarial adjustment for a young smoker, then much of the mark-up in Wyoming – $2.19 of the $2.72 – doesn’t represent risk and can be viewed as a smoking tax.

Smoker’s Premium in Michigan - $0.53 per pack for a pack-a-day 27-year-old smoker

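The per-pack figures above are easy to reproduce. Here is a minimal sketch using the quoted monthly premiums (expect small rounding differences from the numbers in the text):

def per_pack_markup(smoker_monthly, nonsmoker_monthly, packs_per_year=365):
    # Spread the annual premium surcharge over a pack-a-day smoker's packs.
    return (smoker_monthly - nonsmoker_monthly) * 12 / packs_per_year

wyoming = per_pack_markup(417, 334)   # ~$2.73 per pack
michigan = per_pack_markup(351, 335)  # ~$0.53 per pack
print(round(wyoming, 2), round(michigan, 2), round(wyoming - michigan, 2))
# The last value, ~$2.20, is the implied per-pack "smoking tax" in Wyoming.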

It might be the case that the numbers above are the exception rather than the rule. Yet a more comprehensive analysis of premiums makes clear that the smoker’s premium varies considerably by state. Wyoming has some of the highest mark-ups, while Michigan has some of the lowest. The plans presented above are quite similar with respect to premiums and cost sharing for non-smokers, yet the smoker’s mark-up varies greatly. The table below shows average mark-ups for young smokers, restricting the set of plans to Obamacare “Silver” plans for 27-year-olds.

[Table: average smoker mark-ups on Silver plans for 27-year-olds, by state]

If a pack a day overstates a smoker’s actual consumption, then the implied Obamacare cigarette tax is even higher – in some states far higher than the explicit excise tax. The median state excise tax on cigarettes is $1.36 per pack; it is $0.60 per pack in Wyoming (ranking 40th among the states) and $2.00 per pack in Michigan (ranking 12th).

Based on this analysis, it is clear that the Obamacare smoker’s tax doesn’t represent risk adjustment in many states. But why are cigarette taxes in Obamacare – above and beyond the actuarial adjustment – a problem? Aren’t smokers doing something terrible to themselves (and to others, through secondhand smoke)?

One of the core assumptions in economics is individual rationality: people weigh the costs and benefits of their actions and do what’s best for them. Everyday behavior – from smoking cigarettes, to eating pizza instead of broccoli (or sometimes both), to jaywalking to save a few seconds, to getting in the car to drive to work – involves risks and rewards. If people understand those inherent risks and rewards, then we respect consumer autonomy even if we wouldn’t make the same choice. The economic argument for taxing behavior like smoking (through excise taxes or Obamacare taxes) is that it creates negative externalities – costs produced but not borne by the smoker, the most obvious of which is secondhand smoke. When such externalities exist, corrective taxation is one of several ways to achieve a more efficient allocation of resources. Nonetheless, evidence suggests that cigarette taxes at their current levels more than pay for such negative externalities. Just as important, there’s no reason to think these externalities differ much between Wyoming and Michigan.

With that said, should we be more concerned about Obamacare cigarette taxes than about, say, excise taxes? One disadvantage of excise taxes that differ across state or city borders is that they encourage smuggling and purchases in low-tax areas, so the tax doesn’t correct the negative externality. That differs, of course, from Obamacare taxes, where a person would need to move from Wyoming to Michigan to reduce the tax. Yet Obamacare cigarette taxes present a host of problems of their own. The vast majority of people do not receive health insurance through Obamacare, so its cigarette taxes fail to reach most of the externalities smoking produces. In addition, the cigarette taxes in Obamacare lack transparency: they are buried in the weeds of Obamacare premiums as hefty smoking surcharges, meant to influence or punish the choices of the 18 percent of American adults who smoke. Perhaps if smoking rates were as high as obesity rates, smokers would have enough political power that bureaucrats wouldn’t punish them.


Aaron Yelowitz is an associate professor of economics at the University of Kentucky and a visiting scholar at the Cato Institute.
