Feed aggregator

“GOP Agrees Bush Was Wrong to Invade Iraq, Now What?”—that’s how the US News headline put it last week. A good question, because it’s not at all clear what that grudging concession signifies. It’s nice that 12 years after George W. Bush lumbered into the biggest foreign policy disaster in a generation, the leading Republican contenders are willing to concede, under enhanced interrogation, that maybe it wasn’t the right call. It would be nicer still if we could say they’d learned something from that disaster. 

Alas, the candidates’ peevish and evasive answers to the Iraq Question didn’t provide any evidence for that. Worst of all was Jeb Bush’s attempt to duck the question by using fallen soldiers as the rhetorical equivalent of a human shield. Ohio governor John Kasich flirted with a similar tactic—“There’s a lot of people who lost limbs and lives over there, OK?”—before conceding, “But if the question is, if there were not weapons of mass destruction should we have gone, the answer would’ve been no.” 

That’s how most of the GOP field eventually answered the question, with some version of
the “faulty intelligence” excuse. We thought Saddam Hussein had stockpiles of chemical
and biological weapons and was poised for a nuclear breakout; it was just our bad luck
that turned out not to be true; so the war was—well, not a “mistake,” insists Marco Rubio, just, er—whatever the word is for something you definitely wouldn’t do again if you had the
power to travel back in time. As Scott Walker, who’s been studying up super-hard on
foreign policy, explained, you can’t fault President Bush: invading Iraq just made sense based on “the information he had available” at the time. 

Well, no—invading Iraq was a spectacularly bad idea based on what we knew at the time. If we’d found stockpiles of so-called WMD, it would still have been a spectacularly bad idea. Saddam’s possession of unconventional weapons was a necessary condition in the Bush administration’s case for war, but it wasn’t—or shouldn’t have been—sufficient to make that case compelling, because with or without chemical and biological weapons, Saddam’s Iraq was never a national security threat to the United States. 

Put aside the fact that, as applied to chem/bio, “WMD” is a misnomer; assume for the sake of argument that President Bush’s claim that “one vial, one canister, one crate” of the stuff could “bring a day of horror like none we have ever known” was an evidence-based, good-faith assessment of those weapons’ potential, instead of a ludicrous and cynical exaggeration. Even so, you’d still have to show that Saddam Hussein was so hell-bent on hitting the U.S., he’d risk near-certain destruction to do it. 

There was never any good reason to believe that. This, after all, was a dictator who, during the 1991 Gulf War, had been deterred from using chemical weapons against US troops in the middle of an ongoing invasion. As then-Secretary of State James Baker later explained, the George H.W. Bush administration:

made it very clear that if Iraq used weapons of mass destruction, chemical weapons, against United States forces that the American people would demand vengeance and that we had the means to achieve it. … we made it clear that in addition to ejecting Iraq from Kuwait, if they used those types of weapons against our forces we would in addition to throwing them out of Kuwait, we would adopt as a goal the elimination of the regime in Baghdad.

Eleven years later, as the George W. Bush administration pushed for another war with Iraq, there wasn’t any convincing evidence that Saddam Hussein had, in the interim, warmed up to the idea of committing regime suicide through the use of CBW. Even the flawed October 2002 National Intelligence Estimate (NIE) prepared during the run-up to the Iraq War vote concluded that “Baghdad for now appears to be drawing a line short of conducting terrorist attacks with conventional or CBW against the United States, fearing that exposure of Iraqi involvement would provide Washington a stronger cause for making war.” 

By that time, with Bush 43 sounding the alarm about Iraq’s “growing fleet of manned and unmanned aerial vehicles that could be used to disperse chemical or biological weapons across broad areas [including] missions targeting the United States,” it should have been apparent that the case for war rested on a series of imaginary hobgoblins. As Jim Henley put it a couple of years ago, “In the annals of projection, the US claim that Saddam was building tiny remote-controlled death planes wins some kind of prize.”  

What if the Iraqi dictator instead passed off those weapons to terrorists, “secretly and without fingerprints”? In the 2003 State of the Union, that’s what President Bush argued Saddam just might do: “imagine those 19 hijackers with other weapons and other plans, this time armed by Saddam Hussein.” But the notion that Hussein was likely to pass chemical or biological weapons to Al Qaeda was only slightly less fantastic than the scenario that had him crop-dusting US cities with short-range, Czech-built training drones. As my colleague Doug Bandow pointed out at the time: “Baghdad would be the immediate suspect and likely target of retaliation should any terrorist deploy [WMD], and Saddam knows this.” 

I made similar arguments two weeks before the war in a piece called “Why Hussein Will Not Give Weapons of Mass Destruction to Al Qaeda”:  

The idea that Hussein views a WMD strike via terrorist intermediaries as a viable strategy is rank speculation, contradicted by his past behavior. Hussein’s hostility toward Israel predates his struggle with the United States. He’s had longstanding ties with anti-Israeli terror groups and he’s had chemical weapons for over 20 years. Yet there has never been a nerve gas attack in Israel. Why? Because Israel has nuclear weapons and conventional superiority, and Hussein wants to live. If he’s ever considered passing off chemical weapons to Palestinian terrorists, he decided that he wouldn’t get away with it. He has even less reason to trust Al Qaeda with a potentially regime-ending secret.

In its 2004 after-action reassessment of the administration’s case for preventive war, the Carnegie Endowment concluded:

there was no positive evidence to support the claim that Iraq would have transferred WMD or agents to terrorist groups and much evidence to counter it. Bin Laden and Saddam were known to detest and fear each other, the one for his radical religious beliefs and the other for his aggressively secular rule and persecution of Islamists. Bin Laden labeled the Iraqi ruler an infidel and an apostate, had offered to go to battle against him after the invasion of Kuwait in 1990, and had frequently called for his overthrow. … the most intensive searching over the last two years has produced no solid evidence of a cooperative relationship between Saddam’s government and Al Qaeda. … the Iraqi regime had a long history of sponsoring terrorism against Israel, Kuwait, and Iran, providing money and weapons to these groups. Yet over many years Saddam did not transfer chemical, biological, or radiological materials or weapons to any of them “probably because he knew that they could one day be used against his secular regime.”

In the judgment of U.S. intelligence, a transfer of WMD by Saddam to terrorists was likely only if he were “sufficiently desperate” in the face of an impending invasion. Even then, the NIE concluded, he would likely use his own operatives before terrorists. Even without the particular relationship between Saddam and bin Laden, the notion that any government would turn over its principal security assets to people it could not control is highly dubious. States have multiple interests and land, people, and resources to protect. They have a future. Governments that made such a transfer would put themselves at the mercy of groups that have none of these. Terrorists would not even have to use the weapons but merely allow the transfer to become known to U.S. intelligence to call down the full wrath of the United States on the donor state, thereby opening opportunities for themselves. 

You don’t have to “know what we know now” to recognize the poverty of the case for war. You just had to know what we knew then. 

Even so, it’s possible that GOP hawks have learned something from the Iraq debacle, however loath they are to admit it. Like Saddam’s Iraq, the Syrian and Iranian regimes have long had unconventional weapons and links to terrorist proxies. But I haven’t heard even Lindsey Graham or Marco Rubio invoke the risk of terrorist transfer to make the case for war with Iran or Syria. Perhaps that’s because it’s as unpersuasive an argument now as it should have been then. 

Besides, maybe it’s asking too much to expect professional politicians to depart entirely from the sentiments of the people they want to vote for them. A recent Vox Populi/Daily Caller poll asked Republican voters in early primary states: “Looking back now, and regardless of what you thought at the time, do you think it was the right decision for the United States to invade Iraq in 2003?” Nearly 60 percent of them answered in the affirmative. The GOP’s 2016 contenders may not have good answers to the Iraq Question, but, apparently, they’re miles ahead of their constituents.  

At the risk of sounding like a broken record (well, OK–at the risk of continuing to sound like a broken record), I’d like to say a bit more about economists’ tendency to get their monetary history wrong. In particular, I’d like to take aim at common myths about the gold standard.

If there’s one monetary history topic that tends to get handled especially sloppily by monetary economists, not to mention other sorts, this is it. Sure, the gold standard was hardly perfect, and gold bugs themselves sometimes make silly claims about their favorite former monetary standard. But these things don’t excuse the errors many economists commit in their eagerness to find fault with that “barbarous relic.”

The false claims I have in mind are mostly ones I and others–notably Larry White–have countered before. Still, I thought it would be useful to address them again here, because they’re still far from being dead horses, and also so that students wrapping up the semester will have something convenient to send to their misinformed gold-bashing profs (though I urge them to wait until grades are in before sharing!).

For the sake of those who don’t care to wade through the whole post, here is a “jump to” list of the points covered:

1. The Gold Standard wasn’t an instance of government price fixing. Not traditionally, anyway.
2. A gold standard isn’t particularly expensive. In fact, fiat money tends to cost more.
3. Gold supply “shocks” weren’t particularly shocking.
4. The deflation that the gold standard permitted wasn’t such a bad thing.
5. It wasn’t to blame for 19th-century American financial crises.
6. On the whole, the classical gold standard worked remarkably well (while it lasted).
7. It didn’t have to be “managed” by central bankers.
8. In fact, central banking tends to throw a wrench in the works.
9. The “Gold Standard” wasn’t to blame for the Great Depression.
10. It didn’t manage money according to any economists’ theoretical ideal. But neither has any fiat-money-issuing central bank.

1. The Gold Standard wasn’t an instance of government price fixing. Not traditionally, anyway.

As Larry White has made the essential point as well as I ever could, I hope I may be excused for quoting him at length:

Barry Eichengreen writes that countries using gold as money ‘fix its price in domestic-currency terms (in the U.S. case, in dollars).’ He finds this perplexing:

But the idea that government should legislate the price of a particular commodity, be it gold, milk or gasoline, sits uneasily with conservative Republicanism’s commitment to letting market forces work, much less with Tea Party–esque libertarianism. Surely a believer in the free market would argue that if there is an increase in the demand for gold, whatever the reason, then the price should be allowed to rise, giving the gold-mining industry an incentive to produce more, eventually bringing that price back down. Thus, the notion that the U.S. government should peg the price, as in gold standards past, is curious at the least.

To describe a gold standard as “fixing” gold’s “price” in terms of a distinct good, domestic currency, is to get off on the wrong foot. A gold standard means that a standard mass of gold (so many grams or ounces of pure or standard-alloy gold) defines the domestic currency unit. The currency unit (“dollar”) is nothing other than a unit of gold, not a separate good with a potentially fluctuating market price against gold. That one dollar, defined as so many grams of gold, continues to be worth the specified amount of gold—or in other words that one unit of gold continues to be worth one unit of gold—does not involve the pegging of any relative price. Domestic currency notes (and checking account balances) are denominated in and redeemable for gold, not priced in gold. They don’t have a price in gold any more than checking account balances in our current system, denominated in fiat dollars, have a price in fiat dollars. Presumably Eichengreen does not find it curious or objectionable that his bank maintains a fixed dollar-for-dollar redemption rate, cash for checking balances, at his ATM.

Remarkably, as White goes on to show, the rest of Eichengreen’s statement proves that, besides not having understood the meaning of gold’s “fixed” dollar price, Eichengreen has an uncertain grasp of the rudimentary economics of gold production:

As to what a believer in the free market would argue, surely Eichengreen understands that if there is an increase in the demand for gold under a gold standard, whatever the reason, then the relative price of gold (the purchasing power per unit of gold over other goods and services) will in fact rise, that this rise will in fact give the gold-mining industry an incentive to produce more, and that the increase in gold output will in fact eventually bring the relative price back down.

I’ve said more than once that, the more vehement an economist’s criticisms of the gold standard, the more likely he or she knows little about it. Of course Eichengreen knows far more about the gold standard than most economists, and is far from being its harshest critic, so he’d undoubtedly be an outlier in the simple regression, y = α + β(x) (where y is vehemence of criticism of the gold standard and x is ignorance of the subject). Nevertheless, his statement shows that even the understanding of one of the gold standard’s most well-known critics leaves much to be desired.

Although, at bottom, the gold standard isn’t a matter of government “fixing” gold’s price in terms of paper money, it is true that governments’ creation of monopoly banks of issue, and the consequent tendency for such monopolies to be treated as government- or quasi-government authorities, ultimately led to their being granted sovereign immunity from the legal consequences to which ordinary, private intermediaries are usually subject when they dishonor their promises. Because a modern central bank can renege on its promises with impunity, a gold standard administered by such a bank more closely resembles a price-fixing scheme than one administered by a commercial bank. Still, economists should be careful to distinguish the special features of a traditional gold standard from those of central-bank administered fixed exchange rate schemes.


2. A gold standard isn’t particularly expensive. In fact, fiat money tends to cost more.

Back in the early 1950s, and again in 1960, Milton Friedman estimated that the gold required for the U.S. to have a “real” gold standard would have cost 2.5% of its annual GNP. But that’s because Friedman’s idea of a “real” gold standard was one in which gold coins alone served as money, with no fractionally-backed bank-supplied substitutes. As Larry White shows in his Theory of Monetary Institutions (p. 47), allowing for 2% specie reserves–which is more than what some former gold-based free-banking systems needed–the resource cost of a gold standard taking advantage of fractionally-backed banknotes and deposits would be about one-fiftieth of the number Friedman came up with. That’s a helluva bargain for a gold “seal of approval” that could mean having access to international capital at substantially reduced rates, according to research by Mike Bordo and Hugh Rockoff.
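The one-fiftieth figure follows directly from the two numbers just cited; here is a quick sketch of the arithmetic, using only the 2.5%-of-GNP and 2%-reserve figures from the text:

```python
# Friedman's estimate: a "pure coin" gold standard, with no fractionally-backed
# substitutes, would tie up gold costing about 2.5% of annual GNP.
friedman_cost_share = 0.025

# White's correction: with roughly 2% specie reserves backing notes and deposits,
# only that fraction of the money stock need be held as gold.
reserve_ratio = 0.02

fractional_cost_share = friedman_cost_share * reserve_ratio
print(f"{fractional_cost_share:.2%} of GNP")  # 0.05% of GNP
print(f"1/{friedman_cost_share / fractional_cost_share:.0f} of Friedman's estimate")  # 1/50
```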

Friedman himself eventually changed his mind about the economies to be achieved by employing fiat money:

Monetary economists have generally treated irredeemable paper money as involving negligible real resource costs compared with a commodity currency. To judge from recent experience, that view is clearly false as a result of the decline in long-term price predictability.

I took it for granted that the real resource cost of producing irredeemable paper money was negligible, consisting only of the cost of paper and printing. Experience under a universal irredeemable paper money standard makes it crystal clear that such an assumption, while it may be correct with respect to the direct cost to the government of issuing fiat outside money, is false for society as a whole and is likely to remain so unless and until a monetary structure emerges under an irredeemable paper standard that provides a high degree of long-run price level predictability.*

Unfortunately, neither White’s criticism of Friedman’s early calculations nor Friedman’s own about-face have kept gold standard critics from repeating the old canard that a fiat standard is more economical than a gold standard. Ross Starr, for example, observes in his 2013 book on money that “The use of paper or fiduciary money instead of commodity money is resource saving, allowing commodity inventories to be liquidated.” Although he understands that fractionally-backed banknotes and deposits may go some way toward economizing on commodity-money reserves, Starr (quoting Adam Smith, but failing to look up historic Scottish bank reserve ratios) insists nonetheless that “a significant quantity of the commodity backing must be maintained in inventory to successfully back the currency,” and then proceeds to build a case for fiat money from this unwarranted assertion:

The next step in economizing on the capital tied up in backing the currency is to use a fiat money. Substituting a government decree for commodity backing frees up a significant fraction of the economy’s capital stock for productive use. No longer must the economy hold gold, silver, or other commodities in inventory to back the currency. No longer must additional labor and capital be used to extract them from the earth. Those resources are freed up and a simple virtually costless government decree is substituted for them.

Tempting as it is to respond to such hooey simply by noting that the vaults of the world’s official fiat-money managing institutions presently contain rather more than zero ounces of gold–31,957.5 metric tons more, to be precise–that response only hints at the fundamental flaw in Starr’s reasoning, which is his treatment of fiat money as a culmination, or limiting case, of the resource savings to be had by resort to fractional commodity-money reserves. That treatment overlooks a crucial difference between fiat money and readily redeemable banknotes and deposits, for whereas redeemable banknotes and deposits are generally understood by their users to be close, if not perfect, substitutes for commodity money, fiat money, the purchasing power of which is unhinged from that of any former money commodity, is nothing of the sort. On the contrary: its tendency to depreciate relative to real commodities, and to gold in particular, is notorious. Consequently holders of fiat money have reason to hold “commodity inventories” as a hedge against the risk that fiat money will depreciate.

If the hedge demand for a former money commodity is large enough, resort to fiat money doesn’t save any resources at all. Indeed, as Roger Garrison notes, “a paper standard administered by an irresponsible monetary authority may drive the monetary value of gold so high that more resource costs are incurred under the paper standard than would have been incurred under a gold standard.” A glance at the history of gold’s real price suffices to show that this is precisely what has happened:


From “After the Gold Rush,” The Economist, July 6, 2010.


Taking the long-run average price of gold, in 2010 prices, to be somewhere around $470, prior to the closing of the gold window in 1971, that price was exceeded on only three occasions, and never dramatically: around the time of the California gold rush, around the turn of the 20th century, and for several years following FDR’s devaluation of the dollar. Since 1971, in contrast, it has exceeded that average, and exceeded it substantially, more often than not. Here is Roger Garrison again:

There is a certain asymmetry in the cost comparison that turns the resource-cost argument against paper standards. When an irresponsible monetary authority begins to overissue paper money, market participants begin to hoard gold, which stimulates the gold-mining industry and drives up the resource costs. But when new discoveries of gold are made, market participants do not begin to hoard paper or to set up printing presses for the issue of unbacked currency. Gold is a good substitute for an officially instituted paper money, but paper is not a good substitute for an officially recognized metallic money. Because of this asymmetry, the resource costs incurred by the State in its efforts to impose a paper standard on the economy and manage the supply of paper money could be avoided if the State would simply recognize gold as money. These costs, then, can be counted against the paper standard.

So if it’s avoidance of gold resource costs that’s desired, including avoidance of the very real environmental consequences of gold mining, a gold standard looks like the right way to go.


3. Gold supply “shocks” weren’t particularly shocking

Of the many misinformed criticisms of the gold standard, none seems to me more wrong-headed than the complaint that the gold standard isn’t even a reliable guarantee against serious inflation. The RationalWiki entry on the gold standard is as good an example of this as any:

Even gold can suffer problems with inflation. Gold rushes such as the California Gold Rush expanded the money supply and, when not matched with a simultaneous increase in economic output, caused inflation. The “Price Revolution” of the 16th century demonstrates a case of dramatic long-run inflation. During this period, western European nations used a bimetallic standard (gold and silver). The Price Revolution was the result of a huge influx of silver from central European mines starting during the late 15th century combined with a flood of new bullion from the Spanish treasure fleets and the demographic shift brought about by the Black Plague (i.e., depopulation).

Admittedly, the anonymous authors of this article may not be professional economists; but take my word for it that the same arguments might be heard from any number of such professionals. Brad DeLong, for example, in a list of “Talking Points on the Likely Consequences of re-establishment of the Gold Standard” (my emphasis), includes the observation that “significant advances in gold mining technology could provide a significant boost to the average rate of inflation over decades.”

Like I said, the gold standard is hardly free of defects. But being vulnerable to bouts of serious inflation isn’t one of them. Consider the “dramatic” 16th century inflation referred to in the RationalWiki entry. Had that entry’s authors referred to plain-old Wikipedia’s entry on “Price revolution,” they would have read there that

Prices rose on average roughly sixfold over 150 years. This level of inflation amounts to 1-1.5% per year, a relatively low inflation rate for the 20th century standards, but rather high given the monetary policy in place in the 16th century.

I have no idea what the authors mean by their second statement, as there was certainly no such thing as “monetary policy” at the time, and they offer no further explanation or citation. So far as I can tell, they mean nothing more than that prices hadn’t been rising as fast before the price revolution as they did during it, which though trivially true says nothing about how “high” the inflation was by any standards, including those of the 16th century. In any case it was not only “not high” but dangerously low according to standards set, rightly or wrongly, by today’s monetary experts. Finally, though the point is often overlooked, the European Price Revolution actually began well in advance of major American specie shipments, which means that, far from being attributable to such shipments alone, it was a result of several causes, including coin debasements.
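The Wikipedia figures are easy to verify: a sixfold price rise spread over 150 years implies, assuming compound growth, the low annual rate quoted above. A quick check:

```python
price_ratio = 6.0   # prices rose roughly sixfold over the Price Revolution
years = 150

# Compound annual inflation rate implied by a sixfold rise over 150 years
annual_rate = price_ratio ** (1 / years) - 1
print(f"{annual_rate:.2%} per year")   # about 1.20% per year
```

At roughly 1.2% a year, the most notorious inflation of the commodity-money era comes in below the 2% rates modern central banks deliberately aim for.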

What about the California gold rush, which is also supposed to show how changes in the supply of gold will lead to inflation “when not matched with a simultaneous increase in economic output”? To judge from available statistics, it appears that producers of other goods were almost a match for all those indefatigable forty-niners: as Larry White reports, although the U.S. GDP deflator did rise a bit in the years following the gold rush,

The magnitude was surprisingly small. Even over the most inflationary interval, the [GDP deflator] rose from 5.71 in 1849 (year 2000 = 100) to 6.42 in 1857, an increase of 12.4 percent spread over eight years. The compound annual price inflation rate over those eight years was slightly less than 1.5 percent.

Once again, the inflation rate was such as would have had today’s central banks rushing to expand their balance sheets.
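White’s arithmetic checks out, taking the deflator values he quotes (5.71 in 1849, 6.42 in 1857):

```python
deflator_1849 = 5.71   # U.S. GDP deflator, year 2000 = 100
deflator_1857 = 6.42
years = 8

total_rise = deflator_1857 / deflator_1849 - 1
annual_rate = (deflator_1857 / deflator_1849) ** (1 / years) - 1
print(f"total rise: {total_rise:.1%}")        # total rise: 12.4%
print(f"compound annual: {annual_rate:.2%}")  # compound annual: 1.48%
```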

Nor do the CPI estimates tell a different story. See if you can spot the gold-rush-induced inflation in this chart:

“Graphing Various Historical Economic Series,” MeasuringWorth, 2015.

Despite popular belief, the California gold rush was actually not the biggest 19th-century gold supply innovation, at least to judge from its bearing on the course of prices. That honor belongs instead to the Witwatersrand gold rush of 1886, the effects of which later combined with those of the Klondike rush of 1896 to end a long interval of gradual deflation (discussed further below) and begin one of gradual inflation.

Brad DeLong is thus quite right to refer to the South African discoveries in observing that even a gold standard poses some risk of inflation:

For example, the discovery and exploitation of large gold reserves near present-day Johannesburg at the end of the nineteenth century was responsible for a four percentage point per year shift in the worldwide rate of inflation–from a deflation of roughly two percent per year before 1896 to an inflation of roughly two percent per year after 1896.

Allowing for the general inaccuracy of 19th-century CPI estimates, DeLong’s statistics are correct. But that “For example” is quite misleading. Like I said: this is the most serious instance of an inflationary gold “supply shock” of which I’m aware. Yet even it served mainly to put an end to a deflationary trend, without ever giving rise to an inflation rate substantially above what central banks today consider (rightly or wrongly) optimal. As for the four percentage point change in the rate of inflation “per year,” presumably meaning “in one year,” it’s hardly remarkable: changes as big or larger are common throughout the 19th century, partly owing to the notoriously limited data on which CPI estimates for that era are based. Even so, they can’t be compared to the much larger jumps in inflation with which the history of fiat monies is riddled, even setting hyperinflations aside. Keep this in mind as you reflect upon Brad’s conclusion that

Under the gold standard, the average rate of inflation or deflation over decades ceases to be under the control of the government or the central bank, and becomes the result of the balance between growing world production and the pace of gold mining.

Alas, keeping matters in perspective–that is, comparing the gold standard’s actual inflation record, not to that which might be achieved by means of an ideally-managed fiat money, but to the actual inflation record of historic fiat-money systems–is something many critics of the gold standard seem reluctant to do, perhaps for good reason.

While we’re on the subject, nothing could be more absurd than attempts to demonstrate the unsuitability of gold as a monetary medium by referring to gold’s unstable real value in the years since the gold standard was abandoned. Yet this is a favorite debating point among the gold standard’s less thoughtful critics, including Paul Krugman:

There is a remarkably widespread view that at least gold has had stable purchasing power. But nothing could be further from the truth. Here’s the real price of gold — the price deflated by the consumer price index — since 1968:

Compare Professor Krugman’s chart to the one in the previous section. Then ask yourself (1) Has gold’s price behaved differently since 1968 than it did before?; and (2) Why might this be so? If your answers are “Yes” and “Because gold and paper dollars are no longer close substitutes, and gold is now widely used to hedge against depreciation of the dollar and other fiat currencies,” you understand the gold standard better than Krugman does. But don’t get a swelled head over it, because it really isn’t saying much: Krugman is one of the observations that sits squarely on the upper right end of y = α + β(x).


4. The deflation that the gold standard permitted wasn’t such a bad thing.

The complaint that a gold standard doesn’t rule out inflation is but a footnote to the more frequent complaint that it suffers, in Brad DeLong’s words, from “a deflationary bias which makes it likely that a gold standard regime will see a higher average unemployment rate than an alternative managed regime.” According to Ben Bernanke, “There is…a high correlation in the data between deflation (falling prices) and depression (falling output).”

That the gold standard tended to be deflationary–or that it tended to be so during the sometimes long intervals between gold discoveries–can’t be denied. But what certainly can be denied is that these periods of slow deflation went hand-in-hand with high unemployment. Having thoroughly reviewed the empirical record, Andrew Atkeson and Patrick Kehoe conclude as follows:

Deflation and depression do seem to have been linked during the 1930s. But in the rest of the data for 17 countries and more than 100 years, there is virtually no evidence of such a link.

More recently Claudio Borio and several of his BIS colleagues reported similar findings. How then (you may wonder), did Bernanke arrive at his opposite conclusion? Easy: he looked only at data for the 1930s–the worst deflationary crisis ever–ignoring all the rest.

Why is deflation sometimes depressing, and sometimes not? The simple answer is that there is more than one sort of deflation. There’s the sort that’s caused by a collapse of spending, like the “Great Contraction” of the 1930s, and then there’s the sort that’s driven by greater output of real goods and services–that is, by outward shifts in aggregate supply rather than inward shifts in aggregate demand. Most of the deflation that occurred during the classical gold standard era (1873-1914) was of the latter, “good” sort.

Although I’ve been banging the drum for good deflation since the 1990s, and Mike Bordo and others have made the specific point that the gold standard mostly involved deflation of the good rather than the bad sort, too many economists, and way too many of those who have got more than their fair share of the public’s attention, continue to ignore the very possibility of supply-driven deflation.

Of the many misunderstandings propagated by economists’ tendency to assume that deflation and depression must go hand-in-hand, none has been more pernicious than the widespread belief that throughout the U.S. and Europe, the entire period from 1873 to 1896 constituted one “Great” or “Long Depression.” That belief is now largely discredited, except perhaps among some newspaper pundits and die-hard Marxists, thanks to the efforts of S.B. Saul and others. The myth of a somewhat shorter “Long Depression,” lasting from 1873-1879, persists, however, though economic historians have begun chipping away at that one as well.


5. It wasn’t to blame for 19th-century American financial crises.

Speaking of 1873, after claiming that a gold standard is undesirable because it makes deflation (and therefore, according to his reasoning, depression) more likely, Krugman observes:

The gold bugs will no doubt reply that under a gold standard big bubbles couldn’t happen, and therefore there wouldn’t be major financial crises. And it’s true: under the gold standard America had no major financial panics other than in 1873, 1884, 1890, 1893, 1907, 1930, 1931, 1932, and 1933. Oh, wait.

Let me see if I understand this. If financial crises happen under base-money regime X, then that regime must be the cause of the crises, and is therefore best avoided. So if crises happen under a fiat money regime, I guess we’d better stay away from fiat money. Oh, wait.

You get the point: while the nature of an economy’s monetary standard may have some bearing on the frequency of its financial crises, it hardly follows that that frequency depends mainly on its monetary standard rather than on other factors, like the structure, industrial and regulatory, of the financial system.

That U.S. financial crises during the gold standard era had more to do with U.S. financial regulations than with the workings of the gold standard itself is recognized by all competent financial historians. The lack of branch banking made U.S. banks uniquely vulnerable to shocks, while Civil-War-era rules linked the supply of banknotes to the extent of the Federal government’s indebtedness, instead of allowing that supply to adjust with seasonal and cyclical needs. But there’s no need to delve into the precise ways in which such misguided legal restrictions contributed to the numerous crises to which Krugman refers. It should suffice to point out that Canada, which employed the very same gold dollar, depended heavily on exports to the U.S., and (owing to its much smaller size) was far less diversified, yet endured no banking crises at all, and very few bank failures, between 1870 and 1939.


6. On the whole, the classical gold standard worked remarkably well (while it lasted).

Since Keynes’s reference to gold as a “barbarous relic” is so often quoted by the gold standard’s critics, it seems only fair to repeat what Keynes had to say, a few years before, not about gold itself, but about the gold-standard era:

What an extraordinary episode in the economic progress of man that age was which came to an end in August, 1914! The greater part of the population, it is true, worked hard and lived at a low standard of comfort, yet were, to all appearances, reasonably contented with this lot. But escape was possible, for any man of capacity or character at all exceeding the average, into the middle and upper classes, for whom life offered, at a low cost and with the least trouble, conveniences, comforts, and amenities beyond the compass of the richest and most powerful monarchs of other ages. The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep; he could at the same moment and by the same means adventure his wealth in the natural resources and new enterprises of any quarter of the world, and share, without exertion or even trouble, in their prospective fruits and advantages… He could secure forthwith, if he wished it, cheap and comfortable means of transit to any country or climate without passport or other formality, could despatch his servant to the neighboring office of a bank for such supply of the precious metals as might seem convenient, and could then proceed abroad to foreign quarters, without knowledge of their religion, language, or customs, bearing coined wealth upon his person, and would consider himself greatly aggrieved and much surprised at the least interference. But, most important of all, he regarded this state of affairs as normal, certain, and permanent, except in the direction of further improvement, and any deviation from it as aberrant, scandalous, and avoidable.

It would, of course, be foolish to suggest that the gold standard was entirely or even largely responsible for this Arcadia, such as it was. But it certainly did contribute both to the general abundance of goods of all sorts, to the ease with which goods and capital flowed from nation to nation, and, especially, to the sense of a state of affairs that was “normal, certain, and permanent.”

The gold standard achieved these things mainly by securing a degree of price-level and exchange rate stability and predictability that has never been matched since. According to Finn Kydland and Mark Wynne:

The contrast between the price stability that prevailed in most countries under the gold standard and the instability under fiat standards is striking. This reflects the fact that under commodity standards (such as the gold standard), increases in the price level (which were frequently associated with wars) tended to be reversed, resulting in a price level that was stable over long periods. No such tendency is apparent under the fiat standards that most countries have followed since the breakdown of the gold standard between World War I and World War II.

The high degree of price level predictability, together with the system of fixed exchange rates that was incidental to the gold standard’s widespread adoption, substantially reduced the riskiness of both production and international trade, while the commitment to maintain the standard resulted, as I noted, in considerably lower international borrowing costs.

Those pundits who find it easy to say “good riddance” to the gold standard, in either its classical or its decadent variants, need to ask themselves what all the fuss over monetary “reconstruction” was about, following each of the world wars, if not achieving a simulacrum at least of the stability that the classical gold standard achieved. True, those efforts all failed. But that hardly means that the ends sought weren’t very worthwhile ones, or that those who sought them were “lulled by the myth of a golden age.” Though they may have entertained wrong beliefs concerning how the old system worked, they weren’t wrong in believing that it did work, somehow.


7. It didn’t have to be managed by central bankers.

But how? The once common view that the classical gold standard worked well only thanks to its having been carefully managed by the Bank of England and other central banks, as well as the related view that its success depended on international agreements and other forms of central bank cooperation, is now, thankfully, no longer subscribed to even by the gold standard’s better-informed critics. Instead, as Giulio Gallarotti observes, the outcomes of that standard “were primarily the resultants [sic] of private transactions in the markets for goods and money” rather than of any sort of government or central-bank management or intervention. But the now accepted view doesn’t quite go far enough. In fact, central banks played no essential part at all in achieving the gold standard’s most desirable outcomes, which could have been achieved as well, or better, by systems of competing banks-of-issue, and which were in fact achieved by means of such systems in many participating nations, including the United States, Switzerland (until 1901), and Canada. And although it is common for central banking advocates to portray such banks as sources of emergency liquidity to private banks, during the classical gold standard era liquidity assistance often flowed the other way, and did so notwithstanding monopoly privileges that gave central banks so many advantages over their commercial counterparts. As Gallarotti observes (p. 81),

That central banks sometimes went to other central banks instead of the private market suggests nothing more than the fact that the rates offered by central banks were better, or too great an amount of liquidity may have been needed to be covered in the private market.


8. In fact, central banking tends to throw a wrench in the works.

To the extent that central banks did exercise any special influence on gold-standard era monetary adjustments, that influence, instead of helping, made things worse. Because an expanding central bank isn’t subject to the internal constraint of reserve losses stemming from adverse interbank clearings, it can create an external imbalance that must eventually trigger a disruptive drain of specie reserves. During liquidity crunches, on the other hand, central banks were more likely than commercial banks to become, in Jacob Viner’s words, “engaged in competitive increases of their discount rates and in raids on each other’s reserves.” Finally, central banks could and did muck up the gold standard works by sterilizing gold inflows and outflows, violating the “rules of the gold standard game” that called for loosening in response to gold receipts and tightening in response to gold losses.

Competing banks of issue could be expected to play by these “rules,” because doing so was consistent with profit maximization. The semi-public status of central banks, on the other hand, confronted them with a sort of dual mandate, in which profits had to be weighed against other, “public” responsibilities (ibid., pp. 117ff.). Of the latter, the most pernicious was the perceived obligation to occasionally set aside the requirements for preserving international monetary equilibrium (“external balance”) for the sake of preserving or achieving preferred domestic monetary conditions (“internal balance”). As Barry Ickes observes, playing by the gold standard’s rules could be “very unpopular, potentially, as it involves sacrificing internal balance for external balance.” Commercial bankers couldn’t care less. Central bankers, on the other hand, had to care, because not caring risked losing some of their privileges.

Today, of course, achieving internal balance is generally considered the sine qua non of sound central bank practice; and even where fixed or at least stable exchange rates are considered desirable it is taken for granted that external balance ought occasionally to be sacrificed for the sake of preserving domestic monetary stability. But to apply such thinking to the classical gold standard, and thereby conclude that in that context a similar sacrifice of external for internal stability represented a turn toward more enlightened monetary policy, is to badly misunderstand the nature of that arrangement, which was not just a fixed exchange rate arrangement but something more akin to a multinational monetary union or currency area. Within such an area, the fact that one central bank gained reserves while another lost them was itself no more significant, and no more a justification for violating the “rules of the game,” than the fact that a commercial bank somewhere gained reserves at the expense of another.

The presence of central banks did, however, tend to aggravate the disturbing effects of changes in international trade patterns compared to the case of international free banking. Central-bank sterilization of gold flows, moreover, could lead to more severe longer-run adjustments, as it was to do, to a far more dramatic extent, in the interwar period.


9. The “Gold Standard” wasn’t to blame for the Great Depression.

I know I’m about to skate onto thin ice, so let me be more precise. To say that “The gold standard caused the Great Depression” (or words to that effect, like “the gold standard was itself the principal threat to financial stability and economic prosperity between the wars”), is at best extremely misleading. The more accurate claim is that the Great Depression was triggered by the collapse of the jury-rigged version of the gold standard cobbled together after World War I, which was really a hodge-podge of genuine, gold-exchange, and gold-bullion versions of the gold standard, the last two of which were supposed to “economize” on gold. Call it “gold standard light.”

Admittedly there is one sense in which the real gold standard can be said to have contributed to the disastrous shenanigans of the 1920s, and hence to the depression that followed. It contributed by failing to survive the outbreak of World War I. The prewar gold standard thus played the part of Humpty Dumpty to the King’s and Queen’s men who were to piece the still-more-fragile postwar arrangement together. Yet even this is being a bit unfair to gold, for the fragility of the gold standard on the eve of World War I was itself largely due to the fact that, in most of the belligerent nations, it had come to be administered by central banks that were all too easily dragooned by their sponsoring governments into serving as instruments of wartime inflationary finance.

Kydland and Wynne offer the case of the Bank of Sweden as illustrating the practical impossibility of preserving a gold standard in the face of a major shock:

During the period in which Sweden adhered to the gold standard (1873–1914), the Swedish constitution guaranteed the convertibility into gold of banknotes issued by the Bank of Sweden. Furthermore, laws pertaining to the gold standard could only be changed by two identical decisions of the Swedish Parliament, with an election in between. Nevertheless, when World War I broke out, the Bank of Sweden unilaterally decided to make its notes inconvertible. The constitutionality of this step was never challenged, thus ending the gold standard era in Sweden.

The episode seems rather less surprising, however, when one considers that “the Bank of Sweden,” which secured a monopoly of Swedish paper currency in 1901, is more accurately known as the Sveriges Riksbank, or “Bank of the Swedish Parliament.”

If the world crisis of the 1930s was triggered by the failure, not of the classical gold standard, but of a hybrid arrangement, can it not be said that the U.S., which was among the few nations that retained a full-fledged gold standard, was fated by that decision to suffer a particularly severe downturn? According to Brad DeLong,

Commitment to the gold standard prevented Federal Reserve action to expand the money supply in 1930 and 1931–and forced President Hoover into destructive attempts at budget-balancing in order to avoid a gold standard-generated run on the dollar.

It’s true that Hoover tried to balance the Federal budget, and that his attempt to do so had all sorts of unfortunate consequences. But the gold standard, far from forcing his hand, had little to do with it. Hoover simply subscribed to the prevailing orthodoxy favoring a balanced budget. So, for that matter, did FDR, until events forced him to change his tune: during the 1932 presidential campaign the New-Dealer-to-be assailed his opponent both for running a deficit and for his government’s excessive spending.

As for the gold standard’s having prevented the Fed from expanding the money supply (or, more precisely, from expanding the monetary base to keep the broader money supply from shrinking), nothing could be further from the truth. Dick Timberlake sets the record straight:

By August 1931, Fed gold had reached $3.5 billion (from $3.1 billion in 1929), an amount that was 81 percent of outstanding Fed monetary obligations and more than double the reserves required by the Federal Reserve Act. Even in March 1933 at the nadir of the monetary contraction, Federal Reserve Banks had more than $1 billion of excess gold reserves.


Whether Fed Banks had excess gold reserves or not, all of the Fed Banks’ gold holdings were expendable in a crisis. The Federal Reserve Board had statutory authority to suspend all gold reserve requirements for Fed Banks for an indefinite period.

Nor, according to a statistical study by Chang-Tai Hsieh and Christina Romer, did the Fed have reason to fear that by allowing its reserves to decline it would have raised fears of a devaluation. On the contrary: by taking steps to avoid a monetary contraction, the Fed would have helped to allay fears of a devaluation, while, in Timberlake’s words, initiating a “spending dynamic” that would have helped to restore “all the monetary vitals both in the United States and the rest of the world.”


10. It didn’t manage money according to any economists’ theoretical ideal. But neither has any fiat-money-issuing central bank.

Just as “paper” always beats “rock” in the rock-paper-scissors game, so does managed paper money always beat gold in the rock-paper monetary standards game economists like to play. But that’s only because under a fiat standard any pattern of money supply adjustment is possible, including a “perfect” pattern, where “perfect” means perfect according to the player’s own understanding. Even under the best of circumstances a gold standard is, on the other hand, unlikely to achieve any economist’s ideal of monetary perfection. Hence, paper beats rock. More precisely, paper beats rock, on paper.

And what does this impeccable logic tell us concerning the relative merits of gold versus paper money in practice? Diddly-squat. I mean it. To say something about the relative merits of paper and gold, you have to have theories–good ol’ fashioned, rational optimizing firm and agent theories–of how the supply of basic money adjusts under various conditions in the two sorts of monetary regimes. We have a pretty good theory of the gold standard, meaning one that meshes well with how that standard actually worked. The theory of fiat money is, in contrast, a joke, in part because it’s much harder to pin down central bankers’ objectives (or any objectives apart from profit-maximization, which is at play in the case of gold), but mostly thanks to economists’ tendency to simply assume that central bankers behave like omniscient angels who, among other things, understand the finer points of DSGE models. That may do for a graduate class, or a paper in the AER. But good economics it most certainly isn’t.


I close with a few words concerning why it matters that we get the facts straight about the gold standard. It isn’t simply a matter of winning people over to that standard. Though I’m perhaps as ready as anyone to shed a tear for the old gold standard, I doubt that we can ever again create anything like it. But getting a proper grip on gold serves, not just to make the gold standard seem less unattractive than it is often portrayed to be, but to remove some of the sheen that has been applied to modern fiat-money arrangements using the same brush by which gold has been blackened. The point, in other words, isn’t to make a pitch for gold. It’s to make a pitch for something–anything–that’s better than our present, lousy money.


*I’m astonished to find that Friedman’s important and very interesting 1986 article, despite appearing in one of the leading academic journals, has to date been cited only 64 times (Google Scholar). Of these, nine are in works by myself, Kevin Dowd, and Lawrence White! I only wish I could attribute this neglect to monetary economists’ pro-fiat money bias. More likely it reflects their general lack of interest in alternative monetary arrangements.

As the Export-Import Bank’s charter nears expiration, supporters continue to argue that ending this government agency, which subsidizes loans to major U.S. exporters (mostly Boeing), is unwise because other countries also subsidize exports.  They’re especially eager to point to China, whose own export credit agency is very active in promoting Chinese manufacturers.  They then claim that allowing the bank charter to expire would be “unilateral disarmament.”

Claiming that the United States should pursue any economic policy on the grounds that China is doing it strikes me as bordering on insanity.  Market intervention by the Chinese government has resulted in large-scale misallocation and is a serious liability for the stability of the Chinese economy.  It’s true that Chinese subsidies to domestic industries reduce opportunities for U.S. businesses, and it’s perfectly alright for the U.S. government to condemn those policies.  But should we really seek to emulate them?

Competitive metaphors about trade are generally bad, and martial ones are especially unhelpful.  The United States is simply not engaged in a metaphorical war with its trading partners.  Thinking of trade as a contest inevitably leads to bad policy by giving governments an excuse to intervene in the market for the benefit of crony constituencies.  The fact that some U.S. businesses would make more money if foreign governments pursued better policies is not a legitimate excuse to intervene in the market on their behalf.

Regardless of the reasons offered to justify it, there are real consequences to the U.S. economy when the U.S. government picks winners and losers.

Supporters of the Ex-Im Bank make plenty of other bad arguments, all of which betray a fundamental distrust of free-market capitalism.  But “China does it” may be the worst one.

On Tuesday the House of Representatives unanimously passed an amendment to the  Commerce, Justice, Science, and Related Agencies appropriations bill, introduced by Rep. Joaquin Castro (D-TX), which takes $10 million from Drug Enforcement Administration (DEA) funds for salaries and expenses and puts it towards the Department of Justice’s Body Worn Camera Partnership Program. The program provides 50 percent matching grants for law enforcement agencies that wish to use body cameras.  

Prior to the passage of Castro’s amendment, the appropriations bill provided $15 million for the body-worn camera partnership initiative, $35 million less than requested by the Obama administration.

Castro’s amendment is one of the latest examples of legislation aimed at funding police body cameras which, despite their potential to be great tools for increasing law enforcement accountability, are expensive.

The cameras themselves can cost from around $100 to over $1,000 and are accompanied by costs associated with redaction and storage. The fiscal impact of body cameras is a major reason why some police departments have not used the technology. In 2014 the Police Executive Research Forum received surveys from about 250 police departments and found that “39 percent of the respondents that do not use body-worn cameras cited cost as a primary reason.”

An Illinois body camera bill on Gov. Rauner’s desk not only outlines body camera policies for Illinois police agencies that want to use body cameras but also introduces a $5 fee on traffic tickets aimed at mitigating the cost of body cameras.

I have written before about why a federalist approach to body cameras is preferable to a federal top-down approach with attached financial incentives. If Rauner signs the Illinois bill into law it will be interesting to see how effective a traffic ticket fee is in funding the use of police body cameras. If it works state lawmakers may well seek to implement similar plans in their own states.

I am all for the DEA having its budget cut (ideally to $0), but the federal government providing conditional grants for body cameras is risky because some law enforcement agencies may implement federal policy recommendations not because they are the best but because doing so will cut costs. Grant applicants are urged to review a body camera paper published by the Office of Community Oriented Policing Services (COPS) and to “incorporate the most important program design elements in their proposal.” Unfortunately, the COPS body camera paper includes a worrying policy recommendation: allowing police officers to view body camera footage of incidents before they make a statement.

Federal lawmakers ought to be part of the ongoing discussions on police body camera policies, but federal policy proposals and suggestions shouldn’t come with financial assistance attached.

Last year China joined the U.S.-led Rim of the Pacific Exercise for the first time. However, Beijing’s role in RIMPAC has become controversial. Senate Armed Services Committee Chairman John McCain recently opined: “I would not have invited them this time because of their bad behavior.”

The Obama administration is conflicted. Bloomberg’s Josh Rogin worried that “so far, China is paying no price for its aggression.” Bonnie Glaser of CSIS suggested using the exercises to threaten the PRC. Patrick Cronin of the Center for a New American Security was less certain, acknowledging benefits of China’s inclusion: “It all depends on what you think RIMPAC should be.”

That is the key question. In part the exercise is about mutually beneficial cooperation for non-military purposes. With the simultaneous growth in commercial traffic and national navies, there likely will be increasing need and opportunity for joint search and rescue, operational safety, anti-piracy patrols, and humanitarian relief.

The question also involves military-military cooperation. Contacts between the Chinese and U.S. navies are few; those between the PRC’s forces and those of countries at odds with Beijing’s territorial claims, such as Japan and the Philippines, are even fewer.

There is value in allowing potential opponents a better assessment of one’s capabilities. Chinese expectations may be more realistic if they have a better sense of what and who they might face, especially the navies of their neighbors, which are expanding and becoming more competent.

Moreover, demystifying the other side makes it harder to demonize one’s potential adversaries. Obviously, even warm personal relationships don’t prevent governments from careening off to war with one another. However, learning that the other side’s military personnel are not devils incarnate might cause leaders to temper the advice they offer in a crisis.

Participation in the exercise also may be viewed as evidence that the U.S. is or is not attempting to contain the PRC. Hence inviting China in last year made American policy look a little less like containment.

Unfortunately, RIMPAC is too small and unimportant to much matter. No one who looks at U.S. behavior, and certainly no Chinese official who does so, can believe that Washington is engaged in anything except containment.

Granted, it can be pursued more or less ostentatiously. However, strengthening alliances surrounding China, moving more military forces to the Asia-Pacific, bolstering the militaries of neighboring states, and consistently backing the positions taken by the PRC’s antagonists outweigh an invitation to naval maneuvers every two years.

Finally, participation can be seen as a reward and denial as a punishment for China. Thus, Panda suggests barring Beijing participation so long as it does not respect freedom of navigation. He wrote: “The magnitude is severe enough to condition China’s behavior while not derailing decades of fragile U.S.-China goodwill altogether.”

If all it took to bring to heel America’s looming co-superpower and peer competitor was cancelling its navy out of a nonessential ocean exercise, Washington should have tried that tactic long ago. The PRC likely would rather join than sit on the sidelines. However, the benefits remain too small to cause China’s leaders to change fundamental policy objectives.

As I wrote for China-US Focus:  “The PRC is a revisionist power, as America once was. The former will seek to reverse or overturn past geopolitical decisions which it believes to be unfair or unrealistic. Beijing will abandon that course only when the costs of doing so rise sufficiently.”

“Losing” China’s RIMPAC invitation won’t make a difference. In contrast, an increasingly well-armed and well-organized set of neighbors willing to stand up to Chinese bullying would.

“Friendship diplomacy” cannot eliminate ideological differences and geopolitical concerns. Nevertheless, the U.S. and its allies and friends should continue to seek opportunities to invest China in a stable geopolitical order. Doing so won’t be easy, but extending an invitation to RIMPAC next year would be a worthwhile step in the meantime.

Since before the Declaration of Independence, equality under the law has been a central feature of American identity. The Fourteenth Amendment expanded that constitutional precept to actions by states, not just the federal government. For example, if a state government wants to use race as a factor in pursuing a certain policy, it must do so in the furtherance of a compelling reason—like preventing prison riots—and it must do so in as narrowly tailored a way as possible.

This means, among other things, that race-neutral solutions must be considered and used as much as possible. So if a state were to, say, set race-based quotas for who receives its construction contracts and then claim that no race-neutral alternatives will suffice—without showing why—that would fall far short of the high bar our laws set for race-conscious government action.

Yet that is precisely what Illinois has done.

Illinois’s Department of Transportation and the Illinois State Toll Highway Authority have implemented the U.S. Department of Transportation’s Disadvantaged Business Enterprise (“DBE”) program, which aims to remedy past discrimination against minority and women contractors by granting competitive benefits to those groups. While there may be a valid government interest in remedying past discrimination, Illinois’s implementation of the program blows through strict constitutional requirements. It bases its broad use of racial preferences on studies that either employ highly dubious methodology or are so patently outdated that they provide no legal basis on which to conclude, as constitutionally required, that there remains ongoing, systemic, widespread racial (or gender) discrimination in the public-construction-contracting industry that only the DBE program can rectify.

Even the studies Illinois used recommended that the state seek to achieve its anti-discrimination goals to the fullest extent possible by race-neutral means. Naturally, Illinois ignored this advice and implemented what could only generously be called a half-hearted pretense of employing race-neutral measures. Even worse, a federal district court upheld Illinois’s implementation. Even though the state failed to show which race-neutral alternatives it considered, tried, or rejected, the court held that the DBE’s grant of benefits still passed strict scrutiny.

The contracting company that brought the suit has now appealed the case to the U.S. Court of Appeals for the Seventh Circuit. Cato has joined the Pacific Legal Foundation and Center for Equal Opportunity in filing a brief supporting that appeal. We argue that Illinois didn’t meet the high constitutional standards governing the use of race-conscious measures in its approach to the DBE program because it (1) failed to establish a strong basis in evidence that there even is ongoing, widespread, systemic racial discrimination that must be remedied, and (2) failed to establish the narrow-tailoring requirement that workable race-neutral measures be tried and found insufficient before the state can turn to using race.

By cutting corners with shoddy studies and paying lip service to race-neutral solutions, Illinois and the lower court have each done a disservice to the hard-won principle of equality under the law. We urge the Seventh Circuit to correct those mistakes when it takes up Midwest Fence Corp. v. USDOT this summer.

Rick Perry, former governor of Texas, will announce his second White House run tomorrow. Perry served as Texas governor from December 2000, following the election of George W. Bush, until January of this year. During his long tenure, Perry showed reasonable fiscal restraint. He did not shrink the size of Texas’ government, but he limited its growth in terms of both spending and tax revenue.

Perry appeared in six editions of Cato’s Fiscal Policy Report Card. His first appearance was in 2004. His scores show consistent management of Texas’ budget as governor. He received a “B” in five of his six reports, with a “C” in 2012. Below are his scores in each report card.

2004: B

2006: B

2008: B

2010: B

2012: C

2014: B

Perry’s track record is certainly consistent given the tendency of some governors to slip while in office. For instance, George Pataki’s score fell from an “A” to a “D” between his first and last reports.  Mike Huckabee went from a “B” to an “F.”

From fiscal year 2002 to fiscal year 2015, general fund spending in Texas increased 63 percent, outpacing the 50-state average, which grew by 50 percent.

But when these figures are adjusted for population growth, Governor Perry’s record appears much better. Texas’ population grew almost 2.5 times faster than the United States’ population during this period, so comparing per capita general fund spending growth is more instructive.

Texas: Per capita spending grew from $1,430 to $1,802, or 26 percent from FY2002 to FY2015

50-State Average: Per capita spending grew from $1,809 to $2,356, or 30 percent from FY2002 to FY2015

Per capita spending did increase in Texas, but it increased more slowly than spending in other states.

Inflation should also be considered. Once inflation is accounted for, the remaining growth in general fund spending disappears: Texas’ 63 percent spending increase from FY2002 to FY2015 exactly matches population growth plus inflation for that period. Perry also pushed several constitutional provisions to try to lock in this fiscal restraint. He championed an amendment to the Texas constitution that would have limited spending growth to population growth plus inflation, and he tried to require that any tax increase be approved by two-thirds of voters.
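As a quick back-of-the-envelope check, the per capita growth rates quoted above can be recomputed directly from the NASBO dollar figures (the dollar amounts are those cited in the text; the rounding is illustrative):

```python
# Recompute the per capita general fund spending growth rates quoted above.
tx_2002, tx_2015 = 1430, 1802    # Texas per capita spending, FY2002 and FY2015
us_2002, us_2015 = 1809, 2356    # 50-state average per capita spending

tx_growth = (tx_2015 / tx_2002 - 1) * 100
us_growth = (us_2015 / us_2002 - 1) * 100

print(f"Texas: {tx_growth:.0f} percent")             # 26 percent
print(f"50-state average: {us_growth:.0f} percent")  # 30 percent
```

Both results match the figures in the report card comparison above.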

On taxes, Perry’s actions were mixed. In 2003, Perry supported a package that included fee increases. In 2006, Perry supported a plan that raised the cigarette tax by one dollar and repealed the state’s franchise tax in exchange for property tax cuts. The plan also created the complex Texas Margin tax as a replacement. The plan did result in a net decrease in taxes, but it was the wrong approach. The 2012 report card summarizes the issue well:

The new tax [the margin tax] hit 180,000 additional businesses and increased state-level taxes by more than $1 billion annually. The added state revenues were used to reduce local property taxes, but the overall effect of the package has been to centralize government power in the state and reduce beneficial tax competition between local jurisdictions.

In 2009, Perry pushed to increase the exemption on the Margin tax so that it would hit fewer businesses, and he pushed to extend the exemption in 2013.

Perry’s long tenure as Texas governor shows consistency, earning a “B” on five out of six Cato report cards. Perry did not decrease the size of the state’s government, but he did limit its growth to population growth plus inflation.


Another step toward criminalizing advocacy: writing in the Washington Post, Sen. Sheldon Whitehouse (D-R.I.) urges the U.S. Department of Justice to consider filing a racketeering suit against the oil and coal industries for having promoted wrongful thinking on climate change, with the activities of “conservative policy” groups an apparent target of the investigation as well. A trial balloon, or perhaps an effort to prepare the ground for enforcement actions already afoot?

Sen. Whitehouse cites as precedent the long legal war against the tobacco industry. When the federal government took the stance that pro-tobacco advocacy could amount to a legal offense, some of us warned tobacco wouldn’t remain the only or final target. To quote what I wrote in The Rule of Lawyers:

In a drastic step, the agreement ordered the disbanding of the tobacco industry’s former voices in public debate, the Tobacco Institute and the Council for Tobacco Research (CTR), with the groups’ files to be turned over to anti-tobacco forces to pick over the once-confidential memos contained therein; furthermore, the agreement attached stringent controls to any newly formed entity that the industry might form intended to influence public discussion of tobacco. In her book on tobacco politics, Up in Smoke, University of Virginia political scientist Martha Derthick writes that these provisions were the first aspect in news reports of the settlement to catch her attention. “When did the governments in the United States get the right to abolish lobbies?” she recalls wondering. “What country am I living in?” Even widely hated interest groups had routinely been allowed to maintain vigorous lobbies and air their views freely in public debate.

By the mid-2000s, calls were being heard, especially in other countries, for making denial of climate change consensus a legally punishable offense or even a “crime against humanity,” while widely known advocate James Hansen had publicly called for show trials of fossil fuel executives. Notwithstanding the tobacco precedent, it had been widely imagined that the First Amendment to the U.S. Constitution might deter image-conscious officials from pursuing such attacks on their adversaries’ speech. But it has not deterred Sen. Whitehouse.

Law professor Jonathan Adler, by the way, has already pointed out that Sen. Whitehouse’s op-ed “relies on a study that doesn’t show what he (it) claims.” And Sen. Whitehouse, along with Sens. Barbara Boxer (D-Calif.) and Edward Markey (D-Mass.), has been investigating climate-dissent scholarship in a fishing-expedition investigation that drew a pointed rebuke from then-Cato Institute President John Allison as an “obvious attempt to chill research into and funding of public policy projects you don’t like…. you abuse your authority when you attempt to intimidate people who don’t share your political beliefs.”

P.S. Kevin Williamson notes that if the idea of criminalizing policy differences was ever something to dismiss as an unimportant fringe position, it is no longer. (cross-posted from Overlawyered)

Bryan Caplan of George Mason University posted some comments I sent him along with some questions about a recent blog post of his.  His questions are in quotes, my responses follow.  First, some background.

It’s important to separate immigration (permanent) from migration (temporary). Much of what we think of as “immigration” is actually migration, since many migrants eventually return home. Dudley Baines (page 35) summarizes some estimates of return migration from America’s past.

Country/Region of Origin      Return Rate

Nordics                       20%

English & Welsh               40%

Portuguese                    30-40%

Austro-Hungarians & Poles     30-40%

Italians                      40-50%


Gould estimates a 60 percent return rate for Italians – similar to Mexican unauthorized immigrants from 1965-1985. 

There were three parts to the Immigration Reform and Control Act of 1986 (IRCA) that affected both immigration and migration. The first was the amnesty. The second was employer sanctions through the I-9 form, which was supposed to turn off the jobs magnet. The third was increased border security to keep unauthorized migrants out. For the first two questions, I assume the rest of IRCA was passed.

1. How much higher would cumulative Mexican immigration since 1986 have been if the IRCA’s employer sanctions hadn’t been imposed?

Temporary migration would’ve been higher AND more illegal immigrants would have permanently settled in the United States.

IRCA didn’t change the probability of migrating illegally (Figure 6), but it made migrants more likely to stay once they arrived (Figure 7). Since IRCA decreased the return rate while the inflow stayed steady, the population of illegal immigrants grew rapidly. IRCA plugged the drain with border enforcement, not employer sanctions.

Eliminating employer sanctions would increase the inflow of illegal immigrants by increasing their wages but not by much (see point 4 here).  The I-9 did not lower illegal immigrant wages enough to dissuade many from coming because the economic chaos in Mexico made the United States even more attractive by comparison.  Mexican unlawful migration thus would’ve been even greater without employer sanctions because the benefits to working here would’ve been larger.

Amnesty increased permanent settlement through chain migration. IRCA granted green cards to roughly 2.7 million unauthorized immigrants, which allowed those new green card holders to sponsor family members, who were then able to sponsor other family members, and so on. Each green card holder could have sponsored many immigrants. The amnesty of nearly 2.7 million thus increased the number of legal Mexican immigrants by millions more (my best guess).

2. How much higher would cumulative Mexican immigration since 1986 have been if the IRCA’s border security boost hadn’t been imposed?  (Your comments seem to suggest that it actually would have been lower, since guest workers wouldn’t have bothered to bring their families).

Temporary Mexican migration would’ve been higher, BUT fewer of them would’ve settled here permanently.

From 1965-1986, migrants moved between the United States and Mexico because it was easy to cross the border illegally.  If they returned to Mexico and they couldn’t find work, they could always return to the United States and find a job.  That explains most of the 26.7 million entries and 21.8 million subsequent returns of unauthorized Mexican migrants into the United States from 1965 to 1985.  I don’t know what the comparable figures are for 1986-present, but if the pre-IRCA regime continued, they would have been even larger.

Border security raised the price of entering the United States. It also raised the price of leaving, by blocking the option to return if the Mexican economy tanked. Once many migrants decided to stay after getting past the increased border security, they sent for their families and settled here.

Later policies like the 3/10 year bars and the post-9/11 increase in border security combined with the Great Recession caused an even steeper decline in return rates, but IRCA started it (Figure 7).

The longer and increasing terms of residency for unlawful immigrants wouldn’t have happened had the border remained de facto open. Many of the illegal migrants who came after IRCA wouldn’t have brought their families.

3. How much higher would cumulative Mexican immigration since 1986 have been if IRCA hadn’t been passed at all?

More Mexican workers would’ve migrated BUT many fewer of them would have permanently settled here as immigrants.  The circular flow of 1965-1986 would have continued and probably increased. Without the amnesty, there would’ve been roughly 2.7 million fewer Mexicans with green cards, which would’ve meant many fewer green cards for Mexicans in the future. 

P.S. Do your answers account for diaspora dynamics?

I do account for diaspora effects. According to Doug Massey’s data (p. 69), the percentage of migrant heads of household with a spouse and/or children in the United States dropped from 1965 to 1985, loosening their familial ties. Those with immediate family in the United States saw the size of their households increase. From 1965 to 1970, the odds of unlawful immigration by men rose, and they stabilized in the 1980s before IRCA (p. 69). Those behaviors are not consistent with migrants looking to settle permanently in the United States.

After IRCA, the percentage of undocumented immigrants who were women or nonworkers jumped (p. 134), consistent with permanent settlement rather than temporary migration.      

The amnesty portion of IRCA increased the size of the Mexican-American population.  The legal immigration system emphasizes family reunification.  Amnestied Mexicans were thus able to sponsor their family members after receiving their green cards and even more so after about 45 percent of them earned citizenship.  Green cards for Mexicans would’ve increased at a slower rate without IRCA.  IRCA’s amnesty increased the size of the Mexican diaspora and set it up to grow more in the nearly 30 years since then than it would have otherwise.

The family-based immigration system that turned the amnesty into such a long-term increase in Mexican immigration was created by the Immigration Act of 1965, which is reviled by restrictionists today.  Ironically, the family reunification portion was concocted by restrictionists who lobbied for it.  The American Legion and the Daughters of the American Revolution opposed the abandonment of the national origins system under the 1965 Act.  According to historian (and noted restrictionist) Vernon M. Briggs Jr., they lobbied for a family-based immigration system because they thought it would preserve the European ethnic and racial balance of immigration. 

That backfired.      

I have one question for you, Bryan: If the benefits of immigration are as great as we like to argue, why are there so few illegal immigrants?

P.S.  I think about Bryan’s brilliant book The Myth of the Rational Voter more than any other I’ve ever read.  If you have any interest in economics, politics, or are just perplexed by the silly things politicians say, I highly recommend it.   

On Tuesday, Nevada Gov. Brian Sandoval signed into law the nation’s fifth education savings account (ESA) program, and the first to offer ESAs to all students who previously attended a public school. Earlier this year, Sandoval signed the state’s first educational choice law, a very limited scholarship tax credit. Despite their limitations, both programs greatly expand educational freedom, and will serve as much-needed pressure-release valves for the state’s overcrowding challenge.

When Nevada parents remove their child from her assigned district school, the state takes 90 percent of the statewide average basic support per pupil (about $5,100) and instead deposits it into a private, restricted-use bank account. The family can then use those funds to purchase a wide variety of educational products and services, such as textbooks, tutoring, educational therapy, online courses, and homeschool curricula, as well as private school tuition. Low-income students and students with special needs receive 100 percent of the statewide average basic support per pupil (about $5,700). Unspent funds roll over from year to year.
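To put the two deposit amounts in perspective, the percentages and dollar figures quoted above imply a statewide average basic support of roughly $5,700 per pupil (the average here is inferred from the quoted numbers, not taken from the statute):

```python
# Rough ESA deposit amounts implied by the figures quoted above.
statewide_avg = 5100 / 0.9    # implied statewide average basic support (~$5,667)

standard_deposit = 0.90 * statewide_avg   # most students: about $5,100
full_deposit = 1.00 * statewide_avg       # low-income / special needs students

print(round(standard_deposit))  # 5100
print(round(full_deposit))      # 5667
```

The implied full deposit of about $5,667 is consistent with the "about $5,700" figure in the text, allowing for rounding.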

The eligibility requirements for ESA programs in other states are more restrictive. In Florida, Mississippi, and Tennessee, ESAs are limited to students with special needs. Arizona initially restricted ESA eligibility to students with special needs, though lawmakers have since expanded eligibility to include foster children, children of active-duty military personnel, students assigned to district schools rated D or F, gifted students, and children living on Native American reservations.

Gov. Sandoval signs the nation’s first nearly universal ESA program into law. Photo courtesy of Tim Keller.

Research shows that parents in Arizona are overwhelmingly satisfied with the state’s ESA program and, as Lindsey Burke and I recently explained, ESAs are a significant improvement over school vouchers:

ESAs offer several key advantages over traditional school-choice programs. Because families can spend ESA funds at multiple providers and can save unspent funds for later, ESAs incentivize families to economize and maximize the value of each dollar spent, in a manner similar to the way they would spend their own money. ESAs also create incentives for education providers to unbundle services and products to better meet students’ individual learning needs.

One disappointing limitation of Nevada’s ESA is that it is restricted to students who previously attended their assigned district school for at least 100 days. This eligibility requirement unnecessarily excludes students whose assigned school is low-performing, unsafe, or simply not a good fit for that student. It also excludes families and communities who object to what is being taught at the district schools. Hopefully the legislature will expand the ESA eligibility to include all Nevada students in the near future.

At 2pm this Thursday, I will be testifying before the Senate Judiciary Committee’s Subcommittee on Oversight, Agency Action, Federal Rights and Federal Courts at a hearing investigating how the Internal Revenue Service developed the (illegal) “tax-credit rule” challenged in King v. Burwell. Witnesses include three Treasury and IRS officials involved in drafting the rule:

Panel I The Honorable Mark Mazur
Assistant Secretary for Tax Policy
Department of the Treasury

Ms. Emily McMahon
Deputy Assistant Secretary for Tax Policy
Department of the Treasury

Ms. Cameron Arterton
Deputy Tax Legislative Counsel for Tax Policy
Department of the Treasury

The second panel will consist of Michael Carvin (lead attorney for the plaintiffs in King v. Burwell, who argued the case before the Supreme Court), University of Iowa tax-law professor Andy Grewal (who discovered three additional ways, beyond King, that the IRS expanded eligibility for tax credits beyond clear limits imposed by the ACA), and me.

Lincoln Chafee, former U.S. Senator and Governor of Rhode Island, will announce his presidential run this week. Chafee’s fiscal record as governor was moderately liberal, but much more centrist than that of Maryland’s Martin O’Malley.

Chafee served as governor of Rhode Island from January 2011 to January 2015, first as an Independent and then as a Democrat. (He was a Republican during his time in the U.S. Senate.) During his tenure, he received a “D” and a “B” on Cato’s Fiscal Policy Report Card on America’s Governors.

State spending grew substantially while Chafee was governor. From fiscal year 2011 to fiscal year 2012, Rhode Island general fund spending grew 5.2 percent, and it grew another 5.1 percent from FY2012 to FY2013, according to data from the National Association of State Budget Officers (NASBO). NASBO data show a 17 percent increase over Chafee’s entire tenure, almost three times population growth plus inflation for the state during that period.

Chafee promoted some tax increases to fund his expansion in government. In 2012 the state raised the cigarette tax. The sales tax base was expanded to cover clothing, pet services, and taxi rides. Sales tax base expansions can be the right pro-growth policy if they are combined with tax-rate cuts, but that was not the case in Rhode Island. Chafee’s original proposal included even more tax increases, but the legislature prevented several from taking effect, such as an increase in the state’s meal and beverage tax.

Thankfully for Rhode Island, Chafee also supported numerous pro-growth reforms. Rhode Island’s tax reform package that passed in 2014 helped Chafee’s score in the most recent report card. The plan cut the corporate income tax from 9 to 7 percent, reduced the estate tax by increasing the exemption, and repealed the state’s franchise tax. He also supported a robust pension reform plan in 2011 that raised the retirement age and eliminated cost-of-living adjustments for beneficiaries.

Chafee joins a crowded presidential field with his announcement this week. For a Democratic Party that has moved far to the left, he seems to have a more sensible fiscal approach than others in his party.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

We highlight a couple of headlines this week that made us chuckle a bit, although what they portend is far from funny.

The first was from the always amusing “Energy and Environment” section of the Washington Post. Climate change beat writer Chris Mooney penned a piece headlined “The subtle — but real — relationship between global warming and extreme weather events” that was a hit-you-over-the-head piece about how human-caused global warming could be linked to various weather disasters of the past week, including the floods in Houston, the heatwave in India and hurricanes in general.

Mooney starts out, lamenting:

Last week, some people got really mad at Bill Nye the Science Guy. How come? Because he had the gall to say this on Twitter:

Billion$$ in damage in Texas & Oklahoma. Still no weather-caster may utter the phrase Climate Change.

Nye’s comments, and the reaction to them, raise a perennial issue: How do we accurately parse the relationship between climate change and extreme weather events, as they occur in real time?

It’s a particularly pressing question of late, following not only catastrophic floods in Texas and Oklahoma, but also a historic heatwave in India that has killed over 2,000 people so far, and President Obama’s recent trip to the National Hurricane Center in Miami, where he explicitly invoked the idea that global warming will make these storms worse (which also drew criticism).

As the Nye case indicates, there is still a lot of pushback whenever anyone dares to link climate change to extreme weather events. But we don’t have to be afraid to talk about this relationship. We merely have to be scrupulously accurate in doing so, and let scientists lead the way.

We must read different papers than Mr. Mooney. What little pushback there is (with a lot of it coming from us) has done little to impede the ubiquitous and speculative talk (or at the very least, insinuation) that global warming is involved in some material way in (or should we say, “made worse”) each and every extreme weather event. When it comes right down to it, adding significant quantities of greenhouse gases to the atmosphere (which we have done) does impact the flow of radiation through the atmosphere and, in some way, ultimately, the weather. But the precise role that it plays in each weather event (extreme or otherwise), and whether or not such an impact is detectable, noticeable, or significant, is far from scientifically understood—and almost certainly dwarfed by natural variability. That is the true subtlety of the situation. Trotting out some scientist to say, “while we can’t definitively link global warming to individual weather events, this is the sort of thing that is consistent with our expectations” is hogwash devoid of meaning. Such a statement underplays the scientific complexities involved and almost certainly overplays the role of human-caused climate change. If Mooney were accurately quantifying the subtleties, he’d have no business inserting them into his stories at all.

The fact of the matter is we examined the flooding situation in a 2004 article in the International Journal of Climatology[1] and in the Texas region there was no statistically significant change in the rainfall on the heaviest day of the year. Given that earth’s surface temperature hasn’t budged since then, the same should hold today.

The next piece wasn’t really a headline, but rather a tweet. Dr. Chris Landsea, a multi-talented hurricane specialist (researcher, forecaster, historian) from the National Hurricane Center (NHC) sent out this tweet after President Obama stopped by the NHC last week and made a few comments about, what else, the tie-in between human-caused global warming and hurricanes:

The link in Landsea’s tweet points to his article a few years ago that summarizes his well-studied opinion as to the current state of the science of hurricanes and climate change. Unlike many popular press/government stories, Landsea doesn’t shy away from the complexities and the confounding factors—which in fact aren’t subtle at all.

For example, when it comes to global warming’s role in modifying the strength of hurricanes, Landsea has this to say:

It is likely - in my opinion - that manmade global warming has indeed caused hurricanes to be stronger today. However, such a linkage without answering the more important question of - by how much? - is, at best, incomplete and, at worst, a misleading statement. The 1-2 mph change currently in the peak winds of strong hurricane due to manmade global warming is so tiny that it is not measureable by our aircraft and satellite technologies available today, which are only accurate to about 10 mph (~15 kph) for major hurricanes.

Landsea touches on topics of hurricane strength, number, lifespan, tracking, monitoring, demographics, damages, and, most importantly, implications. For example:

So after straightforward consideration of the non-meteorological factors of inflation, wealth increases, and population change, there remains no indication that there has been a long-term pick up of U.S. hurricane losses that could be related to global warming today. There have been no peer-reviewed studies published anywhere that refute this.

As an easy-to-read, extremely informative, and insightful piece by one of the world’s leading hurricane researchers, this article is not to be missed. What’s more frightening than hurricanes themselves is how far apart the opinions of leading scientists are from those of leading politicians.

But perhaps our favorite was this headline, “I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here’s How,” which tells the tale of how a team of conspirators showed how easy it is to get completely meaningless research findings into the scientific literature and then generate front-page headlines and articles in diverse media sources around the world.

The article, by science reporter-cum-dietary health researcher John Bohannon, is a must-read. Laid out in chapters like the screenplay for The Sting, the piece describes how the whole thing went down, from “The Setup” to “The Con,” from “The Hook” to “The Mark” to “The Score.” Even more disconcerting than how they got the bad science into the literature—which is worrisome enough—is Bohannon’s description of the state of scientific reporting. While his remarks are made about science reporters covering diet, they apply, in spades, to many on the climate change beat. From Bohannon:

We journalists have to feed the daily news beast, and diet science is our horn of plenty. Readers just can’t get enough stories about the benefits of red wine or the dangers of fructose. Not only is it universally relevant—it pertains to decisions we all make at least three times a day—but it’s science! We don’t even have to leave home to do any reporting. We just dip our cups into the daily stream of scientific press releases flowing through our inboxes. Tack on a snappy stock photo and you’re done.

The only problem with the diet science beat is that it’s science. You have to know how to read a scientific paper—and actually bother to do it. For far too long, the people who cover this beat have treated it like gossip, echoing whatever they find in press releases. Hopefully our little experiment will make reporters and readers alike more skeptical.

If a study doesn’t even list how many people took part in it, or makes a bold diet claim that’s “statistically significant” but doesn’t say how big the effect size is, you should wonder why. But for the most part, we don’t. Which is a pity, because journalists are becoming the de facto peer review system. And when we fail, the world is awash in junk science.

There’s a lot more really juicy stuff in this piece. You ought to have a look—but your trust in science and the media will certainly be shaken, if it’s not crumbled already.

[1] Michaels, P.J., et al., 2004. Trends in Precipitation on the Wettest Days of the Year across the Contiguous United States. International Journal of Climatology 24, 1873-1882.

Ariana Eunjung Cha reports on the newest target of public shaming in China:

Long before the Internet was invented, China’s Communist Party was already skilled in the art of public shaming.

Dissidents have been known to disappear and then reappear after having published essays of self-criticism. On state-run television, business people, celebrities and editors have appeared so regularly from behind prison bars speaking about their misdeeds that the segments were like an early take on reality TV.

Now officials are using the tactic on another group that it feels has wronged the country: smokers.

Beijing has not relied just on public humiliation. It has banned smoking in indoor public places and workplaces, complete with large fines and massive propaganda campaigns. It also plans to

take more dramatic measures by posting the names of those breaking the law three times on a Web site in order to shame them.

That may not sound like a big deal, but in Asia the reaction of online citizens to inappropriate behavior can be harsh. Among the most infamous cases is one in 2005 when a woman in South Korea who refused to clean up her dog’s waste was caught in photos that were posted online. Internet users quickly discerned her identity and she was harassed so badly that she reportedly quit her university.

We expect this sort of thing in a country ruled by the Chinese Communist Party and still influenced by Maoist ideas and practices. What’s disappointing is to see such tactics spreading in a country founded on the principles of life, liberty, and the pursuit of happiness. Where once people feared harassment for giving to gay-rights groups, now we see people harassed for giving money to oppose gay marriage. Silicon Valley CEO Brendan Eich was forced to resign for having donated $1000 to the campaign for Proposition 8. A small-town pizzeria in Indiana was faced with a firestorm of media, Twitter harassment, and death threats after one of its family owners said they wouldn’t provide pizzas for a hypothetical gay wedding reception. Two gay entrepreneurs, generous contributors to gay causes, were targeted after they had dinner with anti-gay-rights senator Ted Cruz. Numerous people caught in such crosshairs, including Eich and the dinner hosts, have issued statements of self-criticism, just like during the Cultural Revolution in China. Andrew Sullivan, a pioneering crusader for gay marriage, deplored the defenestration of Eich, asking in a blog post titled “The Hounding of a Heretic”:

Will he now be forced to walk through the streets in shame? Why not the stocks? The whole episode disgusts me – as it should disgust anyone interested in a tolerant and diverse society. If this is the gay rights movement today – hounding our opponents with a fanaticism more like the religious right than anyone else – then count me out. If we are about intimidating the free speech of others, we are no better than the anti-gay bullies who came before us.

And now we have “drought shaming” in California. The state refuses to do something sensible like charging market prices for water, so it has resorted to rationing and hectoring. And bring on the shaming:

California’s drought is turning neighbor against neighbor, as everyone seems to be on the lookout for water wasters….

In this new age of social media and apps for everything, so called “droughtshaming,” can be much more public, and nastier than what Demian got a taste of.

Just look at Twitter. If you search the social media site for the hashtags #DroughtShame or #DroughtShaming, you’ll find hundreds, if not thousands of very public reprimands of water wasters, often with pictures, video, and a lot of addresses….

And there’s more — droughtshaming apps….

There’s another, newer app devoted only to droughtshaming, and it’s called, obviously, DroughtShameApp. Creator Dan Estes, a Santa Monica real estate agent, says he made the app just a few weeks ago out of a feeling of responsibility.

“I think like a lot of Angelenos, I’m a little freaked out by the drought,” he told NPR. “It just seems like something has to be done to avoid a long-term catastrophe.” Estes’ app lets users upload geo-located photos, with captions and addresses to report water wasters.

In many of these cases, actual legal coercion goes along with the public shaming. Beijing will fine smokers and bars, florists are being forced to supply flowers for gay weddings, and California has mandatory water restrictions. But the public shaming adds a new dimension of mob behavior and chilling effects.

Technology is part of the problem here. Back in 1978, when gays and their allies feared being on a list of opponents of the antigay Briggs Initiative, the list of donors was officially public. But you had to go to the office of the secretary of state (or maybe the county clerk) to inspect such a list. By 2008, when Proposition 8 was on the ballot, donor lists could be downloaded and posted on the internet in alphabetical and searchable form. From the privacy of your own home you could find out whether your friends, neighbors, or favorite celebrities had contributed to the side you found morally reprehensible. Today Facebook, Twitter, and specialized apps make it easier than ever to point a public finger at anyone who offends you.

I’m a First Amendment absolutist. I don’t want anyone forbidden to publicly criticize others. But I don’t want to live in a Cultural Revolution either. Chinese novelist Murong Xuecun remembers his childhood:

[My] teacher summoned me before an assembly of the whole school to read a 600-word essay of self-criticism that he had made me write. I admitted I was lazy. I said I didn’t respect discipline and had let down my teachers and parents. My classmates appeared amused and my teacher satisfied. For me it was like I had been exposed naked to all.

This kind of scene is not uncommon. From primary school to university, I witnessed countless such public humiliations: for fighting, cheating or petty misdemeanors. Caught committing any of these offenses and you may have to stand before the student body, criticizing your own “moral flaws,” condemning your character defects, showing yourself no mercy, even exaggerating your faults. Only those who have endured it can know the depth of shame one feels.

Our new bouts of Twitter shaming and demands for firings and public apologies feel too much like that. Murong went on to write:

Socialist countries tend to emphasize national and collective interest ahead of individual rights and dignity. This has been a constant throughout 66 years of Communist rule in China, but in the past two years the tendency has become increasingly strident. Cases of public shaming show us how in the name of some great cause, individual rights, dignity and privacy can all be sacrificed.

Respecting the rights of individual citizens — even wrongdoers — is a fundamental principle of a moral society. 

Indeed it is. Calling out genuine prejudice or threatening behavior is one thing. But public denunciations of people for holding the positions that, say, President Obama held a few years ago are too reminiscent of the forced conformity of authoritarian regimes. Let’s not let technology turn us into a new theocracy.

By nature, human beings can be pessimistic. But, depending on their political persuasion, people tend to focus on different things. Among the Progressive shibboleths of recent decades were concerns over overpopulation, the exhaustion of natural resources, and coming widespread famine. The data, however, tell a different story. Population growth is leveling off, and food is more plentiful than ever before. The New York Times and National Public Radio were forced to admit as much in two articles over the last couple of days.

On May 31, 2015, the New York Times published a story entitled “The Unrealized Horrors of Population Explosion.” The article admits that the planet is not facing a problem of overpopulation. In fact, due to increased prosperity around the world, women have access to more information, education, and career choices. Female empowerment, combined with massive improvements in healthcare and dramatically falling infant mortality rates, has led to the total fertility rate plummeting from 5 babies per woman in the 1950s to 2.5 in the 2010s.

To put it in the dry language of economics, as women’s earning potential increases, the opportunity cost of having babies increases as well. As such, more women choose to enter the labor force rather than stay at home and raise children. The TFR of 2.5 babies per woman is still above the replacement rate of 2.1, but United Nations demographers predict that the world’s population will level off at 9 billion people and then start falling. That is already happening in a number of European countries. Germany’s population, for example, is predicted to decline from 80 million today to 71 million in 2060.
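For what it’s worth, the German projection implies a fairly gentle rate of shrinkage. Here is a quick back-of-the-envelope check in Python (the assumption that the 80-million figure refers to roughly 2015 is my own inference, not stated in the source):

```python
# Implied average annual rate of Germany's projected population decline:
# from 80 million (assumed baseline of ~2015) to 71 million in 2060.
years = 2060 - 2015
annual_rate = (71 / 80) ** (1 / years) - 1
print(f"Implied average annual change: {annual_rate:.2%}")
```

That works out to a decline of roughly a quarter of one percent per year, slow enough to unfold over generations rather than as a sudden collapse.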

So much, then, for the “settled” overpopulation consensus, which led, among other things, to forced sterilization of thousands of Indian men and women. As one author writes, “Incentives – radio sets, cash, food – were offered at first to volunteers who put themselves under the knife. When these failed to attract big numbers, Sanjay [Gandhi who was the son of the then Prime Minister Indira Gandhi and in charge of forced sterilization] handed down targets to government officials. The ‘find and operate’ missions that followed were directed at the most vulnerable and defenseless individuals in the country…. One [Indian] state reported 600,000 operations in two weeks…. Policemen on sterilization assignments ransacked entire villages in their pursuit of adult men. The threat to drop bombs on villages was issued.”

(The Stanford University professor Paul Ehrlich, who more than anyone was responsible for the overpopulation hysteria that gripped the late 1960s and 1970s, is still alive, still publishing, still listened to and still admired. He owes the world an apology.) 

Let’s turn to the question of food supply. On June 1, 2015, NPR published an article entitled “There Are 200 Million Fewer Hungry People Than 25 Years Ago.” According to the public broadcaster, “The world isn’t as hungry as it used to be. A U.N. report has noted that 795 million people were hungry in the year 2014. That’s a mind-boggling number. But in fact it’s 200 million lower than the estimated 1 billion hungry people in 1990. The improvement is especially impressive because the world population has gone up by around 2 billion since the ’90s.”

Put differently, hunger is in retreat in spite of a still-growing population. Why? Because of increasing crop yields facilitated by modern machinery, synthetic fertilizers and faster transport. To give one example, in 1866, American farmers produced 24 bushels of corn per acre. In 2012, they produced 122 bushels of corn per acre. Concomitantly, the price of corn declined from $5.55 in 1866 (1982 dollars) to $3.15 in 2012.

As Professor Jesse H. Ausubel of the Rockefeller University points out, “If the world farmer reaches the average yield of today’s US corn grower during the next 70 years, ten billion people eating as people now on average do will need only half of today’s cropland. The land spared exceeds Amazonia. This will happen if farmers sustain the yearly 2 percent worldwide yield growth of grains achieved since 1960, in other words if social learning continues as usual.”
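Ausubel’s scenario rests on simple compound growth, and the numbers above can be sanity-checked in a few lines of Python (the ~7 billion current-population baseline is my own illustrative assumption, not a figure from the quote):

```python
# U.S. corn yields rose from 24 bushels/acre in 1866 to 122 in 2012.
implied_growth = (122 / 24) ** (1 / (2012 - 1866)) - 1
print(f"Implied annual U.S. corn yield growth, 1866-2012: {implied_growth:.2%}")

# Ausubel's scenario: 2 percent yearly yield growth sustained for 70 years.
yield_multiple = 1.02 ** 70
print(f"Yield multiple after 70 years at 2%/yr: {yield_multiple:.2f}x")

# If yields roughly quadruple while the population fed grows from an assumed
# ~7 billion to 10 billion, the cropland needed per unit of demand shrinks.
land_fraction = (10 / 7) / yield_multiple
print(f"Fraction of today's cropland needed: {land_fraction:.0%}")
```

Seventy years of 2 percent growth compounds to roughly a fourfold yield increase, which is why the quoted scenario can feed 10 billion people on far less land than farmers use today.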

(The hero of increasing crop yields and improved global food supply was the father of the Green Revolution, Norman Borlaug, who is credited with saving more human lives than anyone in history. The world owes him a great deal of gratitude.)

It took a while for the New York Times and NPR to acknowledge what anyone familiar with Professor Julian Simon’s work has known since the publication of Simon’s 1981 book The Ultimate Resource. The key to feeding a growing population is to realize that human beings are intelligent animals. Unlike rabbits, people can find ways around scarcity.

Officials often try to implement dubious or controversial initiatives over weekends or holidays, when journalists and the public are likely to be less vigilant than normal.  Three-day holiday weekends are especially popular candidates for such maneuvers.  It is perhaps unsurprising that there were indications of a significant change regarding U.S. policy toward Syria on the Sunday before Memorial Day.  Turkey’s foreign minister announced that his country and the United States had agreed in principle to provide air protection for some 15,000 Syrian rebels being trained by Ankara and Washington once those insurgents re-enter Syrian territory.

Granted, an agreement in principle could break down over the details of implementation, and the Obama administration has yet to confirm the Turkish account.  Nevertheless, there are hints of an impending escalation of U.S. involvement in Syria’s murky civil war.  A lobbying effort by proponents of U.S. aid to factions trying to unseat dictator Bashar al-Assad is definitely taking place.  The number two Democrat in the Senate, Dick Durbin of Illinois, has openly endorsed establishing and protecting “safe zones” for insurgents, and he is hardly alone.  

In essence, the United States and its Turkish ally appear to be contemplating the imposition of a “no-fly” zone over northern Syria to prevent Assad’s forces from suppressing the rebel fighters.  It is pertinent to recall that a fateful step in America’s disastrous entanglement in Iraq was the creation of such zones against Saddam Hussein to protect Kurdish and Shiite insurgents in the 1990s.  A similar measure should not be undertaken lightly in Syria.

Indeed, the Syrian conflict is a cauldron of ethno-religious feuds involving multiple factions.  To a significant extent, it represents a bitter struggle for power between Assad’s coalition of religious minorities (including his Alawite political base and its Christian allies) and the Sunni Islamic majority.  That, in turn, is at least partly a broader regional power struggle between Shiite Iran and the major Sunni powers, primarily Saudi Arabia, Qatar, and Turkey, using Syrian factions as proxies. To make matters even more complex, Kurdish secessionists are exploiting the turmoil to try to establish an autonomous region in Syria’s north and northeast akin to the successful de facto Kurdish state in northern Iraq.

To be blunt, America does not have a dog in that fight.  It is especially naïve to believe that U.S. and Turkish-trained insurgents would be a strong “moderate” alternative to both Assad and ISIS.  The mythical moderate Syrian majority is just that: mythical.  Too many of the supposedly moderate rebel factions that we supported earlier in the conflict turned out to be radical Islamic fellow travelers.  Having been burned by that experience, U.S. policymakers should be doubly cautious about further entangling the United States in Syria’s troubles.

Establishing a de facto no-fly zone would be a momentous, potentially very dangerous step.  At a minimum, such a change should be implemented only after a far-reaching public discussion, an extended debate in Congress, and a formal congressional vote authorizing that action.  It is disgraceful that officials might even consider trying to smuggle such an escalation of policy into practice through an announcement by an allied government in the middle of a holiday weekend.

The Transportation Security Administration (TSA) has another failure on its hands. In recent tests, undercover investigators smuggled mock explosives and banned weapons through U.S. airport checkpoints 96 percent of the time. According to ABC, “In one case, agents failed to detect a fake explosive taped to an agent’s back, even after performing a pat down that was prompted after the agent set off the magnetometer alarm.”

The unionized TSA has a history of inept management. Reports in 2012 by various House committees found that TSA operations are “costly, counterintuitive, and poorly executed,” and the agency “suffers from bureaucratic morass and mismanagement.” Former TSA chief Kip Hawley argued in an op-ed that the agency is “hopelessly bureaucratic.” And in 2014, former acting TSA chief Kenneth Kasprisin said that TSA has “a toxic culture” with “terrible” morale.

TSA has a penchant for wasting money on useless activities, leaving it less to spend on things that benefit travelers, such as more screening stations. A GAO report, for example, found that TSA continues to spend $200 million a year on a program to spot terrorists by their suspicious behaviors — yet the program does not work.

Perhaps most importantly, studies have found that TSA security performance is no better, and possibly worse, than private-sector screening, which is allowed at a handful of U.S. airports. I list some of the studies here.

The solution is to dismantle TSA and move responsibility for screening operations to the nation’s airports. The government would continue to oversee aviation safety, but airports would be free to contract out screening to expert aviation security firms. Such a reform would end TSA’s conflict of interest stemming from both operating airport screening and overseeing it.

Private airport screening is a successful approach used by other nations. All major airports in Canada use private screening firms, as do about three quarters of Europe’s major airports. That practice creates a more efficient security structure, and allows governments to focus on aviation intelligence and oversight.

Over a decade of experience has shown that the nationalization of airport screening under the Bush administration was a mistake. Let’s learn from reforms abroad, and bring in the private sector to boost the quality of our aviation security system.

For more on TSA’s failures and reform options, see here.

Every so often, I get asked why I’m so rigidly opposed to tax hikes in general and so vociferously against the imposition of new taxes in particular.

In part, my hostility is an ideological reflex. When pressed, though, I’ll confess that there are situations - in theory - where more taxes might be acceptable.

But there’s a giant gap between theory and reality. In the real world, I can’t think of a single instance in which higher taxes led to a fiscally responsible outcome.

That’s true on the national level. And it’s also true at the state level.

Speaking of which, the Wall Street Journal is - to put it mildly - not very happy at the tax-aholic behavior of Connecticut politicians. Here’s some of what was in a recent editorial.

The Census Bureau says Connecticut was one of six states that lost population in fiscal 2013-2014, and a Gallup poll in the second half of 2013 found that about half of Nutmeg Staters would migrate if they could. Now the Democrats who run the state want to drive the other half out too. That’s the best way to explain the frenzy by Governor Dannel Malloy and the legislature to raise taxes again… Mr. Malloy promised last year during his re-election campaign that he wouldn’t raise taxes, but that’s what he also said in 2010. In 2011 he signed a $2.6 billion tax hike promising that it would eliminate a budget deficit. Having won re-election he’s now back seeking another $650 million in tax hikes. But that’s not enough for the legislature, which has floated $1.5 billion in tax increases. Add a state-wide municipal sales tax that some lawmakers want, and the total could hit $2.1 billion over two years.

In other words, higher taxes in recent years have been used to fund more spending.

And now the politicians are hoping to play the same trick another time.

Apparently they don’t care that they’ve turned the Nutmeg State into a New England version of Illinois.

…the state grew a scant 0.9% in 2013, the last year state data are available. That was tied for tenth worst in the U.S. The state’s average compounded annual growth for the last four years is 0.42%. Slow growth means less tax revenue but spending never slows down. Some “40% of the state budget goes to government employee compensation and benefits, including payroll, state pensions, teacher pensions and current and retiree health care,” says Carol Platt Liebau, president of the Hartford-based Yankee Institute. …The Tax Foundation ranks Connecticut as one of the 10 worst states to do business. The state finished last in Gallup’s Job Creation Index in 2014 and now ties with Rhode Island for the worst job creation in the index since 2008.

What’s particularly discouraging is that Connecticut didn’t even have an income tax twenty-five years ago. But once the politicians got a new source of revenue, it’s been one tax hike after another.

Not too many years ago Connecticut was a tax refuge for New York City workers, but since it imposed an income tax in 1991 the rate has kept climbing, as it always does.

There are a couple of lessons from the disaster in Connecticut.

First and foremost, never give politicians a new source of revenue, which has very important implications for the debate in Washington, DC, about a value-added tax.

Unless, of course, you want to enable a bigger burden of government.

And for the states that don’t already have an income tax, the lesson is very clear. Under no circumstances should you allow your politicians to follow Connecticut on the path to fiscal perfidy.

Yet that’s exactly what may be happening in America’s northwest corner. As reported by the Seattle Times, there’s a plan percolating to create an income tax in the state of Washington. It’s being sold as a revenue swap.

State Treasurer Jim McIntire has a “grand bargain” in mind on tax reform and he wants to bend your ear. …the McIntire plan would institute a 5 percent personal-income tax with some exemptions, eliminate the state property tax and reduce business taxes. The plan would raise billions of dollars… The proposal also would lower the state sales tax to 5.5 percent from 6.5 percent.

But taxpayers should be very suspicious, particularly since politicians are talking about the need for more “investment,” which is a common rhetorical trick used by politicians who want to squander more money.

It certainly happens all the time in Washington, and it’s also happening in the Pacific Northwest.

“It is mathematically impossible for us to sustain an adequate investment in education on a shrinking tax base,” he said.

And when you read the fine print, it turns out that the politicians (and the interest groups in the government bureaucracy) want a lot more additional money from taxpayers.

…the plan would raise $7 billion in state revenue but would lower local levies by $3 billion, for an overall increase of about $4 billion.

Advocates of the new tax would prefer to avoid any discussion of big-picture principles.

“We need to have less of an ideological conversation about this,” he said in a news conference.

And their desire to avoid a philosophical discussion is understandable. After all, the big spenders didn’t fare so well the last time voters had a chance to vote on whether the state should impose an income tax.

Voters may not welcome McIntire’s argument, either. In 2010, a proposed income tax on high earners failed by a nearly 30-point margin.

The voters in Washington were very wise back in 2010, so let’s hope they haven’t lost their skepticism about the revenue plans of politicians over the past few years.

There’s every reason to suspect, after all, that the adoption of an income tax would be just as disastrous for the Evergreen State as it was for the Nutmeg State.

To close, I want to share some great advice that was presented by the always sound Professor Richard Vedder. I was at a conference a few years ago where he was also one of the speakers. Asked to comment on whether the Lone Star State should have an income tax, he threw his hands in the air and cried out with passion that, “Texas should give the Alamo to Osama bin Laden before allowing an income tax.”

So if I’m ever asked to speak in Seattle on fiscal policy, I’m going to steal Richard’s approach and warn that “The state of Washington should give the Space Needle to North Korea before allowing an income tax.”

I doubt I’ll capture Professor Vedder’s rhetorical flair, but there won’t be any doubt that I’ll be 100-percent serious about the dangers of a state income tax.

And what about my home state of Connecticut?

Well, I don’t know of any big landmarks that they could have traded to avoid an income tax. About the only “good” thing to say is that New York’s tax system is probably even worse.

It has been 800 years since English barons negotiated a written peace agreement with King John. The original June 1215 agreement was revised and reissued numerous times, with the 1217 version gaining the title Magna Carta (“Great Charter”). Over the centuries, the document has had a powerful influence on the evolving British legal system and government.

The Great Charter will be explored at a Cato conference this week, and David Boaz recently blogged about the document’s importance to the American founding.

If you are interested in a very brief primer, I noticed this article (page 64) by British historian David Starkey in BBC History magazine. Starkey describes the 1215 charter as a radical break, and also the beginning of a long evolutionary process of building parliamentary government in Britain.

Here is the magazine’s summary of a conversation with Starkey, who has an upcoming book on the topic:

Magna Carta was initially drafted in 1215 in an attempt to broker peace between England’s barons and the unpopular King John. It failed, and the country was plunged into civil war. Following John’s death the charter then underwent a series of revisions over the next decade. An updated version was issued in 1216 by the government of his successor, the young Henry III, in an attempt to placate the rebels. Having won the war, the king issued a new edition in 1217 in order to cement peace. The final version was produced in 1225 in return for a grant of taxation.

And here are some of Starkey’s thoughts:

[Magna Carta] set out to do three things. Firstly, to bridle a king, John, who was dangerous and unpredictable and made his whim the law, and secondly, to make it impossible for any other king to rule in the same way. It was successful in both of those things. The third thing was the great change, and something very different: it set out to create machinery that absolutely bound any king in iron to its measures.

… One of the things that we forget is that the Magna Carta of 1215 had 62 or 63 clauses, while the long-term one has in the region of 40. A third of it was struck out in 1216 …

… It had an immense and immediate impact on law and on the development of law. Individual clauses are very quickly pleaded. What’s striking is how many copies were circulated. It forced governments to behave differently, and set rules for good behaviour and, once the charter was reissued in 1225, it became impossible to impose general taxation without consent.

I think you are repeatedly struck by the ambition of 1215. Whatever you may think about the motives of the people like Robert Fitzwalter, clearly I rather respect ambition. I respect radicalism; I don’t necessarily like it, but I respect it. They are intellectually ambitious, which is impressive whatever one thinks. How do we go about setting an absolute monarch in chains?

… The year 1215 really is the beginning of a very particularly English politics – and I’m daring to use the word English – which has actually survived 800 years. The futures of England and the English political system are first sketched out in 1215 – or rather, in that crucial decade-long crisis of the charters from 1215 to 1225. You can trace so much back to that point: the whole dialogue of Whig and Tory; particular models of statesmanship that constantly repeat themselves; this crisis of charters leading directly to the establishment of parliament. The whole structure of parliamentary government really begins with the reissue of the charter in 1225.

For more on Magna Carta, the British Library website has useful resources.

True to form, in Elonis v. United States the Supreme Court continued its unparalleled defense of free speech – this time in the social-media context. Also true to form, however, Chief Justice John Roberts put together a near-unanimous majority by shying away from hard questions and thus leaving little guidance to lower courts.

The case involved a statute that made it a federal crime to transmit in interstate commerce – the Internet counts – “any communication containing any threat … to injure the person of another.” Based on a bizarre series of Facebook posts styled largely on the lurid lyrical stylings of Eminem, Anthony Elonis was convicted under that law of threatening his wife, the police, an FBI agent, and a kindergarten class. Yet prosecutors didn’t prove that Elonis intended to threaten anyone or even understood his words as being threatening. All they showed was that the individuals in question felt threatened by the posts. The Supreme Court correctly ruled that that’s not enough, that negligently throwing around violent rap lyrics shouldn’t get someone thrown in prison. As Roberts noted, the general rule is that a “guilty mind” – what lawyers call mens rea – is a necessary element of any crime.

But alas that’s as far as Roberts went: since the statute in question doesn’t specify the requisite state of mind, mere negligence isn’t enough. He did not say – the Court did not rule at all – whether an amended statute criminalizing negligent speech would pass First Amendment muster. (This issue was the focus of Cato’s amicus brief.) Indeed, as Justice Alito points out in partial dissent, the majority opinion doesn’t even say whether “reckless” Facebook posts come under the statute’s purview (or whether that reading would in turn satisfy the First Amendment).

In short, I’m glad that amateur poet “Tone Dougie” (Elonis’s nom de rap) won’t be practicing his art in the hoosegow, but the Supreme Court’s minimalism has guaranteed this type of case – and maybe even this defendant – an encore. Particularly as social media and other new means of expression evolve, the justices need to do more than narrowly slice speech-chilling criminal laws.