Policy Institutes

Despite recent gains around the country, civil asset forfeiture reform suffered a setback in Maryland when Gov. Larry Hogan (R) vetoed a bill that would have placed restraints on the state’s civil forfeiture regime.

Civil asset forfeiture is a process by which the government is able to seize property (cash, vehicles, homes, hotels, and virtually any other item you can imagine) and keep the proceeds without ever charging the victim with a crime.  The bill, SB 528, would have established a $300 minimum seizure amount, shifted the burden of proof to the state when someone with an interest in the seized property asserts innocent ownership (e.g. a grandmother whose home is taken when her grandson is suspected of selling drugs out of the basement), and barred state law enforcement agencies from using lax federal seizure laws to circumvent state law.

In vetoing the measure, Gov. Hogan claimed that restraining civil asset forfeiture “would greatly inhibit” the war on drugs in the midst of a heroin epidemic and interfere with joint federal/state drug task forces. Gov. Hogan admitted that asset forfeiture laws “can be abused,” but argued that their utility outweighed the risk of abuse. 

Each of these assertions is misguided.

Civil forfeiture reform would certainly make it more difficult for law enforcement to seize property from citizens not charged with crimes.  Indeed, that is the entire purpose of reforming the law.  Likewise, the presumption of innocence, right to due process, and warrant requirement make it more difficult for the government to prosecute people suspected of crimes.  Those checks on hostile government action exist because governments with unfettered authority to summarily plunder and punish tend to do just that, and the litany of civil forfeiture horror stories is proof.

Therefore civil asset forfeiture is not merely susceptible to abuse; civil asset forfeiture is abuse.  Under no circumstances should someone be forced to forfeit their money, property, or even their home to the government on suspicion alone. The “inhibitions” Gov. Hogan’s statement laments are in fact the most fundamental defenses for private property and due process in a country founded to protect them.

Governor Hogan’s appeal to the efficacy of the drug war is similarly misguided.  We’re told that the prevalence of drugs, especially heroin, in Maryland is reason enough to keep forfeiture laws lax.  But decades of a failed drug war have proven the inefficacy of asset forfeiture as a means of stemming the flow of narcotics, and continuing that failure is no justification for abolishing the due process and private property rights of people who aren’t even charged with criminal behavior.

Remember: even an outright abolition of civil forfeiture wouldn’t mean the police couldn’t seize property from drug traffickers; it would just require the state to prove its suspicions in court before it takes someone’s property.  Criminal asset forfeiture would remain available to law enforcement inasmuch as there is any legitimate law enforcement justification for seizing property.

Lastly, Gov. Hogan’s veto statement announces the establishment of a working group, made up primarily of federal and state law enforcement and prosecutors (with a single seat going to the public defender), to decide whether any change to forfeiture law “is warranted” to prevent abuse and ensure law enforcement can still fight the war on drugs.  Tasking the very people who profit from civil forfeiture abuses with deciding whether changes are warranted casts immense doubt on the possibility of meaningful reform.

SB 528 is already a compromise bill.  It doesn’t abolish civil asset forfeiture, as New Mexico did.  It merely raises the protections due to innocent owners and requires state law enforcement to use state laws instead of excessively permissive federal forfeiture laws.  If even that is too much for Governor Hogan to tolerate, it seems unlikely that a working group of police and prosecutors is going to suggest much in the way of meaningful reform.

Civil asset forfeiture reform is not a partisan issue.  New Mexico’s abolition of the practice resulted from a bill that passed unanimously through both houses of the legislature and was signed by Republican Governor Susana Martinez.  Legislation reining in civil forfeiture in Montana was authored by Rep. Kelly McCarthy and signed by Governor Steve Bullock, both Democrats.

This is not a case of Republicans versus Democrats.  It’s a battle between those who believe that due process and private property rights trump the revenue generation and administrative ease of the state and those who believe that those rights are acceptable collateral damage in the war on crime.  Governor Hogan has chosen the wrong side of this debate.

In what has been aptly named “the world’s dumbest trade war,” both Europe and America have fought to limit imports of low-cost Chinese solar panels.  Much to the chagrin of anyone who likes solar power, the United States and the European Union have imposed high tariffs on Chinese panels in order to protect their own subsidized domestic industries. 

In 2013, the EU negotiated a deal with Chinese solar manufacturers that exempted them from the duties as long as they agreed to sell panels above a set minimum price.  By managing trade in this way, European authorities are essentially creating a solar cartel that divvies up market share among established companies who agree not to compete on price.

But cartel arrangements are notoriously difficult to maintain because any member of the group can ruin the scheme by reneging.  That seems especially likely when the cartel arrangement was forced on its members by government in the first place.

So it is that some Chinese companies have tried to find innovative ways to compete despite government price controls.  According to the Wall Street Journal:

Among other violations of the settlement, the commission said Canadian Solar offered unreported “benefits” to its customers in Europe to buy their panels, effectively lowering the sales price below the minimum import-price set by the agreement.

The commission also questioned the practice by Canadian Solar and ReneSola of selling solar cells to firms in non-EU countries for assembly into panels that are then sold to the EU. Because the EU tariffs only apply to panels coming from China, the practice, though not a direct violation of the agreement, allows the two firms’ solar cells to enter the 28-nation EU unrestricted by the agreement.

Darn those Chinese and their legal attempts to help Europeans reduce greenhouse gas emissions through mutually beneficial exchange.  Don’t they know the EU wants prices to stay high to prop up subsidized domestic producers?  Shame on them!

In all seriousness, green industrial policy has become a global problem that will only grow as long as governments find the benefits of free trade in wind and solar power equipment less appealing than doling out privilege through managed trade.

Former Texas governor Rick Perry announced his candidacy for the 2016 GOP presidential nomination earlier today. Many recall his 2012 bid, which came to a rather spectacular end when Gov. Perry, on live television, forgot the name of the third federal agency he promised to eliminate if elected president. However, in a recent WSJ op-ed, Gov. Perry redeemed himself by offering a real candidate for elimination: the Export-Import Bank.

The Export-Import Bank (Ex-Im) provides financing and loan guarantees at below-market rates to foreign purchasers looking to buy products from American exporters. For example, if Emirates Air wants to buy planes from Boeing, Ex-Im can provide a loan guarantee, reducing the interest rate Emirates will pay, and thus incentivizing Emirates to buy from Boeing rather than Airbus.
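To make the mechanism concrete, here is a minimal back-of-the-envelope sketch. Every number in it (aircraft price, interest rates, loan term) is a hypothetical placeholder, not an actual Ex-Im or aircraft-financing figure; the point is only to show how a guarantee that lowers the borrower’s rate changes the cost of the purchase.

```python
# Hypothetical illustration of how a loan guarantee can tilt a purchase decision.
# All numbers (loan size, rates, term) are made up for the example; they are not
# actual Ex-Im Bank or aircraft-financing figures.

def total_interest(principal, annual_rate, years):
    """Total interest paid on a standard annual-payment amortizing loan."""
    r = annual_rate
    payment = principal * r / (1 - (1 + r) ** -years)  # annuity payment formula
    return payment * years - principal

principal = 150e6        # hypothetical price of one wide-body aircraft, in dollars
market_rate = 0.06       # rate the buyer might face without a guarantee (assumed)
guaranteed_rate = 0.04   # rate with a government guarantee behind the loan (assumed)
term_years = 12

saving = total_interest(principal, market_rate, term_years) - \
         total_interest(principal, guaranteed_rate, term_years)

print(f"Interest saved by the guarantee over {term_years} years: ${saving / 1e6:.1f} million")
```

The same arithmetic is what leaves an unsubsidized domestic buyer of the identical product at a relative cost disadvantage.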

Ex-Im’s supporters claim that these subsidies create jobs and finance domestic economic growth. But they fail to consider the ensuing downstream effects, which Bastiat termed “ce qu’on ne voit pas”–that which is unseen. As the Cato scholar Daniel Ikenson makes clear, every dollar Ex-Im provides to subsidize foreign purchasers of U.S.-produced products discriminates against U.S. consumers of the same products. For example, when Emirates receives a subsidy for planes because it is a foreign company, Emirates gets a leg up on Delta.

An edifying account of how this system works was presented many years ago by the late Prof. Yale Brozen in his foreword to Prof. Leland Yeager’s classic Proposals for Government Credit Allocation (1977):

Whom you know and with whom you have influence becomes more important in obtaining capital than how productively you can use it. Capital is diverted from more productive uses to politically determined applications […]. The national income pie shrinks as an increasing proportion of our capital is allocated by the political process – not only because of its diversion from more productive uses but also because more and more of our resources are devoted to winning political influence, as that becomes the road to access to available capital and subsidies.

For the record, Ex-Im isn’t small potatoes. In FY 2015, Ex-Im’s loans and loan guarantees will total $30.9 billion, or 6.7% of all non-housing federal credit programs (see the accompanying chart). Ex-Im’s total cumulative loans and guarantees outstanding (read: credit exposure) currently sit at $112 billion. Because the loans are granted at below-market rates, Ex-Im does not receive fair compensation for the $112 billion of risk it takes on.

Instead of adopting a policy that makes a few U.S. exporters winners at the expense of many losers, there is a way to make all U.S. firms more competitive: just lower the grueling corporate tax rate. Rick Perry also embraces this idea in his op-ed, mirroring what I have been advocating for years.

The message is clear: taxes on corporations increase costs, decrease margins, and often lead to price increases. The top U.S. corporate tax rate (excluding state taxes) currently stands at 35%.

When our sky-high corporate tax rates are the highest of any of the 34 member countries of the Organization for Economic Co-operation and Development, something is wrong. There is clearly a better way to unburden U.S. corporations than to sponsor a “bank” in which politicians and bureaucrats, not capital markets, choose winners and losers. Rick Perry is right: it is time to move away from a mercantilist view of trade towards one that puts the market back in control. Kill the Export-Import Bank and cut corporate taxes, please.

The Spin Cycle is a recurring feature based upon just how much the latest weather or climate story, policy pronouncement, or simply poo-bah blather spins the truth. Statements are given a rating of between 1 and 5 spin cycles, with fewer cycles meaning less spin. For a more in-depth description, visit the inaugural edition.

Today’s press buzz is about a new paper appearing in this week’s Science magazine which concludes that the “hiatus” in global warming is but a byproduct of bad data. The paper, “Possible artifacts of data biases in the recent global surface warming hiatus,” was authored by a research team led by Director of the National Oceanic and Atmospheric Administration’s National Climatic Data Center, Dr. Thomas Karl. Aside from missing the larger point—that the relevant question is not whether the earth is warming, but why it’s warming so much slower than the computer model projections—the paper’s conclusions have been well-run through the spin cycle.

The spin was largely conducted by the American Association for the Advancement of Science (AAAS), publisher of Science magazine, through its embargo campaign and the courting of major science writers in the media before the article had been made available to the general public (and other scientists). Given the obvious weaknesses in the new paper (see below and here, for starters), there seems to be the potential for more trouble at Science—something that Editor-in-Chief Marcia McNutt is up to her eyeballs with already.

One major problem with the new Karl and colleagues paper is that the headline-making finding turns out not even to be statistically significant at the standard scientific level—that is, having a less than 1-in-20 chance of being due to chance (unexplained processes) alone.

Instead, the results are reported as being “statistically significant” if they have less than a 1-in-10 chance of being caused by randomness.
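For readers who want to see what the two thresholds amount to in practice, here is a minimal sketch of an ordinary least-squares trend test checked against both cutoffs. The temperature series is synthetic noise plus a small trend, standing in for no real dataset, and the test ignores the serial correlation that a careful analysis of actual temperature records would have to address.

```python
# A minimal sketch of how the choice of significance threshold changes the verdict
# on a trend. The "temperature" series below is synthetic noise plus a small trend;
# it stands in for no particular real dataset, and no correction is made for the
# autocorrelation present in real climate data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1998, 2013)                     # a 15-year "hiatus-like" window
temps = 0.005 * (years - years[0]) + rng.normal(0.0, 0.08, size=years.size)

result = stats.linregress(years, temps)           # OLS slope and two-sided p-value
print(f"trend = {result.slope * 10:.3f} deg per decade, p-value = {result.pvalue:.3f}")

for alpha in (0.05, 0.10):
    verdict = "significant" if result.pvalue < alpha else "not significant"
    print(f"at the {alpha:.2f} level: {verdict}")
```

Any trend whose p-value falls between 0.05 and 0.10 earns the “significant” label under the looser standard while failing the conventional one.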

More and more we are seeing lax statistical testing applied in high-profile papers (see here and here for recent examples). This tendency is extremely worrisome because, at the same time, the validity of large portions of the scientific literature is being questioned on the basis of (flawed) methodological design and poor application and interpretation of statistics. An illuminating example of how easily poor statistics can make it into the scientific literature and exert a huge influence on the media was given last week in the backstory of a made-up paper claiming that eating chocolate could enhance weight loss efforts.

But, as the Karl et al. paper (as well as the other recent papers linked above) shows, some climate scientists are pushing forward with less than robust results anyway.

Why? Here’s a possible clue.

Recall an op-ed in the New York Times a few months back by Naomi Oreskes titled “Playing Dumb on Climate Change.” In it, Oreskes, a science historian (and author of the conspiratorial Merchants of Doubt), argued that since climate change is such an urgent problem, we shouldn’t have to apply the rigorous 1-in-20 statistical standard to the results—doing so slows the push for action. Climate scientists, Oreskes argued, were being too conservative in the face of a well-known threat, and therefore “lowering the burden of proof” should be acceptable.

Oreskes’s article and suggestions were summarily panned. Nevertheless, evidence that her suggestion is being put into action is plentiful.

The new Karl et al. paper is a perfect example—abrogating normal statistical confidence levels to push a result that will be prime ammunition for the barrage of proposals aimed at restricting greenhouse gas emissions from fossil fuel use in producing energy. This is just more politicized science in the Administration’s relentless campaign ahead of the upcoming UN climate summit in Paris, where it will push for a new international agreement restricting carbon dioxide emissions.

As an amusing aside, using similar statistical procedures and confidence levels, the observed trend in Karl et al.’s newly adjusted data is significantly lower than the average trend forecast to have occurred by the collection of climate models used by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). In other words, these flimsy statistics kill both the “hiatus” and the climate models as well.

For spinning non-robust results into major climate-alarming headlines worldwide, we award a Normal Wash spin cycle (three spins) to the AAAS and Science editor-in-chief Marcia McNutt.

The very day King John pledged to uphold Magna Carta, June 20, 1215, he asked Pope Innocent III to annul it.  The pope replied, “We utterly reject and condemn this settlement and under threat of excommunication we order that the king should not dare to observe it and that the barons and their associates should not require it to be observed.”

So John reneged on his agreement with the barons; they rebelled and formed an alliance with King Philip II of France, who prepared to invade England.  Before long, the French Prince Louis entered London, and the French controlled castles throughout England.  The English Church, however, backed John and refused to crown Louis as England’s king. 

John fled from his pursuers, but somewhere along the line he contracted dysentery and was dying.  He appointed 13 executors, including William Marshal, who was among the most revered knights in England.  John died on October 19, 1216, and his nine-year-old son was hastily crowned Henry III.  Because the new king was underage, Marshal formed a regency government.  Although Marshal was able to seize an important English castle from the French, the civil war was substantially stalemated.

With John gone, the rebel barons found themselves in an awkward position – allied with foreigners who occupied England.  The patriotic English wanted to get the French out.  Fortunately, Prince Louis was happy to collect a bribe, and soon the French went home.

Regent Marshal recognized that domestic peace was more likely if some fundamental legal issues were resolved, and that consequently John’s repudiation of Magna Carta must be reversed.   So Marshal reviewed the document, made some cuts, and reissued Magna Carta in late 1216.   Among the cuts was paragraph 61, about the committee of 25 barons who would monitor the king’s compliance with Magna Carta and, if necessary, try to enforce it.  Perhaps those words mattered less than the fact that the barons had demonstrated their willingness to use force against a tyrannical king.

The government needed more money again in 1217, and Marshal proposed a tax on the land held by knights – land that provided food and generated revenue to make possible their feudal military service.  Barons resisted, and Marshal reissued the previous version of Magna Carta with some clauses added to protect feudal privileges of the barons.

In February 1225, there were fears that England might be invaded by the French, and the government needed even more money to mount a defense.  There was much debate and eventually an agreement among the barons to pay a tax on moveable goods – provided the king reissued Magna Carta.  Accordingly, Henry III approved a version similar to that of 1217 and affirmed that he did it with his “spontaneous and free will” as well as with his royal seal.  He declared, “neither we nor our heirs will determine anything by which the liberties contained in this charter be violated or weakened.”

Over the centuries, the 1225 Magna Carta was reaffirmed dozens of times by English sovereigns.  In 1311, Edward II referred to some statutes, saying “that they be not contrary to the Great Charter.”

Other countries issued charters intended to limit  a ruler’s power, too, but Magna Carta really took root, and constitutional development went furthest in England.  For instance:

  • Magna Carta appeared in dozens of compilations of English laws, invariably as the first law.  Initially it was in Latin, then French and finally English.
  • The Due Process of Law Act was adopted in 1368, during the reign of Edward III, and it said, in part, that the “Great Charter be holden and kept in all Points, and if any Statute be made to the contrary, that shall be holden for none.”
  • In 1509, King Henry VIII approved of the beheading of Edmund Dudley and Richard Empson, accused of looting taxpayers and the government.  One of the formal charges involved violating Magna Carta.

Queen Elizabeth I, near the peak of her power in 1587, wanted to establish a new judicial post in her government for one of her cronies, Richard Cavendish, so he could make a lot of money by issuing certain documents in the common law courts.  She asked administrative judges whose approval was needed.  They refused, and they were charged with disobedience.  They had to explain themselves before the queen.

According to constitutional historian Henry Hallam, the judges said they meant no offense to her majesty, but her order was against “the law of the land” – meaning principles affirmed in Magna Carta.  Consequently, they said “no one is bound to obey such an order.”  When further pressed, they pointed out that the queen herself had sworn to uphold the law of the land.  The judges believed they couldn’t obey her order without violating the laws and their oaths.  The judges cited prior practices that had been rejected, because they violated the laws of the land.  Queen Elizabeth left the chamber without commenting, and nothing more was heard about the matter.

During the 17th century, the legal scholar, judge and member of Parliament Edward Coke (pronounced “Cook”) interpreted Magna Carta as a bedrock of the English constitutional law that enabled people to resist and rebel against the tyrannical Stuart kings. 

Many critics have belittled the importance of Magna Carta by dwelling on the fact that the rebel barons were looking out for their own interests as feudal lords.  But establishing constitutional limits on a ruler with arbitrary power is always extraordinarily difficult.  Some people succeed before others, and their success is likely to make it easier for more people to follow.

Although Magna Carta didn’t derive from the principles of a “higher law,” such as were received by Moses and articulated by Sophocles, Marcus Tullius Cicero, John Lilburne, John Locke, Thomas Paine, Thomas Jefferson,  and others, from a constitutional standpoint Magna Carta had similar standing.  It didn’t come from rulers.  It couldn’t be repealed.  It was forever.

A new paper posted today on ScienceXpress (from Science magazine) by Thomas Karl, Director of NOAA’s National Climatic Data Center, and several co-authors[1], seeking to disprove the “hiatus” in global warming, prompts many serious scientific questions.

The authors’ main claim[2], that they have uncovered a significant recent warming trend, is dubious. The significance level they report for their findings (0.10) is hardly normative, and its use should prompt members of the scientific community to question the reasoning behind such a lax standard.

In addition, the authors’ treatment of buoy sea-surface temperature (SST) data was guaranteed to create a warming trend. The data were adjusted upward by 0.12°C to make them “homogeneous” with the longer-running temperature records taken from engine intake channels in marine vessels. 

As has been acknowledged by numerous scientists, the engine intake data are clearly contaminated by heat conduction from the engine itself and were never intended for scientific use. Environmental monitoring, on the other hand, is the specific purpose of the buoys. Adjusting good data upward to match bad data seems questionable, and the fact that the buoy network became increasingly dense over the last two decades means that this adjustment must put a warming trend in the data.

The extension of high-latitude arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, meaning the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.

Additionally, there exist multiple measures of bulk lower-atmosphere temperature, independent of surface measurements, which indicate the existence of a “hiatus”[3]. If the Karl et al. result were in fact robust, it could only mean that the disparity between surface and mid-tropospheric temperatures is even larger than previously noted. 

Getting the vertical distribution of temperature wrong invalidates virtually every forecast of sensible weather made by a climate model, as much of that weather (including rainfall) is determined in large part by the vertical structure of the atmosphere.

Instead, it would seem more logical to seriously question the Karl et al. result in light of the fact that, compared to those bulk temperatures, it is an outlier, showing a recent warming trend that is not in line with these other global records.

And finally, even presuming all the adjustments applied by the authors ultimately prove to be accurate, the temperature trend reported during the “hiatus” period (1998-2014) remains significantly below (using Karl et al.’s measure of significance) the mean trend projected by the collection of climate models used in the most recent report from the United Nations’ Intergovernmental Panel on Climate Change (IPCC). 

It is important to recognize that the central issue of human-caused climate change is not a question of whether it is warming or not, but rather a question of how much. And to this relevant question, the answer has been, and remains, that the warming is taking place at a much slower rate than is being projected.

The distribution of trends of the projected global average surface temperature for the period 1998-2014 from 108 climate model runs used in the latest report of the U.N.’s Intergovernmental Panel on Climate Change (IPCC) (blue bars). The models were run with historical climate forcings through 2005 and extended to 2014 with the RCP4.5 emissions scenario. The surface temperature trend over the same period, as reported by Karl et al. (2015), is included in red. It falls at the 2.4th percentile of the model distribution and indicates a value that is (statistically) significantly below the model mean projection.
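The percentile comparison described in the caption can be reproduced in principle with a few lines of code. The sketch below uses a made-up, normally distributed set of 108 model trends and an assumed observed trend, purely to show the mechanics; it does not use the actual IPCC model output or the Karl et al. value.

```python
# Sketch of the percentile comparison described in the caption: where does an
# observed trend fall within a distribution of model-projected trends?
# The model trends here are synthetic stand-ins, not the actual 108 IPCC runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
model_trends = rng.normal(loc=0.20, scale=0.08, size=108)  # deg C per decade, assumed
observed_trend = 0.05                                      # deg C per decade, assumed

pct = stats.percentileofscore(model_trends, observed_trend)
print(f"Observed trend sits at the {pct:.1f}th percentile of the model distribution")
print("Falling below the 2.5th percentile marks it as significantly below the model "
      "mean at the usual two-sided level")
```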

[1] Karl, T. R., et al., Possible artifacts of data biases in the recent global surface warming hiatus. Scienceexpress, embargoed until 1400 EDT June 4, 2015.

[2] “It is also noteworthy that the new global trends are statistically significant and positive at the 0.10 significance level for 1998-2012…”

[3] Both the UAH and RSS satellite records are now in their 21st year without a significant trend, for example.

Although Venezuela’s inflation has soared (see: Up, Up, and Away), Venezuela is not experiencing a hyperinflationary episode–yet. Since the publication of Prof. Phillip Cagan’s famous 1956 study The Monetary Dynamics of Hyperinflation, the convention has been to define hyperinflation as when the monthly inflation rate exceeds 50%.

I regularly estimate the monthly inflation rates for Venezuela. To calculate those inflation rates, I use dynamic purchasing power parity (PPP) theory. While Venezuela’s monthly inflation rate has not advanced beyond the 50% per month mark on a sustained basis, it is dangerously close. Indeed, Venezuela’s inflation rate is currently 45% per month (see the accompanying chart).
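For readers curious about the mechanics, here is a stylized sketch of the PPP idea: if the black-market exchange rate tracks the ratio of domestic to foreign price levels, its monthly change implies a monthly inflation rate that can be checked against Cagan’s 50% threshold. The exchange-rate quotes in the sketch are invented for illustration; they are not actual bolivar data or my published estimates.

```python
# Stylized illustration of inferring a monthly inflation rate from exchange-rate
# movements under purchasing power parity (PPP). The numbers are invented for the
# example; they are not actual bolivar quotes or published estimates.

HYPERINFLATION_THRESHOLD = 0.50   # Cagan (1956): more than 50% per month

def implied_monthly_inflation(rate_start, rate_end, foreign_monthly_inflation=0.0):
    """PPP-implied domestic inflation over one month from a currency's depreciation."""
    depreciation = rate_end / rate_start          # units of domestic currency per USD
    return depreciation * (1 + foreign_monthly_inflation) - 1

# Hypothetical black-market quotes (domestic currency per U.S. dollar)
start_of_month, end_of_month = 400.0, 580.0
inflation = implied_monthly_inflation(start_of_month, end_of_month)

print(f"Implied monthly inflation: {inflation:.0%}")
print("Hyperinflation by Cagan's definition" if inflation > HYPERINFLATION_THRESHOLD
      else "High, but not yet hyperinflation by Cagan's definition")
```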

If inflation moves much higher, the legacy of Hugo Chavez’s Bolivarian Revolution will be that Venezuela joins the rather select hyperinflation club as its 57th member. Yes, there have only been 56 documented hyperinflations.

“GOP Agrees Bush Was Wrong to Invade Iraq, Now What?”—that’s how the US News headline put it last week. A good question, because it’s not at all clear what that grudging concession signifies. It’s nice that 12 years after George W. Bush lumbered into the biggest foreign policy disaster in a generation, the leading Republican contenders are willing to concede, under enhanced interrogation, that maybe it wasn’t the right call. It would be nicer still if we could say they’d learned something from that disaster. 

Alas, the candidates’ peevish and evasive answers to the Iraq Question didn’t provide any evidence for that. Worst of all was Jeb Bush’s attempt to duck the question by using fallen soldiers as the rhetorical equivalent of a human shield. Ohio governor John Kasich flirted with a similar tactic—“There’s a lot of people who lost limbs and lives over there, OK?”—before conceding, “But if the question is, if there were not weapons of mass destruction should we have gone, the answer would’ve been no.” 

That’s how most of the GOP field eventually answered the question, with some version of the “faulty intelligence” excuse. We thought Saddam Hussein had stockpiles of chemical and biological weapons and was poised for a nuclear breakout; it was just our bad luck that turned out not to be true; so the war was—well, not a “mistake,” insists Marco Rubio, just, er—whatever the word is for something you definitely wouldn’t do again if you had the power to travel back in time. As Scott Walker, who’s been studying up super-hard on foreign policy, explained: you can’t fault President Bush: invading Iraq just made sense, based on “the information he had available” at the time. 

Well, no—invading Iraq was a spectacularly bad idea based on what we knew at the time. If we’d found stockpiles of so-called WMD, it would still have been a spectacularly bad idea. Saddam’s possession of unconventional weapons was a necessary condition in the Bush administration’s case for war, but it wasn’t—or shouldn’t have been—sufficient to make that case compelling, because with or without chemical and biological weapons, Saddam’s Iraq was never a national security threat to the United States. 

Put aside the fact that, as applied to chem/bio, “WMD” is a misnomer; assume for the sake of argument that President Bush’s claim that “one vial, one canister, one crate” of the stuff could “bring a day of horror like none we have ever known” was an evidence-based, good-faith assessment of those weapons’ potential, instead of a ludicrous and cynical exaggeration. Even so, you’d still have to show that Saddam Hussein was so hell-bent on hitting the U.S., he’d risk near-certain destruction to do it. 

There was never any good reason to believe that. This, after all, was a dictator who, during the 1991 Gulf War, had been deterred from using chemical weapons against US troops in the middle of an ongoing invasion. As then-Secretary of State James Baker later explained, the George H.W. Bush administration:

made it very clear that if Iraq used weapons of mass destruction, chemical weapons, against United States forces that the American people would demand vengeance and that we had the means to achieve it. … we made it clear that in addition to ejecting Iraq from Kuwait, if they used those types of weapons against our forces we would in addition to throwing them out of Kuwait, we would adopt as a goal the elimination of the regime in Baghdad.

Eleven years later, as the George W. Bush administration pushed for another war with Iraq, there wasn’t any convincing evidence that Saddam Hussein had, in the interim, warmed up to the idea of committing regime suicide through the use of CBW. Even the flawed October 2002 National Intelligence Estimate (NIE) prepared during the run-up to the Iraq War vote concluded that “Baghdad for now appears to be drawing a line short of conducting terrorist attacks with conventional or CBW against the United States, fearing that exposure of Iraqi involvement would provide Washington a stronger cause for making war.” 

By that time, with Bush 43 sounding the alarm about Iraq’s “growing fleet of manned and unmanned aerial vehicles that could be used to disperse chemical or biological weapons across broad areas [including] missions targeting the United States,” it should have been apparent that the case for war rested on a series of imaginary hobgoblins. As Jim Henley put it a couple of years ago, “In the annals of projection, the US claim that Saddam was building tiny remote-controlled death planes wins some kind of prize.”  

What if the Iraqi dictator instead passed off those weapons to terrorists, “secretly and without fingerprints”? In the 2003 State of the Union, that’s what President Bush argued Saddam just might do: “imagine those 19 hijackers with other weapons and other plans, this time armed by Saddam Hussein.” But the notion that Hussein was likely to pass chemical or biological weapons to Al Qaeda was only slightly less fantastic than the scenario that had him crop-dusting US cities with short-range, Czech-built training drones. As my colleague Doug Bandow pointed out at the time: “Baghdad would be the immediate suspect and likely target of retaliation should any terrorist deploy [WMD], and Saddam knows this.” 

I made similar arguments two weeks before the war in a piece called “Why Hussein Will Not Give Weapons of Mass Destruction to Al Qaeda”:  

The idea that Hussein views a WMD strike via terrorist intermediaries as a viable strategy is rank speculation, contradicted by his past behavior. Hussein’s hostility toward Israel predates his struggle with the United States. He’s had longstanding ties with anti-Israeli terror groups and he’s had chemical weapons for over 20 years. Yet there has never been a nerve gas attack in Israel. Why? Because Israel has nuclear weapons and conventional superiority, and Hussein wants to live. If he’s ever considered passing off chemical weapons to Palestinian terrorists, he decided that he wouldn’t get away with it. He has even less reason to trust Al Qaeda with a potentially regime-ending secret.

In its 2004 after-action reassessment of the administration’s case for preventive war, the Carnegie Endowment concluded

there was no positive evidence to support the claim that Iraq would have transferred WMD or agents to terrorist groups and much evidence to counter it. Bin Laden and Saddam were known to detest and fear each other, the one for his radical religious beliefs and the other for his aggressively secular rule and persecution of Islamists. Bin Laden labeled the Iraqi ruler an infidel and an apostate, had offered to go to battle against him after the invasion of Kuwait in 1990, and had frequently called for his overthrow. … the most intensive searching over the last two years has produced no solid evidence of a cooperative relationship between Saddam’s government and Al Qaeda. ….the Iraqi regime had a long history of sponsoring terrorism against Israel, Kuwait, and Iran, providing money and weapons to these groups. Yet over many years Saddam did not transfer chemical, biological, or radiological materials or weapons to any of them “probably because he knew that they could one day be used against his secular regime.”

In the judgment of U.S. intelligence, a transfer of WMD by Saddam to terrorists was likely only if he were “sufficiently desperate” in the face of an impending invasion. Even then, the NIE concluded, he would likely use his own operatives before terrorists. Even without the particular relationship between Saddam and bin Laden, the notion that any government would turn over its principal security assets to people it could not control is highly dubious. States have multiple interests and land, people, and resources to protect. They have a future. Governments that made such a transfer would put themselves at the mercy of groups that have none of these. Terrorists would not even have to use the weapons but merely allow the transfer to become known to U.S. intelligence to call down the full wrath of the United States on the donor state, thereby opening opportunities for themselves. 

You don’t have to “know what we know now” to recognize the poverty of the case for war. You just had to know what we knew then. 

Even so, it’s possible that GOP hawks have learned something from the Iraq debacle, however loath they are to admit it. Like Saddam’s Iraq, the Syrian and Iranian regimes have long had unconventional weapons and links to terrorist proxies. But I haven’t heard even Lindsey Graham or Marco Rubio invoke the risk of terrorist transfer to make the case for war with Iran or Syria. Perhaps that’s because it’s as unpersuasive an argument now as it should have been then. 

Besides, maybe it’s asking too much to expect professional politicians to depart entirely from the sentiments of the people they want to vote for them. A recent Vox Populi/Daily Caller poll asked Republican voters in early primary states: “Looking back now, and regardless of what you thought at the time, do you think it was the right decision for the United States to invade Iraq in 2003?” Nearly 60 percent of them answered in the affirmative. The GOP’s 2016 contenders may not have good answers to the Iraq Question, but, apparently, they’re miles ahead of their constituents.  

At the risk of sounding like a broken record (well, OK–at the risk of continuing to sound like a broken record), I’d like to say a bit more about economists’ tendency to get their monetary history wrong. In particular, I’d like to take aim at common myths about the gold standard.

If there’s one monetary history topic that tends to get handled especially sloppily by monetary economists, not to mention other sorts, this is it. Sure, the gold standard was hardly perfect, and gold bugs themselves sometimes make silly claims about their favorite former monetary standard. But these things don’t excuse the errors many economists commit in their eagerness to find fault with that “barbarous relic.”

The false claims I have in mind are mostly ones I and others–notably Larry White–have countered before. Still, I thought it would be useful to address them again here, because they’re still far from being dead horses, and also so that students wrapping up the semester will have something convenient to send to their misinformed gold-bashing profs (though I urge them to wait until grades are in before sharing!).

For the sake of those who don’t care to wade through the whole post, here is a “jump to” list of the points covered:

1. The Gold Standard wasn’t an instance of government price fixing. Not traditionally, anyway.
2. A gold standard isn’t particularly expensive. In fact, fiat money tends to cost more.
3. Gold supply “shocks” weren’t particularly shocking.
4. The deflation that the gold standard permitted wasn’t such a bad thing.
5. It wasn’t to blame for 19th-century American financial crises.
6. On the whole, the classical gold standard worked remarkably well (while it lasted).
7. It didn’t have to be “managed” by central bankers.
8. In fact, central banking tends to throw a wrench in the works.
9. The “Gold Standard” wasn’t to blame for the Great Depression.
10. It didn’t manage money according to any economists’ theoretical ideal. But neither has any fiat-money-issuing central bank.

1. The Gold Standard wasn’t an instance of government price fixing. Not traditionally, anyway.

As Larry White has made the essential point as well as I ever could, I hope I may be excused for quoting him at length:

Barry Eichengreen writes that countries using gold as money ‘fix its price in domestic-currency terms (in the U.S. case, in dollars).’ He finds this perplexing:

But the idea that government should legislate the price of a particular commodity, be it gold, milk or gasoline, sits uneasily with conservative Republicanism’s commitment to letting market forces work, much less with Tea Party–esque libertarianism. Surely a believer in the free market would argue that if there is an increase in the demand for gold, whatever the reason, then the price should be allowed to rise, giving the gold-mining industry an incentive to produce more, eventually bringing that price back down. Thus, the notion that the U.S. government should peg the price, as in gold standards past, is curious at the least.

To describe a gold standard as “fixing” gold’s “price” in terms of a distinct good, domestic currency, is to get off on the wrong foot. A gold standard means that a standard mass of gold (so many grams or ounces of pure or standard-alloy gold) defines the domestic currency unit. The currency unit (“dollar”) is nothing other than a unit of gold, not a separate good with a potentially fluctuating market price against gold. That one dollar, defined as so many grams of gold, continues to be worth the specified amount of gold—or in other words that one unit of gold continues to be worth one unit of gold—does not involve the pegging of any relative price. Domestic currency notes (and checking account balances) are denominated in and redeemable for gold, not priced in gold. They don’t have a price in gold any more than checking account balances in our current system, denominated in fiat dollars, have a price in fiat dollars. Presumably Eichengreen does not find it curious or objectionable that his bank maintains a fixed dollar-for-dollar redemption rate, cash for checking balances, at his ATM.

Remarkably, as White goes on to show, the rest of Eichengreen’s statement proves that, besides not having understood the meaning of gold’s “fixed” dollar price, Eichengreen has an uncertain grasp of the rudimentary economics of gold production:

As to what a believer in the free market would argue, surely Eichengreen understands that if there is an increase in the demand for gold under a gold standard, whatever the reason, then the relative price of gold (the purchasing power per unit of gold over other goods and services) will in fact rise, that this rise will in fact give the gold-mining industry an incentive to produce more, and that the increase in gold output will in fact eventually bring the relative price back down.

I’ve said more than once that, the more vehement an economist’s criticisms of the gold standard, the more likely he or she knows little about it. Of course Eichengreen knows far more about the gold standard than most economists, and is far from being its harshest critic, so he’d undoubtedly be an outlier in the simple regression, y = α + β(x) (where y is vehemence of criticism of the gold standard and x is ignorance of the subject). Nevertheless, his statement shows that even the understanding of one of the gold standard’s most well-known critics leaves much to be desired.

Although, at bottom, the gold standard isn’t a matter of government “fixing” gold’s price in terms of paper money, it is true that governments’ creation of monopoly banks of issue, and the consequent tendency for such monopolies to be treated as government- or quasi-government authorities, ultimately led to their being granted sovereign immunity from the legal consequences to which ordinary, private intermediaries are usually subject when they dishonor their promises. Because a modern central bank can renege on its promises with impunity, a gold standard administered by such a bank more closely resembles a price-fixing scheme than one administered by a commercial bank. Still, economists should be careful to distinguish the special features of a traditional gold standard from those of central-bank administered fixed exchange rate schemes.

 

2. A gold standard isn’t particularly expensive. In fact, fiat money tends to cost more.

Back in the early 1950s, and again in 1960, Milton Friedman estimated that the gold required for the U.S. to have a “real” gold standard would have cost 2.5% of its annual GNP. But that’s because Friedman’s idea of a “real” gold standard was one in which gold coins alone served as money, with no fractionally-backed bank-supplied substitutes. As Larry White shows in his Theory of Monetary Institutions (p. 47), allowing for 2% specie reserves–which is more than what some former gold-based free-banking systems needed–the resource cost of a gold standard taking advantage of fractionally-backed banknotes and deposits would be about one-fiftieth of the number Friedman came up with. That’s a helluva bargain for a gold “seal of approval” that could mean having access to international capital at substantially reduced rates, according to research by Mike Bordo and Hugh Rockoff.
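The arithmetic behind that “one-fiftieth” figure is simple enough to check on the back of an envelope, taking Friedman’s 2.5%-of-GNP estimate and White’s 2% reserve ratio as given:

```python
# Back-of-the-envelope check of the resource-cost comparison in the text.
# Friedman's estimate assumed (roughly) fully gold-backed money; with fractional
# reserves, only the reserve fraction of that gold needs to be held.

friedman_cost_share = 0.025   # 2.5% of annual GNP for a "pure" gold-coin standard
reserve_ratio = 0.02          # 2% specie reserves, per White's figure

fractional_cost_share = friedman_cost_share * reserve_ratio
print(f"Resource cost with 2% reserves: {fractional_cost_share:.4%} of GNP")
print(f"Ratio to Friedman's estimate: 1/{friedman_cost_share / fractional_cost_share:.0f}")
```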

Friedman himself eventually changed his mind about the economies to be achieved by employing fiat money:

Monetary economists have generally treated irredeemable paper money as involving negligible real resource costs compared with a commodity currency. To judge from recent experience, that view is clearly false as a result of the decline in long-term price predictability.

I took it for granted that the real resource cost of producing irredeemable paper money was negligible, consisting only of the cost of paper and printing. Experience under a universal irredeemable paper money standard makes it crystal clear that such an assumption, while it may be correct with respect to the direct cost to the government of issuing fiat outside money, is false for society as a whole and is likely to remain so unless and until a monetary structure emerges under an irredeemable paper standard that provides a high degree of long-run price level predictability.*

Unfortunately, neither White’s criticism of Friedman’s early calculations nor Friedman’s own about-face have kept gold standard critics from repeating the old canard that a fiat standard is more economical than a gold standard. Ross Starr, for example, observes in his 2013 book on money that “The use of paper or fiduciary money instead of commodity money is resource saving, allowing commodity inventories to be liquidated.” Although he understands that fractionally-backed banknotes and deposits may go some way toward economizing on commodity-money reserves, Starr (quoting Adam Smith, but failing to look up historic Scottish bank reserve ratios) insists nonetheless that “a significant quantity of the commodity backing must be maintained in inventory to successfully back the currency,” and then proceeds to build a case for fiat money from this unwarranted assertion:

The next step in economizing on the capital tied up in backing the currency is to use a fiat money. Substituting a government decree for commodity backing frees up a significant fraction of the economy’s capital stock for productive use. No longer must the economy hold gold, silver, or other commodities in inventory to back the currency. No longer must additional labor and capital be used to extract them from the earth. Those resources are freed up and a simple virtually costless government decree is substituted for them.

Tempting as it is to respond to such hooey simply by noting that the vaults of the world’s official fiat-money managing institutions presently contain rather more than zero ounces of gold–31,957.5 metric tons more, to be precise–that response only hints at the fundamental flaw in Starr’s reasoning, which is his treatment of fiat money as a culmination, or limiting case, of the resource savings to be had by resort to fractional commodity-money reserves. That treatment overlooks a crucial difference between fiat money and readily redeemable banknotes and deposits, for whereas redeemable banknotes and deposits are generally understood by their users to be close, if not perfect, substitutes for commodity money, fiat money, the purchasing power of which is unhinged from that of any former money commodity, is nothing of the sort. On the contrary: its tendency to depreciate relative to real commodities, and to gold in particular, is notorious. Consequently holders of fiat money have reason to hold “commodity inventories” as a hedge against the risk that fiat money will depreciate.

If the hedge demand for a former money commodity is large enough, resort to fiat money doesn’t save any resources at all. Indeed, as Roger Garrison notes, “a paper standard administered by an irresponsible monetary authority may drive the monetary value of gold so high that more resource costs are incurred under the paper standard than would have been incurred under a gold standard.” A glance at the history of gold’s real price suffices to show that this is precisely what has happened:

 

From “After the Gold Rush,” The Economist, July 6, 2010.

 

Taking the long-run average price of gold, in 2010 prices, to be somewhere around $470, prior to the closing of the gold window in 1971, that price was exceeded on only three occasions, and never dramatically: around the time of the California gold rush, around the turn of the 20th century, and for several years following FDR’s devaluation of the dollar. Since 1971, in contrast, it has exceeded that average, and exceeded it substantially, more often than not. Here is Roger Garrison again:

There is a certain asymmetry in the cost comparison that turns the resource-cost argument against paper standards. When an irresponsible monetary authority begins to overissue paper money, market participants begin to hoard gold, which stimulates the gold-mining industry and drives up the resource costs. But when new discoveries of gold are made, market participants do not begin to hoard paper or to set up printing presses for the issue of unbacked currency. Gold is a good substitute for an officially instituted paper money, but paper is not a good substitute for an officially recognized metallic money. Because of this asymmetry, the resource costs incurred by the State in its efforts to impose a paper standard on the economy and manage the supply of paper money could be avoided if the State would simply recognize gold as money. These costs, then, can be counted against the paper standard.

So if it’s avoidance of gold resource costs that’s desired, including avoidance of the very real environmental consequences of gold mining, a gold standard looks like the right way to go.

 

3. Gold supply “shocks” weren’t particularly shocking

Of the many misinformed criticisms of the gold standard, none seems to me more wrong-headed than the complaint that the gold standard isn’t even a reliable guarantee against serious inflation. The RationalWiki entry on the gold standard is as good an example of this as any:

Even gold can suffer problems with inflation. Gold rushes such as the California Gold Rush expanded the money supply and, when not matched with a simultaneous increase in economic output, caused inflation. The “Price Revolution” of the 16th century demonstrates a case of dramatic long-run inflation. During this period, western European nations used a bimetallic standard (gold and silver). The Price Revolution was the result of a huge influx of silver from central European mines starting during the late 15th century combined with a flood of new bullion from the Spanish treasure fleets and the demographic shift brought about by the Black Plague (i.e., depopulation).

Admittedly the anonymous authors of this article may not be professional economists; but take my word for it that the same arguments might be heard from any number of such professionals. Brad DeLong, for example, in a list of “Talking Points on the Likely Consequences of re-establishment of the Gold Standard” (my emphasis), includes the observation that “significant advances in gold mining technology could provide a significant boost to the average rate of inflation over decades.”

Like I said, the gold standard is hardly free of defects. But being vulnerable to bouts of serious inflation isn’t one of them. Consider the “dramatic” 16th century inflation referred to in the RationalWiki entry. Had that entry’s authors referred to plain-old Wikipedia’s entry on “Price revolution,” they would have read there that

Prices rose on average roughly sixfold over 150 years. This level of inflation amounts to 1-1.5% per year, a relatively low inflation rate for the 20th century standards, but rather high given the monetary policy in place in the 16th century.

I have no idea what the authors mean by their second statement, as there was certainly no such thing as “monetary policy” at the time, and they offer no further explanation or citation. So far as I can tell, they mean nothing more than that prices hadn’t been rising as fast before the price revolution as they did during it, which though trivially true says nothing about how “high” the inflation was by any standards, including those of the 16th century. In any case the inflation was not only “not high” but dangerously low according to standards set, rightly or wrongly, by today’s monetary experts. Finally, though the point is often overlooked, the European Price Revolution actually began well in advance of major American specie shipments, which means that, far from being attributable to such shipments alone, it was a result of several causes, including coin debasements.

What about the California Gold rush, which is also supposed to show how changes in the supply of gold will lead to inflation “when not matched with a simultaneous increase in economic output”? To judge from available statistics, it appears that producers of other goods were almost a match for all those indefatigable forty-niners: as Larry White reports, although the U.S. GDP deflator did rise a bit in the years following the gold rush,

The magnitude was surprisingly small. Even over the most inflationary interval, the [GDP deflator] rose from 5.71 in 1849 (year 2000 = 100) to 6.42 in 1857, an increase of 12.4 percent spread over eight years. The compound annual price inflation rate over those eight years was slightly less than 1.5 percent.

Once again, the inflation rate was such as would have had today’s central banks rushing to expand their balance sheets.
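The annualized rates quoted here and in the Price Revolution passage above are easy to verify from the round numbers already given in the text:

```python
# Quick check of the compound annual inflation rates cited in the text.

def annual_rate(total_ratio, years):
    """Compound annual growth rate implied by a total price-level ratio over `years`."""
    return total_ratio ** (1.0 / years) - 1.0

# Price Revolution: prices rose roughly sixfold over about 150 years
print(f"Price Revolution: {annual_rate(6.0, 150):.2%} per year")

# California gold rush: GDP deflator 5.71 (1849) -> 6.42 (1857), per White
print(f"Post-gold-rush:   {annual_rate(6.42 / 5.71, 8):.2%} per year")
```

Both come out well under today’s conventional 2 percent inflation targets, which is the point being made.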

Nor do the CPI estimates tell a different story. See if you can spot the gold-rush-induced inflation in this chart:

“Graphing Various Historical Economic Series,” MeasuringWorth, 2015.

Despite popular beliefs, the California gold rush was actually not the biggest 19th-century gold supply innovation, at least to judge from its bearing on the course of prices. That honor belongs instead to the Witwatersrand gold rush of 1886, the effects of which later combined with those of the Klondike rush of 1896 to end a long interval of gradual deflation (discussed further below) and begin one of gradual inflation.

Brad DeLong is thus quite right to refer to the South African discoveries in observing that even a gold standard poses some risk of inflation:

For example, the discovery and exploitation of large gold reserves near present-day Johannesburg at the end of the nineteenth century was responsible for a four percentage point per year shift in the worldwide rate of inflation–from a deflation of roughly two percent per year before 1896 to an inflation of roughly two percent per year after 1896.

Allowing for the general inaccuracy of 19th-century CPI estimates, DeLong’s statistics are correct. But that “For example” is quite misleading. Like I said: this is the most serious instance of an inflationary gold “supply shock” of which I’m aware. Yet even it served mainly to put an end to a deflationary trend, without ever giving rise to an inflation rate substantially above what central banks today consider (rightly or wrongly) optimal. As for the four percentage point change in the rate of inflation “per year,” presumably meaning “in one year,” it’s hardly remarkable: changes as big or larger are common throughout the 19th century, partly owing to the notoriously limited data on which CPI estimates for that era are based. Even so, they can’t be compared to the much larger jumps in inflation with which the history of fiat monies is riddled, even setting hyperinflations aside. Keep this in mind as you reflect upon Brad’s conclusion that

Under the gold standard, the average rate of inflation or deflation over decades ceases to be under the control of the government or the central bank, and becomes the result of the balance between growing world production and the pace of gold mining.

Alas, keeping matters in perspective–that is, comparing the gold standard’s actual inflation record, not to that which might be achieved by means of an ideally-managed fiat money, but to the actual inflation record of historic fiat-money systems–is something many critics of the gold standard seem reluctant to do, perhaps for good reason.

While we’re on the subject, nothing could be more absurd than attempts to demonstrate the unsuitability of gold as a monetary medium by referring to gold’s unstable real value in the years since the gold standard was abandoned. Yet this is a favorite debating point among the gold standard’s less thoughtful critics, including Paul Krugman:

There is a remarkably widespread view that at least gold has had stable purchasing power. But nothing could be further from the truth. Here’s the real price of gold — the price deflated by the consumer price index — since 1968:

Compare Professor Krugman’s chart to the one in the previous section. Then ask yourself (1) Has gold’s price behaved differently since 1968 than it did before?; and (2) Why might this be so? If your answers are “Yes” and “Because gold and paper dollars are no longer close substitutes, and gold is now widely used to hedge against depreciation of the dollar and other fiat currencies,” you understand the gold standard better than Krugman does. But don’t get a swelled head over it, because it really isn’t saying much: Krugman is one of the observations that sits squarely on the upper right end of y = α + β(x).

 

4. The deflation that the gold standard permitted wasn’t such a bad thing.

The complaint that a gold standard doesn’t rule out inflation is but a footnote to the more frequent complaint that it suffers, in Brad DeLong’s words, from “a deflationary bias which makes it likely that a gold standard regime will see a higher average unemployment rate than an alternative managed regime.” According to Ben Bernanke, “There is…a high correlation in the data between deflation (falling prices) and depression (falling output).”

That the gold standard tended to be deflationary–or that it tended to be so for sometimes long intervals between gold discoveries–can’t be denied. But what certainly can be denied is that these periods of slow deflation went hand-in-hand with high unemployment. Having thoroughly reviewed the empirical record, Andrew Atkeson and Patrick Kehoe conclude as follows:

Deflation and depression do seem to have been linked during the 1930s. But in the rest of the data for 17 countries and more than 100 years, there is virtually no evidence of such a link.

More recently Claudio Borio and several of his BIS colleagues reported similar findings. How then (you may wonder) did Bernanke arrive at his opposite conclusion? Easy: he looked only at data for the 1930s–the worst deflationary crisis ever–ignoring all the rest.

Why is deflation sometimes depressing, and sometimes not? The simple answer is that there is more than one sort of deflation. There’s the sort that’s caused by a collapse of spending, like the “Great Contraction” of the 1930s, and then there’s the sort that’s driven by greater output of real goods and services–that is, by outward shifts in aggregate supply rather than inward shifts in aggregate demand. Most of the deflation that occurred during the classical gold standard era (1873-1914) was of the latter, “good” sort.

Although I’ve been banging the drum for good deflation since the 1990s, and Mike Bordo and others have made the specific point that the gold standard mostly involved deflation of the good rather than the bad sort, too many economists, and way too many of those who have got more than their fair share of the public’s attention, continue to ignore the very possibility of supply-driven deflation.

Of the many misunderstandings propagated by economists’ tendency to assume that deflation and depression must go hand-in-hand, none has been more pernicious than the widespread belief that throughout the U.S. and Europe, the entire period from 1873 to 1896 constituted one “Great” or “Long Depression.” That belief is now largely discredited, except perhaps among some newspaper pundits and die-hard Marxists, thanks to the efforts of S.B. Saul and others. The myth of a somewhat shorter “Long Depression,” lasting from 1873 to 1879, persists, however, though economic historians have begun chipping away at that one as well.

 

5. It wasn’t to blame for 19th-century American financial crises.

Speaking of 1873, after claiming that a gold standard is undesirable because it makes deflation (and therefore, according to his reasoning, depression) more likely, Krugman observes:

The gold bugs will no doubt reply that under a gold standard big bubbles couldn’t happen, and therefore there wouldn’t be major financial crises. And it’s true: under the gold standard America had no major financial panics other than in 1873, 1884, 1890, 1893, 1907, 1930, 1931, 1932, and 1933. Oh, wait.

Let me see if I understand this. If financial crises happen under base-money regime X, then that regime must be the cause of the crises, and is therefore best avoided. So if crises happen under a fiat money regime, I guess we’d better stay away from fiat money. Oh, wait.

You get the point: while the nature of an economy’s monetary standard may have some bearing on the frequency of its financial crises, it hardly follows that that frequency depends mainly on its monetary standard rather than on other factors, like the structure, industrial and regulatory, of the financial system.

That U.S. financial crises during the gold standard era had more to do with U.S. financial regulations than with the workings of the gold standard itself is recognized by all competent financial historians. The lack of branch banking made U.S. banks uniquely vulnerable to shocks, while Civil War-era rules linked the supply of banknotes to the extent of the Federal government’s indebtedness, instead of allowing that supply to adjust with seasonal and cyclical needs. But there’s no need to delve into the precise ways in which such misguided legal restrictions contributed to the numerous crises to which Krugman refers. It should suffice to point out that Canada, which employed the very same gold dollar, depended heavily on exports to the U.S., and (owing to its much smaller size) was far less diversified, yet endured no banking crises at all, and very few bank failures, between 1870 and 1939.

 

6. On the whole, the classical gold standard worked remarkably well (while it lasted).

Since Keynes’s reference to gold as a “barbarous relic” is so often quoted by the gold standard’s critics, it seems only fair to repeat what Keynes had to say, a few years before, not about gold itself, but about the gold-standard era:

What an extraordinary episode in the economic progress of man that age was which came to an end in August, 1914! The greater part of the population, it is true, worked hard and lived at a low standard of comfort, yet were, to all appearances, reasonably contented with this lot. But escape was possible, for any man of capacity or character at all exceeding the average, into the middle and upper classes, for whom life offered, at a low cost and with the least trouble, conveniences, comforts, and amenities beyond the compass of the richest and most powerful monarchs of other ages. The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep; he could at the same moment and by the same means adventure his wealth in the natural resources and new enterprises of any quarter of the world, and share, without exertion or even trouble, in their prospective fruits and advantages… He could secure forthwith, if he wished it, cheap and comfortable means of transit to any country or climate without passport or other formality, could despatch his servant to the neighboring office of a bank for such supply of the precious metals as might seem convenient, and could then proceed abroad to foreign quarters, without knowledge of their religion, language, or customs, bearing coined wealth upon his person, and would consider himself greatly aggrieved and much surprised at the least interference. But, most important of all, he regarded this state of affairs as normal, certain, and permanent, except in the direction of further improvement, and any deviation from it as aberrant, scandalous, and avoidable.

It would, of course, be foolish to suggest that the gold standard was entirely or even largely responsible for this Arcadia, such as it was. But it certainly contributed to the general abundance of goods of all sorts, to the ease with which goods and capital flowed from nation to nation, and, especially, to the sense of a state of affairs that was “normal, certain, and permanent.”

The gold standard achieved these things mainly by securing a degree of price-level and exchange rate stability and predictability that has never been matched since. According to Finn Kydland and Mark Wynne:

The contrast between the price stability that prevailed in most countries under the gold standard and the instability under fiat standards is striking. This reflects the fact that under commodity standards (such as the gold standard), increases in the price level (which were frequently associated with wars) tended to be reversed, resulting in a price level that was stable over long periods. No such tendency is apparent under the fiat standards that most countries have followed since the breakdown of the gold standard between World War I and World War II.

The high degree of price level predictability, together with the system of fixed exchange rates that was incidental to the gold standard’s widespread adoption, substantially reduced the riskiness of both production and international trade, while the commitment to maintain the standard resulted, as I noted, in considerably lower international borrowing costs.

Those pundits who find it easy to say “good riddance” to the gold standard, in either its classical or its decadent variants, need to ask themselves what all the fuss over monetary “reconstruction” was about, following each of the world wars, if not achieving at least a simulacrum of the stability that the classical gold standard achieved. True, those efforts all failed. But that hardly means that the ends sought weren’t very worthwhile ones, or that those who sought them were “lulled by the myth of a golden age.” Though they may have entertained wrong beliefs concerning how the old system worked, they weren’t wrong in believing that it did work, somehow.

 

7. It didn’t have to be managed by central bankers.

But how? The once common view that the classical gold standard worked well only thanks to its having been carefully managed by the Bank of England and other central banks, as well as the related view that its success depended on international agreements and other forms of central bank cooperation, is now, thankfully, no longer subscribed to even by the gold standard’s better-informed critics. Instead, as Giulio Gallarotti observes, the outcomes of that standard “were primarily the resultants [sic] of private transactions in the markets for goods and money” rather than of any sort of government or central-bank management or intervention. But the now accepted view doesn’t quite go far enough. In fact, central banks played no essential part at all in achieving the gold standard’s most desirable outcomes, which could have been achieved as well, or better, by systems of competing banks-of-issue, and which were in fact achieved by means of such systems in many participating nations, including the United States, Switzerland (until 1901), and Canada. And although it is common for central banking advocates to portray such banks as sources of emergency liquidity to private banks, during the classical gold standard era liquidity assistance often flowed the other way, and did so notwithstanding monopoly privileges that gave central banks so many advantages over their commercial counterparts. As Gallarotti observes (p. 81),

That central banks sometimes went to other central banks instead of the private market suggests nothing more than the fact that the rates offered by central banks were better, or too great an amount of liquidity may have been needed to be covered in the private market.

 

8. In fact, central banking tends to throw a wrench in the works.

To the extent that central banks did exercise any special influence on gold-standard era monetary adjustments, that influence, instead of helping, made things worse. Because an expanding central bank isn’t subject to the internal constraint of reserve losses stemming from adverse interbank clearings, it can create an external imbalance that must eventually trigger a disruptive drain of specie reserves. During liquidity crunches, on the other hand, central banks were more likely than commercial banks to become, in Jacob Viner’s words, “engaged in competitive increases of their discount rates and in raids on each other’s reserves.” Finally, central banks could and did muck up the gold standard’s works by sterilizing gold inflows and outflows, violating the “rules of the gold standard game” that called for loosening in response to gold receipts and tightening in response to gold losses.

Competing banks of issue could be expected to play by these “rules,” because doing so was consistent with profit maximization. The semi-public status of central banks, on the other hand, confronted them with a sort of dual mandate, in which profits had to be weighed against other, “public” responsibilities (ibid., pp. 117ff.). Of the latter, the most pernicious was the perceived obligation to occasionally set aside the requirements for preserving international monetary equilibrium (“external balance”) for the sake of preserving or achieving preferred domestic monetary conditions (“internal balance”). As Barry Ickes observes, playing by the gold standard’s rules could be “very unpopular, potentially, as it involves sacrificing internal balance for external balance.” Commercial bankers couldn’t have cared less. Central bankers, on the other hand, had to care, because not caring meant risking the loss of some of their privileges.

Today, of course, achieving internal balance is generally considered the sine qua non of sound central bank practice; and even where fixed or at least stable exchange rates are considered desirable it is taken for granted that external balance ought occasionally to be sacrificed for the sake of preserving domestic monetary stability. But to apply such thinking to the classical gold standard, and thereby conclude that in that context a similar sacrifice of external for internal stability represented a turn toward more enlightened monetary policy, is to badly misunderstand the nature of that arrangement, which was not just a fixed exchange rate arrangement but something more akin to a multinational monetary union or currency area. Within such an area, the fact that one central bank gained reserves while another lost them was itself no more significant, and no more a justification for violating the “rules of the game,” than the fact that a commercial bank somewhere gained reserves at the expense of another.

The presence of central banks did, however, tend to aggravate the disturbing effects of changes in international trade patterns compared to the case of international free banking. Central-bank sterilization of gold flows could, moreover, lead to more severe longer-run adjustments, as it was to do, to a far more dramatic extent, in the interwar period.

 

9. The gold standard wasn’t to blame for the Great Depression.

I know I’m about to skate onto thin ice, so let me be more precise. To say that “The gold standard caused the Great Depression” (or words to that effect, like “the gold standard was itself the principal threat to financial stability and economic prosperity between the wars”), is at best extremely misleading. The more accurate claim is that the Great Depression was triggered by the collapse of the jury-rigged version of the gold standard cobbled together after World War I, which was really a hodge-podge of genuine, gold-exchange, and gold-bullion versions of the gold standard, the last two of which were supposed to “economize” on gold. Call it “gold standard light.”

Admittedly there is one sense in which the real gold standard can be said to have contributed to the disastrous shenanigans of the 1920s, and hence to the depression that followed. It contributed by failing to survive the outbreak of World War I. The prewar gold standard thus played the part of Humpty Dumpty to the King’s and Queen’s men who were to piece the still-more-fragile postwar arrangement together. Yet even this is being a bit unfair to gold, for the fragility of the gold standard on the eve of World War I was itself largely due to the fact that, in most of the belligerent nations, it had come to be administered by central banks that were all too easily dragooned by their sponsoring governments into serving as instruments of wartime inflationary finance.

Kydland and Wynne offer the case of the Bank of Sweden as illustrating the practical impossibility of preserving a gold standard in the face of a major shock:

During the period in which Sweden adhered to the gold standard (1873–1914), the Swedish constitution guaranteed the convertibility into gold of banknotes issued by the Bank of Sweden. Furthermore, laws pertaining to the gold standard could only be changed by two identical decisions of the Swedish Parliament, with an election in between. Nevertheless, when World War I broke out, the Bank of Sweden unilaterally decided to make its notes inconvertible. The constitutionality of this step was never challenged, thus ending the gold standard era in Sweden.

The episode seems rather less surprising, however, when one considers that “the Bank of Sweden,” which secured a monopoly of Swedish paper currency in 1901, is more accurately known as the Sveriges Riksbank, or “Bank of the Swedish Parliament.”

If the world crisis of the 1930s was triggered by the failure, not of the classical gold standard, but of a hybrid arrangement, can it not be said that the U.S., which was among the few nations that retained a full-fledged gold standard, was fated by that decision to suffer a particularly severe downturn? According to Brad DeLong,

Commitment to the gold standard prevented Federal Reserve action to expand the money supply in 1930 and 1931–and forced President Hoover into destructive attempts at budget-balancing in order to avoid a gold standard-generated run on the dollar.

It’s true that Hoover tried to balance the Federal budget, and that his attempt to do so had all sorts of unfortunate consequences. But the gold standard, far from forcing his hand, had little to do with it. Hoover simply subscribed to the prevailing orthodoxy favoring a balanced budget. So, for that matter, did FDR, until events forced him to change his tune: during the 1932 presidential campaign the New-Dealer-to-be assailed his opponent both for running a deficit and for his government’s excessive spending.

As for the gold standard’s having prevented the Fed from expanding the money supply (or, more precisely, from expanding the monetary base to keep the broader money supply from shrinking), nothing could be further from the truth. Dick Timberlake sets the record straight:

By August 1931, Fed gold had reached $3.5 billion (from $3.1 billion in 1929), an amount that was 81 percent of outstanding Fed monetary obligations and more than double the reserves required by the Federal Reserve Act. Even in March 1933 at the nadir of the monetary contraction, Federal Reserve Banks had more than $1 billion of excess gold reserves.

Moreover,

Whether Fed Banks had excess gold reserves or not, all of the Fed Banks’ gold holdings were expendable in a crisis. The Federal Reserve Board had statutory authority to suspend all gold reserve requirements for Fed Banks for an indefinite period.

Nor, according to a statistical study by Chang-Tai Hsieh and Christina Romer, did the Fed have reason to fear that by allowing its reserves to decline it would have raised fears of a devaluation. On the contrary: by taking steps to avoid a monetary contraction, the Fed would have helped to allay fears of a devaluation, while, in Timberlake’s words, initiating a “spending dynamic” that would have helped to restore “all the monetary vitals both in the United States and the rest of the world.”

 

10. It didn’t manage money according to any economist’s theoretical ideal. But neither has any fiat-money-issuing central bank.

Just as “paper” always beats “rock” in the rock-paper-scissors game, so does managed paper money always beat gold in the rock-paper monetary standards game economists like to play. But that’s only because under a fiat standard any pattern of money supply adjustment is possible, including a “perfect” pattern, where “perfect” means perfect according to the player’s own understanding. Even under the best of circumstances a gold standard is, on the other hand, unlikely to achieve any economist’s ideal of monetary perfection. Hence, paper beats rock. More precisely, paper beats rock, on paper.

And what does this impeccable logic tell us concerning the relative merits of gold versus paper money in practice? Diddly-squat. I mean it. To say something about the relative merits of paper and gold, you have to have theories–good ol’ fashioned, rational optimizing firm and agent theories–of how the supply of basic money adjusts under various conditions in the two sorts of monetary regimes. We have a pretty good theory of the gold standard, meaning one that meshes well with how that standard actually worked. The theory of fiat money is, in contrast, a joke, in part because it’s much harder to pin down central bankers’ objectives (or any objectives apart from profit maximization, which is what’s at play in the case of gold), but mostly thanks to economists’ tendency to simply assume that central bankers behave like omniscient angels who, among other things, understand the finer points of DSGE models. That may do for a graduate class, or a paper in the AER. But good economics it most certainly isn’t.

***

I close with a few words concerning why it matters that we get the facts straight about the gold standard. It isn’t simply a matter of winning people over to that standard. Though I’m perhaps as ready as anyone to shed a tear for the old gold standard, I doubt that we can ever again create anything like it. But getting a proper grip on gold serves, not just to make the gold standard seem less unattractive than it is often portrayed to be, but to remove some of the sheen that has been applied to modern fiat-money arrangements using the same brush by which gold has been blackened. The point, in other words, isn’t to make a pitch for gold. It’s to make a pitch for something–anything–that’s better than our present, lousy money.

_____________________________

*I’m astonished to find that Friedman’s important and very interesting 1986 article, despite appearing in one of the leading academic journals, has to date been cited only 64 times (Google Scholar). Of these, nine are in works by myself, Kevin Dowd, and Lawrence White! I only wish I could attribute this neglect to monetary economists’ pro-fiat money bias. More likely it reflects their general lack of interest in alternative monetary arrangements.

As the Export-Import Bank’s charter nears expiration, supporters continue to argue that ending this government agency, which subsidizes loans to major U.S. exporters (mostly Boeing), is unwise because other countries also subsidize exports.  They’re especially eager to point to China, whose own export credit agency is very active in promoting Chinese manufacturers.  They then claim that allowing the bank charter to expire would be “unilateral disarmament.”

Claiming that the United States should pursue any economic policy on the grounds that China is doing it strikes me as bordering on insanity.  Market intervention by the Chinese government has resulted in large-scale misallocation of resources and is a serious liability for the stability of the Chinese economy.  It’s true that Chinese subsidies to domestic industries reduce opportunities for U.S. businesses, and it’s perfectly all right for the U.S. government to condemn those policies.  But should we really seek to emulate them?

Competitive metaphors about trade are generally bad, and martial ones are especially unhelpful.  The United States is simply not engaged in a metaphorical war with its trading partners.  Thinking of trade as a contest inevitably leads to bad policy by giving governments an excuse to intervene in the market for the benefit of crony constituencies.  The fact that some U.S. businesses would make more money if foreign governments pursued better policies is not a legitimate excuse to intervene in the market on their behalf.

Regardless of the reasons offered to justify it, there are real consequences to the U.S. economy when the U.S. government picks winners and losers.

Supporters of the Ex-Im Bank make plenty of other bad arguments, all of which betray a fundamental distrust of free-market capitalism.  But “China does it” may be the worst one.

On Tuesday the House of Representatives unanimously passed an amendment to the  Commerce, Justice, Science, and Related Agencies appropriations bill, introduced by Rep. Joaquin Castro (D-TX), which takes $10 million from Drug Enforcement Administration (DEA) funds for salaries and expenses and puts it towards the Department of Justice’s Body Worn Camera Partnership Program. The program provides 50 percent matching grants for law enforcement agencies that wish to use body cameras.  

Prior to the passage of Castro’s amendment, the appropriations bill provided $15 million for the body-worn camera partnership initiative, $35 million less than requested by the Obama administration.

Castro’s amendment is one of the latest examples of legislation aimed at funding police body cameras which, despite their potential to be great tools for increasing law enforcement accountability, are expensive.

The cameras themselves can cost from around $100 to over $1,000 and are accompanied by costs associated with redaction and storage. The fiscal impact of body cameras is a major reason why some police departments have not used the technology. In 2014 the Police Executive Research Forum received surveys from about 250 police departments and found that “39 percent of the respondents that do not use body-worn cameras cited cost as a primary reason.”

An Illinois body camera bill on Gov. Rauner’s desk not only outlines body camera policies for Illinois police agencies that want to use body cameras but also introduces a $5 fee on traffic tickets aimed at mitigating the cost of body cameras.

I have written before about why a federalist approach to body cameras is preferable to a federal top-down approach with attached financial incentives. If Rauner signs the Illinois bill into law it will be interesting to see how effective a traffic ticket fee is in funding the use of police body cameras. If it works state lawmakers may well seek to implement similar plans in their own states.

I am all for the DEA having its budget cut (ideally to $0), but the federal government providing conditional grants for body cameras is risky because some law enforcement agencies may implement federal policy recommendations not because they are the best but because doing so will cut costs. Grant applicants are urged to review a body camera paper published by the Office of Community Oriented Policing Services (COPS) and to “incorporate the most important program design elements in their proposal.” Unfortunately, the COPS body camera paper includes a worrying policy recommendation: allowing police officers to view body camera footage of incidents before they make a statement.

Federal lawmakers ought to be part of the ongoing discussions on police body camera policies, but federal policy proposals and suggestions shouldn’t come with financial assistance attached.

Last year China joined the U.S.-led Rim of the Pacific Exercise for the first time. However, Beijing’s role in RIMPAC has become controversial. Senate Armed Services Committee Chairman John McCain recently opined: “I would not have invited them this time because of their bad behavior.”

The Obama administration is conflicted. Bloomberg’s Josh Rogin worried that “so far, China is paying no price for its aggression.” Bonnie Glaser of CSIS suggested using the exercises to threaten the PRC. Patrick Cronin of the Center for a New American Security was less certain, acknowledging benefits of China’s inclusion: “It all depends on what you think RIMPAC should be.”

That is the key question. In part the exercise is about mutually beneficial cooperation for non-military purposes. With the simultaneous growth in commercial traffic and national navies, there likely will be increasing need and opportunity for joint search and rescue, operational safety, anti-piracy patrols, and humanitarian relief.

The question also involves military-military cooperation. Contacts between the Chinese and U.S. navies are few; those between the PRC’s forces and those of countries at odds with Beijing’s territorial claims, such as Japan and the Philippines, are even fewer.

There is value in allowing potential opponents a better assessment of one’s capabilities. Chinese expectations may be more realistic if they have a better sense of what and who they might face, especially the navies of their neighbors, which are expanding and becoming more competent.

Moreover, demystifying the other side makes it harder to demonize one’s potential adversaries. Obviously, even warm personal relationships don’t prevent governments from careening off to war with one another. However, learning that the other side’s military personnel are not devils incarnate might cause leaders to temper the advice they offer in a crisis.

Participation in the exercise also may be viewed as evidence that the U.S. is or is not attempting to contain the PRC. Hence inviting China in last year made American policy look a little less like containment.

Unfortunately, RIMPAC is too small and unimportant to much matter. No one who looks at U.S. behavior, and certainly no Chinese official who does so, can believe that Washington is engaged in anything except containment.

Granted, it can be pursued more or less ostentatiously. However, strengthening alliances surrounding China, moving more military forces to the Asia-Pacific, bolstering the militaries of neighboring states, and consistently backing the positions taken by the PRC’s antagonists outweigh an invitation to naval maneuvers every two years.

Finally, participation can be seen as a reward and denial as a punishment for China. Thus, Panda suggests barring Beijing from participation so long as it does not respect freedom of navigation. He wrote: “The magnitude is severe enough to condition China’s behavior while not derailing decades of fragile U.S.-China goodwill altogether.”

If all it took to bring to heel America’s looming co-superpower and peer competitor was dropping its navy from a nonessential ocean exercise, Washington should have tried that tactic long ago. The PRC likely prefers joining to sitting on the sidelines. However, the benefits remain too small to cause China’s leaders to change fundamental policy objectives.

As I wrote for China-US Focus:  “The PRC is a revisionist power, as America once was. The former will seek to reverse or overturn past geopolitical decisions which it believes to be unfair or unrealistic. Beijing will abandon that course only when the costs of doing so rise sufficiently.”

“Losing” China’s RIMPAC invitation won’t make a difference. In contrast, an increasingly well-armed and well-organized set of neighbors willing to stand up to Chinese bullying would.

“Friendship diplomacy” cannot eliminate ideological differences and geopolitical concerns. Nevertheless, the U.S. and its allies and friends should continue to seek opportunities to invest China in a stable geopolitical order. Doing so won’t be easy, but extending an invitation to RIMPAC next year would be a worthwhile step in the meantime.

Since before the Declaration of Independence, equality under the law has been a central feature of American identity. The Fourteenth Amendment expanded that constitutional precept to actions by states, not just the federal government. For example, if a state government wants to use race as a factor in pursuing a certain policy, it must do so in the furtherance of a compelling reason—like preventing prison riots—and it must do so in as narrowly tailored a way as possible.

This means, among other things, that race-neutral solutions must be considered and used as much as possible. So if a state were to, say, set race-based quotas for who receives its construction contracts and then claim that no race-neutral alternatives will suffice—without showing why—that would fall far short of the high bar our laws set for race-conscious government action.

Yet that is precisely what Illinois has done.

Illinois’s Department of Transportation and the Illinois State Toll Highway Authority have implemented the U.S. Department of Transportation’s Disadvantaged Business Enterprise (“DBE”) program, which aims to remedy past discrimination against minority and women contractors by granting competitive benefits to those groups. While there may be a valid government interest in remedying past discrimination, Illinois’s implementation of the program blows through strict constitutional requirements and bases its broad use of racial preferences on studies that either employ highly dubious methodology or are so patently outdated that they provide no legal basis on which to conclude, as constitutionally required, that there remains ongoing, systemic, widespread racial (or gender) discrimination in the public-construction-contracting industry that only the DBE program can rectify.

Even the studies Illinois used recommended that the state seek to achieve its anti-discrimination goals to the fullest extent possible by race-neutral means. Naturally, Illinois ignored this advice and implemented what could only generously be called a half-hearted pretense of employing race-neutral measures. Even worse, a federal district court upheld Illinois’s implementation. Even though the state failed to show which race-neutral alternatives it considered, tried, or rejected, the court held that the DBE’s grant of benefits still passed strict scrutiny.

The contracting company that brought the suit has now appealed the case to the U.S. Court of Appeals for the Seventh Circuit. Cato has joined the Pacific Legal Foundation and Center for Equal Opportunity in filing a brief supporting that appeal. We argue that Illinois didn’t meet the high constitutional standards governing the use of race-conscious measures in its approach to the DBE program because it (1) failed to establish a strong basis in evidence that there even is ongoing, widespread, systemic racial discrimination that must be remedied, and (2) failed to establish the narrow-tailoring requirement that workable race-neutral measures be tried and found insufficient before the state can turn to using race.

By cutting corners with shoddy studies and paying lip service to race-neutral solutions, Illinois and the lower court have each done a disservice to the hard-won principle of equality under the law. We urge the Seventh Circuit to correct those mistakes when it takes up Midwest Fence Corp. v. USDOT this summer.

Rick Perry, former governor of Texas, will announce his second White House run tomorrow. Perry served as Texas governor from December 2000, following the election of George W. Bush, until January of this year. During his long tenure, Perry showed reasonable fiscal restraint. Perry did not shrink the size of Texas’ government, but he limited its growth both in terms of spending and tax revenue.

Perry appeared in six editions of Cato’s Fiscal Policy Report Card. His first appearance was in 2004. His scores show consistent management of Texas’ budget as governor. He received a “B” in five of his six reports, with a “C” in 2012. Below are his scores in each report card.

2004: B

2006: B

2008: B

2010: B

2012: C

2014: B

Perry’s track record is certainly consistent given the tendency of some governors to slip while in office. For instance, George Pataki’s score fell from an “A” to a “D” between his first and last reports.  Mike Huckabee went from a “B” to an “F.”

From fiscal year 2002 to fiscal year 2015, general fund spending in Texas increased 63 percent. That outpaced average state general fund spending, which grew by 50 percent over the same period.

But when these figures are adjusted for population growth, Governor Perry’s record appears much better. Texas’ population grew almost 2.5 times faster than the United States’ population during this time period. In this situation, comparing per capita general fund spending growth is more instructive.

Texas: Per capita spending grew from $1,430 to $1,802, or 26 percent from FY2002 to FY2015

50-State Average: Per capita spending grew from $1,809 to $2,356, or 30 percent from FY2002 to FY2015

Spending did increase in Texas on a per capita basis, but it increased more slowly than spending in other states.

Inflation should also be considered. Once inflation is accounted for, any remaining growth in general fund spending disappears: Texas’ spending increase of 63 percent from FY2002 to FY2015 exactly matches population growth plus inflation for that time period. Perry also pushed several constitutional provisions to try to ensure this fiscal restraint would last. He championed an amendment to the Texas constitution that would have limited spending growth to population growth plus inflation. He also tried to require that any tax increase be approved by two-thirds of voters.
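The per capita comparison above is simple percent-change arithmetic. Here is a minimal sketch in Python (my own illustration, not anything from the report card), using only the dollar figures quoted in the text:

    # Percent change in per capita general fund spending, FY2002 to FY2015,
    # using the NASBO-derived figures quoted above.

    def pct_change(start, end):
        """Percent change from start to end, expressed in percent."""
        return (end - start) / start * 100

    texas = pct_change(1430, 1802)      # Texas: roughly 26 percent
    average = pct_change(1809, 2356)    # 50-state average: roughly 30 percent

    print(f"Texas per capita growth: {texas:.0f}%")
    print(f"50-state average per capita growth: {average:.0f}%")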

On taxes, Perry’s actions were mixed. In 2003, Perry supported a package that included fee increases. In 2006, Perry supported a plan that raised the cigarette tax by one dollar and repealed the state’s franchise tax in exchange for property tax cuts. The plan also created the complex Texas Margin tax as a replacement. The plan did result in a net decrease in taxes, but it was the wrong approach. The 2012 report card summarizes the issue well:

The new tax [the margin tax] hit 180,000 additional businesses and increased state-level taxes by more than $1 billion annually. The added state revenues were used to reduce local property taxes, but the overall effect of the package has been to centralize government power in the state and reduce beneficial tax competition between local jurisdictions.

In 2009, Perry pushed to increase the exemption on the Margin tax so that it would hit fewer businesses and he pushed to extend the exemption in 2013.

Perry’s long tenure as Texas governor shows consistency, earning a “B” on five out of six Cato report cards. Perry did not decrease the size of the state’s government, but he did limit its growth to population growth and inflation.

 

Another step toward criminalizing advocacy: writing in the Washington Post, Sen. Sheldon Whitehouse (D-R.I.) urges the U.S. Department of Justice to consider filing a racketeering suit against the oil and coal industries for having promoted wrongful thinking on climate change, with the activities of “conservative policy” groups an apparent target of the investigation as well. A trial balloon, or perhaps an effort to prepare the ground for enforcement actions already afoot?

Sen. Whitehouse cites as precedent the long legal war against the tobacco industry. When the federal government took the stance that pro-tobacco advocacy could amount to a legal offense, some of us warned tobacco wouldn’t remain the only or final target. To quote what I wrote in The Rule of Lawyers:

In a drastic step, the agreement ordered the disbanding of the tobacco industry’s former voices in public debate, the Tobacco Institute and the Council for Tobacco Research (CTR), with the groups’ files to be turned over to anti-tobacco forces to pick over the once-confidential memos contained therein; furthermore, the agreement attached stringent controls to any newly formed entity that the industry might form intended to influence public discussion of tobacco. In her book on tobacco politics, Up in Smoke, University of Virginia political scientist Martha Derthick writes that these provisions were the first aspect in news reports of the settlement to catch her attention. “When did the governments in the United States get the right to abolish lobbies?” she recalls wondering. “What country am I living in?” Even widely hated interest groups had routinely been allowed to maintain vigorous lobbies and air their views freely in public debate.

By the mid-2000s, calls were being heard, especially in other countries, for making denial of climate change consensus a legally punishable offense or even a “crime against humanity,” while widely known advocate James Hansen had publicly called for show trials of fossil fuel executives. Notwithstanding the tobacco precedent, it had been widely imagined that the First Amendment to the U.S. Constitution might deter image-conscious officials from pursuing such attacks on their adversaries’ speech. But it has not deterred Sen. Whitehouse.

Law professor Jonathan Adler, by the way, has already pointed out that Sen. Whitehouse’s op-ed “relies on a study that doesn’t show what he (it) claims.” And Sen. Whitehouse, along with Sen. Barbara Boxer (D-Calif.) and Sen. Edward Markey (D-Mass.), has been pursuing a fishing-expedition investigation into climate-dissent scholarship that drew a pointed rebuke from then-Cato Institute President John Allison as an “obvious attempt to chill research into and funding of public policy projects you don’t like…. you abuse your authority when you attempt to intimidate people who don’t share your political beliefs.”

P.S. Kevin Williamson notes that if the idea of criminalizing policy differences was ever something to dismiss as an unimportant fringe position, it is no longer. (cross-posted from Overlawyered)

Bryan Caplan of George Mason University posted some comments I sent him along with some questions about a recent blog post of his.  His questions are in quotes, my responses follow.  First, some background.

It’s important to separate immigration (permanent) from migration (temporary).  Much of what we think of as “immigration” is actually migration, since many of those who come eventually return home.  Dudley Baines (page 35) summarizes some estimates of return migration from America’s past.

Country/Region of Origin      Return Rates

Nordics                       20%

English & Welsh               40%

Portuguese                    30-40%

Austro-Hungarians & Poles     30-40%

Italians                      40-50%

 

Gould estimates a 60 percent return rate for Italians – similar to that of Mexican unauthorized immigrants from 1965-1985.

There were three parts to the Immigration Reform and Control Act of 1986 that all affected both immigration and migration.  The first part was the amnesty.  The second was employer sanctions through the I-9 form that was supposed to turn off the jobs magnet.  The third was increased border security to keep them out.  For the first two questions, I assume the rest of IRCA was passed.

1. How much higher would cumulative Mexican immigration since 1986 have been if the IRCA’s employer sanctions hadn’t been imposed?

Temporary migration would’ve been higher AND more illegal immigrants would have permanently settled in the United States.

IRCA didn’t change the probability of migrating illegally (Figure 6), but it made migrants more likely to stay once they arrived (Figure 7).  Since IRCA decreased the return rate while the inflow remained steady, the population of illegal immigrants grew rapidly.  IRCA plugged the drain with border enforcement, not employer sanctions.

Eliminating employer sanctions would have increased the inflow of illegal immigrants by raising their wages, but not by much (see point 4 here).  The I-9 did not lower illegal immigrant wages enough to dissuade many from coming because the economic chaos in Mexico made the United States even more attractive by comparison.  Mexican unlawful migration thus would’ve been even greater without employer sanctions because the benefits of working here would’ve been larger.

Amnesty increased permanent settlement through chain migration.  IRCA granted green cards to roughly 2.7 million unauthorized immigrants, which allowed those new green card holders to sponsor family members who were then able to sponsor other family members, and so on.  Each green card holder could have sponsored many immigrants.  The amnesty of the nearly 2.7 million increased the number of legal Mexican immigrants by millions more (my best guess).

2. How much higher would cumulative Mexican immigration since 1986 have been if the IRCA’s border security boost hadn’t been imposed?  (Your comments seem to suggest that it actually would have been lower, since guest workers wouldn’t have bothered to bring their families).

Temporary Mexican migration would’ve been higher, BUT fewer of them would’ve settled here permanently.

From 1965 to 1986, migrants moved between the United States and Mexico because it was easy to cross the border illegally.  If they returned to Mexico and couldn’t find work, they could always come back to the United States and find a job.  That explains most of the 26.7 million entries into the United States by unauthorized Mexican migrants, and the 21.8 million subsequent returns, from 1965 to 1985.  I don’t know what the comparable figures are for 1986-present, but if the pre-IRCA regime had continued, they would have been even larger.

Border security raised the price of entering the United States. It also raised the price of leaving, by foreclosing the option of returning to the United States if the Mexican economy tanked.  Once many decided to stay after getting past the increased border security, they decided to send for their families and settle here.

Later policies, like the 3/10 year bars and the post-9/11 increase in border security, combined with the Great Recession to cause an even steeper decline in return rates, but IRCA started it (Figure 7).

The longer and increasing terms of residency for unlawful immigrants wouldn’t have happened if the border had remained de facto open.  Many of the illegal migrants who came after IRCA wouldn’t have brought their families.

3. How much higher would cumulative Mexican immigration since 1986 have been if IRCA hadn’t been passed at all?

More Mexican workers would’ve migrated BUT many fewer of them would have permanently settled here as immigrants.  The circular flow of 1965-1986 would have continued and probably increased. Without the amnesty, there would’ve been roughly 2.7 million fewer Mexicans with green cards, which would’ve meant many fewer green cards for Mexicans in the future. 

P.S. Do your answers account for diaspora dynamics?

I do account for diaspora effects.  According to Doug Massey’s data (p. 69), the percentage of migrant heads of household with a spouse and/or children in the United States dropped from 1965 to 1985, loosening their familial ties.  Those with immediate family in the United States saw the size of their households increase.  Beginning in the 1965-1970 period, the odds of unlawful immigration by men rose until they stabilized in the 1980s, before IRCA (p. 69).  Those behaviors are not consistent with migrants looking to settle permanently in the United States.

After IRCA, the percentage of undocumented immigrants who were women or nonworkers jumped (p. 134), consistent with permanent settlement rather than temporary migration.      

The amnesty portion of IRCA increased the size of the Mexican-American population.  The legal immigration system emphasizes family reunification.  Amnestied Mexicans were thus able to sponsor their family members after receiving their green cards and even more so after about 45 percent of them earned citizenship.  Green cards for Mexicans would’ve increased at a slower rate without IRCA.  IRCA’s amnesty increased the size of the Mexican diaspora and set it up to grow more in the nearly 30 years since then than it would have otherwise.

The family-based immigration system that turned the amnesty into such a long-term increase in Mexican immigration was created by the Immigration Act of 1965, which is reviled by restrictionists today.  Ironically, the family reunification portion was concocted by restrictionists who lobbied for it.  The American Legion and the Daughters of the American Revolution opposed the abandonment of the national origins system under the 1965 Act.  According to historian (and noted restrictionist) Vernon M. Briggs Jr., they lobbied for a family-based immigration system because they thought it would preserve the European ethnic and racial balance of immigration. 

That backfired.      

I have one question for you, Bryan: If the benefits of immigration are as great as we like to argue, why are there so few illegal immigrants?

P.S.  I think about Bryan’s brilliant book The Myth of the Rational Voter more than any other I’ve ever read.  If you have any interest in economics, politics, or are just perplexed by the silly things politicians say, I highly recommend it.   

On Tuesday, Nevada Gov. Brian Sandoval signed into law the nation’s fifth education savings account (ESA) program, and the first to offer ESAs to all students who previously attended a public school. Earlier this year, Sandoval signed the state’s first educational choice law, a very limited scholarship tax credit. Despite their limitations, both programs greatly expand educational freedom, and will serve as much-needed pressure-release valves for the state’s overcrowding challenge.

When Nevada parents remove their child from her assigned district school, the state takes 90 percent of the statewide average basic support per pupil (about $5,100) and instead deposits it into a private, restricted-use bank account. The family can then use those funds to purchase a wide variety of educational products and services, such as textbooks, tutoring, educational therapy, online courses, and homeschool curricula, as well as private school tuition. Low-income students and students with special needs receive 100 percent of the statewide average basic support per pupil (about $5,700). Unspent funds roll over from year to year.

The eligibility requirements for ESA programs in other states are more restrictive. In Florida, Mississippi, and Tennessee, ESAs are limited to students with special needs. Arizona initially restricted ESA eligibility to students with special needs, though lawmakers have since expanded eligibility to include foster children, children of active-duty military personnel, students assigned to district schools rated D or F, gifted students, and children living on Native American reservations.

Gov. Sandoval signs the nation’s first nearly universal ESA program into law. Photo courtesy of Tim Keller.

Research shows that parents in Arizona are overwhelmingly satisfied with the state’s ESA program and, as Lindsey Burke and I recently explained, ESAs are a significant improvement over school vouchers:

ESAs offer several key advantages over traditional school-choice programs. Because families can spend ESA funds at multiple providers and can save unspent funds for later, ESAs incentivize families to economize and maximize the value of each dollar spent, in a manner similar to the way they would spend their own money. ESAs also create incentives for education providers to unbundle services and products to better meet students’ individual learning needs.

One disappointing limitation of Nevada’s ESA is that it is restricted to students who previously attended their assigned district school for at least 100 days. This eligibility requirement unnecessarily excludes students whose assigned school is low-performing, unsafe, or simply not a good fit for that student. It also excludes families and communities who object to what is being taught at the district schools. Hopefully the legislature will expand the ESA eligibility to include all Nevada students in the near future.

At 2pm this Thursday, I will be testifying before the Senate Judiciary Committee’s Subcommittee on Oversight, Agency Action, Federal Rights and Federal Courts at a hearing investigating how the Internal Revenue Service developed the (illegal) “tax-credit rule” challenged in King v. Burwell. Witnesses include three Treasury and IRS officials involved in drafting the rule:

Panel I
The Honorable Mark Mazur
Assistant Secretary for Tax Policy
Department of the Treasury
(invited)

Ms. Emily McMahon
Deputy Assistant Secretary for Tax Policy
Department of the Treasury
(invited)

Ms. Cameron Arterton
Deputy Tax Legislative Counsel for Tax Policy
Department of the Treasury
(invited)

The second panel will consist of Michael Carvin (lead attorney for the plaintiffs in King v. Burwell, who argued the case before the Supreme Court), University of Iowa tax-law professor Andy Grewal (who discovered three additional ways, beyond King, that the IRS expanded eligibility for tax credits beyond clear limits imposed by the ACA), and me.

Lincoln Chafee, former U.S. Senator and Governor of Rhode Island, will announce his presidential run this week.  Chafee’s fiscal record as governor was moderately liberal, but much more centrist than that of Maryland’s Martin O’Malley.

Chafee served as governor of Rhode Island from January 2011 to January 2015, first as an Independent and then as a Democrat. (He was a Republican during his time in the U.S. Senate.) During his tenure, he received a “D” and a “B” on Cato’s Fiscal Policy Report Card on America’s Governors.

State spending grew substantially while Chafee was governor. From fiscal year 2011 to fiscal year 2012, Rhode Island general fund spending grew 5.2 percent, and it grew another 5.1 percent from FY2012 to FY2013, according to data from the National Association of State Budget Officers (NASBO). NASBO data shows a 17 percent increase during Chafee’s entire tenure, almost three times the state’s population growth plus inflation over that period.
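As a rough check on how those annual figures relate to the cumulative number, here is a short Python illustration of my own (the two itemized years compound to about 10.6 percent; the rest of the 17 percent came in fiscal years the text doesn’t break out):

    # Compound the two reported annual growth rates and compare them with the
    # roughly 17 percent cumulative growth over Chafee's tenure cited above.
    fy12_growth = 0.052   # FY2011 -> FY2012
    fy13_growth = 0.051   # FY2012 -> FY2013

    two_year = (1 + fy12_growth) * (1 + fy13_growth) - 1
    print(f"Compound growth, FY2011-FY2013: {two_year:.1%}")   # about 10.6%
    print("Cumulative growth over the full tenure (NASBO): about 17%")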

Chafee promoted some tax increases to fund his expansion of government. In 2012 the state raised the cigarette tax. The sales tax base was expanded to cover clothing, pet services, and taxi rides. Sales tax base expansions can be the right pro-growth policy if they are combined with tax-rate cuts, but that was not the case in Rhode Island. Chafee’s original proposal included even more tax increases, but the legislature prevented several from taking effect, such as an increase in the state’s meal and beverage tax.

Thankfully for Rhode Island, Chafee also supported numerous pro-growth reforms. Rhode Island’s tax reform package that passed in 2014 helped Chafee’s score in the most recent report card. The plan cut the corporate income tax from 9 to 7 percent, reduced the estate tax by increasing the exemption, and repealed the state’s franchise tax. He also supported a robust pension reform plan in 2011 that raised the retirement age and eliminated cost-of-living adjustments for beneficiaries.

Chafee joins a crowded presidential field with his announcement this week. For a Democratic Party that has moved far to the left, he seems to have a more sensible fiscal approach than others in his party.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

We highlight a couple of headlines this week that made us chuckle a bit, although what they portend is far from funny.

The first was from the always amusing “Energy and Environment” section of the Washington Post. Climate change beat writer Chris Mooney penned a hit-you-over-the-head piece, headlined “The subtle — but real — relationship between global warming and extreme weather events,” about how human-caused global warming could be linked to various weather disasters of the past week, including the floods in Houston, the heatwave in India, and hurricanes in general.

Mooney starts out lamenting:

Last week, some people got really mad at Bill Nye the Science Guy. How come? Because he had the gall to say this on Twitter:

Billion$$ in damage in Texas & Oklahoma. Still no weather-caster may utter the phrase Climate Change.

Nye’s comments, and the reaction to them, raise a perennial issue: How do we accurately parse the relationship between climate change and extreme weather events, as they occur in real time?

It’s a particularly pressing question of late, following not only catastrophic floods in Texas and Oklahoma, but also a historic heatwave in India that has killed over 2,000 people so far, and President Obama’s recent trip to the National Hurricane Center in Miami, where he explicitly invoked the idea that global warming will make these storms worse (which also drew criticism).

As the Nye case indicates, there is still a lot of pushback whenever anyone dares to link climate change to extreme weather events. But we don’t have to be afraid to talk about this relationship. We merely have to be scrupulously accurate in doing so, and let scientists lead the way.

We must read different papers than Mr. Mooney. What little pushback there is (with a lot of it coming from us) has done little to impede the ubiquitous and speculative talk (or at the very least, insinuation) that global warming is involved in some material way in (or should we say, has “made worse”) each and every extreme weather event. When it comes right down to it, adding significant quantities of greenhouse gases to the atmosphere (which we have done) does impact the flow of radiation through the atmosphere and in some way, ultimately, the weather. But the precise role that it plays in each weather event (extreme or otherwise) and whether or not such an impact is detectable, noticeable, or significant is far from scientifically understood—and almost certainly dwarfed by natural variability. This is the true subtlety of the situation. Trotting out some scientist to say, “while we can’t definitively link global warming to individual weather events, this is the sort of thing that is consistent with our expectations” is hogwash devoid of meaning. Such a statement underplays the scientific complexities involved and almost certainly overplays the role of human-caused climate change. If Mooney were accurately quantifying the subtleties, he’d have no business inserting them into his stories at all.

The fact of the matter is that we examined the flooding situation in a 2004 article in the International Journal of Climatology[1] and found that, in the Texas region, there was no statistically significant change in rainfall on the heaviest day of the year. Given that the earth’s surface temperature hasn’t budged since then, the same should hold today.
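For readers curious what that sort of test looks like in practice, here is a minimal sketch of a trend check on annual-maximum one-day rainfall. The synthetic station data and the simple least-squares fit are assumptions made for illustration; this is not the methodology of the 2004 paper.

```python
# Illustrative sketch only: a simple trend test on the wettest day of the year.
# The synthetic data and the least-squares approach are assumptions for this
# example, not the method of the 2004 International Journal of Climatology paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1950, 2004)
# Hypothetical annual-maximum one-day rainfall totals (mm) for one station,
# generated as noise around a constant mean (i.e., no built-in trend).
wettest_day_mm = 60 + 15 * rng.standard_normal(years.size)

result = stats.linregress(years, wettest_day_mm)
print(f"trend: {result.slope:.3f} mm/yr, p-value: {result.pvalue:.3f}")
# A p-value well above 0.05 would be read as no statistically significant change.
```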

The next piece wasn’t really a headline, but rather a tweet. Dr. Chris Landsea, a multi-talented hurricane specialist (researcher, forecaster, historian) at the National Hurricane Center (NHC), sent out a tweet after President Obama stopped by the NHC last week and made a few comments about, what else, the tie-in between human-caused global warming and hurricanes.

The link in Landsea’s tweet points to his article a few years ago that summarizes his well-studied opinion as to the current state of the science of hurricanes and climate change. Unlike many popular press/government stories, Landsea doesn’t shy away from the complexities and the confounding factors—which in fact aren’t subtle at all.

For example, when it comes to global warming’s role in modifying the strength of hurricanes, Landsea has this to say:

It is likely - in my opinion - that manmade global warming has indeed caused hurricanes to be stronger today. However, such a linkage without answering the more important question of - by how much? - is, at best, incomplete and, at worst, a misleading statement. The 1-2 mph change currently in the peak winds of strong hurricanes due to manmade global warming is so tiny that it is not measurable by our aircraft and satellite technologies available today, which are only accurate to about 10 mph (~15 kph) for major hurricanes.

Landsea touches on topics of hurricane strength, number, lifespan, tracking, monitoring, demographics, damages, and, most importantly, implications. For example:

So after straightforward consideration of the non-meteorological factors of inflation, wealth increases, and population change, there remains no indication that there has been a long-term pick up of U.S. hurricane losses that could be related to global warming today. There have been no peer-reviewed studies published anywhere that refute this.

An easy-to-read, extremely informative, and insightful piece by one of the world’s leading hurricane researchers, this article is not to be missed. What’s more frightening than hurricanes themselves is how far apart the opinions of leading scientists are from those of leading politicians.

But perhaps our favorite was the headline “I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here’s How,” which tells the tale of how a team of conspirators showed how easy it is to get completely meaningless research findings into the scientific literature and then generate front-page headlines and articles in diverse media outlets around the world.

The article, by science reporter-cum-dietary health researcher John Bohannon, is a must read. Laid out in chapters like the screenplay for The Sting, the article describes how the whole thing went down, from “The Setup” to “The Con,” from “The Hook” to “The Mark” to “The Score.” Even more disconcerting than how they got the bad science into the literature—which is worrisome enough—is Bohannon’s description of the state of scientific reporting. While his remarks are made about science reporters covering diet, they apply, in spades, to many on the climate change beat. From Bohannon:

We journalists have to feed the daily news beast, and diet science is our horn of plenty. Readers just can’t get enough stories about the benefits of red wine or the dangers of fructose. Not only is it universally relevant—it pertains to decisions we all make at least three times a day—but it’s science! We don’t even have to leave home to do any reporting. We just dip our cups into the daily stream of scientific press releases flowing through our inboxes. Tack on a snappy stock photo and you’re done.

The only problem with the diet science beat is that it’s science. You have to know how to read a scientific paper—and actually bother to do it. For far too long, the people who cover this beat have treated it like gossip, echoing whatever they find in press releases. Hopefully our little experiment will make reporters and readers alike more skeptical.

If a study doesn’t even list how many people took part in it, or makes a bold diet claim that’s “statistically significant” but doesn’t say how big the effect size is, you should wonder why. But for the most part, we don’t. Which is a pity, because journalists are becoming the de facto peer review system. And when we fail, the world is awash in junk science.
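Bohannon’s point about “statistically significant” claims built on tiny samples and unreported effect sizes is easy to illustrate with a quick simulation. The sketch below uses made-up group sizes and outcome counts (assumptions for illustration, not figures from his study): measure enough outcomes on small groups of pure noise and something will almost always come up “significant.”

```python
# Hypothetical illustration: with many measured outcomes and tiny groups of
# pure noise, a "statistically significant" result is nearly guaranteed.
# The group size and outcome count below are assumptions, not Bohannon's numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, n_outcomes = 8, 18   # small groups, many measured outcomes

p_values = []
for _ in range(n_outcomes):
    treatment = rng.standard_normal(n_per_group)  # no real effect
    control = rng.standard_normal(n_per_group)    # no real effect
    p_values.append(stats.ttest_ind(treatment, control).pvalue)

print(f"smallest p-value across {n_outcomes} outcomes: {min(p_values):.3f}")
# With 18 independent tests at the 0.05 level, the chance of at least one
# spurious "significant" finding is roughly 1 - 0.95**18, or about 60%.
```

None of the “effects” in that sketch are real, which is exactly the point: significance without sample sizes and effect sizes tells a reader very little.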

There’s a lot more really juicy stuff in this piece. You ought to have a look—but your trust in science and the media will certainly be shaken, if it hasn’t crumbled already.

[1] Michaels, P.J., et al., 2004. Trends in Precipitation on the Wettest Days of the Year across the Contiguous United States. International Journal of Climatology, 24, 1873-1882.
