Policy Institutes

The Trans-Pacific Partnership is a still-evolving trade agreement that would reduce tariffs and other barriers to goods and services trade between the United States and 11 other countries. It also would likely include provisions designed to protect certain U.S. industries from the full effects of competition.  A TPP agreement, then, would likely increase our economic freedoms in some realms and reduce them in others.  How these pros and cons would be manifest is unclear at the moment, given that the deal is not done.  But it would be a mistake to forgo the opportunity to evaluate a completed trade deal that could deliver significant benefits. 

It is broadly understood that the TPP negotiations cannot be concluded without the Congress passing, and the president signing, Trade Promotion Authority legislation.  Without TPA, the president could not be sure that any trade deal brought home reflected the official wishes of Congress, and the likelihood that foreign negotiators would put their best and final offers on the table—knowing that Congress could unravel the deal’s terms—is close to zero.

The Senate passed TPA legislation (along with language reauthorizing the Trade Adjustment Assistance program) on May 22.  The House is likely to take up the bill this week.  At the moment, the president is in lockstep with a large majority of congressional Republicans, who support trade liberalization and see TPA as essential to the process.  But some Republicans (mostly from the conservative wing), who are wary of giving this president any more power, have joined ranks with the vast majority of congressional Democrats in opposition to TPA.  Meanwhile, Democratic presidential frontrunner Hillary Clinton—an architect of the TPP as Secretary of State and a potential heir to the trade agenda—has refused to take a position on TPA.

The spotlight on trade policy has generated much more heat than light.  Misinformation abounds.  Rationalizations masquerade as rationales.

This new Cato Free Trade Bulletin is intended to dispel some of the nonsense that has been circulating and to present a brief, objective assessment of what has transpired and what lies ahead for TPA and TPP.

Every so often, I’ll assert that some statists are so consumed by envy and spite that they favor high tax rates on the “rich” even if the net effect (because of diminished economic output) is less revenue for government.

In other words, they deliberately and openly want to be on the right side (which is definitely the wrong side) of the Laffer Curve.

Just in case you think I’m exaggerating in order to make my opponents look foolish, check out this poll of left-wing voters who strongly favored soak-the-rich tax hikes even if there was no extra tax collected.

But now I have an even better example.

Writing for Vox, Matthew Yglesias openly argues that we should be on the downward-sloping portion of the Laffer Curve. Just in case you think I’m exaggerating, “the case for confiscatory taxation” is part of the title for his article.

Here’s some of what he wrote.

Maybe at least some taxes should be really high. Maybe even really really high. So high as to be useless for revenue-raising purposes — but powerful for achieving other ends. We already accept this principle for tobacco taxes. If all we wanted to do was raise revenue, we might want to slightly cut cigarette taxes. …But we don’t do that because we care about public health. We tax tobacco not to make money but to discourage smoking.

The tobacco tax analogy is very appropriate.

Indeed, one of my favorite arguments is to point out that we have high taxes on cigarettes precisely because politicians want to discourage smoking.

As a good libertarian, I then point out that government shouldn’t be trying to control our private lives, but my bigger point is that the economic arguments about taxes and smoking are the same as those involving taxes on work, saving, and investment.

Needless to say, I want people to understand that high tax rates are a penalty, and it’s particularly foolish to impose penalties on productive behavior.

But not according to Matt. He specifically argues for ultra-high tax rates as a “deterrence” to high levels of income.

If we take seriously the idea that endlessly growing inequality can have a cancerous effect on our democracy, we should consider it for top incomes as well. …apply the same principle of taxation-as-deterrence to very high levels of income. …Imagine a world in which we…imposed a 90 percent marginal tax rate on salaries above $10 million. This seems unlikely to raise substantial amounts of revenue.

I suppose we should give him credit for admitting that high tax rates won’t generate revenue. Which means he’s more honest than some of his fellow statists who want us to believe confiscatory tax rates will produce more money.
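The claim that confiscatory rates sit on the downward-sloping portion of the Laffer Curve can be sketched with a toy model. This is an illustration only: the quadratic behavioral response (reported income shrinking as the square of the after-tax share) and the numbers are assumed for the example, not empirical estimates.

```python
# Toy Laffer-curve sketch. The functional form and elasticity are assumptions
# chosen for illustration, not estimates of real-world behavior.
def revenue(rate, base=100.0, elasticity=2.0):
    """Tax revenue when reported income shrinks as (1 - rate)**elasticity."""
    return rate * base * (1.0 - rate) ** elasticity

# With elasticity = 2, revenue peaks at a rate of about one-third; a 90
# percent rate collects far less than a moderate one.
peak = max(range(1, 100), key=lambda r: revenue(r / 100))
print(peak)                            # revenue-maximizing rate, in percent
print(revenue(0.90) < revenue(0.33))   # confiscatory rate raises less
```

In this toy model the revenue-maximizing rate is 33 percent, and pushing the rate to 90 percent collects only a small fraction of that — which is exactly the trade Yglesias says he is willing to make.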

But honesty isn’t the same as wisdom.

Let’s look at the economic consequences. Yglesias does admit that there might be some behavioral effects because upper-income taxpayers will be discouraged from earning and reporting income.

Maybe…we really would see a reduction of effort, or at least a relaxation of the intensity with which the performers pursue money. But would that be so bad? Imagine the very best hedge fund managers and law firm partners became inclined to quit the field a bit sooner and devote their time to hobbies. What would we lose, as a society? …some would presumably just move to Switzerland or the Cayman Islands to avoid taxes. That would be a real hit to local economies, but hardly a disaster. …Very high taxation of labor income would mean fewer huge compensation packages, not more revenue. Precisely as Laffer pointed out decades ago, imposing a 90 percent tax rate on something is not really a way to tax it at all — it’s a way to make sure it doesn’t happen.

While I suppose it’s good that Yglesias admits that high tax rates change incentives, he clearly underestimates the damaging impact of such a policy.

He presumably doesn’t understand that rich people earn very large shares of their income from business and investment sources. As such, they have considerable ability to alter the timing, level, and composition of their earnings.

But my biggest problem with Yglesias’ proposals is that he seems to believe in the fixed-pie fallacy that public policy doesn’t have any meaningful impact on economic performance. This leads him to conclude that it’s okay to pillage the “rich” since that will simply mean more income and wealth is available for the rest of us.

That’s utter nonsense. The economy is not a fixed pie and there is overwhelming evidence that nations with better policy grow faster and create more prosperity.

In other words, confiscatory taxation will have a negative effect on everyone, not just upper-income taxpayers.

There will be less saving and investment, which translates into lower wages and salaries for ordinary workers.

And as we saw in France, high tax rates drive out highly productive people, and we have good evidence that “super-entrepreneurs” and inventors are quite sensitive to tax policy.

To be fair, I imagine that Yglesias would try to argue that these negative effects are somehow offset by benefits that somehow materialize when there’s more equality of income.

But the only study I’ve seen that tries to make a connection between growth and equality was from the OECD, and that report was justly ridiculed for horrible methodology (not to mention that it’s hard to take seriously a study that lists France, Spain, and Ireland as success stories).

P.S. This is my favorite bit of real-world evidence showing why there should be low tax rates on the rich (in addition, of course, to low tax rates on the rest of us).

P.P.S. And don’t forget that leftists generally view higher taxes on the rich as a precursor to higher taxes on the rest of the population.

P.P.P.S. In the interests of full disclosure, Yglesias says I’m insane and irrational.

As I noted last week, the GOP’s 2016 contenders didn’t do themselves much credit as they ducked, covered, cringed, and pratfell through a series of interview questions about the Iraq War. Still, Jeb Bush had a point when he noted that, at the time, “almost everybody” in political Washington was for the war. True enough: as policy disasters go, the Iraq War was as bipartisan as the subprime loan crisis.

On the war’s tenth anniversary a couple of years back, the New Republic’s John Judis recalled “what it was like to oppose the Iraq War in 2003.” His memory jibes with mine: it was pretty damned lonely. Well before “Shock and Awe,” hawkish arguments had achieved near full-spectrum dominance over the minds of Beltway policy elites, and the invasion and occupation of Iraq was shaping up as a horrific idea whose time had come. 

But it rankled a bit when Judis wrote that “except for Jessica Mathews at the Carnegie Endowment for International Peace, Washington’s thinktank honchos were also lined up behind the war.” Not to take anything away from Ms. Mathews, but the late, great Bill Niskanen had to count as a “think tank honcho” if anyone did, and he opposed the war vigorously, early, and often.

In a December 2001 public debate with former CIA director James Woolsey, Niskanen, then Cato’s chairman, offered the first prominent public statement by a DC think-tank leader against that looming debacle: “An Unnecessary War Is an Unjust War,” Bill argued. In the run-up to the invasion, other Cato scholars argued against the war on numerous grounds.

At the time, opposition to the Iraq War was controversial even within the building—and outside of 1000 Massachusetts Ave., Cato’s Iraq War skeptics had very little company among the Beltway cognoscenti. 

For example, both inside and outside the Bush administration, American Enterprise Institute scholars played a key role in making the case for war.  As Bob Woodward reported in his 2006 book State of Denial, in late 2001, Paul Wolfowitz approached then-AEI president Chris DeMuth to put together a secret task force of top thinkers to generate “the kinds of ideas and strategy needed to deal with a crisis of the magnitude of 9/11.” They called themselves “Bletchley II,” after the British cryptographers who cracked Axis communication codes during WWII.  

Bletchley II participants included Fareed Zakaria (who should have known better), James Q. Wilson, Reuel Marc Gerecht, Fouad Ajami, Robert Kaplan, and Bernard Lewis. The team generated “a seven-page, single-spaced document, called ‘Delta of Terrorism.’ ‘Delta’ was used in the sense of the mouth of a river from which everything flowed.” Rumsfeld adviser Steven Herbits summed up the memo’s message: “We’re facing a two-generation war. And start with Iraq.” (Boy, I’d love to see that memo.) 

Over on the center-left, Brookings scholars proved instrumental as well—Kenneth Pollack’s The Threatening Storm: The Case for Invading Iraq played a key role in getting center-left opinion leaders behind the war, convincing, among many others, the New Yorker’s David Remnick that “a return to a hollow pursuit of containment will be the most dangerous option of all.” 

It’s supposed to be conservatives’ job to stand athwart history, yelling “stop!” and liberals’ role to resist the rush to war. For the most part, DC’s think tanks left that job to the libertarians, with predictable results. In 2004, after the die was cast, Cato was the first major DC think tank to offer an extended argument for Exiting Iraq, in Chris Preble’s book of that name.  

I wouldn’t call Cato’s Iraq skeptics “prescient,” nor would I claim that everything we wrote holds up well a decade and a half later; for example, like several of my colleagues, I engaged in a little WMD fearmongering of my own in the early days of the war, warning that “components for a ‘dirty bomb’ may already be in the wrong hands” thanks to the invasion. (Cato’s Alan Reynolds, who decried the WMD “hype,” had the better view). But it would be nice if, when a major daily prints one of those “what to do in Iraq?” symposia, it occasionally thought to call somebody, like Ted Carpenter or Chris Preble—who got it right in the first place. 

It would be even better if the GOP’s 2016 contenders weren’t still, 12 years later, so eager to seek foreign policy advice from people who got the Iraq War Question spectacularly wrong.

The Washington Post reports that Scott Walker’s “crash course” in foreign policy is led by tutors like “[Elliott] Abrams, Bush’s deputy national security adviser, and [AEI’s] Marc A. Thiessen, a Post columnist and former Bush speechwriter known for his staunch defense of waterboarding and other interrogation tactics barred by President Obama. Walker selected Thiessen to co-write his 2013 book, Unintimidated, and the two men became confidants during hours of Skype conversations each weekend.” Marco Rubio’s team includes a kettle of Iraq hawks like Abrams, Eric Edelman, and Brookings’ Robert Kagan. There’s no word yet on who’s advising Rick Perry, but last time around he sought out Iraq War architect Doug Feith, now with the Hudson Institute. It was a bit unfair for then-CentCom commander Gen. Tommy Franks to call Feith “the dumbest [expletive deleted] guy on the planet,” given Earth’s seven billion-plus people, but Feith’s hardly the first person you’d want to turn to if you wanted to avoid the costly foreign policy blunders of the past decade. 

For his part, Jeb Bush has sought out a number of key figures “present at the creation” of the Iraq debacle, like Paul Wolfowitz, who, as US forces drove toward Baghdad, in March 2003, promised Congress icing on the cakewalk: “There is a lot of money to pay for this that doesn’t have to be U.S. taxpayer money…. We are talking about a country that can really finance its own reconstruction and relatively soon.”

Given that background, you might suspect that there’s just no accountability for GOP foreign policy advisers; but that isn’t so. The Wall Street Journal recently reported that “Elbridge Colby, a fellow at the Center for a New American Security, was being seriously considered for a job as foreign policy director in Mr. Bush’s expanding organization, [but] according to a person familiar with the campaign’s internal deliberations, Mr. Bush’s political operation raised concerns about Mr. Colby’s published views on Iran. Mr. Colby has prominently advocated against a military strike on Iran and has called for the Republican Party to move closer to its roots of pragmatism and containment.” So he’s out. You can’t expect to retain your political and professional credibility after a screw-up like that.

It’s hard not to feel satisfaction at the indictment of soccer officials for apparently corrupting the globe’s Beautiful Game—soccer in America but football to most of the world. Yet emotional satisfaction is a bad basis for government policy. While the U.S. is not the only nation to assert extraterritorial jurisdiction, it does so more often and more broadly than anyone else.

Moreover, punishing foreigners creates future risks. Someday Americans might get indicted by other nations for “crimes” committed in the U.S.

How did Washington become the world’s policeman and prosecutor in the case of soccer? The sport remains a modest phenomenon in America. Most of the alleged crimes involve foreigners acting overseas.

The impact in the U.S. is less than that on almost every other nation on earth, since virtually everywhere the sport commands greater loyalty from a larger percentage of the population. Nevertheless, some of the criminal acts took place in America and the corruption affected interstate (and foreign) commerce, the boilerplate justification used by Uncle Sam for regulating most everything.

As American power has grown, so has Washington’s willingness to apply its laws to the rest of the world. Washington has routinely abducted foreigners overseas for drug offenses. Perhaps the most extreme example was the 1989 invasion of Panama, after which ousted dictator Manuel Noriega was transported to America and convicted of violating U.S. drug laws.

Even more problematic has been the Justice Department crusade to turn foreign banks into arms of the IRS. The U.S. has gone after Swiss banks with the greatest enthusiasm, paying informants, filing criminal prosecutions, and imposing multi-billion dollar fines for accepting deposits from Americans. Yet citizens of Switzerland and the rest of the world have no moral obligation to help fill Uncle Sam’s coffers to finance more waste and wars.

Congress also passed the Foreign Account Tax Compliance Act (FATCA) requiring all non-American financial institutions to report any accounts held by Americans. As a result, foreign banks face substantial costs in dealing with U.S. citizens, even those fully compliant with American tax laws. Many foreign banks now refuse to serve Americans.

Perhaps the most expansive form of extraterritoriality is sanctions. By one count, Washington imposed 61 different economic penalties between 1993 and 1998. Washington’s dictates are amplified not only by the size of the American market, but also through SWIFT, the Brussels-based organization that manages international financial transfers.

Traditionally sanctions applied to companies formed in the U.S. and their branches, and firms located in America. However, through both legislation and regulation Washington has constantly expanded the extraterritorial reach of U.S. penalties.

Over time Washington began targeting U.S. subsidiaries and licensees. Later sanctions also applied to resale of U.S.-origin goods, transactions with foreign firms, and foreign banks financing prohibited transactions. European companies, in particular, have found themselves fined for activities which are legal under national as well as European Union law.

Explained attorneys Ronald Meltzer and David Ross of WilmerHale, “U.S. law has undergone a significant shift: it effectively creates an expanding regime of secondary sanctions that are triggered by transactions that do not require a nexus to the United States.” Sometimes other governments have enacted “blocking” statutes which prohibit their nationals from complying with foreign, i.e., American, restrictions deemed harmful to their national interest.

As I point out on Forbes online: “The moral fervor behind many of Washington’s many fevered crusades often is laudable. But a desire to do good does not warrant America attempting to play dictatress to the world.”

So it is with the U.S. indictments against corrupt soccer officials, and even more so with Washington’s determination to make foreign banks agents of the IRS and foreign individuals and companies tools of U.S. foreign policy. Such overreach inevitably breeds abuse.

It also invites retaliation in the future, when America no longer so dominates the globe. If Americans eventually find themselves in a foreign court for legal conduct in the U.S., they will have today’s lawmakers and officials to thank.

My new piece at Reason begins:

We’ve seen it happen again and again: libertarians are derided over some supposedly crazy or esoteric position, years pass, and eventually others start to see why our position made sense. It’s happened with asset forfeiture, with occupational licensure, with the Drug War, and soon, perhaps, with libertarians’ once-lonely critique of school truancy laws.

In his 1980 book Free To Choose, economist Milton Friedman argued that compulsory school attendance laws do more harm than good, a prescient view considering what’s come since: both Democratic and Republican lawmakers around the country, prodded by the education lobby, have toughened truancy laws with serious civil and even criminal penalties for both students and parents. Now the horror stories pile up: the mom arrested and shackled because her honor-roll son had a few unexcused sick days too many, the teenagers managing chaotic home lives who are threatened with juvenile detention for their pains, the mother who died in jail after being imprisoned for truancy fines. It’s been called carceral liberalism: we’re jailing you, your child, or both, but don’t worry because it’s for your own good. Not getting enough classroom time could really ruin a kid’s life.

My article also mentions that a bill to reform Texas’s super-punitive truancy laws has reached Gov. Greg Abbott’s desk, following the reported success of an experiment in San Antonio and pressure from a Marshall Project report. Finally, truancy-law reform is looking to become an issue across the political spectrum — but libertarians were there first.

Today marks the second anniversary of The Guardian’s first blockbuster story derived from files provided by former NSA contractor Edward Snowden—launching what would become an unprecedented deluge of disclosures about the scope and scale of communications surveillance by American intelligence agencies. So it seems appropriate that this week saw not only the passage of the USA Freedom Act, but also the approval in the House of several privacy-protective appropriations amendments, about which more momentarily.  Snowden himself takes a quick victory lap in a New York Times editorial reflecting on the consequences of his disclosures, (very much in line with his remarks during our interview at the inaugural Cato Surveillance Conference):

Privately, there were moments when I worried that we might have put our privileged lives at risk for nothing — that the public would react with indifference, or practiced cynicism, to the revelations.

Never have I been so grateful to have been so wrong.

Two years on, the difference is profound. In a single month, the N.S.A.’s invasive call-tracking program was declared unlawful by the courts and disowned by Congress. After a White House-appointed oversight board investigation found that this program had not stopped a single terrorist attack, even the president who once defended its propriety and criticized its disclosure has now ordered it terminated.

He’s referring here to last month’s appellate court ruling against the notorious telephone records dragnet, followed this week by passage of the USA Freedom Act.  That law should bar bulk collection not only under §215 of the Patriot Act, the basis of the phone program, but also under §214—the “pen register” provision previously used to vacuum up international Internet metadata—and National Security Letters, which can be issued by senior FBI officials without judicial approval.  Since the latter two authorities are permanent, they would not have been affected by what quite a few lazy reporters described as “the expiration of the Patriot Act,” though in fact only about 2 percent of the law’s provisions were actually due to sunset.  While the law is far from ideal, incidentally, I think it does constitute more robust reform than many libertarians fear, for reasons I lay out in this piece at Motherboard and this blog post at Just Security.  It will, of course, be necessary to vigilantly watch for efforts to water down the law’s protection—something the public is finally at least somewhat empowered to do by a transparency provision requiring significant legal interpretations by the secret Foreign Intelligence Surveillance Court to be published in unclassified form.

Perhaps as significant as the law’s substantive reforms, however, is its symbolic importance.  Since the terror attacks of 9/11, we have relentlessly ratcheted up government’s spying powers, assured that only by trading away ever more privacy could we guarantee safety.  Whenever a surveillance authority was due to lapse—as, unfortunately, only a few were designed to—leadership in Congress invariably waited until the eleventh hour to schedule the relevant statutes for consideration, then used the manufactured “emergency” of looming expiration to steamroll over legislators who hoped to seriously debate reforms or added safeguards, or whether the expanded powers were necessary at all.  Senate Majority Leader Mitch McConnell sought this time to repeat the strategy that had worked so well in the past—only to discover that Americans were no longer so easily cowed. 

That was demonstrated again just days after the Freedom Act’s passage, when a series of amendments to an appropriations bill aimed at limiting government surveillance passed the House by enormous margins.  The first, offered by Rep. Jared Polis, seeks to prohibit the Drug Enforcement Administration from engaging in bulk collection of Americans’ data under its own subpoena authorities, following revelations that it had for decades maintained its own more limited phone records dragnet. The second, from Reps. Ted Poe and Zoe Lofgren, bars the FBI or Justice Department from using government funds to seek to insert backdoors into secure online communications systems. The third, offered by Rep. Thomas Massie, seeks to block the National Security Agency from abusing its role as a consultant to a national standards-setting body to dilute rather than strengthen encryption protocols.

These are, to be sure, heartening developments, but plenty of work remains. We still know precious little about the massive surveillance being conducted under the aegis of Executive Order 12333, which governs intelligence gathering that takes place outside the United States, yet sweeps in large amounts of Americans’ data as it travels around the globe.  Nor do the reforms passed this week touch §702 of the FISA Amendments Act, an authority that blesses the very general warrants abhorred by the framers of our Constitution, enabling large-scale collection of Americans’ communications with foreign persons and websites.  That authority is set to expire at the end of 2017 and, after a brief pause to toast the small progress made this week, will be the next battle to which privacy advocates turn their efforts.

Despite recent gains around the country, civil asset forfeiture reform suffered a setback in Maryland when Gov. Larry Hogan (R) vetoed a bill that would have placed restraints on the state’s civil forfeiture regime.

Civil asset forfeiture is a process by which the government is able to seize property (cash, vehicles, homes, hotels, and virtually any other item you can imagine) and keep the proceeds without ever charging the victim with a crime.  The bill, SB 528, would have established a $300 minimum seizure amount, shifted the burden of proof to the state when someone with an interest in the seized property asserts innocent ownership (e.g. a grandmother whose home is taken when her grandson is suspected of selling drugs out of the basement), and barred state law enforcement agencies from using lax federal seizure laws to circumvent state law.

In vetoing the measure, Gov. Hogan claimed that restraining civil asset forfeiture “would greatly inhibit” the war on drugs in the midst of a heroin epidemic and interfere with joint federal/state drug task forces. Gov. Hogan admitted that asset forfeiture laws “can be abused,” but maintained that their utility outweighs the risk of abuse. 

Each of these assertions is misguided.

Civil forfeiture reform would certainly make it more difficult for law enforcement to seize property from citizens not charged with crimes.  Indeed, that is the entire purpose of reforming the law.  Likewise, the presumption of innocence, right to due process, and warrant requirement make it more difficult for the government to prosecute people suspected of crimes.  Those checks on hostile government action exist because governments with unfettered authority to summarily plunder and punish tend to do just that, and the litany of civil forfeiture horror stories is proof.

Therefore civil asset forfeiture is not merely susceptible to abuse; civil asset forfeiture is abuse.  Under no circumstances should someone be forced to forfeit their money, property, or even their home to the government on suspicion alone. The “inhibitions” Gov. Hogan’s statement laments are in fact the most fundamental defenses for private property and due process in a country founded to protect them.

Governor Hogan’s appeal to the efficacy of the drug war is similarly misguided.  We’re told that the prevalence of drugs, especially heroin, in Maryland is reason enough to keep forfeiture laws lax.  Decades of a failed drug war have proven the inefficacy of asset forfeiture as a means of stemming the flow of narcotics, and continuing that failure is no justification for abolishing the due process and private property rights of people who aren’t even charged with criminal behavior.

Remember: even an outright abolition of civil forfeiture wouldn’t mean the police couldn’t seize property from drug traffickers; it would just require the state to prove its suspicions in court before it takes someone’s property.  Criminal asset forfeiture would remain available to law enforcement inasmuch as there is any legitimate law enforcement justification for seizing property.

Lastly, Gov. Hogan’s veto statement announces the establishment of a working group, made up primarily of federal and state law enforcement and prosecutors (with a single seat going to the public defender), to decide whether any change to forfeiture law “is warranted” to prevent abuse and ensure law enforcement can still fight the war on drugs.  Tasking the very people who profit from civil forfeiture abuses with deciding whether changes are warranted casts immense doubt on the possibility of meaningful reform.

SB 528 is already a compromise bill.  It doesn’t abolish civil asset forfeiture, as New Mexico did.  It merely raises the protections due to innocent owners and requires state law enforcement to use state laws instead of excessively permissive federal forfeiture laws.  If even that is too much for Governor Hogan to tolerate, it seems unlikely that a working group of police and prosecutors is going to suggest much in the way of meaningful reform.

Civil asset forfeiture reform is not a partisan issue.  New Mexico’s abolition of the practice resulted from a bill that passed unanimously through both houses of the legislature and was signed by Republican Governor Susana Martinez.  Legislation reining in civil forfeiture in Montana was authored by Rep. Kelly McCarthy and signed by Governor Steve Bullock, both Democrats.

This is not a case of Republicans versus Democrats.  It’s a battle between those who believe that due process and private property rights trump the revenue generation and administrative ease of the state and those who believe that those rights are acceptable collateral damage in the war on crime.  Governor Hogan has chosen the wrong side of this debate.

In what has been aptly named “the world’s dumbest trade war,” both Europe and America have fought to limit imports of low-cost Chinese solar panels.  Much to the chagrin of anyone who likes solar power, the United States and the European Union have imposed high tariffs on Chinese panels in order to protect their own subsidized domestic industries. 

In 2013, the EU negotiated a deal with Chinese solar manufacturers that exempted them from the duties as long as they agreed to sell panels above a set minimum price.  By managing trade in this way, European authorities are essentially creating a solar cartel that divvies up market share among established companies that agree not to compete on price.

But cartel arrangements are notoriously difficult to maintain because any member of the group can ruin the scheme by reneging.  This would seem especially likely when the cartel arrangement was forced on them involuntarily by government in the first place.

So it is that some Chinese companies have tried to find innovative ways to compete despite government price controls.  According to the Wall Street Journal:

Among other violations of the settlement, the commission said Canadian Solar offered unreported “benefits” to its customers in Europe to buy their panels, effectively lowering the sales price below the minimum import-price set by the agreement.

The commission also questioned the practice by Canadian Solar and ReneSola of selling solar cells to firms in non-EU countries for assembly into panels that are then sold to the EU. Because the EU tariffs only apply to panels coming from China, the practice, though not a direct violation of the agreement, allows the two firms’ solar cells to enter the 28-nation EU unrestricted by the agreement.

Darn those Chinese and their legal attempts to help Europeans reduce greenhouse gas emissions through mutually beneficial exchange.  Don’t they know the EU wants prices to stay high to prop up subsidized domestic producers?  Shame on them!

In all seriousness, green industrial policy has become a global problem that will only grow as long as governments find the benefits of free trade in wind and solar power equipment less appealing than doling out privilege through managed trade.

Former Texas governor Rick Perry announced his candidacy for the 2016 GOP presidential nomination earlier today. Many recall his 2012 bid, which came to a rather spectacular end when Gov. Perry, on live television, forgot the name of the third federal agency he promised to eliminate if elected president. However, in a recent WSJ op-ed, Gov. Perry redeemed himself by offering a real candidate for elimination: the Export-Import Bank.

The Export-Import Bank (Ex-Im) provides financing and loan guarantees at below-market rates to foreign purchasers looking to buy products from American exporters. For example, if Emirates Air wants to buy planes from Boeing, Ex-Im can provide a loan guarantee, reducing the interest rate Emirates will pay, and thus incentivizing Emirates to buy from Boeing rather than Airbus.

Ex-Im’s supporters claim that these subsidies create jobs and finance domestic economic growth. But they fail to consider the ensuing downstream effects, which Bastiat termed “ce qu’on ne voit pas”–that which is unseen. As the Cato scholar Daniel Ikenson makes clear, every dollar Ex-Im provides to subsidize foreign purchasers of U.S.-produced products discriminates against U.S. consumers of the same products. For example, when Emirates receives a subsidy for planes because it is a foreign company, Emirates gets a leg up on Delta.

An edifying account of how this system works was presented many years ago by the late Prof. Yale Brozen in his foreword to Prof. Leland Yeager’s classic Proposals for Government Credit Allocation (1977):

Whom you know and with whom you have influence becomes more important in obtaining capital than how productively you can use it. Capital is diverted from more productive uses to politically determined applications […]. The national income pie shrinks as an increasing proportion of our capital is allocated by the political process – not only because of its diversion from more productive uses but also because more and more of our resources are devoted to winning political influence, as that becomes the road to access to available capital and subsidies.

For the record, Ex-Im isn’t small potatoes. In FY 2015, Ex-Im’s loans and loan guarantees will total $30.9 billion, or 6.7% of all non-housing federal credit programs (see the accompanying chart). Ex-Im’s cumulative loans and guarantees outstanding (read: credit exposure) currently stand at $112 billion. Because the loans are granted at below-market rates, Ex-Im does not receive fair compensation for the $112 billion of risk it takes on.

Instead of adopting a policy that makes a few U.S. exporters winners at the expense of many losers, there is a way to make all U.S. firms more competitive: just lower the punishing corporate tax rate. Rick Perry also embraces this idea in his op-ed, mirroring what I have been advocating for years.

The message is clear: taxes on corporations increase costs, decrease margins, and often lead to price increases. The top U.S. corporate tax rate (excluding state taxes) currently stands at 35%.

When our corporate tax rate is the highest among the 34 member countries of the Organization for Economic Co-operation and Development, something is wrong. There is clearly a better way to unburden U.S. corporations than to sponsor a “bank” in which politicians and bureaucrats, not capital markets, choose winners and losers. Rick Perry is right: it is time to move away from a mercantilist view of trade towards one that puts the market back in control. Kill the Export-Import Bank and cut corporate taxes, please.

The Spin Cycle is a recurring feature based upon just how much the latest weather or climate story, policy pronouncement, or simply poo-bah blather spins the truth. Statements are given a rating between 1-5 spin cycles, with fewer cycles meaning less spin. For a more in-depth description, visit the inaugural edition.

Today’s press buzz is about a new paper appearing in this week’s Science magazine which concludes that the “hiatus” in global warming is but a byproduct of bad data. The paper, “Possible artifacts of data biases in the recent global surface warming hiatus,” was authored by a research team led by Director of the National Oceanic and Atmospheric Administration’s National Climatic Data Center, Dr. Thomas Karl. Aside from missing the larger point—that the relevant question is not whether the earth is warming, but why it’s warming so much slower than the computer model projections—the paper’s conclusions have been well-run through the spin cycle.

The spin was largely conducted by the American Association for the Advancement of Science (AAAS), publisher of Science magazine, through its embargo campaign and the courting of major science writers in the media before the article had been made available to the general public (and other scientists). Given the obvious weaknesses in the new paper (see below and here, for starters), there seems to be potential for more trouble at Science—something that Editor-in-Chief Marcia McNutt is up to her eyeballs with already.

One major problem with the new Karl and colleagues paper is that the headline-making finding turns out not even to be statistically significant at the standard scientific level—that is, having a less than 1-in-20 chance of being due to chance (unexplained processes) alone.

Instead, the results are reported as being “statistically significant” if they have less than a 1-in-10 chance of being caused by randomness.
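The distinction between the two thresholds can be made concrete with a minimal sketch. The p-value below is hypothetical, chosen only to fall between the conventional 0.05 cutoff and the paper’s relaxed 0.10 cutoff:

```python
# A result is conventionally "statistically significant" when its p-value
# falls below the chosen threshold (alpha). The paper uses alpha = 0.10
# (1-in-10) rather than the conventional alpha = 0.05 (1-in-20).

def is_significant(p_value: float, alpha: float) -> bool:
    """True when the p-value clears the chosen significance threshold."""
    return p_value < alpha

p = 0.08  # hypothetical p-value, between the two common thresholds

conventional = is_significant(p, alpha=0.05)  # the 1-in-20 standard
relaxed = is_significant(p, alpha=0.10)       # the 1-in-10 standard

print(conventional)  # False: not significant by the conventional test
print(relaxed)       # True: significant only under the relaxed test
```

Any result with a p-value in that band (between 0.05 and 0.10) passes the relaxed test while failing the conventional one, which is precisely the situation at issue here.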

More and more we are seeing lax statistical testing being applied in high profile papers (see here and here for recent examples). This tendency is extremely worrisome: at the same time, the validity of large portions of the scientific literature is being questioned on the basis of flawed methodological design and poor application and interpretation of statistics. An illuminating example of how easily poor statistics can make it into the scientific literature and exert a huge influence on the media was given last week in the backstory of a made-up paper claiming eating chocolate could enhance weight loss efforts.

But, as the Karl et al. paper (as well as the other recent papers linked above) shows, some climate scientists are pushing forward with less than robust results anyway.

Why? Here’s a possible clue.

Recall an op-ed in the New York Times a few months back by Naomi Oreskes titled “Playing Dumb on Climate Change.” In it, Oreskes, a science historian (and author of the conspiratorial Merchants of Doubt) argued that since climate change was such an urgent problem, we shouldn’t have to apply the same 1-in-20 set of rigorous statistics to the result—it is slowing down the push for action. Climate scientists, Oreskes argued, were being too conservative in face of a well-known threat and therefore, “lowering the burden of proof” should be acceptable.

Oreskes’s article and its suggestions were summarily panned. Nevertheless, evidence that her approach is being put into action is plentiful.

The new Karl et al. paper is a perfect example—abrogating normal statistical confidence levels to push a result that will be prime ammunition for the barrage of proposals aimed at restricting greenhouse gas emissions from fossil fuel use in producing energy. This is just more politicized science in the Administration’s relentless campaign over the upcoming UN climate summit in Paris, where it will push for a new international agreement restricting carbon dioxide emissions.

As an amusing side, using similar statistical procedures and confidence levels, the observed trend in Karl et al.’s newly-adjusted data is significantly lower than the average trend forecast to have occurred by the collection of climate models used by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). In other words, these flimsy statistics kill both the “hiatus” and the climate models as well.  

For spinning non-robust results into major climate-alarming headlines worldwide, we award a Normal Wash spin cycle (three spins) to the AAAS and Science editor-in-chief Marcia McNutt.

The very day King John pledged to uphold Magna Carta, June 20, 1215, he asked Pope Innocent III to annul it.  The pope replied, “We utterly reject and condemn this settlement and under threat of excommunication we order that the king should not dare to observe it and that the barons and their associates should not require it to be observed.”

So, John reneged on his agreement with the barons, they rebelled and formed an alliance with King Philip II of France who prepared to invade England.  Before long, the French Prince Louis entered London, and the French controlled castles throughout England.  The English Church, however, backed John and refused to crown Louis as England’s king.

John fled from his pursuers, but somewhere along the line he contracted dysentery and lay dying.  He appointed 13 executors including William Marshal who was among the most revered knights in England.  John died on October 19, 1216, and his nine-year-old son was hastily crowned Henry III.  Because Henry was underage, Marshal headed a regency government.  Although Marshal was able to seize an important English castle from the French, the civil war was substantially stalemated.

With John gone, the rebel barons found themselves in an awkward position – their alliance with foreigners who occupied England.  Patriotic English wanted to get the French out.  Fortunately, Prince Louis was happy to collect a bribe, and soon the French went home.

Regent Marshal recognized that there was more likely to be domestic peace if some fundamental legal issues were resolved and that consequently John’s repudiation of Magna Carta must be reversed.   So Marshal reviewed the document, made some cuts, and reissued Magna Carta in late 1216.   Among the cuts was paragraph 61 about the committee of 25 barons who would monitor the king’s compliance with Magna Carta and, if necessary, try to enforce it.  Perhaps less important than those words was the fact that the barons had demonstrated their willingness to use force against a tyrannical king.

The government needed more money again in 1217, and Marshal proposed a tax on the land held by knights – land that provided food and generated revenue to make possible their feudal military service.  Barons resisted, and Marshal reissued the previous version of Magna Carta with some clauses added to protect feudal privileges of the barons.

In February 1225, there were fears that England might be invaded by the French, and the government needed even more money to mount a defense.  There was much debate and eventually an agreement among the barons to pay a tax on moveable goods – provided the king reissued Magna Carta.  Accordingly, Henry III approved a version similar to that of 1217 and affirmed that he did it with his “spontaneous and free will” as well as with his royal seal.  He declared, “neither we nor our heirs will determine anything by which the liberties contained in this charter be violated or weakened.”

Over the centuries, the 1225 Magna Carta was reaffirmed dozens of times by English sovereigns.  In 1311, Edward II referred to some statutes, saying “that they be not contrary to the Great Charter.”

Other countries issued charters intended to limit a ruler’s power, too, but it was in England that Magna Carta really took root and constitutional development went furthest.  For instance:

  • Magna Carta appeared in dozens of compilations of English laws, invariably as the first law.  Initially it was in Latin, then French and finally English.
  • The Due Process of Law Act was adopted in 1368, during the reign of Edward III, and it said, in part, that the “Great Charter be holden and kept in all Points, and if any Statute be made to the contrary, that shall be holden for none.”
  • In 1509, King Henry VIII approved of the beheading of Edmund Dudley and Richard Empson, accused of looting taxpayers and the government.  One of the formal charges involved violating Magna Carta.

Queen Elizabeth I, near the peak of her power in 1587, wanted to establish a new judicial post in her government for one of her cronies, Richard Cavendish, so he could make a lot of money by issuing certain documents in the common law courts.  She asked administrative judges whose approval was needed.  They refused, and they were charged with disobedience.  They had to explain themselves before the queen.

According to constitutional historian Henry Hallam, the judges said they meant no offense to her majesty, but her order was against “the law of the land” – meaning principles affirmed in Magna Carta.  Consequently, they said “no one is bound to obey such an order.”  When further pressed, they pointed out that the queen herself had sworn to uphold the law of the land.  The judges believed they couldn’t obey her order without violating the laws and their oaths.  The judges cited prior practices that had been rejected, because they violated the laws of the land.  Queen Elizabeth left the chamber without commenting, and nothing more was heard about the matter.

During the 17th century, the legal scholar, judge and member of Parliament Edward Coke (pronounced “Cook”) interpreted Magna Carta as a bedrock of the English constitutional law that enabled people to resist and rebel against the tyrannical Stuart kings. 

Many critics have belittled the importance of Magna Carta by dwelling on the fact that the rebel barons were looking out for their own interests as feudal lords.  But establishing constitutional limits on a ruler with arbitrary power is always extraordinarily difficult.  Some people succeed before others, and their success is likely to make it easier for more people to follow.

Although Magna Carta didn’t derive from the principles of a “higher law,” such as were received by Moses and articulated by Sophocles, Marcus Tullius Cicero, John Lilburne, John Locke, Thomas Paine, Thomas Jefferson,  and others, from a constitutional standpoint Magna Carta had similar standing.  It didn’t come from rulers.  It couldn’t be repealed.  It was forever.

A new paper posted today on ScienceXpress (from Science magazine) by Thomas Karl, Director of NOAA’s National Climatic Data Center, and several co-authors[1] seeks to disprove the “hiatus” in global warming, and it prompts many serious scientific questions.

The main claim[2] by the authors that they have uncovered a significant recent warming trend is dubious. The significance level they report on their findings (.10) is hardly normative, and the use of it should prompt members of the scientific community to question the reasoning behind the use of such a lax standard.

In addition, the authors’ treatment of buoy sea-surface temperature (SST) data was guaranteed to create a warming trend. The data were adjusted upward by 0.12°C to make them “homogeneous” with the longer-running temperature records taken from engine intake channels in marine vessels. 

As has been acknowledged by numerous scientists, the engine intake data are clearly contaminated by heat conduction from the engine itself, and as such, never intended for scientific use. On the other hand, environmental monitoring is the specific purpose of the buoys. Adjusting good data upward to match bad data seems questionable, and the fact that the buoy network becomes increasingly dense in the last two decades means that this adjustment must put a warming trend in the data.

The extension of high-latitude arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, meaning the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.

Additionally, there exist multiple measures of bulk lower-atmosphere temperature, independent of surface measurements, which indicate the existence of a “hiatus”[3]. If the Karl et al. result were in fact robust, it could only mean that the disparity between surface and mid-tropospheric temperatures is even larger than previously noted.

Getting the vertical distribution of temperature wrong invalidates virtually every forecast of sensible weather made by a climate model, as much of that weather (including rainfall) is determined in large part by the vertical structure of the atmosphere.

Instead, it would seem more logical to seriously question the Karl et al. result in light of the fact that, compared to those bulk temperatures, it is an outlier, showing a recent warming trend that is not in line with these other global records.

And finally, even presuming all the adjustments applied by the authors ultimately prove to be accurate, the temperature trend reported during the “hiatus” period (1998-2014), remains significantly below (using Karl et al.’s measure of significance) the mean trend projected by the collection of climate models used in the most recent report from the United Nation’s Intergovernmental Panel on Climate Change (IPCC). 

It is important to recognize that the central issue of human-caused climate change is not a question of whether it is warming or not, but rather a question of how much. And to this relevant question, the answer has been, and remains, that the warming is taking place at a much slower rate than is being projected.

The distribution of trends of the projected global average surface temperature for the period 1998-2014 from 108 climate model runs used in the latest report of the U.N.’s Intergovernmental Panel on Climate Change (IPCC) (blue bars). The models were run with historical climate forcings through 2005 and extended to 2014 with the RCP4.5 emissions scenario. The surface temperature trend over the same period, as reported by Karl et al. (2015), is included in red. It falls at the 2.4th percentile of the model distribution, a value that is (statistically) significantly below the model mean projection.
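As a rough illustration of the percentile comparison described in the caption above, the following sketch places a hypothetical observed trend within an invented distribution of model trends. The 108 “model” values here are randomly generated, not the actual CMIP5 trends; only the method is shown.

```python
# Hypothetical illustration: locate an observed trend within a distribution
# of model-projected trends via its empirical percentile. All numbers are
# invented for illustration.
import random

random.seed(0)
model_trends = [random.gauss(0.21, 0.06) for _ in range(108)]  # deg C/decade, invented
observed_trend = 0.11  # invented observed value, for illustration

# Empirical percentile: share of model runs whose trend falls below the observation
percentile = 100.0 * sum(t < observed_trend for t in model_trends) / len(model_trends)
print(f"Observed trend sits at roughly the {percentile:.1f}th percentile of the models")
```

When the observation lands in the far lower tail of the model distribution, as in the caption’s 2.4th-percentile result, the observed trend is statistically inconsistent with the model mean.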

[1] Karl, T. R., et al., Possible artifacts of data biases in the recent global surface warming hiatus. Scienceexpress, embargoed until 1400 EDT June 4, 2015.

[2] “It is also noteworthy that the new global trends are statistically significant and positive at the 0.10 significance level for 1998-2012…”

[3] Both the UAH and RSS satellite records are now in their 21st year without a significant trend, for example.

Although Venezuela’s inflation has soared (see: Up, Up, and Away), Venezuela is not experiencing a hyperinflationary episode–yet. Since the publication of Prof. Phillip Cagan’s famous 1956 study The Monetary Dynamics of Hyperinflation, the convention has been to define hyperinflation as when the monthly inflation rate exceeds 50%.

I regularly estimate the monthly inflation rates for Venezuela. To calculate those inflation rates, I use dynamic purchasing power parity (PPP) theory. While Venezuela’s monthly inflation rate has not advanced beyond the 50% per month mark on a sustained basis, it is dangerously close. Indeed, Venezuela’s inflation rate is currently 45% per month (see the accompanying chart).
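As a simplified sketch of the PPP logic (the actual methodology is more elaborate): under purchasing power parity, the depreciation of the free-market exchange rate, adjusted for foreign inflation, implies the domestic inflation rate. All figures below are hypothetical, not actual market quotes.

```python
# Simplified sketch of PPP-implied inflation: if the bolivar loses value
# against the dollar on the free market, PPP implies domestic prices rose
# by the depreciation factor times (1 + foreign inflation). The exchange
# rates below are hypothetical.

def implied_monthly_inflation(rate_start: float, rate_end: float,
                              foreign_monthly_inflation: float) -> float:
    """Monthly domestic inflation implied by exchange-rate depreciation under PPP."""
    depreciation = rate_end / rate_start  # local currency per USD, start vs. end of month
    return depreciation * (1 + foreign_monthly_inflation) - 1

# Hypothetical black-market bolivar/dollar quotes one month apart
infl = implied_monthly_inflation(rate_start=300.0, rate_end=430.0,
                                 foreign_monthly_inflation=0.001)

print(f"Implied monthly inflation: {infl:.1%}")           # ~43.5% per month
print("Above Cagan's 50%/month threshold:", infl > 0.50)  # False: close, but not hyperinflation
```

A month in which the implied rate cleared 50% would, by Cagan’s convention, mark the onset of hyperinflation.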

If inflation moves much higher, the legacy of Hugo Chavez’s Bolivarian Revolution will be that Venezuela joins the rather select hyperinflation club as the 57th member. Yes, there have only been 56 documented hyperinflations.

“GOP Agrees Bush Was Wrong to Invade Iraq, Now What?”—that’s how the US News headline put it last week. A good question, because it’s not at all clear what that grudging concession signifies. It’s nice that 12 years after George W. Bush lumbered into the biggest foreign policy disaster in a generation, the leading Republican contenders are willing to concede, under enhanced interrogation, that maybe it wasn’t the right call. It would be nicer still if we could say they’d learned something from that disaster.

Alas, the candidates’ peevish and evasive answers to the Iraq Question didn’t provide any evidence for that. Worst of all was Jeb Bush’s attempt to duck the question by using fallen soldiers as the rhetorical equivalent of a human shield. Ohio governor John Kasich flirted with a similar tactic—“There’s a lot of people who lost limbs and lives over there, OK?”—before conceding, “But if the question is, if there were not weapons of mass destruction should we have gone, the answer would’ve been no.” 

That’s how most of the GOP field eventually answered the question, with some version of the “faulty intelligence” excuse. We thought Saddam Hussein had stockpiles of chemical and biological weapons and was poised for a nuclear breakout; it was just our bad luck that turned out not to be true; so the war was—well, not a “mistake,” insists Marco Rubio, just, er—whatever the word is for something you definitely wouldn’t do again if you had the power to travel back in time. As Scott Walker, who’s been studying up super-hard on foreign policy, explained: you can’t fault President Bush; invading Iraq just made sense, based on “the information he had available” at the time.

Well, no—invading Iraq was a spectacularly bad idea based on what we knew at the time. If we’d found stockpiles of so-called WMD, it would still have been a spectacularly bad idea. Saddam’s possession of unconventional weapons was a necessary condition in the Bush administration’s case for war, but it wasn’t—or shouldn’t have been—sufficient to make that case compelling, because with or without chemical and biological weapons, Saddam’s Iraq was never a national security threat to the United States. 

Put aside the fact that, as applied to chem/bio, “WMD” is a misnomer; assume for the sake of argument that President Bush’s claim that “one vial, one canister, one crate” of the stuff could “bring a day of horror like none we have ever known” was an evidence-based, good-faith assessment of those weapons’ potential, instead of a ludicrous and cynical exaggeration. Even so, you’d still have to show that Saddam Hussein was so hell-bent on hitting the U.S., he’d risk near-certain destruction to do it. 

There was never any good reason to believe that. This, after all, was a dictator who, during the 1991 Gulf War, had been deterred from using chemical weapons against US troops in the middle of an ongoing invasion. As then-Secretary of State James Baker later explained, the George H.W. Bush administration:

made it very clear that if Iraq used weapons of mass destruction, chemical weapons, against United States forces that the American people would demand vengeance and that we had the means to achieve it. … we made it clear that in addition to ejecting Iraq from Kuwait, if they used those types of weapons against our forces we would in addition to throwing them out of Kuwait, we would adopt as a goal the elimination of the regime in Baghdad.

Eleven years later, as the George W. Bush administration pushed for another war with Iraq, there wasn’t any convincing evidence that Saddam Hussein had, in the interim, warmed up to the idea of committing regime suicide through the use of CBW. Even the flawed October 2002 National Intelligence Estimate (NIE) prepared during the run-up to the Iraq War vote concluded that “Baghdad for now appears to be drawing a line short of conducting terrorist attacks with conventional or CBW against the United States, fearing that exposure of Iraqi involvement would provide Washington a stronger cause for making war.” 

By that time, with Bush 43 sounding the alarm about Iraq’s “growing fleet of manned and unmanned aerial vehicles that could be used to disperse chemical or biological weapons across broad areas [including] missions targeting the United States,” it should have been apparent that the case for war rested on a series of imaginary hobgoblins. As Jim Henley put it a couple of years ago, “In the annals of projection, the US claim that Saddam was building tiny remote-controlled death planes wins some kind of prize.”  

What if the Iraqi dictator instead passed off those weapons to terrorists, “secretly and without fingerprints”? In the 2003 State of the Union, that’s what President Bush argued Saddam just might do: “imagine those 19 hijackers with other weapons and other plans, this time armed by Saddam Hussein.” But the notion that Hussein was likely to pass chemical or biological weapons to Al Qaeda was only slightly less fantastic than the scenario that had him crop-dusting US cities with short-range, Czech-built training drones. As my colleague Doug Bandow pointed out at the time: “Baghdad would be the immediate suspect and likely target of retaliation should any terrorist deploy [WMD], and Saddam knows this.” 

I made similar arguments two weeks before the war in a piece called “Why Hussein Will Not Give Weapons of Mass Destruction to Al Qaeda”:  

The idea that Hussein views a WMD strike via terrorist intermediaries as a viable strategy is rank speculation, contradicted by his past behavior. Hussein’s hostility toward Israel predates his struggle with the United States. He’s had longstanding ties with anti-Israeli terror groups and he’s had chemical weapons for over 20 years. Yet there has never been a nerve gas attack in Israel. Why? Because Israel has nuclear weapons and conventional superiority, and Hussein wants to live. If he’s ever considered passing off chemical weapons to Palestinian terrorists, he decided that he wouldn’t get away with it. He has even less reason to trust Al Qaeda with a potentially regime-ending secret.

In its 2004 after-action reassessment of the administration’s case for preventive war, the Carnegie Endowment concluded:

there was no positive evidence to support the claim that Iraq would have transferred WMD or agents to terrorist groups and much evidence to counter it. Bin Laden and Saddam were known to detest and fear each other, the one for his radical religious beliefs and the other for his aggressively secular rule and persecution of Islamists. Bin Laden labeled the Iraqi ruler an infidel and an apostate, had offered to go to battle against him after the invasion of Kuwait in 1990, and had frequently called for his overthrow. … the most intensive searching over the last two years has produced no solid evidence of a cooperative relationship between Saddam’s government and Al Qaeda. ….the Iraqi regime had a long history of sponsoring terrorism against Israel, Kuwait, and Iran, providing money and weapons to these groups. Yet over many years Saddam did not transfer chemical, biological, or radiological materials or weapons to any of them “probably because he knew that they could one day be used against his secular regime.”

In the judgment of U.S. intelligence, a transfer of WMD by Saddam to terrorists was likely only if he were “sufficiently desperate” in the face of an impending invasion. Even then, the NIE concluded, he would likely use his own operatives before terrorists. Even without the particular relationship between Saddam and bin Laden, the notion that any government would turn over its principal security assets to people it could not control is highly dubious. States have multiple interests and land, people, and resources to protect. They have a future. Governments that made such a transfer would put themselves at the mercy of groups that have none of these. Terrorists would not even have to use the weapons but merely allow the transfer to become known to U.S. intelligence to call down the full wrath of the United States on the donor state, thereby opening opportunities for themselves. 

You don’t have to “know what we know now” to recognize the poverty of the case for war. You just had to know what we knew then. 

Even so, it’s possible that GOP hawks have learned something from the Iraq debacle, however loath they are to admit it. Like Saddam’s Iraq, the Syrian and Iranian regimes have long had unconventional weapons and links to terrorist proxies. But I haven’t heard even Lindsey Graham or Marco Rubio invoke the risk of terrorist transfer to make the case for war with Iran or Syria. Perhaps that’s because it’s as unpersuasive an argument now as it should have been then.

Besides, maybe it’s asking too much to expect professional politicians to depart entirely from the sentiments of the people they want to vote for them. A recent Vox Populi/Daily Caller poll asked Republican voters in early primary states: “Looking back now, and regardless of what you thought at the time, do you think it was the right decision for the United States to invade Iraq in 2003?” Nearly 60 percent of them answered in the affirmative. The GOP’s 2016 contenders may not have good answers to the Iraq Question, but, apparently, they’re miles ahead of their constituents.  

At the risk of sounding like a broken record (well, OK–at the risk of continuing to sound like a broken record), I’d like to say a bit more about economists’ tendency to get their monetary history wrong. In particular, I’d like to take aim at common myths about the gold standard.

If there’s one monetary history topic that tends to get handled especially sloppily by monetary economists, not to mention other sorts, this is it. Sure, the gold standard was hardly perfect, and gold bugs themselves sometimes make silly claims about their favorite former monetary standard. But these things don’t excuse the errors many economists commit in their eagerness to find fault with that “barbarous relic.”

The false claims I have in mind are mostly ones I and others–notably Larry White–have countered before. Still I thought it would be useful to address them again here, because they’re still far from being dead horses, and also so that students wrapping up the semester will have something convenient to send to their misinformed gold-bashing profs (though I urge them to wait until grades are in before sharing!).

For the sake of those who don’t care to wade through the whole post, here is a “jump to” list of the points covered:

1. The Gold Standard wasn’t an instance of government price fixing. Not traditionally, anyway.
2. A gold standard isn’t particularly expensive. In fact, fiat money tends to cost more.
3. Gold supply “shocks” weren’t particularly shocking.
4. The deflation that the gold standard permitted wasn’t such a bad thing.
5. It wasn’t to blame for 19th-century American financial crises.
6. On the whole, the classical gold standard worked remarkably well (while it lasted).
7. It didn’t have to be “managed” by central bankers.
8. In fact, central banking tends to throw a wrench in the works.
9. The “Gold Standard” wasn’t to blame for the Great Depression.
10. It didn’t manage money according to any economists’ theoretical ideal. But neither has any fiat-money-issuing central bank.

1. The Gold Standard wasn’t an instance of government price fixing. Not traditionally, anyway.

As Larry White has made the essential point as well as I ever could, I hope I may be excused for quoting him at length:

Barry Eichengreen writes that countries using gold as money ‘fix its price in domestic-currency terms (in the U.S. case, in dollars).’ He finds this perplexing:

But the idea that government should legislate the price of a particular commodity, be it gold, milk or gasoline, sits uneasily with conservative Republicanism’s commitment to letting market forces work, much less with Tea Party–esque libertarianism. Surely a believer in the free market would argue that if there is an increase in the demand for gold, whatever the reason, then the price should be allowed to rise, giving the gold-mining industry an incentive to produce more, eventually bringing that price back down. Thus, the notion that the U.S. government should peg the price, as in gold standards past, is curious at the least.

To describe a gold standard as “fixing” gold’s “price” in terms of a distinct good, domestic currency, is to get off on the wrong foot. A gold standard means that a standard mass of gold (so many grams or ounces of pure or standard-alloy gold) defines the domestic currency unit. The currency unit (“dollar”) is nothing other than a unit of gold, not a separate good with a potentially fluctuating market price against gold. That one dollar, defined as so many grams of gold, continues to be worth the specified amount of gold—or in other words that one unit of gold continues to be worth one unit of gold—does not involve the pegging of any relative price. Domestic currency notes (and checking account balances) are denominated in and redeemable for gold, not priced in gold. They don’t have a price in gold any more than checking account balances in our current system, denominated in fiat dollars, have a price in fiat dollars. Presumably Eichengreen does not find it curious or objectionable that his bank maintains a fixed dollar-for-dollar redemption rate, cash for checking balances, at his ATM.

Remarkably, as White goes on to show, the rest of Eichengreen’s statement proves that, besides not having understood the meaning of gold’s “fixed” dollar price, Eichengreen has an uncertain grasp of the rudimentary economics of gold production:

As to what a believer in the free market would argue, surely Eichengreen understands that if there is an increase in the demand for gold under a gold standard, whatever the reason, then the relative price of gold (the purchasing power per unit of gold over other goods and services) will in fact rise, that this rise will in fact give the gold-mining industry an incentive to produce more, and that the increase in gold output will in fact eventually bring the relative price back down.

I’ve said more than once that, the more vehement an economist’s criticisms of the gold standard, the more likely he or she knows little about it. Of course Eichengreen knows far more about the gold standard than most economists, and is far from being its harshest critic, so he’d undoubtedly be an outlier in the simple regression, y = α + β(x) (where y is vehemence of criticism of the gold standard and x is ignorance of the subject). Nevertheless, his statement shows that even the understanding of one of the gold standard’s most well-known critics leaves much to be desired.

Although, at bottom, the gold standard isn’t a matter of government “fixing” gold’s price in terms of paper money, it is true that governments’ creation of monopoly banks of issue, and the consequent tendency for such monopolies to be treated as government- or quasi-government authorities, ultimately led to their being granted sovereign immunity from the legal consequences to which ordinary, private intermediaries are usually subject when they dishonor their promises. Because a modern central bank can renege on its promises with impunity, a gold standard administered by such a bank more closely resembles a price-fixing scheme than one administered by a commercial bank. Still, economists should be careful to distinguish the special features of a traditional gold standard from those of central-bank administered fixed exchange rate schemes.


2. A gold standard isn’t particularly expensive. In fact, fiat money tends to cost more.

Back in the early 1950s, and again in 1960, Milton Friedman estimated that the gold required for the U.S. to have a “real” gold standard would have cost 2.5% of its annual GNP. But that’s because Friedman’s idea of a “real” gold standard was one in which gold coins alone served as money, with no fractionally-backed bank-supplied substitutes. As Larry White shows in his Theory of Monetary Institutions (p. 47), allowing for 2% specie reserves–which is more than what some former gold-based free-banking systems needed–the resource cost of a gold standard taking advantage of fractionally-backed banknotes and deposits would be about one-fiftieth of the number Friedman came up with. That’s a helluva bargain for a gold “seal of approval” that could mean having access to international capital at substantially reduced rates, according to research by Mike Bordo and Hugh Rockoff.
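The arithmetic here is worth making explicit. A quick back-of-the-envelope sketch (my own, assuming the gold tied up scales linearly with the specie-reserve ratio) shows how White’s one-fiftieth figure follows from Friedman’s:

```python
# Illustrative resource-cost comparison (assumption: the cost of a gold
# standard scales linearly with the fraction of money backed by specie).
friedman_cost = 0.025   # Friedman: 2.5% of annual GNP for a 100%-gold-coin system
reserve_ratio = 0.02    # 2% specie reserves, per White's free-banking evidence

fractional_cost = friedman_cost * reserve_ratio  # cost with fractional reserves
print(f"{fractional_cost:.4%} of GNP")  # 0.0500% of GNP, one-fiftieth of 2.5%
```

Since a 2% reserve ratio means holding one-fiftieth of the gold, the cost falls from 2.5% of GNP to roughly 0.05%.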

Friedman himself eventually changed his mind about the economies to be achieved by employing fiat money:

Monetary economists have generally treated irredeemable paper money as involving negligible real resource costs compared with a commodity currency. To judge from recent experience, that view is clearly false as a result of the decline in long-term price predictability.

I took it for granted that the real resource cost of producing irredeemable paper money was negligible, consisting only of the cost of paper and printing. Experience under a universal irredeemable paper money standard makes it crystal clear that such an assumption, while it may be correct with respect to the direct cost to the government of issuing fiat outside money, is false for society as a whole and is likely to remain so unless and until a monetary structure emerges under an irredeemable paper standard that provides a high degree of long-run price level predictability.*

Unfortunately, neither White’s criticism of Friedman’s early calculations nor Friedman’s own about-face has kept gold standard critics from repeating the old canard that a fiat standard is more economical than a gold standard. Ross Starr, for example, observes in his 2013 book on money that “The use of paper or fiduciary money instead of commodity money is resource saving, allowing commodity inventories to be liquidated.” Although he understands that fractionally-backed banknotes and deposits may go some way toward economizing on commodity-money reserves, Starr (quoting Adam Smith, but failing to look up historic Scottish bank reserve ratios) insists nonetheless that “a significant quantity of the commodity backing must be maintained in inventory to successfully back the currency,” and then proceeds to build a case for fiat money from this unwarranted assertion:

The next step in economizing on the capital tied up in backing the currency is to use a fiat money. Substituting a government decree for commodity backing frees up a significant fraction of the economy’s capital stock for productive use. No longer must the economy hold gold, silver, or other commodities in inventory to back the currency. No longer must additional labor and capital be used to extract them from the earth. Those resources are freed up and a simple virtually costless government decree is substituted for them.

Tempting as it is to respond to such hooey simply by noting that the vaults of the world’s official fiat-money managing institutions presently contain rather more than zero ounces of gold–31,957.5 metric tons more, to be precise–that response only hints at the fundamental flaw in Starr’s reasoning, which is his treatment of fiat money as a culmination, or limiting case, of the resource savings to be had by resort to fractional commodity-money reserves. That treatment overlooks a crucial difference between fiat money and readily redeemable banknotes and deposits, for whereas redeemable banknotes and deposits are generally understood by their users to be close, if not perfect, substitutes for commodity money, fiat money, the purchasing power of which is unhinged from that of any former money commodity, is nothing of the sort. On the contrary: its tendency to depreciate relative to real commodities, and to gold in particular, is notorious. Consequently holders of fiat money have reason to hold “commodity inventories” as a hedge against the risk that fiat money will depreciate.

If the hedge demand for a former money commodity is large enough, resort to fiat money doesn’t save any resources at all. Indeed, as Roger Garrison notes, “a paper standard administered by an irresponsible monetary authority may drive the monetary value of gold so high that more resource costs are incurred under the paper standard than would have been incurred under a gold standard.” A glance at the history of gold’s real price suffices to show that this is precisely what has happened:


From “After the Gold Rush,” The Economist, July 6, 2010.


Taking the long-run average price of gold, in 2010 prices, to be somewhere around $470, prior to the closing of the gold window in 1971, that price was exceeded on only three occasions, and never dramatically: around the time of the California gold rush, around the turn of the 20th century, and for several years following FDR’s devaluation of the dollar. Since 1971, in contrast, it has exceeded that average, and exceeded it substantially, more often than not. Here is Roger Garrison again:

There is a certain asymmetry in the cost comparison that turns the resource-cost argument against paper standards. When an irresponsible monetary authority begins to overissue paper money, market participants begin to hoard gold, which stimulates the gold-mining industry and drives up the resource costs. But when new discoveries of gold are made, market participants do not begin to hoard paper or to set up printing presses for the issue of unbacked currency. Gold is a good substitute for an officially instituted paper money, but paper is not a good substitute for an officially recognized metallic money. Because of this asymmetry, the resource costs incurred by the State in its efforts to impose a paper standard on the economy and manage the supply of paper money could be avoided if the State would simply recognize gold as money. These costs, then, can be counted against the paper standard.

So if it’s avoidance of gold resource costs that’s desired, including avoidance of the very real environmental consequences of gold mining, a gold standard looks like the right way to go.


3. Gold supply “shocks” weren’t particularly shocking

Of the many misinformed criticisms of the gold standard, none seems to me more wrong-headed than the complaint that the gold standard isn’t even a reliable guarantee against serious inflation. The RationalWiki entry on the gold standard is as good an example of this as any:

Even gold can suffer problems with inflation. Gold rushes such as the California Gold Rush expanded the money supply and, when not matched with a simultaneous increase in economic output, caused inflation. The “Price Revolution” of the 16th century demonstrates a case of dramatic long-run inflation. During this period, western European nations used a bimetallic standard (gold and silver). The Price Revolution was the result of a huge influx of silver from central European mines starting during the late 15th century combined with a flood of new bullion from the Spanish treasure fleets and the demographic shift brought about by the Black Plague (i.e., depopulation).

Admittedly the anonymous authors of this article may not be professional economists; but take my word for it that the same arguments might be heard from any number of such professionals. Brad DeLong, for example, in a list of “Talking Points on the Likely Consequences of re-establishment of the Gold Standard” (my emphasis), includes the observation that “significant advances in gold mining technology could provide a significant boost to the average rate of inflation over decades.”

Like I said, the gold standard is hardly free of defects. But being vulnerable to bouts of serious inflation isn’t one of them. Consider the “dramatic” 16th century inflation referred to in the RationalWiki entry. Had that entry’s authors referred to plain-old Wikipedia’s entry on “Price revolution,” they would have read there that

Prices rose on average roughly sixfold over 150 years. This level of inflation amounts to 1-1.5% per year, a relatively low inflation rate for the 20th century standards, but rather high given the monetary policy in place in the 16th century.

I have no idea what the authors mean by their second statement, as there was certainly no such thing as “monetary policy” at the time, and they offer no further explanation or citation. So far as I can tell, they mean nothing more than that prices hadn’t been rising as fast before the price revolution as they did during it, which though trivially true says nothing about how “high” the inflation was by any standards, including those of the 16th century. In any case it was not only “not high” but dangerously low according to standards set, rightly or wrongly, by today’s monetary experts. Finally, though the point is often overlooked, the European Price Revolution actually began well in advance of major American specie shipments, which means that, far from being attributable to such shipments alone, it was a result of several causes, including coin debasements.
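The 1-1.5% figure is easy to verify for yourself. A quick check of my own (not from either encyclopedia entry): a roughly sixfold rise in the price level spread over 150 years implies a compound annual rate of about 1.2%:

```python
# Compound annual inflation implied by a sixfold price rise over 150 years.
total_rise = 6.0   # price level ended about six times where it began
years = 150

annual_rate = total_rise ** (1 / years) - 1
print(f"{annual_rate:.2%} per year")  # about 1.20% per year
```

That is, the most notorious commodity-money inflation on record proceeded at a pace modern central banks would consider below target.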

What about the California Gold rush, which is also supposed to show how changes in the supply of gold will lead to inflation “when not matched with a simultaneous increase in economic output”? To judge from available statistics, it appears that producers of other goods were almost a match for all those indefatigable forty-niners: as Larry White reports, although the U.S. GDP deflator did rise a bit in the years following the gold rush,

The magnitude was surprisingly small. Even over the most inflationary interval, the [GDP deflator] rose from 5.71 in 1849 (year 2000 = 100) to 6.42 in 1857, an increase of 12.4 percent spread over eight years. The compound annual price inflation rate over those eight years was slightly less than 1.5 percent.

Once again, the inflation rate was such as would have had today’s central banks rushing to expand their balance sheets.
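White’s figures can be reproduced directly (a quick sketch of my own, using the deflator values he reports):

```python
# Checking White's gold-rush figures: U.S. GDP deflator 5.71 (1849) to 6.42 (1857).
start_level, end_level, years = 5.71, 6.42, 8

total_rise = end_level / start_level - 1                    # cumulative increase
annual_rate = (end_level / start_level) ** (1 / years) - 1  # compound annual rate
print(f"total: {total_rise:.1%}, annual: {annual_rate:.2%}")  # 12.4%; just under 1.5%
```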

Nor do the CPI estimates tell a different story. See if you can spot the gold-rush-induced inflation in this chart:

“Graphing Various Historical Economic Series,” MeasuringWorth, 2015.

Despite popular belief, the California gold rush was actually not the biggest 19th-century gold supply innovation, at least to judge from its bearing on the course of prices. That honor belongs instead to the Witwatersrand gold rush of 1886, the effects of which later combined with those of the Klondike rush of 1896 to end a long interval of gradual deflation (discussed further below) and begin one of gradual inflation.

Brad DeLong is thus quite right to refer to the South African discoveries in observing that even a gold standard poses some risk of inflation:

For example, the discovery and exploitation of large gold reserves near present-day Johannesburg at the end of the nineteenth century was responsible for a four percentage point per year shift in the worldwide rate of inflation–from a deflation of roughly two percent per year before 1896 to an inflation of roughly two percent per year after 1896.

Allowing for the general inaccuracy of 19th-century CPI estimates, DeLong’s statistics are correct. But that “For example” is quite misleading. Like I said: this is the most serious instance of an inflationary gold “supply shock” of which I’m aware. Yet even it served mainly to put an end to a deflationary trend, without ever giving rise to an inflation rate substantially above what central banks today consider (rightly or wrongly) optimal. As for the four percentage point change in the rate of inflation “per year,” presumably meaning “in one year,” it’s hardly remarkable: changes as big or larger are common throughout the 19th century, partly owing to the notoriously limited data on which CPI estimates for that era are based. Even so, they can’t be compared to the much larger jumps in inflation with which the history of fiat monies is riddled, even setting hyperinflations aside. Keep this in mind as you reflect upon Brad’s conclusion that

Under the gold standard, the average rate of inflation or deflation over decades ceases to be under the control of the government or the central bank, and becomes the result of the balance between growing world production and the pace of gold mining.

Alas, keeping matters in perspective–that is, comparing the gold standard’s actual inflation record, not to that which might be achieved by means of an ideally-managed fiat money, but to the actual inflation record of historic fiat-money systems–is something many critics of the gold standard seem reluctant to do, perhaps for good reason.

While we’re on the subject, nothing could be more absurd than attempts to demonstrate the unsuitability of gold as a monetary medium by referring to gold’s unstable real value in the years since the gold standard was abandoned. Yet this is a favorite debating point among the gold standard’s less thoughtful critics, including Paul Krugman:

There is a remarkably widespread view that at least gold has had stable purchasing power. But nothing could be further from the truth. Here’s the real price of gold — the price deflated by the consumer price index — since 1968:

Compare Professor Krugman’s chart to the one in the previous section. Then ask yourself (1) Has gold’s price behaved differently since 1968 than it did before?; and (2) Why might this be so? If your answers are “Yes” and “Because gold and paper dollars are no longer close substitutes, and gold is now widely used to hedge against depreciation of the dollar and other fiat currencies,” you understand the gold standard better than Krugman does. But don’t get a swelled head over it, because it really isn’t saying much: Krugman is one of the observations that sits squarely on the upper right end of y = α + β(x).


4. The deflation that the gold standard permitted wasn’t such a bad thing.

The complaint that a gold standard doesn’t rule out inflation is but a footnote to the more frequent complaint that it suffers, in Brad DeLong’s words, from “a deflationary bias which makes it likely that a gold standard regime will see a higher average unemployment rate than an alternative managed regime.” According to Ben Bernanke, “There is…a high correlation in the data between deflation (falling prices) and depression (falling output).”

That the gold standard tended to be deflationary–or rather, that it tended to be so during the sometimes long intervals between gold discoveries–can’t be denied. But what certainly can be denied is that these periods of slow deflation went hand-in-hand with high unemployment. Having thoroughly reviewed the empirical record, Andrew Atkeson and Patrick Kehoe conclude as follows:

Deflation and depression do seem to have been linked during the 1930s. But in the rest of the data for 17 countries and more than 100 years, there is virtually no evidence of such a link.

More recently Claudio Borio and several of his BIS colleagues reported similar findings. How then (you may wonder), did Bernanke arrive at his opposite conclusion? Easy: he looked only at data for the 1930s–the worst deflationary crisis ever–ignoring all the rest.

Why is deflation sometimes depressing, and sometimes not? The simple answer is that there is more than one sort of deflation. There’s the sort that’s caused by a collapse of spending, like the “Great Contraction” of the 1930s, and then there’s the sort that’s driven by greater output of real goods and services–that is, by outward shifts in aggregate supply rather than inward shifts in aggregate demand. Most of the deflation that occurred during the classical gold standard era (1873-1914) was of the latter, “good” sort.

Although I’ve been banging the drum for good deflation since the 1990s, and Mike Bordo and others have made the specific point that the gold standard mostly involved deflation of the good rather than bad sort, too many economists, and way too many of those who have got more than their fair share of the public’s attention, continue to ignore the very possibility of supply-driven deflation.

Of the many misunderstandings propagated by economists’ tendency to assume that deflation and depression must go hand-in-hand, none has been more pernicious than the widespread belief that throughout the U.S. and Europe, the entire period from 1873 to 1896 constituted one “Great” or “Long Depression.” That belief is now largely discredited, except perhaps among some newspaper pundits and die-hard Marxists, thanks to the efforts of S.B. Saul and others. The myth of a somewhat shorter “Long Depression,” lasting from 1873-1879, persists, however, though economic historians have begun chipping away at that one as well.


5. It wasn’t to blame for 19th-century American financial crises.

Speaking of 1873, after claiming that a gold standard is undesirable because it makes deflation (and therefore, according to his reasoning, depression) more likely, Krugman observes:

The gold bugs will no doubt reply that under a gold standard big bubbles couldn’t happen, and therefore there wouldn’t be major financial crises. And it’s true: under the gold standard America had no major financial panics other than in 1873, 1884, 1890, 1893, 1907, 1930, 1931, 1932, and 1933. Oh, wait.

Let me see if I understand this. If financial crises happen under base-money regime X, then that regime must be the cause of the crises, and is therefore best avoided. So if crises happen under a fiat money regime, I guess we’d better stay away from fiat money. Oh, wait.

You get the point: while the nature of an economy’s monetary standard may have some bearing on the frequency of its financial crises, it hardly follows that that frequency depends mainly on its monetary standard rather than on other factors, like the structure, industrial and regulatory, of the financial system.

That U.S. financial crises during the gold standard era had more to do with U.S. financial regulations than with the workings of the gold standard itself is recognized by all competent financial historians. The lack of branch banking made U.S. banks uniquely vulnerable to shocks, while Civil-War era rules linked the supply of banknotes to the extent of the Federal government’s indebtedness, instead of allowing that supply to adjust with seasonal and cyclical needs. But there’s no need to delve into the precise ways in which such misguided legal restrictions contributed to the numerous crises to which Krugman refers. It should suffice to point out that Canada, which employed the very same gold dollar, depended heavily on exports to the U.S., and (owing to its much smaller size) was far less diversified, nevertheless endured no banking crises at all, and very few bank failures, between 1870 and 1939.


6. On the whole, the classical gold standard worked remarkably well (while it lasted).

Since Keynes’s reference to gold as a “barbarous relic” is so often quoted by the gold standard’s critics, it seems only fair to repeat what Keynes had to say, a few years before, not about gold itself, but about the gold-standard era:

What an extraordinary episode in the economic progress of man that age was which came to an end in August, 1914! The greater part of the population, it is true, worked hard and lived at a low standard of comfort, yet were, to all appearances, reasonably contented with this lot. But escape was possible, for any man of capacity or character at all exceeding the average, into the middle and upper classes, for whom life offered, at a low cost and with the least trouble, conveniences, comforts, and amenities beyond the compass of the richest and most powerful monarchs of other ages. The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep; he could at the same moment and by the same means adventure his wealth in the natural resources and new enterprises of any quarter of the world, and share, without exertion or even trouble, in their prospective fruits and advantages… He could secure forthwith, if he wished it, cheap and comfortable means of transit to any country or climate without passport or other formality, could despatch his servant to the neighboring office of a bank for such supply of the precious metals as might seem convenient, and could then proceed abroad to foreign quarters, without knowledge of their religion, language, or customs, bearing coined wealth upon his person, and would consider himself greatly aggrieved and much surprised at the least interference. But, most important of all, he regarded this state of affairs as normal, certain, and permanent, except in the direction of further improvement, and any deviation from it as aberrant, scandalous, and avoidable.

It would, of course, be foolish to suggest that the gold standard was entirely or even largely responsible for this Arcadia, such as it was. But it certainly did contribute to the general abundance of goods of all sorts, to the ease with which goods and capital flowed from nation to nation, and, especially, to the sense of a state of affairs that was “normal, certain, and permanent.”

The gold standard achieved these things mainly by securing a degree of price-level and exchange rate stability and predictability that has never been matched since. According to Finn Kydland and Mark Wynne:

The contrast between the price stability that prevailed in most countries under the gold standard and the instability under fiat standards is striking. This reflects the fact that under commodity standards (such as the gold standard), increases in the price level (which were frequently associated with wars) tended to be reversed, resulting in a price level that was stable over long periods. No such tendency is apparent under the fiat standards that most countries have followed since the breakdown of the gold standard between World War I and World War II.

The high degree of price level predictability, together with the system of fixed exchange rates that was incidental to the gold standard’s widespread adoption, substantially reduced the riskiness of both production and international trade, while the commitment to maintain the standard resulted, as I noted, in considerably lower international borrowing costs.

Those pundits who find it easy to say “good riddance” to the gold standard, in either its classical or its decadent variants, need to ask themselves what all the fuss over monetary “reconstruction” was about, following each of the world wars, if not achieving a simulacrum at least of the stability that the classical gold standard achieved. True, those efforts all failed. But that hardly means that the ends sought weren’t very worthwhile ones, or that those who sought them were “lulled by the myth of a golden age.” Though they may have entertained wrong beliefs concerning how the old system worked, they weren’t wrong in believing that it did work, somehow.


7. It didn’t have to be managed by central bankers.

But how? The once common view that the classical gold standard worked well only thanks to its having been carefully managed by the Bank of England and other central banks, as well as the related view that its success depended on international agreements and other forms of central bank cooperation, is now, thankfully, no longer subscribed to even by the gold standard’s better-informed critics. Instead, as Giulio Gallarotti observes, the outcomes of that standard “were primarily the resultants [sic] of private transactions in the markets for goods and money” rather than of any sort of government or central-bank management or intervention. But the now accepted view doesn’t quite go far enough. In fact, central banks played no essential part at all in achieving the gold standard’s most desirable outcomes, which could have been achieved as well, or better, by systems of competing banks-of-issue, and which were in fact achieved by means of such systems in many participating nations, including the United States, Switzerland (until 1901), and Canada. And although it is common for central banking advocates to portray such banks as sources of emergency liquidity to private banks, during the classical gold standard era liquidity assistance often flowed the other way, and did so notwithstanding monopoly privileges that gave central banks so many advantages over their commercial counterparts. As Gallarotti observes (p. 81),

That central banks sometimes went to other central banks instead of the private market suggests nothing more than the fact that the rates offered by central banks were better, or too great an amount of liquidity may have been needed to be covered in the private market.


8. In fact, central banking tends to throw a wrench in the works.

To the extent that central banks did exercise any special influence on gold-standard era monetary adjustments, that influence, instead of helping, made things worse. Because an expanding central bank isn’t subject to the internal constraint of reserve losses stemming from adverse interbank clearings, it can create an external imbalance that must eventually trigger a disruptive drain of specie reserves. During liquidity crunches, on the other hand, central banks were more likely than commercial banks to become, in Jacob Viner’s words, “engaged in competitive increases of their discount rates and in raids on each other’s reserves.” Finally, central banks could and did muck up the gold standard works by sterilizing gold inflows and outflows, violating the “rules of the gold standard game” that called for loosening in response to gold receipts and tightening in response to gold losses.

Competing banks of issue could be expected to play by these “rules,” because doing so was consistent with profit maximization. The semi-public status of central banks, on the other hand, confronted them with a sort of dual mandate, in which profits had to be weighed against other, “public” responsibilities (ibid., pp. 117ff.). Of the latter, the most pernicious was the perceived obligation to occasionally set aside the requirements for preserving international monetary equilibrium (“external balance”) for the sake of preserving or achieving preferred domestic monetary conditions (“internal balance”). As Barry Ickes observes, playing by the gold standard’s rules could be “very unpopular, potentially, as it involves sacrificing internal balance for external balance.” Commercial bankers couldn’t care less. Central bankers, on the other hand, had to care when to not care was to risk losing some of their privileges.

Today, of course, achieving internal balance is generally considered the sine qua non of sound central bank practice; and even where fixed or at least stable exchange rates are considered desirable it is taken for granted that external balance ought occasionally to be sacrificed for the sake of preserving domestic monetary stability. But to apply such thinking to the classical gold standard, and thereby conclude that in that context a similar sacrifice of external for internal stability represented a turn toward more enlightened monetary policy, is to badly misunderstand the nature of that arrangement, which was not just a fixed exchange rate arrangement but something more akin to a multinational monetary union or currency area. Within such an area, the fact that one central bank gained reserves while another lost them was itself no more significant, and no more a justification for violating the “rules of the game,” than the fact that a commercial bank somewhere gained reserves at the expense of another.

The presence of central banks did, however, tend to aggravate the disturbing effects of changes in international trade patterns compared to the case of international free banking. Central-bank sterilization of gold flows could, on the other hand, lead to more severe longer-run adjustments, as it was to do, to a far more dramatic extent, in the interwar period.


9. The “Gold Standard” wasn’t to blame for the Great Depression.

I know I’m about to skate onto thin ice, so let me be more precise. To say that “The gold standard caused the Great Depression” (or words to that effect, like “the gold standard was itself the principal threat to financial stability and economic prosperity between the wars”), is at best extremely misleading. The more accurate claim is that the Great Depression was triggered by the collapse of the jury-rigged version of the gold standard cobbled together after World War I, which was really a hodge-podge of genuine, gold-exchange, and gold-bullion versions of the gold standard, the last two of which were supposed to “economize” on gold. Call it “gold standard light.”

Admittedly there is one sense in which the real gold standard can be said to have contributed to the disastrous shenanigans of the 1920s, and hence to the depression that followed. It contributed by failing to survive the outbreak of World War I. The prewar gold standard thus played the part of Humpty Dumpty to the King’s and Queen’s men who were to piece the still-more-fragile postwar arrangement together. Yet even this is being a bit unfair to gold, for the fragility of the gold standard on the eve of World War I was itself largely due to the fact that, in most of the belligerent nations, it had come to be administered by central banks that were all too easily dragooned by their sponsoring governments into serving as instruments of wartime inflationary finance.

Kydland and Wynne offer the case of the Bank of Sweden as illustrating the practical impossibility of preserving a gold standard in the face of a major shock:

During the period in which Sweden adhered to the gold standard (1873–1914), the Swedish constitution guaranteed the convertibility into gold of banknotes issued by the Bank of Sweden. Furthermore, laws pertaining to the gold standard could only be changed by two identical decisions of the Swedish Parliament, with an election in between. Nevertheless, when World War I broke out, the Bank of Sweden unilaterally decided to make its notes inconvertible. The constitutionality of this step was never challenged, thus ending the gold standard era in Sweden.

The episode seems rather less surprising, however, when one considers that “the Bank of Sweden,” which secured a monopoly of Swedish paper currency in 1901, is more accurately known as the Sveriges Riksbank, or “Bank of the Swedish Parliament.”

If the world crisis of the 1930s was triggered by the failure, not of the classical gold standard, but of a hybrid arrangement, can it not be said that the U.S., which was among the few nations that retained a full-fledged gold standard, was fated by that decision to suffer a particularly severe downturn? According to Brad DeLong,

Commitment to the gold standard prevented Federal Reserve action to expand the money supply in 1930 and 1931–and forced President Hoover into destructive attempts at budget-balancing in order to avoid a gold standard-generated run on the dollar.

It’s true that Hoover tried to balance the Federal budget, and that his attempt to do so had all sorts of unfortunate consequences. But the gold standard, far from forcing his hand, had little to do with it. Hoover simply subscribed to the prevailing orthodoxy favoring a balanced budget. So, for that matter, did FDR, until events forced him to change his tune: during the 1932 presidential campaign the New-Dealer-to-be assailed his opponent both for running a deficit and for his government’s excessive spending.

As for the gold standard’s having prevented the Fed from expanding the money supply (or, more precisely, from expanding the monetary base to keep the broader money supply from shrinking), nothing could be further from the truth. Dick Timberlake sets the record straight:

By August 1931, Fed gold had reached $3.5 billion (from $3.1 billion in 1929), an amount that was 81 percent of outstanding Fed monetary obligations and more than double the reserves required by the Federal Reserve Act. Even in March 1933 at the nadir of the monetary contraction, Federal Reserve Banks had more than $1 billion of excess gold reserves.


Whether Fed Banks had excess gold reserves or not, all of the Fed Banks’ gold holdings were expendable in a crisis. The Federal Reserve Board had statutory authority to suspend all gold reserve requirements for Fed Banks for an indefinite period.

Nor, according to a statistical study by Chang-Tai Hsieh and Christina Romer, did the Fed have reason to fear that by allowing its reserves to decline it would have raised fears of a devaluation. On the contrary: by taking steps to avoid a monetary contraction, the Fed would have helped to allay fears of a devaluation, while, in Timberlake’s words, initiating a “spending dynamic” that would have helped to restore “all the monetary vitals both in the United States and the rest of the world.”


10. It didn’t manage money according to any economist’s theoretical ideal. But neither has any fiat-money-issuing central bank.

Just as “paper” always beats “rock” in the rock-paper-scissors game, so does managed paper money always beat gold in the rock-paper monetary standards game economists like to play. But that’s only because under a fiat standard any pattern of money supply adjustment is possible, including a “perfect” pattern, where “perfect” means perfect according to the player’s own understanding. A gold standard, on the other hand, is unlikely even under the best of circumstances to achieve any economist’s ideal of monetary perfection. Hence, paper beats rock. More precisely, paper beats rock, on paper.

And what does this impeccable logic tell us concerning the relative merits of gold versus paper money in practice? Diddly-squat. I mean it. To say something about the relative merits of paper and gold, you have to have theories–good ol’ fashioned, rational optimizing firm and agent theories–of how the supply of basic money adjusts under various conditions in the two sorts of monetary regimes. We have a pretty good theory of the gold standard, meaning one that meshes well with how that standard actually worked. The theory of fiat money is, in contrast, a joke, in part because it’s much harder to pin down central bankers’ objectives (or any objectives apart from profit-maximization, which is at play in the case of gold), but mostly thanks to economists’ tendency to simply assume that central bankers behave like omniscient angels who, among other things, understand the finer points of DSGE models. That may do for a graduate class, or a paper in the AER. But good economics it most certainly isn’t.


I close with a few words concerning why it matters that we get the facts straight about the gold standard. It isn’t simply a matter of winning people over to that standard. Though I’m perhaps as ready as anyone to shed a tear for the old gold standard, I doubt that we can ever again create anything like it. But getting a proper grip on gold serves, not just to make the gold standard seem less unattractive than it is often portrayed to be, but to remove some of the sheen that has been applied to modern fiat-money arrangements using the same brush by which gold has been blackened. The point, in other words, isn’t to make a pitch for gold. It’s to make a pitch for something, anything, that’s better than our present, lousy money.


*I’m astonished to find that Friedman’s important and very interesting 1986 article, despite appearing in one of the leading academic journals, has to date been cited only 64 times (Google Scholar). Of these, nine are in works by myself, Kevin Dowd, and Lawrence White! I only wish I could attribute this neglect to monetary economists’ pro-fiat money bias. More likely it reflects their general lack of interest in alternative monetary arrangements.

As the Export-Import Bank’s charter nears expiration, supporters continue to argue that ending this government agency, which subsidizes loans to major U.S. exporters (mostly Boeing), is unwise because other countries also subsidize exports.  They’re especially eager to point to China, whose own export credit agency is very active in promoting Chinese manufacturers.  They then claim that allowing the bank charter to expire would be “unilateral disarmament.”

Claiming that the United States should pursue any economic policy on the grounds that China is doing it strikes me as bordering on insanity.  Market intervention by the Chinese government has resulted in large-scale misallocation and is a serious liability for the stability of the Chinese economy.  It’s true that Chinese subsidies to domestic industries reduce opportunities for U.S. businesses, and it’s perfectly alright for the U.S. government to condemn those policies.  But should we really seek to emulate them?

Competitive metaphors about trade are generally bad, and martial ones are especially unhelpful.  The United States is simply not engaged in a metaphorical war with its trading partners.  Thinking of trade as a contest inevitably leads to bad policy by giving governments an excuse to intervene in the market for the benefit of crony constituencies.  The fact that some U.S. businesses would make more money if foreign governments pursued better policies is not a legitimate excuse to intervene in the market on their behalf.

Regardless of the reasons offered to justify it, there are real consequences to the U.S. economy when the U.S. government picks winners and losers.

Supporters of the Ex-Im Bank make plenty of other bad arguments, all of which betray a fundamental distrust of free-market capitalism.  But “China does it” may be the worst one.

On Tuesday the House of Representatives unanimously passed an amendment, introduced by Rep. Joaquin Castro (D-TX), to the Commerce, Justice, Science, and Related Agencies appropriations bill. The amendment takes $10 million from Drug Enforcement Administration (DEA) funds for salaries and expenses and puts it towards the Department of Justice’s Body Worn Camera Partnership Program. The program provides 50 percent matching grants for law enforcement agencies that wish to use body cameras.

Prior to the passage of Castro’s amendment, the appropriations bill provided $15 million for the body-worn camera partnership initiative, $35 million less than requested by the Obama administration.

Castro’s amendment is one of the latest examples of legislation aimed at funding police body cameras, which, despite their potential to be great tools for increasing law enforcement accountability, are expensive.

The cameras themselves can cost from around $100 to over $1,000 and are accompanied by costs associated with redaction and storage. The fiscal impact of body cameras is a major reason why some police departments have not used the technology. In 2014 the Police Executive Research Forum received surveys from about 250 police departments and found that “39 percent of the respondents that do not use body-worn cameras cited cost as a primary reason.”

An Illinois body camera bill on Gov. Rauner’s desk not only outlines body camera policies for Illinois police agencies that want to use the technology but also introduces a $5 fee on traffic tickets aimed at mitigating the cost of body cameras.

I have written before about why a federalist approach to body cameras is preferable to a federal top-down approach with attached financial incentives. If Rauner signs the Illinois bill into law it will be interesting to see how effective a traffic ticket fee is in funding the use of police body cameras. If it works, lawmakers in other states may well seek to implement similar plans.

I am all for the DEA having its budget cut (ideally to $0), but the federal government providing conditional grants for body cameras is risky because some law enforcement agencies may implement federal policy recommendations not because they are the best but because doing so will cut costs. Grant applicants are urged to review a body camera paper published by the Office of Community Oriented Policing Services (COPS) and to “incorporate the most important program design elements in their proposal.” Unfortunately, the COPS body camera paper includes a worrying policy recommendation: allowing police officers to view body camera footage of incidents before they make a statement.

Federal lawmakers ought to be part of the ongoing discussions on police body camera policies, but federal policy proposals and suggestions shouldn’t come with financial assistance attached.

Last year China joined the U.S.-led Rim of the Pacific Exercise for the first time. However, Beijing’s role in RIMPAC has become controversial. Senate Armed Services Committee Chairman John McCain recently opined: “I would not have invited them this time because of their bad behavior.”

The Obama administration is conflicted. Bloomberg’s Josh Rogin worried that “so far, China is paying no price for its aggression.” Bonnie Glaser of CSIS suggested using the exercises to threaten the PRC. Patrick Cronin of the Center for a New American Security was less certain, acknowledging benefits of China’s inclusion: “It all depends on what you think RIMPAC should be.”

That is the key question. In part the exercise is about mutually beneficial cooperation for non-military purposes. With the simultaneous growth in commercial traffic and national navies, there likely will be increasing need and opportunity for joint search and rescue, operational safety, anti-piracy patrols, and humanitarian relief.

The question also involves military-military cooperation. Contacts between the Chinese and U.S. navies are few; those between the PRC’s forces and those of countries at odds with Beijing’s territorial claims, such as Japan and the Philippines, are even fewer.

There is value in allowing potential opponents a better assessment of one’s capabilities. Chinese expectations may be more realistic if they have a better sense of what and who they might face, especially the navies of their neighbors, which are expanding and becoming more competent.

Moreover, demystifying the other side makes it harder to demonize one’s potential adversaries. Obviously, even warm personal relationships don’t prevent governments from careening off to war with one another. However, learning that the other side’s military personnel are not devils incarnate might cause leaders to temper the advice they offer in a crisis.

Participation in the exercise also may be viewed as evidence that the U.S. is or is not attempting to contain the PRC. Hence inviting China in last year made American policy look a little less like containment.

Unfortunately, RIMPAC is too small and unimportant to matter much. No one who looks at U.S. behavior, and certainly no Chinese official who does so, can believe that Washington is engaged in anything except containment.

Granted, it can be pursued more or less ostentatiously. However, strengthening alliances surrounding China, moving more military forces to the Asia-Pacific, bolstering the militaries of neighboring states, and consistently backing the positions taken by the PRC’s antagonists outweigh an invitation to naval maneuvers every two years.

Finally, participation can be seen as a reward and denial as a punishment for China. Thus, Panda suggests barring Beijing’s participation so long as it does not respect freedom of navigation. He wrote: “The magnitude is severe enough to condition China’s behavior while not derailing decades of fragile U.S.-China goodwill altogether.”

If all it took to bring to heel America’s looming co-superpower and peer competitor was excluding its navy from a nonessential ocean exercise, Washington should have tried that tactic long ago. The PRC likely prefers joining to sitting on the sidelines. However, the benefits remain too small to cause China’s leaders to change fundamental policy objectives.

As I wrote for China-US Focus:  “The PRC is a revisionist power, as America once was. The former will seek to reverse or overturn past geopolitical decisions which it believes to be unfair or unrealistic. Beijing will abandon that course only when the costs of doing so rise sufficiently.”

“Losing” China’s RIMPAC invitation won’t make a difference. In contrast, an increasingly well-armed and well-organized set of neighbors willing to stand up to Chinese bullying would.

“Friendship diplomacy” cannot eliminate ideological differences and geopolitical concerns. Nevertheless, the U.S. and its allies and friends should continue to seek opportunities to invest China in a stable geopolitical order. Doing so won’t be easy, but extending an invitation to RIMPAC next year would be a worthwhile step in the meantime.

Since before the Declaration of Independence, equality under the law has been a central feature of American identity. The Fourteenth Amendment expanded that constitutional precept to actions by states, not just the federal government. For example, if a state government wants to use race as a factor in pursuing a certain policy, it must do so in the furtherance of a compelling reason—like preventing prison riots—and it must do so in as narrowly tailored a way as possible.

This means, among other things, that race-neutral solutions must be considered and used as much as possible. So if a state were to, say, set race-based quotas for who receives its construction contracts and then claim that no race-neutral alternatives will suffice—without showing why—that would fall far short of the high bar our laws set for race-conscious government action.

Yet that is precisely what Illinois has done.

Illinois’s Department of Transportation and the Illinois State Toll Highway Authority have implemented the U.S. Department of Transportation’s Disadvantaged Business Enterprise (“DBE”) program, which aims to remedy past discrimination against minority and women contractors by granting competitive benefits to those groups. While there may be a valid government interest in remedying past discrimination, Illinois’s implementation of the program blows through strict constitutional requirements. It bases its broad use of racial preferences on studies that either employ highly dubious methodology or are so patently outdated that they provide no legal basis on which to conclude, as constitutionally required, that there remains ongoing, systemic, widespread racial (or gender) discrimination in the public-construction-contracting industry that only the DBE program can rectify.

Even the studies Illinois used recommended that the state seek to achieve its anti-discrimination goals to the fullest extent possible by race-neutral means. Naturally, Illinois ignored this advice and implemented what could only generously be called a half-hearted pretense of employing race-neutral measures. Even worse, a federal district court upheld Illinois’s implementation. Even though the state failed to show which race-neutral alternatives it considered, tried, or rejected, the court held that the DBE’s grant of benefits still passed strict scrutiny.

The contracting company that brought the suit has now appealed the case to the U.S. Court of Appeals for the Seventh Circuit. Cato has joined the Pacific Legal Foundation and Center for Equal Opportunity in filing a brief supporting that appeal. We argue that Illinois didn’t meet the high constitutional standards governing the use of race-conscious measures in its approach to the DBE program because it (1) failed to establish a strong basis in evidence that there even is ongoing, widespread, systemic racial discrimination that must be remedied, and (2) failed to establish the narrow-tailoring requirement that workable race-neutral measures be tried and found insufficient before the state can turn to using race.

By cutting corners with shoddy studies and paying lip service to race-neutral solutions, Illinois and the lower court have each done a disservice to the hard-won principle of equality under the law. We urge the Seventh Circuit to correct those mistakes when it takes up Midwest Fence Corp. v. USDOT this summer.

Rick Perry, former governor of Texas, will announce his second White House run tomorrow. Perry served as Texas governor from December 2000, following the election of George W. Bush, until January of this year. During his long tenure, Perry showed reasonable fiscal restraint. Perry did not shrink the size of Texas’ government, but he limited its growth both in terms of spending and tax revenue.

Perry appeared in six editions of Cato’s Fiscal Policy Report Card. His first appearance was in 2004. His scores show consistent management of Texas’ budget as governor. He received a “B” in five of his six reports, with a “C” in 2012. Below are his scores in each report card.

2004: B

2006: B

2008: B

2010: B

2012: C

2014: B

Perry’s track record is certainly consistent given the tendency of some governors to slip while in office. For instance, George Pataki’s score fell from an “A” to a “D” between his first and last reports.  Mike Huckabee went from a “B” to an “F.”

From fiscal year 2002 to fiscal year 2015, general fund spending in Texas increased 63 percent. This growth outpaced the growth in average state spending, which grew by 50 percent.

But when these figures are adjusted for population growth, Governor Perry’s record appears much better. Texas’ population grew almost 2.5 times faster than the United States’ population during this time period. In this situation, comparing per capita general fund spending growth is more instructive.

Texas: Per capita spending grew from $1,430 to $1,802, or 26 percent from FY2002 to FY2015

50-State Average: Per capita spending grew from $1,809 to $2,356, or 30 percent from FY2002 to FY2015

Spending did increase in Texas on a per capita basis, but it increased more slowly than spending in other states.
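The per capita comparison above can be verified with a quick calculation, a minimal sketch using only the dollar figures quoted in the text:

```python
def pct_growth(start, end):
    """Percent growth from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

# Per capita general fund spending, FY2002 -> FY2015 (figures from the text)
texas_growth = pct_growth(1430, 1802)    # Texas
average_growth = pct_growth(1809, 2356)  # 50-state average

print(texas_growth)    # 26
print(average_growth)  # 30
```

The computed growth rates of 26 percent and 30 percent match the figures cited, confirming that Texas under Perry trailed the 50-state average in per capita spending growth.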

Inflation should also be considered. Once inflation is factored in as well, the remaining growth in general fund spending disappears: Texas’ spending increase of 63 percent from FY2002 to FY2015 exactly matches population growth plus inflation for that time period. Perry also pushed several constitutional provisions to try to ensure this fiscal restraint would last. He championed an amendment to the Texas constitution that would have limited spending growth to population growth plus inflation. He also tried to require that any tax increase be approved by two-thirds of voters.

On taxes, Perry’s actions were mixed. In 2003, Perry supported a package that included fee increases. In 2006, Perry supported a plan that raised the cigarette tax by one dollar and repealed the state’s franchise tax in exchange for property tax cuts. The plan also created the complex Texas Margin tax as a replacement. The plan did result in a net decrease in taxes, but it was the wrong approach. The 2012 report card summarizes the issue well:

The new tax [the margin tax] hit 180,000 additional businesses and increased state-level taxes by more than $1 billion annually. The added state revenues were used to reduce local property taxes, but the overall effect of the package has been to centralize government power in the state and reduce beneficial tax competition between local jurisdictions.

In 2009, Perry pushed to increase the exemption on the margin tax so that it would hit fewer businesses, and he pushed to extend the exemption in 2013.

Perry’s long tenure as Texas governor shows consistency, earning a “B” on five out of six Cato report cards. Perry did not decrease the size of the state’s government, but he did limit its growth to population growth and inflation.