Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Round two of the NAFTA negotiations wrapped up early this week, without any major new developments. Of course, it is still very early in the process, and until the parties propose actual text for particular chapters, it is difficult to assess how bargaining will unfold. However, there are some issues where the positions of Canada, Mexico and the United States are fairly well known. One such issue, which Canada raised in the second round, is the inclusion of provisions on regulatory cooperation. As I’ve written with my colleagues in a recent working paper, a chapter on regulatory cooperation would be beneficial to an upgraded NAFTA.

First, as traditional tariff barriers have decreased over time, many of the remaining trade frictions take the form of so-called non-tariff barriers. Among these are the various regulations and standards that different countries utilize to regulate their product markets. There is a wide range of reasons these rules may differ—protectionism, consumer preferences, or divergence resulting from regulating in silos. The first of these is already addressed at the World Trade Organization (WTO). The second can entail things like consumer attitudes towards genetically modified organisms (GMOs). The third is the range of issues that make up the bulk of what would be addressed in any type of regulatory cooperation forum. Examples include differences in the dimming technology for headlights used in vehicles, or the size of soup cans.

In 2011, there were two bilateral initiatives between the U.S. and Canada and between the U.S. and Mexico to address this type of regulatory divergence (outside of the NAFTA framework). The initiative with Mexico, the High-Level Regulatory Cooperation Council, did not achieve much progress, though Mexico has remained a supporter of regulatory cooperation initiatives. However, the U.S.-Canada Regulatory Cooperation Council had some notable successes, though progress has been very slow. For example, Health Canada and the Food and Drug Administration created a common electronic submission process that allows for a single application to both agencies for pharmaceutical and biological products; progress was also made in establishing mutual recognition of foreign animal disease zoning, as well as a joint review process for crop protection products. Given that this initiative has been in place for six years, however, criticism of the limited number of outcomes is not misplaced.

As Inside U.S. Trade reported, this is but one of the many reasons Canada put forward a proposal to include something similar to what it negotiated as part of its trade agreement with the EU, the EU-Canada Comprehensive Economic and Trade Agreement (CETA). The CETA includes a separate chapter on regulatory cooperation (Chapter 21), which sets up a forum that will meet annually to discuss these issues. In general, the CETA seems to institutionalize much of what has already been happening with the U.S.-Canada RCC, though it goes beyond it in a few areas, for instance, by allowing the participation of other trading partners in certain discussions, pending agreement of both parties.

The Trans-Pacific Partnership (TPP) is the only other agreement that has something similar, but its chapter on regulatory coherence is distinct from CETA in that it is more heavily focused on issues relating to good regulatory practice and regulatory process, such as providing notice and comment on upcoming regulations, and having a central authority like the Office of Information and Regulatory Affairs (OIRA). The CETA has provisions on good regulatory practice as well, but it is clear that regulatory cooperation is the main focus of that chapter. This distinction may seem subtle, but it’s an important one that I’ve noted previously with my colleague.

Recent reports suggest that the U.S. is not too keen on including something like the CETA chapter in the new NAFTA, possibly because it is concerned about the extra burden or interference with domestic regulators. However, when the U.S. first launched negotiations with the EU under the aegis of the Transatlantic Trade and Investment Partnership (TTIP), the concern was not one of regulatory overreach, but rather a fight over this subtle distinction in regulatory process vs. regulatory outcome, with the U.S. pushing to export its domestic agenda to a global level.

I would bet that the same debate is taking shape in the NAFTA negotiations, because a CETA-style chapter would entail expanding the current ad hoc structure of the U.S.-Canada RCC into a broader regulatory cooperation forum that includes Mexico as well. While some regulatory issues differ between these two markets, it does not make sense to maintain two separate regulatory cooperation councils. In fact, having Mexico at the table will only enhance the dialogue on regulatory divergence and allow all three countries to tackle longstanding regulatory barriers.

The CETA chapter on regulatory cooperation, though not perfect, is a step in the right direction for the NAFTA countries to move forward in this area. A regulatory cooperation forum that allows for broad input from civil society and business, maintains a regular dialogue on long-standing issues, and establishes a voluntary process for this exchange would be a welcome upgrade to NAFTA. Canada’s proposal is a recognition of the challenges non-tariff barriers pose to international trade today, and the U.S. should support it.

The pictures tell the story:

The drafters of a North Carolina redistricting plan drew a finger of land from a state senator’s district to take in a second home he’d recently built outside its bounds.

From what I’ve seen in my work on redistricting issues in Maryland and elsewhere, there’s nothing unusual about this tactic (except maybe how blatantly it was done). Often, as in this case, the finger method does not appear to have been welcome to the lawmaker in question, but serves to limit his options or keep him out of a district where he might run strongly.

In my chapter on this subject in Cato’s recently updated Handbook for Policymakers, I wrote:

The American system tends to leave the power of redistricting in the hands of the same officials whose careers are at stake, and they have routinely misused that power to draw lines with the aim of electing or defeating one or another candidate or party…. Redistricting reform makes sense for its own sake and as a safeguard against the entrenchment and insulation of a permanent political class.

It would also keep rival lawmakers from giving each other – not to mention the voters – the finger.

In the aftermath of Hurricane Harvey, commentators have been quick to blame Houston’s lack of traditional zoning for the storm’s damage. Last week, I provided some evidence that lack of zoning is not the cause of Houston’s problems. But commentators have been equally quick to minimize the various benefits that accompany Houston’s limited zoning.

That’s short-sighted. To begin with, Houston’s lack of traditional zoning impedes its ability to act in political or exclusionary ways. Take post-Katrina New Orleans as a comparative study. Following Katrina, parishes in the New Orleans metropolitan statistical area (MSA) imposed moratoriums on construction of multi-family housing, threatened changes to zoning that deterred low-cost housing development, and created a blood-relative ordinance that restricted home rentals to blood relatives of owners “within the first, second or third direct ascending or descending generations.” These zoning regulations kept low-income evacuees out of certain neighborhoods and were highly controversial.

The details might sound remarkable, but the impacts of New Orleans’ post-Katrina zoning follow a standard pattern. Academic research suggests zoning acts as a barrier to the provision of low-cost, rental, and multi-family housing, segregates by socio-economic class and by race, and drives the cost of housing up. In fact, one study found that over half of the difference in levels of segregation between strictly zoned Boston and lenient Houston could be attributed to zoning regulations.

Given the average impacts of zoning, it shouldn’t be a surprise that low-income African Americans, low-income whites, and Hispanics have opposed zoning electorally in Houston and other locations.

It’s probably also not a coincidence that about 250,000 Hurricane Katrina evacuees, many of them African American, temporarily settled in inclusive Houston. Between 25,000 and 40,000 Katrina evacuees stayed permanently. According to reports, this was because of greater economic opportunities and affordable housing. Thanks to limited zoning, Houston could accommodate housing needs more quickly and cheaply than other cities.

Limited zoning will be good for Harvey evacuees, too. For example, limited zoning partly explains why Houston has a high apartment vacancy rate. Last year, Houston’s apartment vacancy rate was 6.8%, compared to 2.7% in Manhattan and 3.9% in the United States overall. This means there are thousands of apartments for Harvey evacuees to fall back on while they repair their homes.

These benefits will become increasingly apparent as Houstonians rebuild. The truth is that limited zoning means more opportunity, more low-cost housing, and fewer politically motivated and exclusionary policies. That’s good every day, but especially good in case of an emergency.

When the federal government regulates food quality, consumers lose. Unfortunately, a Washington Post article on a recent increase in class-action lawsuits by consumers against food manufacturers over the use of “natural” labels shows how consumer groups are missing this point. In suing food companies, plaintiffs are arguing that these manufacturers (of cheese, in one particular case) are misleadingly labeling their food as “natural” while using milk from cows that were given a growth hormone and fed animal feed made from genetically modified grain.

Though the plaintiffs and food companies disagree over what should be labeled as “natural,” one thing they do agree on is that the U.S. Food & Drug Administration, which has so far stayed silent on the issue, needs to provide guidance on what “natural” actually means. Manufacturers argue that clear rules would help them avoid legal battles, while consumer groups believe that government regulation would reduce what they view as deceptive marketing.

The “natural” label fight is a repeat of last decade’s fight over labeling food “organic.” In that case, the federal government did step in, with the U.S. Department of Agriculture creating the “USDA Organic” label and establishing rules on when the label can be used. However, that hardly ended the controversy over the use of the term “organic.”

As I discussed in a previous post, traditional organic farmers are now fighting with new hydroponic farmers over the latter’s use of the “USDA Organic” label. Hydroponic farming seems consistent with organic farming goals: producing environmentally friendly and healthy foods. However, traditional organic farmers don’t want competition from the upstart hydroponics industry because that competition will likely cut into the price premium that organic foods now fetch. An FDA-defined “natural” label would invite the same type of jockeying by producers seeking to hurt rivals.

The “organic” label also illustrates that there is no reason to believe regulations actually provide any assurances about health and environmental benefits. I highlighted in another post that the “USDA Organic” label, far from indicating increased health, safety, and quality, is instead a taxpayer-funded marketing tool with dubious benefit to human health or the environment.

My chapter on health and safety policy in the most recent Cato Handbook for Policymakers explains why manufacturers call for these types of regulations. FDA regulation of “natural” foods would create what’s known as a “pooled market”—one in which anonymous producers provide a good without any branding and consumers are reassured about the good’s quality by government inspection and regulation. To appreciate this, think of the supermarket shelf of “normal” bananas that sits next to the shelf of “organic” bananas; can consumers really tell the difference between the two goods that warrants the difference in price? The “USDA Organic” label is supposed to assure consumers of that difference, but there are good reasons to question the value of that government assurance. 

A separated market—a market with different levels of price and quality conveyed to consumers through marketing and branding—would provide more choices for consumers. For example, in the past several years both Whole Foods and Perdue have used concerns about genetically modified foods and antibiotics as opportunities to market the safety and quality of their foods. Consumers who are motivated to pay more for healthy foods incentivize transparency and increased quality from producers.

And while consumer groups are claiming that manufacturers are misleadingly marketing their foods as “natural,” pooling the market through FDA regulation would protect the producers without effectively addressing the groups’ complaints. A pooled market allows manufacturers to hide behind the false assurances regulations offer, but in a competitive, separated market other food companies will step in to offer truly “natural” foods and reap the benefits.

As the “USDA Organic” label has demonstrated, FDA intervention into “natural” foods would stifle competition and limit manufacturer transparency. Consumers concerned with the health and safety of the food they buy should instead push for the choices and accountability that markets provide.

Written with research assistant David Kemp.

As our more regular readers know well, every now and then I like to take another stab at debunking the myth that fractional reserve banking has fraudulent roots. Besides appearing in numerous textbooks, that myth is routinely expounded in the writings and lectures of certain contemporary Austrian School economists. Moreover, as we’ll see, it is occasionally given credence in reputedly scholarly publications by scholars who don’t identify themselves with that school.

It is relatively slow in DC as I write this, with Congress out of session, making it as good a time as any to rejoin the old debate. I do so first by drawing attention to a paper: “Banks v Whetston (1596),” by David Fox, a Cambridge law professor and barrister, and the author of a fascinating legal treatise on Property Rights in Money (OUP, 2008).

A Hum-Drum Case

Although he wrote “Banks v Whetston” for a 2015 volume titled Landmark Cases in Property Law, Fox hastens to explain that the case in question may not really qualify as a “landmark” since “very few lawyers have heard of it and it does not have a strong history of citation in later decisions.” Its significance, so far as he’s concerned, lies, on the contrary, in the fact that it was perfectly hum-drum. Because of that, the case supplies a particularly clear illustration of the common law’s ca. 1596 understanding of property rights in money — an understanding which prevailed, according to Fox, “throughout the middle ages and into the early modern period.”

The plaintiff in Banks v Whetston, having accused the defendant of robbing him of his money, brought an action in detinue (that is, for the return of specific property) against him. The defendant in turn filed a demurrer, which was argued in the Court of King’s Bench. The case was adjudged without argument for the defendant, on the grounds that the money in question consisted of loose coins rather than ones enclosed in a bag or chest. For that reason, the court observed, it was impossible to distinguish them from other, similar coins. Because the plaintiff could not establish that any particular coins in the defendant’s possession had in fact been taken from him, the court held that his case lacked the technical requirements for a suit in detinue.

As Fox explains, the decision in Banks v Whetston rested on a by-then long-established distinction between detinue on the one hand and “the varieties of debt action which lay to enforce claims for delivery of generic fungibles” on the other. So far as the common law courts were concerned, the distinction was just as applicable to money as to other fungible goods. Money, Fox explains,

could either be a specific item of property (as when it was bailed for safekeeping in a sealed bag or locked chest) or it could be owed as a fungible amount under a debt expressed in pounds, shillings and pence. In principle, there was no objection to a plaintiff suing in detinue to recover money bailed in specie, provided that the object of his claim could be identified clearly enough… The thing detained by the defendant had to be identified as the same thing which the plaintiff delivered to him.


If the plaintiff’s case was instead to enforce a generic obligation for the payment of money, then his action was in debt… In contrast to detinue, debt lay to recover a certen summe of money. The distinction…signaled two commercially different kinds of transaction: one involving the enforcement of the plaintiff’s property (where the property in question happened to be coins) and the other for the enforcement of a simple monetary obligation to pay a generic amount denominated in monetary units.

Paper, Plastic, or a Loan?

The legal distinction between detinue and debt had, as its practical counterpart, what Fox calls “the bagging rule.” If someone wished to retain title to a sum of money, despite surrendering possession of it, and to therefore be able to sue in detinue for its recovery, that person had to place the money in question in a bag or chest, and preferably in a sealed bag or a locked chest. “The bagging of money removed the evidential uncertainty about identifying coins as the property of one person or another, and in a detinue action it allowed the money to be restored in hoc individuo.”

More importantly for our purposes, the bagging rule also supplied a simple means for distinguishing between different kinds of financial transactions — one which, whatever its shortcomings, was

readily understood by the commercial parties and by juries who were charged with determining the capacity in which a person received or held sums of money. The simple question “Was the money in a bag or not?” cut through the conceptual artificialities of determining what might have been the intent of the parties, of the sort encountered in modern-day law.

So long as money was surrendered in a closed bag or chest, it was understood that its possessor “held it in right of another so that he was not free to spend it as his own”:

To seal coins in a bag…constituted an assertion by the person whose seal was on the bag that the property in the money was in him or in some third person to whom the money had to be remitted. Either way, it showed, negatively, that the person holding the bag might not have the full property in it. He was quite possibly a bailee who might be liable in detinue… (my emphasis).

The common law courts denied, on the other hand, “that one person could maintain an enforceable title to any [loose] money that had passed — voluntarily or involuntarily  — into the possession of another person” (my emphasis again). “In this respect,” Fox observes, “the common law’s treatment of property in money was no different from its treatment of fungible commodities.”

By the time that Banks v Whetston was decided, in 1596, the “bagging principle” was old-hat. Yet another half-century or so was to pass before England’s goldsmiths pioneered the practice of fractional-reserve banking there. In other words, the first goldsmith-banker to lend or otherwise make use of coins “deposited” at his bank had every right to do so, according to principles of common law that had by then been firmly established for over a century, so long as the coins were tendered loose rather than in sealed bags or other containers. The presumption that the banker had good title to any loose coins he received existed regardless of the other terms of the specific deposit agreement, excepting only such terms expressly indicating that the coins were to be held in trust. A depositor’s right to recover any part of a deposited sum, whether after a specific term or on demand, or a banker’s promise to pay a particular sum, whether to a specific person or to the bearer of a circulating banknote, was proof of the banker’s indebtedness, and nothing more.

Keepers of the Faith

In light of the simplicity of the bagging rule, and the fact that that rule appears to have been perfectly well-established when the practice of fractional reserve banking was but a twinkle in some goldsmith’s eye, one might expect spinners of the yarn that fractional reserve banking was (and perhaps still is) a form of theft — and the related whopper that banknotes and deposit credits were originally (and, by some accounts, still are) “titles” to cash — to respond to doubting Thomases by changing the subject, rather than by boldly declaring their accounts to be fully consistent with the fine points of early modern English common law. Were it only so! Alas, among certain devotees of Murray Rothbard-style Austrian economics, the fractional-reserve-is-fraud fairy tale amounts to a dogma to be upheld, by hook or by crook, in the face of every sort of contradictory evidence.

Two especially relentless defenders of the Rothbardian faith are Philipp Bagus and David Howden, who, with some other coauthors, have maintained a steady output of papers claiming, among other things, that according to legal principles prevailing at the time, in both Roman and common law, early fractional reserve bankers did indeed routinely lend money that didn’t belong to them.

Having once before confronted Bagus and Howden, with both barrels blazing (see here and my reply here), only to have them deny receiving so much as a scratch [1], I doubt that anything said here will faze them, let alone strike a mortal blow. Still I consider it worthwhile, for the sake of those standing on the sidelines, to show how these economists deal with fundamental points of early modern English common law that David Fox and numerous other historians of law and banking, from Henry Dunning Macleod to James Steven Rogers, have painstakingly elucidated.

Consider “Oil and Water Do Not Mix, or: Aliud Est Credere, Aliud Deponere,” a 2015 Journal of Business Ethics paper Bagus and Howden wrote with Amadeus Gabriel, a genuinely Austrian Austrian economist. Like several of Bagus and Howden’s other papers, this cryptically-titled number (the Latin comes from a passage in the Digest of Justinian) rests its case against fractional reserve banking not on a direct appeal to the common law but on the distinction found in more ancient Roman law between “regular” and “irregular” deposit contracts:

In a regular deposit contract, specific things are deposited such as a Rembrandt painting. Such contracts are called bailments in common law. In an irregular deposit contract, fungible goods such as bushels of wheat, gallons of oil or money are deposited. … Most money deposits are irregular.

So far so good. But the authors go on to declare that:

Over time governments failed to enforce the traditional legal principles of monetary irregular deposits. … A special privilege is given to bankers (but not to private persons) to violate these obligations in the case of monetary irregular deposits… . The practice of fractional-reserve banking was legalized ex-post.

The violations to which Bagus, Howden, and Gabriel refer consist of banks’ having lent some of the cash deposited with them, instead of keeping it on hand, as the terms of a depositum irregulare supposedly obliged them to do.

This would be dandy reasoning, so far as Continental developments are concerned, were it indeed the case that, according to Roman law, an “irregular” money deposit was in fact a bailment in the strict sense of that term, with the depositor retaining a valid title to the deposited sum, rather than a loan. But that simply wasn’t so. Instead, according to just about every authority on the topic, with the singular exception of Jesus Huerta de Soto, upon whom Bagus, Howden, and Gabriel rely, a banker who received an “irregular deposit” became the owner of the deposited money!

Concerning Huerta de Soto’s understanding of what an irregular deposit contract entails, as conveyed in the first chapter of his magnum opus, Money, Bank Credit, and Economic Cycles, “Lord Keynes” (the pseudonymous blogger at Social Democracy for the 21st Century[2]) concludes, on the basis of a painstaking review of relevant sources, that it

is utterly unorthodox. He cites certain Spanish legal sources and Spanish legal scholars for his definition, but it is clearly eccentric and aberrant, certainly with respect to Roman law and Anglo-American law.

Nor, he adds, did classical Roman jurists themselves ever insist, as Huerta de Soto does, that a banker receiving an irregular deposit was obliged to keep the full amount of the deposit at hand. It was therefore perfectly possible, as a matter of Roman law, for a banker to lend coins received as irregular deposits without breaking the law.

With regard to English experience, the Bagus-Howden-Gabriel view is, believe it or not, even less sound, for as Benjamin Geva explains in The Payment Order of Antiquity and the Middle Ages: A Legal History (p. 433), the English common law went one better than the Roman law “in bypassing altogether the category of the irregular deposit, and thus facilitating an easy route to the characterization of the bank deposit as a loan.” As we’ve seen, that characterization was automatically applied to all “deposits” of loose coin.

Concerning Bagus and Howden’s remarkable ability to ignore or misread the plain testimony of countless authorities, I hope I may be forgiven for instancing as a case in point their reading of my own 2010 article, “Those Dishonest Goldsmiths,” as given in a footnote to another paper of theirs, published in the Journal of Business Ethics. According to that note, my paper

provides evidence that Goldsmiths…offered contracts that were neither demand deposits nor loans. These contracts were akin to aleatory contracts, whereby a financial institution promises its best to return an invested sum on demand. …While Selgin provides evidence that the Goldsmiths offered such contracts, he maintains that Goldsmiths did not pioneer fractional reserve banking. Selgin’s empirical evidence that Goldsmiths offered a third contract distinct from the two we posit that are legally permissible is not irreconcilable with our own view. Indeed, Selgin’s work would only be problematic if 1) it could be shown that people who agreed to these contracts wanted to maintain the full availability of their money, or 2) if these historical instances were used to argue for the legitimacy of the fractional reserves demand deposit.

I do not exaggerate in saying that every part of this purported précis of my article comes as a great surprise to me. In fact, I’ve never questioned the standard view that, in England at least, goldsmiths pioneered fractional reserve banking. And the whole point of “Those Dishonest Goldsmiths” was to defend the goldsmiths against the charge of misappropriating their customers’ deposits and, to that extent at least, “to argue for the legitimacy of fractional reserve banking”!

As for my supplying evidence that goldsmith bankers took part in “aleatory contracts,” that claim presumably refers to a single footnote in my paper, concerning a specific transaction with a goldsmith recorded in Pepys’s diary, in which the banker appeared to have acted as a sort of broker, rather than as a strict intermediary. It never occurred to me that, by referring to that one transaction, and suggesting that such transactions weren’t uncommon (in part because they helped bankers and their clients to skirt usury laws), I risked being portrayed as denying that goldsmith-bankers ever engaged in plain-vanilla fractional-reserve banking!

An Asian Outbreak

Were the fractional reserves = fraud fairy tale encountered only in undergraduate textbooks, manifestly idiotic web pages, and papers written by a coterie of ultra-Rothbardian economists for publication in their own house organs (or in journals edited by persons who are neither economists nor historians nor legal scholars), its persistence might be no more a cause for concern than the 450-odd samples of variola vera residing, under heavy guard, at the Centers for Disease Control in Atlanta.

I have, unfortunately, come across at least one serious case of fractiophobia far removed from the bacterial incubators of Madrid and Auburn, Alabama — as far away as Seoul, Korea, to be precise. In “How Modern Banking Originated: the London Goldsmith-Bankers’ Institutionalization of Trust,” Jongchul Kim, a political scientist at Sogang University,[3] claims that modern banking rests upon a “double ownership scheme” pioneered by London’s goldsmiths. In that scheme

two groups — the holders of the bankers’ notes and depositors — were the exclusive owners of one and the same cash that was kept safely in the bankers’ vaults; and one amount of cash created two balances of the same amount, one for the holders and the other for depositors. This double ownership remains a central feature of the present banking system.

In fact, Kim’s claim of double ownership is doubly wrong: neither noteholders nor depositors of loose coin owned — that is, possessed a good title to — the cash to which their claims entitled them. Instead, as both the common-law bagging rule and its Continental counterpart, the concept of a depositum irregulare, made perfectly clear, whatever actual cash the banker retained that had originally come to him in the form of loose coin belonged to the banker alone.

Although Kim devotes many pages, in several different (if similar) articles, to embellishing and repeating his “trust scheme” argument, by doing so he merely succeeds in making it all the more evident that he has completely misunderstood the English common law of property in money. Moreover, he has managed to do so despite drawing on the works of scholars like James Steven Rogers and Benjamin Geva (though not David Fox), the plain language of which cannot possibly have misled him.

So, what happened? The answer is that, when it comes to the specific question of the ownership of coins handed over to a banker, Kim leans, not on such highly reputable legal historians, but on — hold on to your hat! — Murray Rothbard & Company, whose distortions he appears to have swallowed hook, line, and sinker!

Kim’s debt to the Rothbardians is particularly clear in his assertions to the effect that goldsmith banking was “self-contradictory”:

Goldsmith-bankers’ deposit-taking was self-contradictory because it was simultaneously a loan contract and not a loan contract. Because deposits were repaid on demand, the ownership of deposits practically remained in the hands of depositors. But bankers lent deposits at their own discretion and in their own names, and they attained and retained the ownership of the loans.

But there’s no contradiction. Notwithstanding what Rothbard and some of his devotees have written, as soon as depositors handed their loose coins over to a banker, that banker became the owner of the coins, while the depositors ceased to own them, either legally or “practically.” What the depositors now “owned” was no longer a certain set of coins, but a contractual right to demand an equivalent sum, whether after a particular term or on demand. Likewise the banker, upon lending coins received on deposit, though he certainly owns the loan itself — meaning the right to a future payment of principal and interest — ceases to own the lent coins. In short, the coins themselves never have but a single “exclusive” owner.

When Kim appears to muster more qualified authorities in support of his “double ownership” thesis, he does so by quoting from them selectively and misleadingly. Consider the following passage:

As legal theorist Benjamin Geva rightly argues, “Fungibility of money…explains the depository’s right to mix the deposited money instead of keeping it separate.  It does not necessarily explain the depository’s right to use the money.” A depository is still required to keep an equivalent amount of money deposited.

A reader of this passage might be excused for supposing that Geva himself held the opinion contained in its last sentence. In fact, that opinion belongs to Kim alone. Geva (whose concern is in any case with Roman rather than common law) merely wished to make the logical point that, as he puts it in a subsequent paragraph, “authority to mix does not entail automatically the authority to use” (my emphasis).

This is Serious

If supposedly scholarly elaborations of the myth that fractional reserve banking is inherently fraudulent make claims that are ludicrously at odds with the facts, while more popular presentations of the myth are downright laughable, that doesn’t make the myth itself either funny or harmless. On the contrary: by encouraging people who might otherwise be inclined to oppose heavy-handed government regulation of private industries to favor, on ethical grounds, the outright prohibition of many ordinary banking transactions, the myth that fractional reserve banking is inherently fraudulent strengthens the hand of officials and others who want to hamstring bankers for quite different, but equally unsound, reasons, not excluding a general dislike of free enterprise.

Yet (as I and others have argued often on this site and elsewhere), conventional fractional-reserve banking is capable of yielding enormous benefits to society. What’s more, it has proven most capable of doing so when and where it has been allowed to flourish with the least government interference, including interference aimed at making certain bankers the beneficiaries of government favors. An unbiased and open-minded review of the historical record will make clear to anyone who undertakes it that it is not those nations that have heaped regulation upon regulation on most of their banks, while favoring one or several with privilege after privilege, that have enjoyed the greatest financial stability. It is those that have mainly relied upon open competition between banks free of special privileges that have witnessed the greatest financial stability. As for those that have attempted, or have succeeded, in banning ordinary banking altogether, if they can be said to have enjoyed financial stability, it is only because they have stagnated.


[1] In this respect Bagus and Howden remind me of my twin brother Peter.  When we used to play army together, I often managed, thanks to my well-honed tactical and stalking skills (and, let’s face it, all around physical and mental superiority), to sneak up on him with my toy Tommy gun and let him have it at point-blank range, only to hear him repeatedly shout, “You missed me!” However, when Peter acted that way, I could always settle matters, without risking legal repercussions, by beating him up.

[2] I should not be surprised if some members of the anti-anti-Rothbard vigilante squad treat my reference to “Lord Keynes’ ” remarks as further proof (Exhibit “A” being my occasional references to “aggregate demand”) that I’m a dyed-in-the-wool Keynesian, and as such someone all right-thinking free market types ought to ignore. For the record: I am not now, nor have I ever been, especially fond of Keynes’ General Theory.

[3] I have since discovered that Mr. Kim did his postdoctoral research at the Department of Economic History and Institutions at Universidad Carlos III in Madrid.

[Cross-posted from]

While same-sex couples ought to be able to get marriage licenses—if the state is involved in marriage at all—a commitment to equality under the law can’t justify the restriction of private parties’ constitutionally protected rights like freedom of speech or association. Masterpiece Cakeshop, a bakery in Lakewood, Colorado, declined to bake a cake for the wedding of Charlie Craig and David Mullins. Jack Phillips, the shop’s owner, considers himself to be both an artist and a committed Christian whose faith permeates his art. Consistent with that faith, he will not create cakes marking events or ideas that violate his beliefs, such as cakes celebrating Halloween, incorporating hateful or vulgar messages, or celebrating any marriage that he believes is contrary to biblical teaching. While he refused to make the wedding cake, he did offer to make the couple any other cake they might like. Craig and Mullins responded by filing a charge of sexual orientation discrimination with the Colorado Civil Rights Commission, which found that Jack violated the Colorado Anti-Discrimination Act and rejected his First Amendment defenses, saying that baking and decorating custom wedding cakes does not constitute artistic expression. The Colorado Court of Appeals affirmed and the U.S. Supreme Court agreed to hear the case. Cato has filed an amicus brief supporting Masterpiece Cakeshop and urging the Court to vindicate Americans’ right not to speak.

Although making cakes may not initially appear to be speech to some, it is a form of artistic expression and therefore constitutionally protected. There are numerous culinary schools throughout the world that teach students how to express themselves through their work; couples routinely spend hundreds or even thousands of dollars for the perfect cake designed specifically for them. Indeed, the Supreme Court has long recognized that the First Amendment protects artistic as well as verbal expression, and that protection should likewise extend to this sort of baking—even if it’s not ideological and even if done to make money. The Court declared nearly 75 years ago that “[i]f there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion, or force citizens to confess by word or act their faith therein.” W.Va. Board of Education v. Barnette (1943). And the Court ruled in Wooley v. Maynard—the 1977 “Live Free or Die” license-plate case out of New Hampshire—that forcing people to speak is just as unconstitutional as preventing or censoring speech. The First Amendment “includes both the right to speak freely and the right to refrain from speaking at all” and the Supreme Court has never held that the compelled-speech doctrine is only applicable when an individual is forced to serve as a courier for the message of another (as in Wooley). Instead, the justices have said repeatedly that what the First Amendment protects is a “freedom of the individual mind,” which the government violates whenever it tells a person what she must or must not say. Forcing a baker to create a unique piece of art violates that freedom of mind.

Moreover, unlike true cases of public accommodation—the travelers’ inns at common law or the state-segregated restaurants in the Jim Crow South—there are abundant opportunities to choose other bakeries in the same area. Finally, granting First Amendment protection to bakers would not mean that public-accommodation laws could provide no protection to same-sex couples. The Free Speech Clause protects expression, which should include the custom design, baking, and decorating of wedding cakes, as well as photography and floristry, but not businesses like caterers, hotels, and limousine companies, who aren’t creating artistic expression. Those sorts of businesses may have other rights and legal defenses available, constitutional or statutory, but that’s a different matter.

The Supreme Court will hear argument in Masterpiece Cakeshop v. Colorado Civil Rights Commission sometime this fall.

In an address at the American Enterprise Institute today, Nikki Haley, the U.S. ambassador to the United Nations, laid out an assertive and fundamentally misleading case against continuing U.S. participation in the Iranian nuclear deal.

Though Haley was careful to note that she was not calling for the United States to actively withdraw from the Joint Comprehensive Plan of Action (JCPOA), she offered a selection of ‘alternative facts’ and carefully phrased arguments clearly aimed at justifying President Trump’s desire to do just that.  

Haley’s arguments carefully skirted around the actual facts. The key problem for the Trump administration’s desire to withdraw from the JCPOA is simple: Iran is actually adhering to the terms of the deal. Rather than attacking the deal head on, therefore, Haley instead argued that the United States should consider factors outside the legal scope of the deal when deciding its future.

Indeed, though she cited many different reasons to take a harder line against Iran, including a litany of Iran’s past bad behaviors, the regime’s actions in Syria and elsewhere, and its missile testing, none of these are actually covered by the nuclear deal. Haley even suggested that Iran could have hundreds of covert nuclear sites which cannot be inspected under the deal, but offered no evidence for her assertion.

Her portrayal of the nuclear agreement was also misleading. As she described it: “the deal he [President Obama] struck wasn’t supposed to just be about nuclear weapons. It was meant to be an opening with Iran; a welcoming back into the community of nations.” In Haley’s account, these broad goals justify the use of a broader lens in deciding whether to stick with the deal or not. 

There’s just one problem: the Obama administration always stressed that the JCPOA was first and foremost a nonproliferation agreement, focused on preventing an Iranian bomb, not on fixing every problem in the U.S.-Iranian relationship. Though she never stated it so bluntly, Haley’s remarks amount to an argument that these broader issues justify jettisoning even a successful nonproliferation agreement that is preventing an Iranian nuclear weapon.

Perhaps the most misleading statement in the Ambassador’s remarks was her assertion that Trump’s choice to decertify the deal would not actually amount to U.S. withdrawal from the JCPOA, but would merely allow Congress to debate the issue. Yet it would also result in a congressional vote on re-imposing nuclear-related sanctions on Iran, potentially withdrawing the United States from the deal and splitting us from European allies.

Unusually for this administration, Nikki Haley’s arguments today were well-crafted, clearly delivered and plausible-sounding. But listeners should not be fooled: they nonetheless embraced the Trump administration’s universe of ‘alternative facts.’

U.S. withdrawal from the JCPOA could easily set Iran back on the path to a nuclear weapon, and re-open the debate over military action which occurred prior to the finalization of the nuclear deal. By ignoring the risks and eliding basic facts, Haley’s arguments are likely only to undermine U.S. foreign policy.


DACA is a good policy but bad law. Those who were brought to the United States illegally as children and have since led productive lives deserve to stay and earn citizenship. Alas, our immigration laws prevent this and the executive branch didn’t acquire extra powers to remedy this when Congress shamefully failed to pass the DREAM Act on multiple occasions.

President Trump was justified in stopping a program that lacked constitutional authority, but he is now equally obligated to press Congress to fix the laws that created the resulting mess. As my colleagues Alex Nowrasteh and David Bier have pointed out, the moral and economic case for continuing the legal status of the so-called DREAMers and legalizing other unlawful immigrants is overwhelming. Congress has six months in which to legislate such a solution, whether as a stand-alone bill or as part of a larger immigration compromise. If it fails, Congress will have earned opprobrium for the 800,000 lives that are now in turmoil.

This episode illustrates how President Obama’s penchant for government by executive action leads to continued reversals in his key programs. Any legal challenges to DACA rescission have no leg to stand on precisely because he and his lawyers argued that the program, like its DAPA successor, created no new legal rights or statuses. There was no administrative process to create DACA—there could not have been because of the absence of legal foundation—and so no process beyond the new Trump administration guidance is needed to end it.

The ball is now properly in Congress’s court.


President Trump will reportedly end the Deferred Action for Childhood Arrivals (DACA) program, which allows young unauthorized immigrants to live and work legally in the United States. According to ABC News, he will shut off all new applications today and allow current participants to apply for two-year renewals during the next six months, until March 5, 2018. After that, the administration will accept no more renewals.

As I have explained previously, most DACA recipients receive 2-year “deferred action” forms that prevent their removal during that period as well as 2-year employment authorization documents that allow employers to hire them legally. President Trump will reportedly not attempt to reclaim these authorizations immediately on March 5. He will instead allow them to expire “naturally” at the end of their validity periods.

U.S. Citizenship and Immigration Services only releases data on DACA approvals and renewals on a quarterly basis, so it’s not possible to give precise month-to-month figures, but we can obtain a rough picture of how the program will wind down. As Figure 1 shows, the six-month delay will allow 24 percent of all DACA recipients, or 190,822, to renew their permits before March 5. This is a big deal for the 108,000 beneficiaries who received 3-year renewals under the attempted DACA expansion in late 2014 and early 2015. All of their permits would have expired without the delay, leaving them worse off than if they had received a two-year renewal, which they could have extended in late 2016 or early 2017.

Figure 1: DACA Renewals and Projected Expirations Starting September 5, 2017

Source: USCIS Data Set: Form I-821D Deferred Action for Childhood Arrivals. April 2017 to September 2017 figures are not published yet and are projections based on the number of two-year renewals in those months in 2015.

Roughly three quarters, however, will not benefit at all from the six-month delay. Some 595,000 permits will expire on the same schedule as if the program ended today. After March 5, there will be roughly 32,686 expirations per month, 7,543 per week, or 1,078 per day. Figure 2 breaks the numbers down on a six-month basis. Roughly 60 percent of DACA recipients (318,015) will retain permits until at least 2019. About 11 percent will retain their permits until 2020.

Figure 2: DACA Projected Expirations Starting September 5, 2017

Source: See Figure 1.
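The per-week and per-day figures quoted above follow directly from the monthly rate. A quick back-of-envelope sketch (assuming months are converted to weeks as ×12/52 and weeks to days as ÷7, which is my reading of how the figures were derived, not an official USCIS calculation) reproduces them:

```python
# Back-of-envelope check of the post-March-5 expiration rates quoted above.
# Assumed conversions: monthly -> weekly via x12/52, weekly -> daily via /7.
monthly_expirations = 32_686
weekly_expirations = monthly_expirations * 12 / 52   # 7,543.0
daily_expirations = weekly_expirations / 7           # ~1,077.6, rounds to 1,078

print(round(weekly_expirations), round(daily_expirations))
```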

The administration is adamant that DACA beneficiaries will not become priorities for removal after their permits expire. Without employment authorization, however, they will be forced into the black market for jobs, and according to the administration’s new expansive priorities, anyone who works under a false name or borrowed identity is a priority for removal. Already ICE has an indiscriminate policy of taking “enforcement action” against any unauthorized immigrant it “encounters,” which would include Dreamers it finds during raids targeting their parents. To prevent this from happening, Congress must find a legislative solution.

An argument will soon erupt over the fate of the Affordable Care Act’s mandate that requires health insurance to cover oral contraceptives at no direct out-of-pocket cost to the patient. This mandate was never explicitly listed in the ACA as one of the “essential health benefits.” Its inclusion was made at the discretion of the HHS Secretary. According to press reports, the Trump Administration is about to relax the requirement.

The arguments made in favor of loosening the mandate mostly revolve around the employers’ right to freedom of conscience. Meanwhile, some advocacy groups fear this will mean many women won’t be able to obtain affordable oral contraceptives. As I recently wrote in Morning Consult, it can help temper the concerns of all parties to the argument if the Food and Drug Administration (FDA) followed the recommendation that the American Congress of Obstetricians and Gynecologists (ACOG) has been making for decades, and reclassify oral contraceptives from prescription to over the counter (OTC). Birth control pills are already available over the counter in 102 countries.

But health insurance plans usually only cover prescription drugs. Making birth control pills available OTC means that women can purchase them directly, like they purchase aspirin, ibuprofen, antihistamines and antacids.

In my Morning Consult piece, I point out that the average cash price of prescription birth control pills runs from $20 to $50 per month but can range as low as $9 per month. Various community health centers across the US, as well as Planned Parenthood, offer free oral contraceptives for those unable to afford them. If birth control pills are made available over the counter prices are likely to drop. That’s because oral contraceptives will be liberated from the third-party spending trap.

As is the case with doctor, hospital, and lab bills, the presence of a third-party payer results in higher prices for prescription drugs than would otherwise be the case if a pharmacy was dealing directly with the patient, not the third-party middleman. That’s because the third party severs the link between the consumer and the producer of goods and services that allows market forces to work. Doctors, hospitals, labs, and pharmacies negotiate with a deeper-pocketed third party, not the consumer, to arrive at a price.

For example, in a March 2017 Consumer Reports interview, University of Minnesota Professor of Pharmacoeconomics Stephen Schondelmeyer stated: “…Retail chains such as CVS and Rite Aid aren’t concerned about consumers who pay out of pocket…What does concern them is how much third parties, such as insurance companies, will pay, usually either a negotiated reimbursement fee or the list price—whichever is lower. So retailers intentionally set the list price very high so that there’s no chance it could undercut what they get paid by insurers.”

And here is Douglas Jennings, PharmD, writing in the June 2016 Pharmacy Times: “Cash prices started to dip below co-pays a decade ago, when several stores started offering dozens of generic drugs for as little as $4 per prescription. But, as co-pays increase and high-deductible insurance plans become more common, patients may be overpaying for their prescriptions when using insurance.”

When the FDA reclassifies a prescription drug to over the counter it extracts it from the third-party spending trap. As consumers play their part, market forces often bring prices down and new competitors frequently enter the market. Moving to OTC also adds further savings—like the saving of time taken away from work or other activities to sit in a doctor’s waiting room for a prescription; and the saving of money from not having to pay for the office visit. It also means an improvement in privacy and comfort, as many women prefer to purchase this kind of product discreetly and avoid unwanted discussion or counseling, even if offered by a health care provider. Privacy appears to be a major issue for women purchasing emergency oral contraception, which has been available over the counter since 2006. There is even convincing evidence that reclassifying prescription birth control pills to over the counter increases their continuous use by women.

Some health care practitioners are leery of making oral contraceptives OTC, citing safety concerns. ACOG’s Committee on Gynecologic Practice believes any safety concerns are not great enough to deter reclassification of oral contraceptives to OTC. I further explore that matter here.

Reclassifying birth control pills to over the counter can save women money in the long run. It also adds convenience, choice, and privacy. Medical science supports the move. The only remaining obstacles are political. 

In 2013, Defense Distributed uploaded computer aided design (CAD) files and made them freely available to the public. With the proper equipment and knowledge, someone could use the CAD files to create a 3D-printed gun. The government quickly ordered the files removed (under threat of severe penalties) because it determined that the files ran afoul of the International Traffic in Arms Regulations (ITAR), which prevent people from communicating to foreign persons “technical data” about constructing certain arms. In other words, ITAR is one of the laws that makes it illegal to tell foreign persons how to make things like an Apache helicopter. Not all arms are listed, and ITAR doesn’t restrict technical data that is merely “general scientific, mathematical, or engineering principles commonly taught in school.”

There are many manuals and documents out there that tell people how to make dangerous things. The Anarchist Cookbook is perhaps the most famous. Many people are surprised that the government lets The Anarchist Cookbook exist, but it is not the government that lets it exist—they’d probably rather it didn’t—it’s the First Amendment. The First Amendment protects communication about making dangerous things, from bombs to napalm, and it certainly protects communication on how to fix guns or even construct them from scratch. If the government is going to restrict such information it must do so narrowly and with good reason, while understanding that there is a difference between instructions for a plutonium trigger for a hydrogen bomb and CAD files for a plastic, one-shot pistol. And if the government goes too far, people should be allowed to challenge it.

ITAR’s regulation of communicating technical data is clearly a content-based prior restraint of speech—it restrains speech before it is published based on the content of the communication—which is one of the most egregious ways to violate the First Amendment. While it is certainly proper for the government to prohibit telling the North Koreans how to make a nuclear bomb, Defense Distributed believes that the government went too far in extending ITAR to cover CAD files for small, 3D-printed guns. Moreover, by uploading the files to the internet—which, yes, foreign persons can access—Defense Distributed believed it wasn’t communicating them to foreigners in the manner contemplated by the statute. In such situations, when a plaintiff believes a law has reached too far and is impinging on their freedom of speech, it is usually proper to seek a preliminary injunction that keeps the government from shutting the speech down until a court has determined the merits of the ultimate issue. But the district court improperly denied the request based on an incorrect approach to preliminary injunction analysis and a wholly inappropriate assessment of the relevant interests at stake. The Fifth Circuit upheld the lower court’s decision and denied a request for rehearing en banc, which is when all the judges on a circuit hear a case rather than the usual three-judge panel. Defense Distributed has now filed a petition for a writ of certiorari asking the Supreme Court to take up their case and protect their First Amendment rights. Cato has filed a supporting brief urging the Court to accept and summarily reverse the decision below. We argue that such disposition is required when a facially content-based prior restraint escapes review just because the government says “national security.”

In essence, the lower courts refused to look at one of the most important considerations in the preliminary injunction analysis—likelihood of success on the merits—because the government intoned the words “national security,” to which the judges said “okay, that clearly outweighs any interests Defense Distributed has.” Defense Distributed certainly has an interest—the rights protected by the First Amendment of the U.S. Constitution—and it was frankly ridiculous that the lower courts so casually let the government’s interests trump the First Amendment.

In a First Amendment case, in order to determine whether an injunction is in the public interest, the merits of a plaintiff’s claim must be evaluated before proceeding to weigh the equities. This is not like an injunction to prevent your neighbor from spilling pollutants on your lawn. In such a case the court would weigh the various interests involved in deciding whether to issue an injunction, but no one’s constitutional rights would be part of the consideration. Constitutional rights get special weight if it is likely they’re being violated. That’s why they’re in the Constitution. Nevertheless, the lower courts said that Defense Distributed failed to show how granting an injunction to enjoin an unlawful restriction of speech was in the public interest. But enforcing the Constitution is always in the public interest, and the government cannot be harmed if its own unconstitutional activity is enjoined. If it seems likely the government is violating the First Amendment, then that strongly indicates that the plaintiff’s equities outweigh the government’s because the First Amendment is being violated.

By concluding that the district court had not abused its discretion by failing to consider the merits of a First Amendment plaintiff’s claims, the Fifth Circuit fundamentally altered the preliminary injunction standard, laid out by the Supreme Court, which should be applied to the most egregious abridgments of speech. Dissenting from denial of rehearing en banc, Judge Elrod put it succinctly: “The panel opinion’s flawed preliminary injunction analysis permits perhaps the most egregious deprivation of First Amendment rights possible: a content-based prior restraint.”

While some people are frightened by the prospect of 3D-printed guns—including, perhaps, some of the judges in the lower courts here—that is no reason to allow the government to shut down speech about such guns without ensuring that the restrictions comport with the strictures of the First Amendment. Even if you don’t like guns, this case should concern you because the government should not be allowed to say “national security” in order to shut down speech it doesn’t like—“first they came for the guns, and I didn’t speak up because I didn’t own guns; then they came for the…” The implications for free speech rights could be catastrophic if Defense Distributed fails to prevail in this case.


During the Hurricane Harvey disaster, many reporters and commentators seemed to assume that federal agencies had to take the lead in rescuing the city. And even before water levels had receded in Houston, federal politicians were promising billions of dollars in aid.

However, the large-scale federal intervention in natural disasters we saw during and after Katrina, Sandy, and Harvey is a relatively recent phenomenon. Prior to recent decades, the private sector handled much of the nation’s disaster response and rebuilding. The U.S. military and National Guard have long played important roles during natural disasters, but private charitable groups and businesses have been central to disaster response and rebuilding throughout U.S. history.

In this essay, I discuss the responses to various natural disasters in the past. The 1906 San Francisco earthquake and fire and the 1913 Great Easter Flood illustrate the impressive outpouring of private-sector support during past calamities.

1906 San Francisco Earthquake and Fire

San Francisco was struck by a massive earthquake and fire in 1906 that destroyed 80 percent of the city and killed about 3,000 people. At least 225,000 people out of about 400,000 in the city were left homeless, and 28,000 buildings were wrecked.

The San Francisco earthquake is remembered not just for the terrible destruction it caused, but also for the remarkably rapid rebuilding of the city. More than 200,000 residents initially left the city, but the population recovered to pre-quake levels within just three years, and residents quickly rebuilt about 20,000 buildings.

The private sector response to the disaster was extremely impressive. Voluntary aid poured in from around the country. John D. Rockefeller, Andrew Carnegie, and W.W. Astor, for example, each donated $100,000. Charitable groups, including the Salvation Army and the Red Cross, played a large role in relief efforts. The health care and home-products company Johnson and Johnson quickly loaded rail cars full of donated medical supplies and sent them to San Francisco.

The insurance industry was crucial to the rebuilding. About 90 percent of San Francisco residents had fire insurance from more than 100 different companies. The companies ended up paying out a massive $225 million in claims, which was equal to what the entire U.S. insurance industry had earned in profits in the prior four decades. Insurance payouts totaled about 90 percent of what was owed, as only a relatively small number of companies failed.

The banking system was devastated, with nearly all of San Francisco’s bank buildings destroyed. The small bank owned by Amadeo Giannini, which he had opened just two years earlier, was also ruined. But Giannini was able to rescue his gold and securities, and the next day he opened for business on a wharf on San Francisco Bay. His rapid response and willingness to provide loans to all types of people after the disaster helped him gain the respect of the city. His bank would eventually grow to be one of the largest in the nation, the Bank of America.

Another impressive story is that of the Southern Pacific Railroad, which immediately swung into action and provided free evacuation for more than 200,000 city residents to anywhere in the country. Within five days of the earthquake, the company had filled 5,783 rail cars with passengers leaving the city. Southern Pacific president Edward Harriman made disaster response the highest priority of his rail network. Only one day after the earthquake, the first of his rail cars full of emergency supplies left Omaha for San Francisco. Harriman personally donated $200,000 to relief efforts.

What about the government response to the San Francisco conflagration? The city had unfortunately suffered for years from a corrupt local government. The good news was that in the immediate aftermath of the earthquake, leading citizens formed essentially a new city government called the “Committee of 50,” which was credited with a very organized and effective disaster response. For its part, Congress appropriated just $2.5 million for relief to San Francisco, or about $50 million in today’s dollars.

The main federal organization that responded was the U.S. Army, which moved quickly to take control of the city and provide water, food, tents, and other relief items. Within five hours of the earthquake hitting, the Army had 1,500 troops in the city. Some of the actions of the Army were controversial, but the swift response by the commander of the nearby Presidio base is an example of how local resources and local decisionmaking are crucial in the aftermath of disasters.

1913 Great Easter Flood

The Great Easter Flood in 1913 ravaged a huge area in one of the most widespread and damaging disasters ever to strike the United States. High winds and massive flooding caused destruction and more than 1,000 deaths across 14 states from Vermont to Alabama. The U.S. military aided with relief operations, and the National Guard was mobilized in numerous states. Americans responded with huge contributions to the Red Cross and other charitable organizations aiding victims.

Ohio was the hardest hit state, and Dayton probably the hardest hit city. The city was built on a flood plain, so when its levee system collapsed, the flooding was disastrous. Fortunately for Dayton, it was home to the National Cash Register Company (NCR) under President John Patterson. Seeing the disaster that was about to unfold, Patterson seized the initiative, and NCR became the central funder and organizer of relief in the city.

NCR built 300 boats to rescue flood victims, organized search teams, and provided meals and shelter for thousands of people. On its peak day, NCR’s kitchens provided meals for 83,000 flood victims. NCR headquarters also became the base of operations for the Red Cross and Ohio National Guard.

John Patterson was an interesting leader. He instituted innovative and enlightened management practices, such as providing a wide range of recreation and medical amenities for workers. But he was also an aggressive businessman, and he and other NCR executives were found guilty of violating federal antitrust laws just weeks before the flood, although this was reversed on appeal. NCR’s leaders apparently saw a chance to redeem themselves in the eyes of the community, and their remarkable efforts to save their city during the flood gained them national praise.

Historian Trudy Bell has written in detail about the 1913 disaster. One of her findings is that there were widespread refusals of aid by affected individuals and communities, apparently because of cultural norms at the time regarding personal pride and the belief in standing on one’s own feet. Some people and communities even gave back unused amounts of aid that they had received after the disaster. These days, sadly, the situation is the reverse: there is usually a large amount of fraud in relief programs in the wake of disasters.

For more on the proper federal role in natural disasters, see


Like the Genesis tales of the Great Flood and Sodom and Gomorrah, several news outlets are blaming Hurricane Harvey’s destruction on its victims’ moral failings. If not for Houstonians’ impious laissez-faire attitude toward zoning and building codes, the storm would have been less damaging—so say Newsweek and the Washington Post, anyway.

Vanessa Brown Calder debunks the zoning claim here. But what of building codes? At first blush, it seems reasonable to argue that government should have required new construction to be more resilient to severe weather. Problem is, the empirical evidence is unclear on whether supposedly storm-toughened building codes make much difference.

The late University of Georgia economist Carolyn Dehring spent much of her career examining the effects of coastal areas’ storm-protection regulations. In this Regulation article with the University of Wisconsin’s Martin Halek, she specifically looked at the effectiveness of federal requirements for building codes in hurricane zones. The authors found that houses built prior to the federally mandated codes were more resilient in hurricanes than houses built under the codes. The codes apparently encouraged a “race to the bottom” in which builders focused on meeting the government requirements rather than nature’s destructiveness.

On the other hand, a new working paper examines the effects of Florida’s 2001 statewide building code that was drafted in response to the damages from 1992’s Hurricane Andrew. The authors find that wind damages to homes built under the code were much less than damages to homes built prior to the code. More important, the savings from the reduced damages more than offset the increased construction costs under the code. Peter Van Doren summarizes this paper here.

So, for now, it seems uncertain whether government building codes provide effective protection against extreme weather—especially weather that drops 50+ inches of rain in a few days’ time.

“The sense of responsibility is always strongest, in proportion as it is undivided,” Hamilton argued in Federalist 74; for that reason, the broad constitutional power to pardon was best vested in “a single man,” the president, who could be expected to wield it with “scrupulousness and caution.” Things aren’t exactly working out according to plan so far in the Trump presidency.

Trump’s first presidential pardon, accomplished with an end-run around his own Justice Department, went to former Maricopa County, AZ sheriff Joe Arpaio, an unrepentant, serial abuser of power. If, as Hamilton suggested, “humanity and good policy” are the ends the pardon power is supposed to serve, its exercise in this case served neither. 

“All agree the U. S. President has the complete power to pardon,” Trump tweeted in July. Subject to a few caveats (has to be a federal offense, no pardons for future acts, doesn’t apply “in Cases of Impeachment”), “complete power” is pretty close to the truth. The legal scholar Sanford Levinson has called the pardon power “Perhaps the most truly monarchical aspect of the presidency.”

The Framers were aware that so broad a prerogative might be abused, and delegates to the Constitutional Convention and Ratification debates repeatedly identified impeachment as the essential check. At the Philadelphia Convention, when Edmund Randolph moved to remove “cases of treason” from the power’s scope, James Wilson retorted that “Pardon is necessary for cases of treason, and is best placed in the hands of the Executive. If he be himself a party to the guilt he can be impeached and prosecuted.” At the Pennsylvania ratifying convention later that year, one delegate addressed the objection that the president could pardon treasonous coconspirators by noting that “the President of the United States may be impeached before the Senate, and punished for his crimes.”

In Virginia, another observed that because “the President himself is personally amenable for his mal-administration, the power of impeachment must be a sufficient check on the President’s power of pardoning before conviction.” And when George Mason warned that the president “may frequently pardon crimes which were advised by himself,” James Madison replied that

“There is one security in this case to which gentlemen may not have adverted: if the President be connected, in any suspicious manner, with any person, and there be grounds to believe he will shelter him, the House of Representatives can impeach him; [and] they can remove him if found guilty.”

Still, no president has ever been impeached for misusing the power. Only one ever came anywhere close. In the House of Representatives’ first, failed attempt to impeach President Andrew Johnson, in 1867, one of the charges specified that Johnson had “abused the pardoning power conferred on him by the Constitution, to the great detriment of the public, in releasing… the most active and formidable of the leaders of the rebellion.” The resolution failed 108-57—Johnson wouldn’t be impeached until the following year, after he defied the Tenure of Office Act by firing Secretary of War Edwin Stanton.

It’s not that the clemency power hasn’t been abused: in his book on the subject, American University’s Jeffrey Crouch notes an increasing trend toward self-interested pardons that shield presidents from legal trouble, or carry political and financial gain. But “each of these clemency decisions was made by a president protected from electoral consequences,” late in his second term.

Trump has already broken that pattern, consequences be damned. As Cato adjunct scholar Josh Blackman notes, the Arpaio pardon “came less than eight months into this presidency, and it went to a sheriff who consistently flouted court orders. This is the beginning, not the end.” Indeed, the Washington Post reported in July that, under pressure of the special counsel’s Russia investigation, Trump had “asked his advisers about his power to pardon aides, family members and even himself.” Trump denounced the report as “FAKE NEWS,” but special counsel Robert Mueller appears to believe it’s a live possibility, judging by his recent maneuvers.

Standing alone, the Arpaio pardon could likely never muster the majority necessary to sustain an impeachment. But it may not stand alone. And, as the Nixon-era House Judiciary Committee staff argued in its comprehensive report on the “Constitutional Grounds for Presidential Impeachment,” “the cause for the removal of a President may be based on his entire course of conduct in office.”

Roger Pielke makes good points about climate change and hurricanes in the Wall Street Journal today, but his ideas for federal policy action are off-base.

Pielke proposes that we “enhance federal capacity” for natural disasters and create a National Disaster Review Board. In my 2014 study on FEMA, I argue the opposite—that enlarging the federal role would be counterproductive.

Federalism is supposed to undergird America’s system of handling disasters, particularly natural disasters. State, local, and private organizations should play the dominant role. Looking at American history, many disasters have generated large outpourings of aid by individuals, businesses, and charities, and we see a similar wonderful response to Hurricane Harvey.

Pielke says that the federal government “plays a crucial role in supporting states and local communities to prepare for, respond to and recover from disasters.” But the federal role in preparation and recovery is not crucial, as it mainly involves handing out cash. The states have their own cash, and my study describes the disadvantages of pushing costs onto federal taxpayers.

As for disaster response, federal involvement is appropriate when agencies have unique capabilities to offer, such as the Coast Guard’s search and rescue capabilities. But it is mainly state, local, and private entities that own the needed resources and are on the scene to assist in emergencies. The states, for example, employ 1.3 million people in police and fire departments. As for the private sector, the 9/11 Commission report noted, “85 percent of our nation’s critical infrastructure is controlled not by governments but by the private sector, [so] private-sector civilians are likely to be the first responders in any future catastrophes.”

When the states need additional resources after a disaster, they can and do rely on help from other states under mutual aid agreements. Similarly, electric utilities have longstanding agreements with each other to share resources when disaster strikes. Such horizontal support makes more sense than top-down interventions from Washington.

Federal intervention can impede disaster response and rebuilding because of the extra paperwork involved and the added complexity of decisionmaking. A growing federal role may also induce states to neglect their own disaster preparedness because officials assume Uncle Sam will bail them out when disaster hits.

Growing federal intervention has been sadly crowding out state, local, and private roles in handling natural disasters. We should reverse course and only task the federal government with those roles that are unique and truly beyond the capabilities of other entities in society.

For further reading, see “The Federal Emergency Management Agency: Floods, Failures, and Federalism”.

“Devastating storm may ultimately boost US GDP” read the headline on CNBC’s Market Insider. Much like the debate around price gouging (addressed here), every storm or natural disaster seems to bring with it a discussion of whether physical destruction, or at least the aftermath and reconstruction arising from it, is somehow “good for the economy.”

Now, the particular CNBC headline above may prove right or wrong for a given period; as I’ll explain below, that is purely an empirical matter. But examining the economic impact of hurricanes such as Harvey through movements in short-term GDP alone gives a very partial account of such a storm’s legacy.

First, the obvious point. Hurricane Harvey has destroyed property and hence destroyed wealth. This is unequivocally bad for the economy. Studies from Goldman Sachs and others estimate the economic destruction at between $30 billion and $40 billion.

Yet wealth is a “stock” concept, whereas measured GDP is a flow of activity in a given period. In principle, then, it is less obvious what the short-, medium-, and long-term impacts of a hurricane on measured GDP (a measure of economic value-added at market prices) would be.

The very short-term impact of such a storm on GDP is almost certain to be negative. Hurricanes destroy productive capacity, disabling factories and, in the case of Texas, curtailing the oil refining sector. This acts as a negative supply-shock to both the local and national economy, with higher gasoline prices (an input to travel and production processes) filtering through to raise input prices and hence production costs.

Of course, as the effects of the storm fade, rebuilding activity and construction will begin. This will count towards GDP, and since much of GDP relates to voluntary activity with real value-added, so it should. People will genuinely want to replace destroyed or damaged homes, cars, fences, and the like, and this is valuable economic activity. In Houston this reconstruction and redevelopment is likely to be quicker than it would be in many other areas, due to the less restrictive zoning laws and hence lower transaction costs associated with new building.

But will all this increase GDP overall, relative to a counter-factual in which the storm had not taken place?

As my colleague David Boaz has previously written, to look at this activity alone would be to fall for what Frederic Bastiat described as the “broken window fallacy.” If a window is smashed, one observes the window being repaired and the associated spin-off ventures as economic activity. What is unseen is the economic activity that would have existed had people been free to save for a college degree, buy a new suit, or invest in a start-up with the resources they must now put towards rebuilding their houses or replacing their cars.

Whether the counterfactual path of GDP would be exactly the same or even higher than the post-storm economy is difficult to say. If there are unused resources, then it is theoretically possible that observed GDP could be temporarily higher for a period than it would otherwise have been absent a hurricane whilst construction is taking place. But that does not appear to be the empirical record. In fact, the same CNBC article shows that the New York economy performed substantially worse in the 12 quarters after Hurricane Sandy relative to the national economy.

And in the longer-term the consequences are certainly negative, whether represented in measured GDP or some broader conception of economic welfare (see this study, for example). Even if there were a case when GDP did look higher than expected for a while, this would not mean we were “richer” or somehow better off in an economic sense. 

We would have expended a great deal of resources just to get back to where we were before the storm, and lost out on much other valuable activity that we would have preferred to engage in. Circumstances would have altered our choices, and to the extent we shifted our spending toward repair goods and services, this would show up as economic activity. But overall, whatever measured GDP shows in the next few months, we would still be economically poorer for the destruction wrought by the storm, due to the opportunities and decisions we were unable to pursue as a consequence of it.

In fact, in the very long run the only way a super-damaging storm such as this could improve the economic performance of an economy is if its effects led us to reassess damaging policies such as subsidizing flood insurance.

President Trump is reportedly considering pulling the plug on the Deferred Action for Childhood Arrivals (DACA) program, which allows about 800,000 immigrants who came to the U.S. as children to live and work here lawfully. If the president does decide to end the program, it will impose a massive cost on employers who currently employ these workers. The cost of recruiting and hiring new employees is expensive. Here are the facts:

  • DACA rescission will cost employers $6.3 billion in employee turnover costs, including recruiting, hiring, and training 720,000 new employees.
  • Every week for the next two years, U.S. employers will have to terminate 6,914 employees who currently participate in DACA at a weekly cost of $61 million.
  • Ending DACA would be the equivalent of 31 “major” regulations.

DACA recipients receive employment authorization documents (EADs). It is not illegal to work without authorization, but it is illegal for employers to hire someone who lacks authorization. Thus, DACA EADs essentially grant permission to employers to hire DACA beneficiaries for a given period—in this case, two years—without fear of employer sanctions for hiring an unauthorized worker. Note that the law prohibits employers from discriminating against foreign-born applicants purely because they have temporary authorization. Thus, if President Trump rescinds DACA, employers are the ones who will have to actually implement the policy by policing their workforce and firing DACA recipients. DACA repeal’s regulatory compliance burden will fall directly on American employers.

To estimate these costs, I reviewed 11 studies of the cost of turnover to employers. These studies included a wide variety of occupations with radically different wage levels. The most important component of turnover cost is the leaving employee’s wage, which is the marginal value of the worker’s production. The Table below displays the cost as a percent of annual wages.

As the Table shows, the estimated turnover cost ranges from 12 percent to 37 percent of annual wages, with a median of 25 percent (the average is 26 percent). This estimate is slightly lower than a U.S. Department of Labor estimate that turnover costs an employer 30 percent of the leaving employee’s salary, and slightly higher than a 2012 literature survey by Boushey and Glynn, which found a median turnover cost of 21 percent of an employee’s annual salary.

Table: Costs of Turnover in Various Occupations

Study | Industry | Percent of Annual Wages | Average Cost | Hourly Wage
Seninger, et al. (2002) | Supported living | – | – | –
Larson, et al. (2004) | Direct support professionals | – | – | –
Patterson, et al. (2010) | Emergency medical | – | – | –
Hinkin & Tracey (2000) | Hotels | – | – | –
Frank (2000) | Grocery stores | – | – | –
Dube, et al. (2010) | Various | – | – | –
Jones (1990) | Nurses | – | – | –
Barnes, et al. (2007) | Teachers | – | – | –
Appelbaum & Milkman (2006) | Various | – | – | –
Wise (1990) | Nurses | – | – | –
Milanowski & Odden (2007) | Teachers | – | – | –
Median | All above | 25 | – | –

Sources: See links in table and table text

An August 2017 Center for American Progress survey of DACA recipients found that their wages had risen to $17.46 hourly (or $34,920 annually). It also found that 91 percent of DACA recipients have jobs. According to my projections based on U.S. Citizenship and Immigration Services data, 790,148 people have DACA or will have DACA by September 1, 2017. Thus, 719,035 immigrants are earning $25.1 billion per year. If the federal government forces employers to fire all DACA recipients, it will cost employers $6.3 billion.
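Those headline numbers follow from simple arithmetic. Here is a minimal sketch using only the figures cited above (the CAP survey wage and employment rate, the USCIS-based population projection, and the 25 percent median turnover cost from the Table):

```python
# Back-of-envelope DACA turnover-cost estimate, using figures cited in the text.
daca_population = 790_148        # projected DACA holders by September 1, 2017
employment_rate = 0.91           # share with jobs (CAP survey)
annual_wage = 17.46 * 2_000      # $17.46/hour at ~2,000 hours/year = $34,920

workers = daca_population * employment_rate    # ~719,035 employed recipients
payroll = workers * annual_wage                # ~$25.1 billion per year
turnover_cost = payroll * 0.25                 # 25% median turnover cost

weeks = 104                                    # two-year unwinding period
print(round(workers))                          # ~719,035
print(round(payroll / 1e9, 1))                 # ~25.1 ($ billions)
print(round(turnover_cost / 1e9, 1))           # ~6.3 ($ billions)
print(round(workers / weeks))                  # ~6,914 terminations per week
```

The implied weekly cost, turnover_cost / 104, comes out near $60 million, consistent with the roughly $61 million per week figure cited above.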

The fact that some employers will receive advance notice of the expiration of their employees’ work authorization could mitigate these costs, but according to these studies, the primary cost associated with turnover is the lower productivity of new hires. Additionally, because DACA recipients’ wages have grown 69 percent over the last five years, it is likely that those DACA participants whose cancellations occur in 2018 and 2019 will have higher wages than today’s participants. Finally, DACA participants’ employment rate has also risen year after year—four percentage points since 2016—and older participants have a higher employment rate. This again indicates that the number of firings, and thus the costs, could be higher than this projection estimates.

The costs will likely not be imposed all at once as the program will slowly unwind over a two-year period. I previously estimated the quarterly rate of expirations, based on U.S. Citizenship and Immigration Services data, which can give us an estimate of how a DACA cancellation would distribute the costs over time. Every week U.S. employers will have to terminate 6,914 DACA employees at a weekly cost of $61 million.

Figure: DACA Employee Terminations and DACA Rescission Turnover Costs


Source: See Table 1 and Cato Institute. (Note: 886,000 people have received DACA at some point, but many have had their renewals rejected or have failed to renew for other reasons; about 720,000 had jobs in 2017.)

For context, the Congressional Review Act (CRA) regards any new administrative rule as a “major rule” if it will have a likely annual impact of more than $100 million. The CRA requires major rules to go through a 60-day notice and public comment period and gives Congress the opportunity to review and reject them. Because DACA was not created through a rule-making process, it likely does not require this process to be terminated, but its rescission would still impose $3.2 billion in annual costs. Thus, ending DACA would be the equivalent of more than 30 major regulations.
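The “more than 30 major regulations” equivalence is simply the annualized cost divided by the statutory threshold; a quick check using the $6.3 billion two-year estimate discussed above:

```python
# Express annualized DACA-rescission costs in units of the CRA "major rule" threshold.
total_cost = 6.3e9             # two-year turnover cost estimated in the text
annual_cost = total_cost / 2   # ~$3.15 billion/year (rounded to $3.2B in the text)
threshold = 100e6              # CRA major rule: >$100 million likely annual impact

print(round(annual_cost / threshold, 1))  # 31.5 -> "more than 30 major regulations"
```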

President Trump is considering DACA rescission only under the threat of a lawsuit that claims DACA was unconstitutionally implemented. If that claim is valid, Congress should immediately act to pass legislation to extend employment authorization and legal status for these young immigrant workers. It should not choose to impose massive costs on employers and immigrants.

An urban fairytale is emerging in the aftermath of Hurricane Harvey. Commentators claim that because Houston lacks a traditional zoning code, Houstonians recklessly built a city with too many roads, buildings, and parking lots, and these impervious surfaces collected water rather than absorbing it, exacerbating flooding. They argue Houston doesn’t have enough absorbent surfaces with trees, grasses, and soil because of the lack of zoning.

The facts don’t support this story. It’s true Houston is the only major U.S. city without conventional Euclidean “separate-all-land uses” zoning. But this has not reduced absorbent surface cover relative to other cities with more aggressive regulation.

In fact, a map of Houston indicates the city has a low level of impervious surface cover across more than 90% of the city. Most of the remaining 10% falls under the “average” impervious/pervious surface ratio category, and hardly any falls under the “high levels of pavement” category.


Source: Houston-Galveston Area Council Planning & Development Department

Of course, a more important question is how Houston stacks up against similarly sized cities that have comprehensive zoning regulation. On CNN, the chair of Georgia Tech’s School of Regional and City Planning argued that “when you have a less dense urban fabric, you’re going to have more impervious surface and you’re going to have more runoff … That’s clearly an important consideration in Houston.”

But on the contrary, Houston has substantially less impervious surface cover from buildings, roads, and parking lots (39.2%) and substantially more absorbent surface cover of trees, grasses, and soils (60.6%) than similarly populated American cities.

City | Impervious Surface Cover (buildings, roads, parking lots, sidewalks) | Absorbent Surface Cover (vegetation, soil)
Houston | 39.2% | 60.6%
New York | 61.1% | 38.8%
– | 58.5% | 41.3%
Los Angeles | 54.0% | 45.8%
New Orleans | 41.7% | 57.8%

Data Source: USDA Forest Service, 2012

Still, is it possible that urban planners would have preserved even more green space? That seems extremely unlikely: New Orleans and New York City experience hurricane and flood risks, and both have more impervious surface cover than Houston despite conventional planning and zoning. 

And it’s not as if Houston lacks planners or land use regulation. The rules Houston planners currently do enforce, such as parking requirements, minimum lot sizes, and paved easement requirements, drive impervious surface cover up, not down.

Houston should do exactly the opposite of what commentators suggest in order to reduce impervious surface cover: it should eliminate existing parking and paved easement requirements, not add to them. Conventionally zoned cities would benefit from the same approach.

The idea that more zoning is a solution to Houston’s Harvey problem is wishful thinking. 

Since its publication in 1963, Milton Friedman and Anna Jacobson Schwartz’s A Monetary History of the United States has stood as a monumental scholarly accomplishment. Even critics of Friedman’s Monetarism have admired the work’s meticulous historical research, particularly its reconstruction of a data series on several measures of the U.S. money stock going back to 1867. Almost all subsequent researchers have accepted and employed Friedman and Schwartz’s numbers.

Recently, however, some have implied that Friedman and Schwartz fudged their data or cooked their numbers. For example, Joe Salerno, in a Mises Institute post entitled “Milton Friedman Debunked — by Econometricians,” explicitly accuses Friedman and Schwartz of “fudging” their data. Another post at the Institute for New Economic Thinking blog makes a similar claim with the title “Did Milton Friedman Cook His Numbers?”

These charges are very serious, but on close examination they turn out to be based entirely on misrepresentations or misunderstandings of some relatively minor and arcane criticisms of the analysis of the velocity of money that Friedman and Schwartz published in a volume that appeared in 1982, nearly two decades after their Monetary History came out. These criticisms, whether valid or not, do not challenge the accuracy of the numbers in the Monetary History; in fact, they rely on those very numbers.

Both posts are reporting on an article at VOX, CEPR’s Policy Portal, entitled “Milton Friedman and Data Adjustment.” Written by Neil Ericsson, David Hendry, and Stedman Hood, the article is a summary of their longer chapter in Milton Friedman: Contributions to Economics and Public Policy, edited by Robert A. Cord and J. Daniel Hammond (Oxford University Press, 2016). What follows is an extended discussion of Ericsson, Hendry, and Hood’s criticisms, but those interested in only a summary can just read the next and final sections.

Money Stock Figures

Although Salerno’s description of the VOX article is generally accurate, his post overall, as well as the title of the Institute for New Economic Thinking post, leaves the unwary reader with an exaggerated and misleading impression.

To begin with, the criticisms raised by Ericsson, Hendry, and Hood do not even apply to the data series on the U.S. money stock that Friedman and Schwartz presented in their classic Monetary History (1963) or in their subsequent, massive Monetary Statistics of the United States: Estimates, Sources, and Methods (1970). Indeed, the three econometricians in their VOX article are not challenging Friedman and Schwartz’s money stock figures at all.

Instead, they are challenging the analysis of money’s velocity that Friedman and Schwartz made in the later, much neglected, 1982 volume, Monetary Trends in the United States and the United Kingdom: Their Relation to Income, Prices, and Interest Rates, 1867-1975. Moreover, their criticisms of Friedman and Schwartz’s velocity analysis are based entirely on Friedman and Schwartz’s own money stock figures.

Velocity Analysis

The 1982 volume that Ericsson, Hendry, and Hood are critiquing was initially supposed to be the first of two volumes looking at the relationship between money and other economic variables. Monetary Trends, as the first of these, looks at those relationships over the long run. The second volume was going to take up the relationship between money and other economic variables over the business cycle, but by the time Monetary Trends appeared, Friedman and Schwartz had unfortunately abandoned this final volume.

Monetary Trends, in its analysis of the factors affecting the demand for money, employs the well-known equation of exchange: MV = Py. The variable y captures the impact of real income or output on the real demand for money (M/P), whereas velocity (V) is a residual variable. This makes the equation of exchange an identity, true by definition, with velocity reflecting all factors other than real income that affect money demand and the price level. If velocity falls, ceteris paribus, the demand for money rises, and vice versa. Much of the early debate between Keynesians and Monetarists was about the behavior of velocity, with Friedman long contending that it had a predictable relationship with a small number of other variables.
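Since MV = Py is an identity, velocity is simply computed as a residual, V = Py/M. A minimal illustration of the mechanics (the nominal income figure here is hypothetical, chosen only to make the numbers round; the money stock is Friedman and Schwartz’s unadjusted 1867 M2, discussed below):

```python
# Equation of exchange: M * V = P * y  =>  V = (P * y) / M (velocity as residual).
M = 1.28e9    # money stock: Friedman & Schwartz's unadjusted 1867 M2
Py = 6.4e9    # nominal income P*y -- hypothetical, for illustration only

V = Py / M
print(V)      # 5.0

# A fall in velocity, holding P*y constant, implies higher money demand:
# halving V doubles the quantity of money demanded.
assert Py / (V / 2) == 2 * M
```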

The Ericsson, Hendry, and Hood online critique of Friedman and Schwartz displays a striking chart showing the level of velocity in the U.S. from 1872 to 1975. It contains two sets of lines, one showing a much more drastic decline in velocity over the long run than the other.

Figure 1: Unadjusted and adjusted US annual and phase-average observations for velocity. Data source: VOX CEPR; Friedman and Schwartz (1982)


The set with the least drastic decline is from Monetary Trends (p. 186) and, as the VOX authors report, results from Friedman and Schwartz “adjusting the US money stock series by a linear trend of 2.5% per annum for observations before 1903, with no trend adjustment thereafter.” In other words, Friedman and Schwartz derived a more stable linear trend for velocity by adjusting upward their money stock figures between 1867 and 1903 and then re-calculating velocity. The VOX post continues: “while the unadjusted money stock for 1867 is $1.28 billion, its adjusted value is $3.15 billion: 246% of its original value.”
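On one plausible reading of that adjustment, compounding the 2.5 percent trend over the 36 years from 1867 to 1903 (an assumption on my part; Friedman and Schwartz detail their actual procedure in Monetary Trends, pp. 216-18), the implied factor lands close to the reported 246 percent:

```python
# Trend-adjust the 1867 money stock upward by 2.5% per year through 1903,
# assuming the trend compounds (a simplifying assumption, not F&S's exact method).
unadjusted_1867 = 1.28e9          # unadjusted 1867 M2, in dollars
years = 1903 - 1867               # 36 years of trend adjustment

factor = 1.025 ** years
adjusted = unadjusted_1867 * factor

print(round(factor, 2))           # ~2.43, near the 2.46 (246%) the VOX authors report
print(round(adjusted / 1e9, 2))   # ~3.11, near the $3.15 billion adjusted value
```

The small residual gap between ~2.43 and the reported 2.46 suggests Friedman and Schwartz applied the trend in a slightly different form than simple compounding.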

Although at first glance, this may seem like a drastic and perhaps unwarranted adjustment, it becomes far less so in the face of several observations.

Relying on Friedman and Schwartz to Criticize Friedman and Schwartz

The raw, unadjusted number of $1.28 billion for the total money stock in 1867 also appears in Friedman and Schwartz’s Monetary Trends (p. 122). It is exactly the same number in their earlier Monetary Statistics (p. 61) and (with trivial differences arising from timing) approximately the same number reported in Monetary History (p. 704; $1.31 billion). All these estimates are for the old M2, and it is only in Monetary Trends that Friedman and Schwartz make the adjustment criticized in the VOX post.

In other words, the authors of the VOX post, as they freely admit, had to rely on Friedman and Schwartz’s own money stock estimates to create the unadjusted estimates of velocity that appear in their graph. Moreover, reading from their graph, their unadjusted series for velocity is roughly identical to the velocity series used in Friedman and Schwartz’s older Monetary History (p. 774).

There was absolutely nothing deceptive about the Friedman and Schwartz velocity adjustments in Monetary Trends. Friedman and Schwartz describe in lengthy detail (pp. 216-218) how they made the adjustments and, contrary to the impression left by the VOX authors, provide a persuasive rationale for doing so. Thus, the real question is not whether Friedman and Schwartz’s data was faulty. It is whether their subsequent analysis was correct.

Financial Sophistication

What was the rationale for these adjustments? Friedman and Schwartz noticed a huge difference in the long-run velocity trend for M2 between the U.K. and U.S. during the second half of 19th century, whereas the long run trend was similar in both countries thereafter. They attributed this difference to greater financial sophistication in the U.K. relative to the U.S. as of 1867, with the U.S. converging to the U.K. level over the next half century. As Friedman and Schwartz put it, “the more rapid spread of financial institutions in the United States than in the United Kingdom after 1880 was probably the main reason for the near elimination by 1903 of the wide difference in velocity that prevailed in 1876-77” (p. 216). In other words, increased financial sophistication in the U.S. was generating an increased demand for money as measured by M2, with a concomitant fall in velocity.

Friedman and Schwartz are not the only scholars who have noticed this difference or have tried to explain it. The alternative but complementary explanation that I find the most plausible is provided by Richard Timberlake in chapter 9 of his Monetary Policy in the United States: An Intellectual and Institutional History (1993). He attributes the high velocity (or low demand) for M2 at the end of the Civil War to the shortages of official currency in small denominations that plagued the U.S. economy at that time.

He points out that “denominational hindrances encouraged swaps, barter, payment in kind, and use of unaccounted and unaccountable moneys to a much greater degree than … can be measured” (p. 125). Consequently, the measured money stock was not capturing all transactions in the U.S. To give just two examples, privately issued “shinplasters” used as money were quite common during this period, and the prevalence of sharecropping, essentially a barter transaction, throughout the southern states was in large part the result of postwar debilities in the South’s financial system. Notice that Timberlake’s alternate explanation gives an even more straightforward justification for adjusting the money stock upward.

Isolating a Bone of Contention

Whatever the explanation for the drastic decline in velocity in the U.S., why did Friedman and Schwartz adjust their series to eliminate this effect?

The primary answer is that they wanted to isolate the impact of interest rates on velocity. After all, this was a major bone of contention between Monetarists and Keynesians. Even today, many macro models assume, implicitly or explicitly, that interest rates are the dominant, if not the only, factor affecting velocity.

This is not to claim that Friedman and Schwartz’s adjustment is necessarily the best approach to this question. But there is certainly nothing illegitimate or misleading about what they did.

Cyclical Variability

Ericsson, Hendry, and Hood also offer some sophisticated econometric criticisms that apply to Friedman and Schwartz’s entire velocity series through 1975 and not just to the period prior to 1902. For instance, the VOX authors point out that Friedman and Schwartz “also employed data adjustment to remove cyclical variability.”

Removing cyclical variability was fully justified, given that Friedman and Schwartz were interested in long-run relationships in this volume and had intended to take up cyclical relationships in the never-completed final volume. But the VOX authors show in their graph that the statistical technique employed failed to “fully eliminate the data’s short-run variability [emphasis mine].”

In this case, their almost contradictory complaint is that Friedman and Schwartz did not adjust their data series enough.

Minor Minutiae

More significantly, Ericsson, Hendry, and Hood also argue that these data adjustments undermined Friedman and Schwartz’s “empirical model constancy and goodness of fit.” I don’t think it is necessary to get into these arcane statistical quibbles. It is sufficient to point out that econometric techniques have seen major innovations since Monetary Trends was published in 1982, and that debates about its statistical methodology date back to the volume’s initial publication. For example, Thomas Mayer raised such issues as early as his generally favorable review of Monetary Trends in the December 1982 issue of the Journal of Economic Literature.

Ultimately, the VOX article’s criticism boils down to the claim that a “random walk model” provides a better fit. Given that Friedman and Schwartz never contended that velocity was perfectly constant, notice how the argument has now been reduced to relatively minor minutiae.

Moot Modeling

Finally, there is an important sense in which these technical questions about how to precisely model velocity have become somewhat moot. Everyone recognizes that the financial deregulation of the 1980s caused the velocity of money to behave in unpredictable ways. Friedman himself conceded as much in, among other places, a Wall Street Journal article on August 19, 2003. This led to the widespread abandonment of monetary targeting by central banks (to the extent that they ever actually practiced it). Indeed, it is one of the reasons Friedman altered his preferred monetary policy from increasing M2 at some fixed rate to instead freezing the monetary base while permitting banks to issue banknotes.


To sum up, claiming that Friedman and Schwartz “fudged their data” or “cooked their numbers” is a gross misrepresentation. Even critics of their theoretical conclusions rely on their raw numbers. Friedman and Schwartz can be challenged on minor econometric issues, particularly their analysis of velocity’s behavior. But the erratic behavior of velocity beginning in the 1980s has diminished the relevance of even these questions. In contrast, Friedman and Schwartz’s estimates of the money stock in the U.S. (prior to the Fed’s reporting those numbers) not only remain the best we are likely ever to have but set a standard for historical and statistical research that has rarely, if ever, been matched.

[Cross-posted from]

Nobel laureate James Buchanan has been in the news lately, especially because of a book that seeks to link his 7,000 pages of economic writing to both Dixiecrat segregationists and Charles Koch’s secret plan “to radically alter our government in ways that will be devastating to millions of people.” The thesis of Democracy in Chains by Nancy MacLean is that public choice economics is a radical plan to “shackle the people’s power,” “to put democracy in chains.” Oddly, she claims (without evidence) that he set out on this project because he resented the Supreme Court’s decision in Brown v. Board of Education – which of course used “undemocratic” means to overturn the democratic decisions of legislatures in various states.

Buchanan certainly was concerned with how to achieve justice, efficiency, and “prevention of discrimination against minorities” in the context of majority rule. Throughout his work he explored how to design constitutional rules to bring about optimal outcomes, including a balanced budget requirement, supermajorities, and constitutional protection of individual rights. He worried that both majorities and legislatures would be short-sighted, economically ignorant or inefficient, and indifferent to the imposition of burdens on others.

And today a Washington Post column by Dana Milbank illustrates one of the big problems that Buchanan sought to solve: the temptation of legislatures to spend money with little regard for what two of his students called “deficits, debt, and debasement.” Looking outward from Hurricane Harvey to the upcoming congressional session, Milbank wrings his hands:

Harvey makes landfall in Washington as soon as next week, when President Trump is expected to ask for what could be tens of billions of dollars in storm relief. And paying for storm recovery — probably with few offsetting spending cuts — will be but the first blow to fiscal discipline in what looks to be a particularly active, and calamitous, spending season.

It’s not just disaster relief. The Pentagon is hoping for tens of billions of additional dollars. And Republicans may pivot from “tax reform” to mere tax cuts. It’s easier just to spend money and cut taxes than to reform the flood insurance program, make the tax system more efficient, and focus military spending on actual defense needs, much less to think about the national debt and the next generation.

Trump, who came to power promising to eliminate the $20 trillion debt, or at least to cut it in half, is poised to oversee an exponential increase in that debt. Republicans, who came to power with demands that Washington tackle the debt problem, could wind up doing at least as much damage to the nation’s finances as the Democrats did….

If the red ink rises according to worst-case forecasts, “we’re talking additions to the debt in the trillions,” Maya MacGuineas, president of the Committee for a Responsible Federal Budget, tells me. All from actions to be taken in the next few months. “It turns out the Republican-run Congress is not willing to make the hard choices,” she says. “It is a fiscal free-lunch mentality on all sides.”

We’ve heard a lot over the past few years about a “dysfunctional” Congress. Many critics mean that Congress doesn’t pass enough laws. But this is the real dysfunction: a Congress that spends money with little thought to the future. The national debt doubled under President George W. Bush and doubled again under President Barack Obama. President Trump and the Republican Congress are just getting started, but the prospects don’t look good.

Milbank, MacGuineas, and others who worry about the “fiscal free-lunch mentality” should read some Buchanan. As one scholar put it in a reflection on Buchanan’s work, “Perhaps legislatures would do better if supermajorities were required whenever transfers to current recipients will burden future generations.” Perhaps so. And perhaps constitutional guarantees of individual rights, judicial protection of those rights, and limits on the legislature’s free-lunch mentality are all part of what Buchanan called the constitutional political economy of a free society.