Cato Op-Eds

Individual Liberty, Free Markets, and Peace

When, in my days as a professor, I occasionally assigned term papers, I used to smile when students wondered out loud how they could possibly come up with enough to say to fill a whole 20 (or 15, or 5, or whatever) pages.  After all, the problem, once you got to be where I was, wasn’t having too much space: it was not having space enough to say what needed saying.  It was all I could do sometimes to squeeze my ideas into the 25 double-spaced typescript page-limit that prevailed among scholarly economics journals.

These days I’m no longer compelled to wrestle with academic journal editors, thank goodness.  But I still face strict length limits now and then, like the one I’m confronting as I finally get around to writing my long-overdue review of Roger Lowenstein’s America’s Bank: The Epic Struggle to Create the Federal Reserve. I’m supposed to limit the review to 1000 words.  Yet I could easily write 20,000 words about that book.  In fact I have written 20,000, and then some, in the shape of a Cato Policy Analysis called “New York’s Bank: the National Monetary Commission and the Founding of the Fed.” Our respective titles give you some idea of where Lowenstein and I differ.  Anyway, the PA isn’t ready yet.  When it is, probably about a month from now, I will let you know.

Despite that PA's length, it also leaves much unsaid.  It says nothing at all, for example, about the seemingly innocuous sentence in chapter five of America's Bank that reads: "The Bank of France was chartered in 1800 as an antidote to the financial turmoil of the French Revolution."

It is but a passing statement, in a work concerning the founding, not of the Bank of France, but of the Fed; and it is of no importance to that work’s thesis.  And yet…and yet that sentence says plenty, for it represents as well as any sentence in Lowenstein’s book its author’s inclination — a very common one, to be sure — to view even the earliest central banks as sources of financial order and stability, despite the fact that doing so often means overlooking oodles of inconvenient facts.

In the case of the Bank of France, many of these inconvenient facts are, ironically enough, unabashedly set down in one of the volumes published by the National Monetary Commission — volumes that supposedly informed the Aldrich Plan and, indirectly, the Federal Reserve Act.  These volumes generally display a bias in favor of central banking, as their sponsors intended them to do. Were one looking for a rose-colored portrayal of the Bank of France’s origins, one might expect to find it here.

Nevertheless, according to this particular volume’s author,  André Liesse, the Bank of France was conceived, not as a remedy for France’s post-revolutionary financial turmoil, but as one for Napoleon’s fiscal difficulties.  What’s more, far from having represented an improvement upon the status quo ante, its establishment marked the end of a remarkable though short-lived period of relative financial stability.

The disastrous failure, in 1721, of John Law's Banque Royale was, according to Liesse, entirely attributable to that bank's involvement with the financial operations of the French government, and to its having secured, in return for that involvement, an exclusive right to issue banknotes.  No wonder the bank's failure resulted in an edict establishing complete freedom of note issue.  Still, it was not until 1776 that the scars left by its collapse had healed sufficiently for another bank of issue to be established.

The new bank, the first Caisse d’Escompte, also ran into trouble as a result of “repeated state loans and government interference,” eventually leading to its becoming “nothing more than a branch of the public administration of finance.”  The episode led the great economist (and Inspector General to Louis XVI) Du Pont de Nemours, in Liesse’s words,

to defend the true principles of banks of issue, asserting that  a bank without a privilege, not involved in business relations with a debt-ridden and needy State, without the prerogative of forced currency, can not do otherwise than pay in coin on demand the value of every note issued.

In 1793 what remained of the Caisse d’Escompte succumbed to the financial “paroxysms” of the Revolution.  Once again, according to Liesse, an institution that “would have been of real service to commerce if it had not allowed itself to become the State’s banker” instead found itself “lending money to the State without sufficient security, and receiving nothing in return but privileges which could not fail to be disastrous to it.”

But other banks of issue founded during the first Caisse d'Escompte's lifetime managed to keep going despite the Revolutionary turmoil, including the "dangerous and ruinous flood of assignats" that was eventually to result in hyperinflation.  Their owners and managers, mostly Protestants whose families had fled France after the Edict of Nantes was revoked, had managed, "even in dealing with Napoleon," to avoid being "cajoled into granting the State favors of credit which would cost them dear."  Their banks would soon be joined by other private institutions, including the Caisse des Comptes Courants, a central clearinghouse and bankers' bank (it issued only very large denomination notes, meant for interbank settlements) established in Paris in 1796, and the Caisse d'Escompte du Commerce (or Caisse du Commerce, for short) — organized in 1797.

Thus began a brief but at least relatively glorious free banking interval.[1] “It can not be denied,” Liesse observes,

that after the terrible years of the Revolution, in the midst of the confusion and anarchy of the Directory, these credit establishments, in spite of difficult conditions, survived, maintained their credit, and were of real services to the commerce and bankers of Paris.  They gave not the slightest occasion for complaint or interference on the part of the public authorities.  Without any sort of privilege, having no connection with the Government, they were able to meet their obligations even in the midst of serious panics.

In short, freedom in banking worked just as Du Pont de Nemours said it would.[2]

Yet this success was not allowed to last.  As Charles Conant puts it (History of Modern Banks of Issue, p. 44), the established banks

were doing an active and safe banking business when a new turn was given to the economic history of France by the coup d’état of the Eighteenth Brumaire (November 9, 1799), which made Napoleon Bonaparte First Consul and virtually supreme ruler of France.

Napoleon did not hesitate, despite the lessons of the past, to make plans for yet another government-controlled and privileged bank of issue, the Bank of France.   For Liesse this development, far from seeming perfectly sensible (as modern central bank enthusiasts would have it), was astonishing.  How could it happen, he wonders,

that this most satisfactory state of freedom came to an end and that in the course of a few years there was organized in Paris a bank with the exclusive privilege of issue?  Is it due to a series of natural causes?  No.  Not one of the Caisses just described had occasioned disaster or invited suppression.[3]  The new state of things came from the idea of credit which existed in the mind of General Bonaparte, as well as from his tendency to centralize everything, and because the government at the moment was in great need of money.

The “idea of credit which existed in the mind of General Bonaparte” boiled down to this: that he might have all the credit he wanted, if only he could establish a bank he could control, and award it a monopoly of currency extending throughout all of France.

At the very least, Napoleon could have a lot more credit for a lot less than France's then-existing banks were either willing, or even able, to supply.  According to notes left by a member of Napoleon's Council of State, to which Professor Liesse refers, the First Consul had "determined to lower" the interest rate at which the government could borrow to something less than the rate of 3 percent per month banks were then demanding, thanks to the government's poor credit.  Napoleon "could not get what he wanted from the free banks.  On the other hand, he felt that the Treasury needed money, and wanted to have under his hand an establishment which he could compel to meet his wishes. …It would certainly seem that here originated the idea of creating a new bank of issue."

Given the circumstances, raising capital for the new bank was no easy proposition.  To address that difficulty, the government first persuaded the Caisse des Comptes Courants to merge with it.  To make further shares attractive, the new bank secured the privilege of holding various government deposits.  Still, fewer than 7,500 of the requisite 15,000 shares (half of the Bank's stipulated capital stock) were taken, with Bonaparte's friends and relatives having pride of place among the subscribers.  (Napoleon himself was the Bank's first subscriber, with 30 shares.)  Further privileges were duly awarded it, until they sufficed to allow the remaining shares to be disposed of.

At first, the Bank of France had to compete with other banks of issue, including the Caisse du Commerce.  When attempts to persuade the older bank to merge with the Bank of France failed, and especially after the Caisse du Commerce refused the government a loan it sought, Napoleon resorted to coercion.  The details remain obscure.  According to one account (admittedly in an English newspaper) at first the Bank of France, with Napoleon’s support, tried to bring its rival to submission by staging note-redemption raids.  When that strategy failed, Napoleon simply had some of his troops shut the bank down.  What’s certain is that the law of 24 Germinal, An XI (April 14, 1803), against which the Caisse du Commerce protested vehemently, awarded the Bank of France the exclusive right to issue banknotes in Paris, compelling all other banks of issue to surrender their assets to it.

“The Bank of France was chartered in 1800 as an antidote to the financial turmoil of the French Revolution.” It is one of those sentences that exposes a dominating — but distorted — worldview no less effectively than it obscures aspects of reality itself.


[1]This was, in fact, the second such interval in French banking history.  The first was a still briefer episode, lasting only from 1790 to 1793, during which hundreds of "caisses patriotiques" flourished.  According to Eugene White, that episode also "provides evidence of the success of free banking."  It ended when the government closed down the caisses in November 1793.  See Eugene N. White, "Free Banking during the French Revolution," Explorations in Economic History 27 (1990): 251-276.

[2]For a more recent, but equally favorable, assessment of France’s 1796-1803 free banking episode, see Philippe Nataf, “Free banking in France,” in Kevin Dowd, ed., The Experience of Free Banking  (London: Routledge, 1992), pp. 123-36.

[3]Nor did suppressing inflation have anything to do with it.  The raging inflation brought about by the Revolutionary government’s overissuance of assignats had come to a sudden end when, on July 16, 1796, the National Assembly decreed that people might conduct business using whatever money they chose, while allowing mandates, which had superseded assignats, to be accepted at their current value in specie.  From that moment on, France was effectively back on a metallic standard.


What a day yesterday! First, our National Oceanic and Atmospheric Administration (NOAA) announced that 2015 was the warmest year in the thermometric record, and then the Washington Post's Jason Samenow published an op-ed titled "Global warming in 2015 made weather more extreme and it's likely to get worse."

Let's put NOAA's claim in perspective.  According to Samenow, 2015 didn't just break the previous record, set in 2014; it "smashed" it (by 0.16°C).  But 2015 marked the height of a very large El Niño, a quasi-periodic warming of tropical Pacific waters that is known to kite global average surface temperature for a year or so. The last big one was in 1998.  It, too, set the then-record for warmest surface temperature, coming in 0.12°C above the previous year, which, like 2014, was the standing record at the time.

So what happened in 2015 is what is supposed to happen when an El Niño is superimposed upon a warm period or at the end year of a modest warming trend.  Had it not been a record-smasher, there would have had to be some extraneous reason why, such as a big volcano (which is why 1983 wasn't more of a record-setter).

El Niño warms up surface temperatures, but the excess heat takes 3 to 6 months or so to diffuse into the middle troposphere, around 16,000 feet up.  Consequently it won’t fully appear in the satellite or weather balloon data, which record  temperatures in that layer, until this year.  So a peek at the satellite (and weather balloon data from the same layer) will show 1) just how much of 2015’s warmth is because of El Niño, and 2) just how bad the match is between what we’re observing and the temperatures predicted by the current (failing) family of global climate models.

On December 8, University of Alabama’s John Christy showed just that comparison to the Senate Subcommittee on Space, Science, and Competitiveness.  It included data through November, so it was a pretty valid record for 2015 (Figure 1).

Figure 1. Comparison of the temperatures in the middle troposphere as projected by the average of a collection of climate models (red) and several different observed datasets (blue and green). Note that these are not the surface temperatures, but five-year moving averages of the temperatures in the lower atmosphere.

El Niño’s warmth occurs because it suppresses the massive upwelling of cold water that usually occurs along South America’s equatorial coast.  When it goes away, there’s a surfeit of cold water that comes to the surface, and global average temperatures drop.  1999’s surface temperature readings were 0.19°C below 1998’s.  In other words, the cooling, called La Niña, was larger than the El Niño warming the year before.  This is often the case.

So 2016's surface temperatures are likely to be down quite a bit from 2015 if La Niña conditions occur for much of this year.  Current forecasts suggest that La Niña may begin this summer, which would spread the cooling between 2016 and 2017.

The bottom line is this:  No El Niño, and the big spike of 2015 doesn’t happen.

Now on to Samenow. He’s a terrific weather forecaster, and he runs the Post’s very popular Capital Weather Gang web site.  He used to work for the EPA, where he was an author of the “Technical Support Document” for their infamous finding of “endangerment” from carbon dioxide, which is the only legal excuse President Obama has for his onslaught of expensive and climatically inconsequential restrictions of fossil fuel-based energy.  I’m sure he’s aware of a simple real-world test of the “weather more extreme” meme.  University of Colorado’s Roger Pielke, Jr. tweeted it out on January 20 (Figure 2), with the text “Unreported. Unspeakable. Uncomfortable. Unacceptable.  But there it is.”


Figure 2. Global weather-related disaster losses as a proportion of global GDP, 1990-2015.

It’s been a busy day on the incomplete-reporting-of-climate front, even as some computer models are painting an all-time record snowfall for Washington DC tomorrow.  Jason Samenow and the Capital Weather Gang aren’t forecasting nearly that amount because they believe the model predictions are too extreme.  The same logic ought to apply to the obviously “too-extreme” climate models as well, shouldn’t it?

In response to the wild popularity of the Netflix series, Making a Murderer, the Washington Post is running a series this week about the presumption of innocence for those readers who are hungry to learn more about the American criminal justice system. The Post invited me to submit a piece for the series and it is now available online.  Here’s an excerpt:

Casual observers of our legal system will sometimes say that they would never plead guilty to a crime if they were innocent. An easy claim to make — but it is another thing when your freedom is actually on the line.

Imagine learning that the government has a "witness" who is willing to tell lies about you in court. And then your own attorney tells you that his best advice is for you to go into court, say you're guilty and accept one year in prison instead of risking a 10-year prison sentence should the jury believe the lying witness. It's an awful predicament for innocent people who get swept up in criminal cases. As William Young, then chief judge of the U.S. District Court in Boston, observed in a 2004 opinion: "The focus of our entire criminal justice system has shifted away from trials and juries and adjudication to a massive system of sentence bargaining that is heavily rigged against the accused."

Everyone is generally aware that some criminal cases go to trial and others are resolved by plea bargains, but most folks have no idea how lopsided the American criminal justice system has become.  Only about five percent of the cases go to trial.  One law professor says that finding a jury trial is about as likely as finding a hippopotamus in New York City.  It’s not impossible…you just have to know where to look.

For related Cato work, go here, here and here.

At their core, trade agreements like the Trans-Pacific Partnership improve U.S. and foreign trade policy by reducing artificial barriers to mutually beneficial exchange.  That is, the TPP will bring us freer trade.  Unfortunately, the TPP will bring us other things as well.

For decades, trade agreements have included rules that are not strictly related to trade.  One area where this has become especially controversial is intellectual property.  A number of prominent U.S. industries, particularly movie studios and record labels, benefit immensely from strong copyright protection in the United States and want that same protection afforded in foreign markets.

But including copyright rules in trade agreements is problematic for a number of reasons.  For one thing, when U.S. negotiators press for these rules, they have to compromise on other demands.  The TPP will, for example, require Canada to extend copyright duration from its current length of 50 years after an author’s death to 70 years after death.  But it won’t require Canada to dismantle its protectionist supply management system that keeps out U.S. dairy products to the detriment of Canadian consumers. 

Removing protectionist trade barriers brings broad benefits to businesses and consumers throughout the region.  The same cannot be said for lengthening copyright terms from really long to really, really long.

Despite the inappropriateness of linking copyright policy and trade liberalization, intellectual property rules are likely to be a part of trade agreements into the foreseeable future.  It’s important, therefore, that we get the right rules in place.

Historically, trade agreements have been used to strengthen and solidify only the restrictive parts of copyright law.  Like past agreements, the TPP sets minimum standards of protection, meaning that reform can only go in one direction. 

But a well-functioning copyright system depends on limitations and exceptions that respect the rights of users to interact with copyrighted material in new, creative, and useful ways.  Without a proper balance between creator and user rights, copyright monopolies can actually prevent innovation by restricting the use of old things and inviting government intervention in the market.  Vital components of a just and innovation-maximizing copyright system include limited duration, reasonable penalties for infringement, and exceptions to liability—like fair use.

This is where the TPP makes an important and beneficial contribution to copyright rules in trade agreements.  The TPP is the first trade agreement to include a provision calling on members to achieve an appropriate balance in their copyright laws through limitations and exceptions to exclusive rights. 

I explain what’s at stake, and why this new provision is such a positive contribution to the TPP, in the new Cato video:

Let’s dive into the issue a little bit deeper. Here’s Article 18.66 of the TPP:

Each Party shall endeavour to achieve an appropriate balance in its copyright and related rights system, among other things by means of limitations or exceptions … giving due consideration to legitimate purposes such as, but not limited to: criticism; comment; news reporting; teaching, scholarship, research, and other similar purposes.

This provision is the first of its kind in imposing a positive obligation to employ limitations and exceptions and to achieve "balance."  It's not as strongly worded ("shall endeavour") as many of the more restrictive provisions in the TPP, but it is broadly worded enough to form a sound foundation for further progress.

One very important thing about this provision is that it fits well with current U.S. law, which, unlike the law of many other countries, applies fair use as a very flexible concept.  Some specific activities (such as parody, commentary, or criticism) have been consistently recognized as meeting the factors that courts weigh to determine when fair use applies, but there is not a closed list of acceptable uses.

This means that new types of uses not foreseen by a court or legislature are more likely to be permitted in the United States than in foreign countries that have a less flexible approach to fair use.  There are many reasons why the world’s Internet companies are based in the United States, and a robust fair use exception to copyright is one of them.

The TPP’s provision on copyright balance is an important step toward recognizing that more intellectual property protection isn’t always better.  Together with rules protecting the free flow of data, the copyright balance provision does more to promote economic activity and innovative growth than tweaking foreign copyright regulations.  It also furthers the core value of the TPP as a mechanism to reduce the negative impact of rent-seeking and to liberalize markets.

Until now, conventional wisdom held that candidates of both major parties had to back ethanol welfare to win the Iowa caucuses. Just as cotton was king in the antebellum South, corn, in the form of ethanol, is king in Iowa.

Most of today's candidates have fallen into line. However, Sen. Ted Cruz has broken ranks to criticize farmers' welfare. He holds a narrow polling lead over Donald Trump heading into the caucuses. (Sen. Rand Paul also rejects the conventional wisdom, but he remains far back in the race.)

Cruz’s political strength has dismayed ethanol makers. The group America’s Renewable Future, whose state director is the governor’s son, is deploying 22 staffers in the presidential campaign. The lobby doesn’t want to look like a paper tiger.

Ethanol subsidies once included a high tariff and generous tax credits, both of which expired at the end of 2011. However, the Renewable Fuel Standard, which requires blending ethanol with gasoline, operates as a huge industry subsidy. Robert Bryce of the Manhattan Institute figured that the requirement has cost drivers more than $10 billion since 2007.

Ethanol is a political creation. Three decades ago, the Agriculture Department admitted that ethanol could not survive “without massive new government assistance,” which “cannot be justified on economic grounds.” What other reason could there be for an ethanol dole?

Petroleum is the most cost-effective energy source available, particularly for transportation. Ethanol has only about two-thirds of the energy content of gasoline. Given the energy necessary to produce ethanol—fuel tractors, make fertilizer, and distill alcohol, for instance—ethanol actually may consume more in fossil fuels than the energy it yields.

The ethanol lobby claims using this inferior fuel nevertheless promotes “energy independence.” However, ending imports wouldn’t insulate the United States from the impact of disruptions in a global market. Moreover, the price of this energy “insurance” is wildly excessive.

Bryce figured that “Since 1982, on average ethanol has cost 2.4 times more than an energy-equivalent amount of gasoline.” In some years, the former was three times as expensive.

Last year, Terry Dinan of the Congressional Budget Office told House members that "the marginal cost of reducing gasoline consumption by one gallon through substituting corn ethanol" could run as much as $3.20. With the United States likely to become a net oil exporter, the call for energy independence makes ever less sense.

As I point out on American Spectator online, “by creating an artificial energy demand for corn—40 percent of the existing supply goes for ethanol—Uncle Sam also is raising food prices. This obviously makes it harder for poor people to feed themselves, and raises costs for those seeking to help them.”

Nor does ethanol welfare yield an environmental benefit, as claimed. In fact, ethanol is bad for the planet.

Two years ago, the Intergovernmental Panel on Climate Change warned that “Increasing bioenergy crop cultivation poses risks to ecosystems and biodiversity.” Scientific American’s David Biello pointed to fertilizer run-off from cornfields which created “vast oxygen-deprived ‘dead zones’ in the Gulf of Mexico.”

Jerry Taylor and Peter Van Doren, formerly and currently at Cato, respectively, also cited research which, after taking "evaporative emissions" into account, determined that ethanol mixed with gasoline "actually increases emissions of total hydrocarbons, non-methane organic compounds and volatile toxins."  Moreover, additional land used for corn production means "more water pollution, less water for other uses, and more ecosystems destruction."

What of combatting climate change? One study estimated a drop of between one and five percent in greenhouse emissions from the blended fuel, which makes the cost per ton of emissions avoided extraordinarily high.

Other reviews don’t even find this reduction. Princeton’s Timothy Searchinger told Biello, “We can’t get to a result with corn ethanol where we can generate greenhouse gas benefits.” Similarly, warned Dinan, “replacing gasoline with corn ethanol has only limited potential for reducing emissions (and some studies indicate that it could increase emissions).”

Ethanol is a bad deal by any standard. Whomever Iowans support for president, King Ethanol deserves a bout of regicide.

The Congressional Budget Office (CBO) released new projections showing the debt disaster being manufactured in Washington. Federal borrowing from global capital markets is expected to soar from an outstanding $14 trillion this year to $24 trillion by 2026 under the baseline.

Every dollar of debt is an added burden on future taxpayers. Debt-fueled spending is both unfair and damaging. Members of Congress know how credit cards work, so either they are in denial or they lack the guts to make tough decisions. Either way, they are failing the nation.

The situation is actually worse than the baseline shows, and members should know that too. The baseline assumes that the economy chugs along with modest growth and avoids a recession, but we've had a recession every five and a half years, on average, since World War II. Deficits soar during recessions, pushing up debt and debt as a share of gross domestic product (GDP).

Also, the CBO baseline assumes that Congress sticks to current discretionary budget caps. Discretionary spending is projected to fall from 6.5 percent of GDP this year to 5.2 percent by 2026. But what if Congress continues its spendthrift ways, keeps breaking the budget caps, and spending remains at 6.5 percent? That change alone would push up 2026 debt by roughly $2.7 trillion, including added interest costs.
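The compounding behind a figure like that $2.7 trillion can be sketched with a back-of-the-envelope calculation. Everything below is an illustrative assumption of mine (the starting GDP, growth rate, interest rate, and a baseline share that declines linearly), not CBO's actual model:

```python
def extra_debt(gdp0, growth, years, share_hi, share_lo, rate):
    """Extra debt accumulated if spending stays at share_hi of GDP
    while the baseline share declines linearly toward share_lo."""
    debt, gdp = 0.0, gdp0
    for t in range(years):
        gdp *= 1 + growth
        baseline_share = share_hi + (share_lo - share_hi) * (t + 1) / years
        # each year: interest accrues on prior extra debt, plus the new spending gap
        debt = debt * (1 + rate) + (share_hi - baseline_share) * gdp
    return debt

# Assumed inputs: $19 trillion GDP growing 4% nominally, 3% interest, ten years.
print(extra_debt(19e12, 0.04, 10, 0.065, 0.052, 0.03) / 1e12)  # roughly $2 trillion
```

Even with these crude assumptions, a 1.3-percentage-point-of-GDP spending gap compounds to roughly $2 trillion over the decade, the same order of magnitude as the figure above; CBO's larger number reflects its own GDP and interest-rate paths.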

Here is another scary factor that is rarely discussed: The CBO budget report has included the chart below for many years. The current debt load is the highest ever in peacetime but is still below the World War II peak. I suspect people looking at the chart are comforted by the rapid decline in the debt following the war—if we belt-tightened back then, we can do so again.

But little of the post-war debt decline was due to belt-tightening. In fact, the government has steadily increased spending and run deficits in 85 percent of the years since 1930.

The post-WWII debt decline was partly due to strong economic growth, but mainly due to the government shafting bondholders with unexpected inflation. Inflation reduces the real value of outstanding debt, and thus imposes losses on creditors. The ability to cut real debt by inflation depends on the debt’s maturity and whether creditors expect inflation. If the average maturity is long, the government can reduce the real debt load with unexpected inflation.

That is what happened following WWII. As the chart shows, the debt-to-GDP ratio was cut almost in half between 1946 and 1955. Economists Joshua Aizenman and Nancy Marion found that nearly all that drop was due to the combination of inflation and long maturities on the debt at the time. In subsequent decades, maturities fell, so inflation resulted in less debt shrinkage.
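The mechanism can be illustrated with a toy bond calculation (the numbers below are my assumptions for illustration, not Aizenman and Marion's data): a long-maturity bond priced when creditors expected stable prices loses a large share of its real value when inflation surprises them.

```python
def real_value(face, coupon, years, real_rate, inflation):
    """Real present value of a bond paying an annual nominal coupon,
    deflating each nominal cash flow by realized inflation before discounting."""
    pv = 0.0
    for t in range(1, years + 1):
        cash = face * coupon + (face if t == years else 0.0)
        pv += cash / ((1 + inflation) ** t * (1 + real_rate) ** t)
    return pv

# A 10-year bond with a 2% coupon, priced assuming zero inflation:
print(real_value(100, 0.02, 10, 0.02, 0.0))   # about 100: trades at par
# The same bond when inflation unexpectedly runs at 5% per year:
print(real_value(100, 0.02, 10, 0.02, 0.05))  # about 64: a roughly 36% real loss
```

Short maturities blunt the effect: creditors roll over into higher nominal rates as they come to expect the inflation, which is why the same trick shrank the debt less in subsequent decades.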

Here’s the upshot: It is unlikely that the government would be able to shaft bondholders like that again, nor would that be a good idea. So Congress and the next president need to make major budget reforms to avert the government’s disastrous debt pile up. They are going to have to cut spending, and that is actually a good thing because spending cuts would reduce economic distortions and help spur more growth.

Just before last weekend's Democratic debate, Bernie Sanders finally released the long-awaited plan for his health care proposal, which would fundamentally transform the health care sector by replacing all health insurance with a single program administered by the federal government. Michael Cannon has ably explained how Obamacare was really the big loser of the back and forth at the debate, but it's worth looking further into Sanders' outline of a plan. At just seven pages of text, it leaves most of the major questions unanswered. It does list a bevy of tax increases that it says will finance the needed $1.38 trillion in new federal spending each year, although even this is a significant underestimate. Bernie's plan promises universal coverage and savings for families and businesses without delving into the necessary, and often messy, trade-offs.

While he calls the plan ‘Medicare for all,’ the plan would actually cover even more services than Medicare and do away with the program’s cost-sharing components like co-payments, deductibles, and premiums. Giving people comprehensive coverage of “the entire continuum” at little cost to themselves would seem to significantly increase utilization, which would strain the system’s capacity while also rendering it unaffordable. The plan makes no effort to answer fundamentally important questions: How would the new system determine payment rates for health care providers? What, if anything, would it do to try to rein in the growth of health care costs?

The "Getting Health Care Spending Under Control" section of the plan is one paragraph long and offers little beyond assurances that "creating a single public insurance system will go a long way towards getting health care spending under control" and that under Berniecare "government will finally be able to stand up to drug companies."  This is hardly a comprehensive plan, and it gives the impression that in this system cost-control measures would somehow be painless.

The topline estimate in the plan is $1.38 trillion per year, and a memo provided by Professor Gerald Friedman gives some additional details, claiming the plan would need $13.77 trillion in new public spending from 2017-2026. As Avik Roy has pointed out, that estimate fails to account for the trillions in government spending at the state and local level that would have to be replaced under Berniecare. The figure below gives some sense of how much spending comes from sources other than the federal government. Even after Obamacare significantly expanded the federal government's role in health care, spending by the federal government only accounts for about 28 percent of national health expenditures. Under Berniecare, the federal government would be the only payer, and thus would have to replace much of this other spending, including the 17 percent that is currently financed at the state and local level.

Estimated National Health Expenditures by Source of Funds, 2016


Source: Centers for Medicare & Medicaid Services, “NHE Projections 2014-2024.”

Professor Friedman’s memo also reduces Berniecare’s tab by simply assuming massive savings that would reduce costs below the current baseline: $4 trillion from the assumption that the recent slowdown in health care cost growth will continue, and another $6.31 trillion in additional savings from moving to single-payer. Given the lack of detail in the cost savings section, these savings are far from certain, and it seems more plausible that comprehensive coverage with minimal cost sharing would increase utilization and expenditures. Berniecare would likely cost significantly more than the roughly $13.8 trillion cited in the outline.

The plan gets into more detail in an area where Sanders is more comfortable, proposing significant tax increases on businesses and high earners: a 6.2 percent income-based health premium paid by employers, a 2.2 percent income-based premium on households, higher marginal income tax rates for high earners, and taxes on capital gains and dividends. These are just some of the increases he would propose. As discussed above, even this raft of significant tax increases would only finance part of the increased federal spending needed, so Berniecare would still add trillions to the debt.

This plan, like most proposals put out by presidential candidates, oversells the benefits and avoids wading into details that would reveal any trade-offs or costs, leaving the most difficult questions unanswered. Even under optimistic assumptions, the trillions in new taxes would not come close to covering the increased expenditures. If a plan promises to cover everything for everyone without any kind of trade-off, it probably can’t.

In this morning’s 6-3 ruling in Campbell-Ewald v. Gomez, the Supreme Court, with Justice Ruth Bader Ginsburg writing for the majority, ruled that a defendant’s offer to settle in full the claim of a named plaintiff did not in itself suffice to moot the claim and thus (as the defendant had hoped) knock out the associated class action. The case, which John Elwood and Conor McEvily previewed in their contribution to the latest Cato Supreme Court Review, is the latest in a series–notably Genesis Healthcare Corp. v. Symczyk three years ago–raising the question of when and whether defendants can end a group action by “picking off” named plaintiffs. While this case on its face is a win for the liberal side and embraces the analysis argued previously by Justice Elena Kagan in her Genesis dissent, it leaves important elements of the wider question unresolved, while giving Justice Clarence Thomas the chance to write an interesting concurrence asking whether either camp of justices is asking the right questions.

Dissenting Chief Justice John Roberts (joined by justices Antonin Scalia and Samuel Alito) argues that an individual lawsuit that has been met with a fully adequate offer of settlement has ceased to be a “case or controversy,” the only sorts of disputes our courts may adjudicate. (Because the federal law that underlies the suit – the Telephone Consumer Protection Act, or TCPA – has a statutory maximum for damages, it is reasonably knowable what constitutes full relief for plaintiff Gomez.) By contrast, the majority points out with some force that a valid claim countered with a full offer of settlement is not in quite the same posture as a grievance that never became a valid claim in the first place. Ginsburg, Kagan, et al. would apply principles of contract to an offer of judgment made under federal Rule 68 and, under such principles, a contract offer–handsome or otherwise–need not be accepted. 

Justice Clarence Thomas, concurring separately, disagrees with both sides’ approach. He is not satisfied with the conservatives’ somewhat Legal Realist approach (if one may call it that) as to when a case or controversy has ceased, but is equally wary of the liberals’ resort to contract principles (laying a legal controversy to rest is not quite the same thing as contract-making, even if they have much in common). Instead, he would look to the early common law of tenders, which preceded (and led up to) what is now Federal Rule 68 on offers of settlement. Thomas concludes that in this particular case common law analysis would lead to the same destination as reached by the majority.

While this morning’s outcome is being hailed in some quarters as a huge victory for class actions, note well the narrowing language on pages 11 and 12 of Justice Ginsburg’s opinion, which suggests a concern to keep courts rather than the parties or their lawyers in final control: 

We need not, and do not, now decide whether the result would be different if a defendant deposits the full amount of the plaintiff’s individual claim in an account payable to the plaintiff, and the court then enters judgment for the plaintiff in that amount. That question is appropriately reserved for a case in which it is not hypothetical. 

Australian Prime Minister Malcolm Turnbull is in DC, and one of the things he is talking about is the Trans-Pacific Partnership (TPP).  Addressing the U.S. Chamber of Commerce, he said this:

So, when I’m speaking to some of your legislators later today I‘ll be encouraging them to support the TPP. Not to lose sight of the wood for the trees, not to get lost in this detail or that detail or that compromise, because the big picture is: the rules-based international order, which America has underwritten for generations, which has underwritten the prosperity and the economic growth from which we have all benefitted, the TPP is a key element in that.

Along the same lines, this is from a conversation he had with President Obama:

… And can I say, as I’ve just said to the U.S. Chamber of Commerce, encouraging them to encourage their congressmen and senators to support it, that the TPP is much more than a trade deal.  The prosperity of the world, the security of the world has been founded on the peace and order in the Asia Pacific, which has been delivered underwritten by the United States and its allies, including Australia.  

And what we’ve been able to do there is deliver a period of peace, a long period of peace from which everybody has benefited.  And America’s case – its proposition – is more than simply security.  It is standing up for, as you said, the rules-based international order, an order where might is not right, where the law must prevail, where there is real transparency, where people can invest with confidence.  

And the TPP is lifting those standards.  And so it is much more than a trade deal.  And I think when people try to analyze it in terms of what it adds to this amount of GDP or that, that’s important.  But the critical thing is the way it promotes the continued integration of those economies, because that is as important an element in our security in the maintenance of the values which both our countries share as all of our other efforts – whether they are in defense or whether they are in traditional diplomacy.

There’s lots of vague talk here, with the specifics glossed over.  He says we should not “get lost in this detail or that detail,” but for me, the TPP is all about the details. As he notes, the TPP is “more than a trade deal.”  So what else is it?  In terms of its economic impact, that’s what we in Cato’s trade policy center are looking at right now, and we will offer our assessment in the coming months.

On the other hand, when you start hearing about “security,” and “peace,” and “order,” and how the TPP might contribute, I would be a little skeptical about what exactly the TPP can deliver here. That’s not to say it can offer nothing; but this kind of benefit is very hard to measure.

One of the most promising recent developments in education policy has been the widespread interest in education savings accounts (ESAs). Five states have already enacted ESA laws, and several states are considering ESA legislation this year. Whereas traditional school vouchers empower families to choose among numerous private schools, ESAs give parents the flexibility to customize their child’s education using a variety of educational expenditures, including private school tuition, tutoring, textbooks, online courses, educational therapies, and more.

Today the Cato Institute released a new report, “Taking Credit for Education: How to Fund Education Savings Accounts through Tax Credits.” The report, which I coauthored with Jonathan Butcher of the Goldwater Institute and Clint Bolick (then of Goldwater, now an Arizona Supreme Court justice), draws from the experiences of educational choice policies in three states and offers suggestions to policymakers for how to design a tax-credit-funded ESA. Tax-credit ESAs combine the best aspects of existing ESA policies with the best aspects of scholarship tax credit (STC) policies. Like other ESA policies, tax-credit ESAs empower families to customize their child’s education. And like STC policies, tax-credit ESAs rely on voluntary, private contributions for funding, making them more resistant to legal challenges and expanding liberty for donors.

Here’s how it would work: individuals and corporations would receive tax credits in return for donations to nonprofit scholarship organizations that would set up, fund, and oversee the education savings accounts. There’s already precedent for this sort of arrangement. In Florida, the very same nonprofit organizations that grant scholarships under the state’s STC law also administer the state’s publicly funded ESA. Moreover, New Hampshire’s STC law allows scholarship organizations to help homeschoolers cover a variety of educational expenses, similar to ESA policies in other states. 

For more details on how to design tax-credit ESAs, how they would work, and the constitutional issues involved, you can read the full report here. You can also find a summary of the report at Education Next.

An early trope about Bitcoin was that it was ‘non-political’ money. That’s a tantalizing notion, given the ugliness of politics. But a monetary system is a social system, technology is people, and open source software development requires intensive collaboration—particularly around a protocol with strong network effects. When the group is large enough and the subject matter important enough, human relations become politics. I think that is true even when it’s not governmental (read: coercive) power at stake.

Bitcoin’s politics burst into public consciousness last week with the “whiny ragequit” of developer Mike Hearn. In a Medium post published ahead of a New York Times article on his disillusionment and departure from the Bitcoin scene, Mike said Bitcoin has “failed,” and he discussed some of the reasons he thinks that.

As do most people responding to the news, I like Mike and I think he’s right to be frustrated. But he’s not right on the merits of Bitcoin, and his exit says more about one smart, impatient man than it does about this fascinating protocol.

But there is much to discover about how governance of a project like Bitcoin will proceed so that politics (in the derogatory sense) can be minimized. Stable governance will help Bitcoin compete with governmental monetary and record-keeping systems. Chaotic governance will retard it. We just need to figure out what “stable governance” is.

If you’re just tuning in, usage of Bitcoin has been steadily rising, to over 150,000 transactions per day. That is arguably putting pressure on the capacity of the network to process transactions. (And it undercuts thin, opportunistic arguments that Bitcoin is dead.)

Anticipating that growth, last May developer Gavin Andresen began pushing for an expansion of the network’s capacity through an increase in the size of “blocks,” or pages on the Bitcoin global public ledger. The current limit, 1 MB about every 10 minutes, supports about three transactions per second.
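The arithmetic behind that throughput figure is straightforward. A back-of-the-envelope sketch (the 500-byte average transaction size is an illustrative assumption, not a protocol constant; real transactions vary widely):

```python
# Back-of-the-envelope Bitcoin throughput under the 1 MB block size limit.
BLOCK_SIZE_BYTES = 1_000_000     # 1 MB block size limit
BLOCK_INTERVAL_SECONDS = 600     # one block about every 10 minutes
AVG_TX_SIZE_BYTES = 500          # assumed average transaction size (illustrative)

tx_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES
tx_per_second = tx_per_block / BLOCK_INTERVAL_SECONDS
print(f"{tx_per_second:.1f} transactions per second")  # about 3.3
```

At an 8 MB limit, the same arithmetic yields roughly eight times the throughput, which is the capacity gain the blocksize debate is about.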

The following month, Gavin also stepped down as Bitcoin’s lead developer to focus on broader issues. He handed the reins of “Bitcoin Core” to a group that—it later became clear—doesn’t share his vision. And over the summer and fall last year, the arguments in the blocksize debate grew stronger and more intense.

In August, Gavin and Mike introduced a competing version of the Bitcoin software called Bitcoin XT, which, among other things, would increase the blocksize to 8 MB. Their fork of the software included a built-in 75 percent super-majority vote for adoption, which made it fun to discuss as “A Bitcoin Constitutional Amendment.”
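The 75 percent threshold worked by counting signaling among recently mined blocks (BIP 101 looked for 750 of the last 1,000 blocks to signal support). A simplified sketch of that kind of rolling-window vote; the function name and structure here are illustrative, not XT’s actual code:

```python
def supermajority_reached(recent_block_votes, threshold=0.75, window=1000):
    """recent_block_votes: list of booleans, True if that block signaled support."""
    window_votes = recent_block_votes[-window:]
    if len(window_votes) < window:
        return False  # not enough blocks mined yet to call a vote
    # Activate only once the supermajority threshold is met within the window.
    return sum(window_votes) >= threshold * window

# Example: 760 supporting blocks among the last 1,000 clears a 750-block bar.
votes = [True] * 760 + [False] * 240
print(supermajority_reached(votes))  # True
```

The design choice is notable: rather than a one-time referendum, activation waits until sustained majority support is visible on the chain itself.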

This move catalyzed discussion, to be sure, but also deepened animosity in some quarters. Notably, the controller(s) of various fora for discussing Bitcoin on the web began censoring discussion of XT on the premise that this alternative was no longer Bitcoin. Nodes running XT were DDOSed (that is, attacked by floods of data coming from compromised computers), assumedly by defenders of Core.

A pair of conferences entitled “Scaling Bitcoin” brought developers together to address the issues, and the conferences did a lot of good things, but they did not resolve the blocksize debate. The Bitcoin community is in full politics mode, and the worst of politics is on display.

Well, actually, not the worst. Politics is at its worst when the winners can force all others to use their protocol or ban open discussion of competing ideas entirely.

Competing ideas. Competing software. To my mind, these seem to be the formative solution to Bitcoin’s current governance challenge. The relatively small Bitcoin community had fallen into the habit of using a small number of web sites to interact. Those sites betrayed the open ethos of the community, which prompted competing alternatives to spring up.

The community has likewise fallen into the habit of relying on a small number of developers–of necessity, in part, because Bitcoin coding talent is so rare. Now, though set back by the censorship and DDOS attacks, Bitcoin XT is joined by Bitcoin Unlimited and Bitcoin Classic as competitors to Bitcoin Core.

The developers of each version of the Bitcoin software must convince the community that their version is the best. That’s hard to do. And it’s supposed to be hard. Competition is great for everybody but the competitors.

The coin of the realm in these competitions–as in all debates–is credibility. Each software team must share the full sweep of their vision, and how their software advances the vision. They must convince the community of users that they have thought through the many technical threats to Bitcoin’s success.

I’ll confess that the Core team’s vision remains relatively opaque to me. I gather that they weight mining centralization as a greater concern than others do and thus resist the centralizing influence of a larger block size. As a technical layman, the best articulation for Core I’ve found is a response to Mike Hearn from BitFury’s Valery Vavilov. In it, one can at least see the reflection of the vision. Core’s recent statement and a December discussion of capacity increases don’t overcome the need for more sense of where they see Bitcoin going and why it’s good. I’m certain that they intend the best, and I’m pretty sure they feel that they’ve already explained their plans until they’re blue in the face. (Or, at least, blue in the hair…) But the community might benefit from more, and Peter R’s presentation in Montreal–though needlessly peppery at the end–is the clearest and thus most plausible explanation of blocksize economics I’ve found. (Much in this paragraph may be evidence of my ignorance.)

The reason Mike Hearn could ragequit is because he no longer wants a place in the Bitcoin community. He set a match to all his political capital. Everyone else in the Bitcoin community, and especially the developers, must do everything they can to build their political capital. They must explain the merits of their ideas and–in the fairest possible terms–the demerits of others. They should back up their ideas with supportive evidence, which–happily–an open technical system allows. And they should turn away “allies” who censor discussion forums or sponsor DDOS attacks. They should avoid impugning the motives of others, and, when they lose, lose gracefully.

All these behaviors cultivate credibility and the ability to persuade over the long haul. They offer the prospect of long-term success in the Bitcoin world and success for the Bitcoin ecosystem. Good behavior is good “politics,” which is something this non-political money needs.

As 2015 came to an end, so perhaps did a central tenet of resolving failed companies: the notion that “similarly situated” creditors ought to be treated equally, or, as the lawyers like to say, “pari passu” (Latin for “on the same footing”).*  The turning point was Portugal’s treatment of creditors of Novo Banco SA.

Until its failure in August of 2014, Banco Espirito Santo SA had been Portugal’s second largest bank.  When it failed, the Banco de Portugal, acting as receiver, divided the failed bank into  “good” and “bad” components, as the FDIC commonly does in the event of a large U.S. bank failure.  Banco Espirito Santo SA continued as the “bad bank,” which was to be liquidated in an orderly process.  The “good bank” became Novo Banco SA, which would stay in business.

In such “good bank-bad bank” resolutions, all equity holders usually remain with the bad bank, while more senior creditors are transferred to the good bank.  In any event all creditors of the same class are treated alike.  Creditors assigned to the good bank are much more likely to recover some part of their investment.

In the case of Novo Banco, the usual practice was at first followed.  All creditors within certain classes were transferred to it in August 2014.  Those who weren’t transferred took losses instead of taxpayers, which was also the generally correct approach (would that it had been our approach during the financial crisis!).  But last month, something odd happened: a small number of bonds were re-assigned to Banco Espirito Santo SA.  The holders of those bonds were likely to recover less than if they had remained with the good bank.  This was done to reduce leverage at Novo Banco SA.  One can read the listing of bonds and the justification here.  The problem is that other bonds of similar seniority remained with Novo Banco.  That meant that the pari passu principle was violated.  Some bondholders would recover considerably more than others, despite holding bonds having the same priority.

So far as I can tell, what Portugal did was perfectly legal (but I’m not a lawyer, keep that in mind).  And one could even justify it, if the alternative would have been to have the taxpayers take a hit.  Still, there are good reasons for regretting Portugal’s action.  The whole point of bankruptcy law and its administrative cousin, receivership, is to establish a chain of priority in the event of insolvency.  Basically, where you stand in line is predetermined.  You generally have the ability to contract as to where you stand in line, and your expected return generally reflects that risk (the farther back in line you are, the less likely you are to get paid).  Pari passu dictates that everyone who contracted for a particular spot in line is treated the same.  While pari passu seems to have arisen originally as contractual boilerplate, it has somewhat taken on the status of an implied contractual term.  If the recovery is insufficient, the proceeds are shared pro rata.  If I hold bond A and you hold bond A, we both get the same pay-off.  If I get 50 cents on the dollar, you get 50 cents on the dollar.  A decent respect for equality under the law, and for the rule of law, demands as much.
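The pro rata rule is easy to state in code. A minimal sketch (the function and the figures are illustrative only): every claimant in the same class recovers the same number of cents on the dollar.

```python
def pro_rata_payouts(claims, recovery_pool):
    """claims: dict of claimant -> face value; recovery_pool: cash available to the class."""
    total_face_value = sum(claims.values())
    # Every same-class claimant gets the same recovery rate, capped at par.
    rate = min(recovery_pool / total_face_value, 1.0)
    return {name: face * rate for name, face in claims.items()}

# Two holders of the same bond class: both recover 50 cents on the dollar.
payouts = pro_rata_payouts({"you": 100.0, "me": 200.0}, recovery_pool=150.0)
print(payouts)  # {'you': 50.0, 'me': 100.0}
```

What Portugal did, in these terms, was move some claimants out of the dictionary after the fact, so that bonds of identical seniority no longer shared a single recovery rate.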

If pari passu no longer holds, the ability to estimate default recoveries is greatly reduced, increasing uncertainty in the debt market.  Particular groups of creditors are also more likely to become playthings of politics.  Witness the treatment of certain pension funds in the auto bankruptcies, which were harmed in order to benefit the auto unions.  Deviations from pari passu risk turning the resolution process into a political game, rather than a legal proceeding.

Unless you’re an investor in either Banco Espirito Santo SA or Novo Banco SA, why should you care about this?  You should care because thanks to Dodd-Frank’s Title II resolution process, the same thing is now a lot more likely to happen in the good-ol’ U. S. of A.  That’s because Dodd-Frank’s Title II resolution process explicitly allows for exceptions to pari passu.  Given how the recent financial crisis response played out, one could easily envision, under a Title II resolution, creditors in a Florida pension fund being treated differently than those in a California pension fund, especially in an election year.  One could also envision differing treatment depending upon whether the creditors were domestic or foreign, as was the case with Novo Banco SA.

Section 210 of Dodd-Frank is loosely modeled on Section 11 of the Federal Deposit Insurance Act (FDIA), which calls for strict adherence to the pari passu principle.  But while Dodd-Frank suggests that pari passu generally be followed, Section 210(b)(4) allows for various exceptions.  Pari passu may be set aside when the receiver determines that doing so serves, according to the language of the statute:

(i) to maximize the value of the assets of the covered financial company;

(ii) to initiate and continue operations essential to implementation of the receivership or any bridge financial company;

(iii) to maximize the present value return from the sale or other disposition of the assets of the covered financial company; or

(iv) to minimize the amount of any loss realized upon the sale or other disposition of the assets of the covered financial company.

Although a further clause states that these exceptions can be made only provided that “all claimants that are similarly situated under paragraph (1) receive not less than the amount provided in paragraphs (2) and (3) of subsection (d),” this clause merely requires that a creditor get at least what he would have gotten in a liquidation, allowing the receiver to disregard any going-concern value, including goodwill.  In practice, this is unlikely to be a constraint at all.

In short, I think it is fair to say that Dodd-Frank, far from enforcing pari passu, allows almost anything to happen, especially in a Chevron deference world.  In fact the protections for a receiver are tighter than in the Chevron case (see Section 210(e) of Dodd-Frank and its limit on judicial review).

As depositors have historically been the dominant, and sometimes the only creditors in bank resolutions, the discretion that Dodd-Frank allows may not matter much in such cases.  But Dodd-Frank’s application to non-banks raises a whole new set of disturbing possibilities for the extra-judicial treatment of creditors.

Congress, at the suggestion of the FDIC, included similar flexibility in the resolution procedures for Fannie Mae and Freddie Mac.  That whole process has, of course, gone swimmingly.


*Some additional legal background on pari passu, particularly in the case of sovereign defaults, is here. For a more skeptical legal take, read this.


Hillary Clinton and Sen. Bernie Sanders participate in a Democratic primary debate in Charleston, South Carolina, on Jan. 17, 2016.

In their final debate before they face Democratic primary voters, Hillary Clinton and Bernie Sanders traded sharp jabs on health care. Pundits focused on how the barbs would affect the horse race, whether Democrats should be bold and idealistic (Sanders) or shrewd and practical (Clinton), and how Sanders’ “Medicare for All” scheme would raise taxes by a cool $1.4 trillion. (Per. Year.) Almost no one noticed the obvious: the Clinton-Sanders spat shows that not even Democrats like the Affordable Care Act, and that the law remains very much in danger of repeal.

Hours before the debate, Sanders unveiled an ambitious plan to put all Americans in Medicare. According to his web site, “Creating a single, public insurance system will go a long way towards getting health care spending under control.” Funny, Medicare has had the exact opposite effect on health spending for seniors. But no matter. Sanders assures us, “The typical middle class family would save over $5,000 under this plan.” Remember how President Obama promised ObamaCare would reduce family premiums by $2,500? It’s like that, only twice as ridiculous.

Clinton portrayed herself as the protector of ObamaCare. She warned that Sanders would “tear [ObamaCare] up…pushing our country back into that kind of a contentious debate.” She proposed instead to “build on” the law by imposing limits on ObamaCare’s rising copayments, and by imposing price controls on prescription drugs. Sanders countered, “No one is tearing this up, we’re going to go forward,” and so on.

Such rhetoric obscured the fact that the candidates’ differences are purely tactical. Clinton doesn’t oppose Medicare for All. Indeed, her approach would probably reach that goal much sooner. Because ObamaCare literally punishes whichever insurers provide the highest-quality coverage, it forces health insurers into a race to the bottom, where they compete not to provide quality coverage to the sick.  That’s terrible if you or a family member has a high-cost, chronic health condition—or even just an ounce of humanity. But if you want to discredit “private” health insurance in the service of Medicare for All, it’s an absolute boon. After a decade of such misery, voters will beg President (Chelsea) Clinton for a federal takeover. But if President Sanders demands a $1.4 trillion tax hike without first making voters suffer under ObamaCare, he will overplay his hand and set back his cause.

The rhetoric obscured something much larger, too. Clinton and Sanders inadvertently revealed that not even Democrats like ObamaCare all that much, and Democrats know there’s a real chance the law may not be around in four years.

During the debate, Sanders repeatedly noted ObamaCare’s failings: “29 million people still have no health insurance. We are paying the highest prices in the world for prescription drugs, getting ripped off…even more are underinsured with huge copayments and deductibles…we are spending almost three times more than the British, who guarantee health care to all of their people…Fifty percent more than the French, more than the Canadians.”

Sure, he also boasted, repeatedly, that he helped write and voted for the ACA. Nonetheless, Sanders was indicting ObamaCare for failing to achieve universal coverage, contain prices, reduce barriers to care, or eliminate wasteful spending. At least one of the problems he lamented—“even more [people] are underinsured with huge copayments and deductibles”—ObamaCare has made worse. (See “race to the bottom” above, and here.)

When Sanders criticized the U.S. health care system, he was criticizing ObamaCare. His call for immediate adoption of Medicare for All shows that the Democratic party’s left wing is simply not that impressed with ObamaCare, which they have always (correctly) viewed as a giveaway to private insurers and drug companies.

Clinton’s proposals to outlaw some copayments and impose price controls on prescription drugs are likewise an implicit acknowledgement that ObamaCare has not made health care affordable. In addition, her attacks on Sanders reveal that she and many other Democrats know ObamaCare’s future remains in jeopardy.

Seriously, does anyone really think Clinton is worried that something might “push[] our country back into that kind of a contentious debate” over health care? America has been stuck in a nasty, tribal health care debate every day of the six years since Democrats passed ObamaCare despite public disapproval. Or that Republicans would be able to repeal ObamaCare over President Sanders’ veto?

Clinton knows that if the next president is a Republican, all the wonderful, magical powers that ObamaCare bestows upon the elites in Washington, D.C., might disappear.

If we elect a Republican, they’ll roll back all of the progress we’ve made on expanding health coverage. #DemDebate

— The Briefing (@TheBriefing2016) January 18, 2016

“I don’t want to see us start over again. I want us to defend and build on the Affordable Care Act and improve it.” —Hillary #DemDebate

— Hillary Clinton (@HillaryClinton) January 18, 2016

And she wants Democratic primary voters to believe she is the only Democrat who can win the White House. “The Republicans just voted last week to repeal the Affordable Care Act,” she warned, “and thank goodness, President Obama vetoed it.”

Clinton’s attacks on Sanders’ health care plan—her warning about “pushing our country back into that kind of a contentious debate”—are just a sly way of warning Democratic voters: Bernie can’t win. Nominate me and I will protect ObamaCare. Nominate him, and ObamaCare dies.

We can’t afford to undo @POTUS’ progress. Health care for millions of Americans is too important.

— Hillary Clinton (@HillaryClinton) January 18, 2016

Health care should be a right for every American. We should build on the progress we’ve made with the ACA—not go back to square one.

— Hillary Clinton (@HillaryClinton) January 14, 2016

Perhaps that prediction is correct. Perhaps it isn’t. But it’s plausible.

Either way, ObamaCare was the biggest loser in this Democratic presidential debate.

Ross Douthat and Reihan Salam, two of the smartest conservative thinkers today, have spilt much ink worrying over immigrant assimilation.  Salam is more pessimistic, choosing titles like “The Melting Pot is Broken” and “Republicans Need a New Approach to Immigration” (with the descriptive url: “Immigration-New-Culture-War”) while relying on a handful of academic papers for support.  Douthat presents a more nuanced, Burkean think-piece reacting to assimilation’s supposed decline, relying more on Salam for evidence. 

Their worries fly against recent evidence that immigrant assimilation is proceeding quickly in the United States.  There’s never been a greater quantity of expert and timely quantitative research that shows immigrants are still assimilating.

The first piece of research is the National Academy of Sciences’ (NAS) September 2015 book titled The Integration of Immigrants into American Society. At 520 pages, it’s a thorough, brilliant summation of the relevant academic literature on immigrant assimilation that ties the different strands of research into a coherent story.  Bottom line:  Assimilation is never perfect and always takes time, but it’s going very well.

One portion of the NAS book finds that much assimilation occurs through a process called ethnic attrition, which is driven by immigrant intermarriage with natives either of the same or of different ethnic groups.  Assimilation is also quickened when second- or third-generation Americans marry those from other, longer-settled ethnic or racial groups.  The children of these intermarriages are much less likely to identify ethnically with their more recent immigrant ancestors and, due to spousal self-selection, tend to be more economically and educationally integrated as well.  Ethnic attrition is one reason why the much-hyped decline of the white majority is greatly exaggerated.

In an earlier piece, Salam focuses on ethnic attrition but exaggerates the degree to which it has declined by confusing stocks of ethnics in the United States with the flow of new immigrants.  He also emphasizes the decrease in immigrant intermarriage caused by the 1990-2000 influx of Hispanic and Asian immigrants.  That decrease is less dire than he reports.  According to another 2007 paper, 32 percent of Mexican-American men married outside of their race or ethnicity, while 33 percent of women did (I write about this in more detail here).  That’s close to the 1990 rate of intermarriage reported for all Hispanics in the study Salam favored.  The “problem” disappeared.

The second set of research is a July 2015 book entitled Indicators of Immigrant Integration 2015, which analyzes immigrant and second-generation integration across 27 measurable indicators in OECD and EU countries.  This report finds more problems with immigrant assimilation in Europe, especially for those from outside of the EU, but the findings for the United States are quite positive.

The third work, by University of Washington economist Jacob Vigdor, offers a historical perspective.  He compares modern immigrant civic and cultural assimilation to that of immigrants from the early 20th century (an earlier draft of his book chapter is here; the published version is available in this collection).  For those of us who think early 20th century immigrants from Italy, Russia, Poland, Eastern Europe, and elsewhere assimilated successfully, Vigdor’s conclusion is reassuring:

“While there are reasons to think of contemporary migration from Spanish-speaking nations as distinct from earlier waves of immigration, evidence does not support the notion that this wave of migration poses a true threat to the institutions that withstood those earlier waves.  Basic indicators of assimilation, from naturalization to English ability, are if anything stronger now than they were a century ago [emphasis added].”

American identity (as in Australia, Canada, and New Zealand) is not based on nationality or race nearly as much as it is in the old nation-states of Europe, which likely explains some of the better assimilation and integration outcomes here.

Besides ignoring this large and positive body of new research on immigrant assimilation, Douthat’s piece has a few other problems.

Douthat switches back and forth between Europe and the United States when discussing assimilation, giving the impression that the challenges are similar.  They are not: assimilation is a vitally important outcome for immigrants and their descendants, but Europe and the United States have had vastly different experiences with it.  Cherry-picking outcomes from Europe to support skepticism about assimilation in the United States misleads rather than clarifies.

Douthat also argues that immigrant cultural differences can persist just as the various regional cultures in the United States have done.  That idea, used most memorably in David Hackett Fischer’s Albion’s Seed, is called the Doctrine of First Effective Settlement (DFES).  Under that theory, the creation and persistence of regional cultural differences requires the near-total displacement of the local population by a foreign one, as happened in the early settlement of the United States.

However, DFES actually gives reasons to be optimistic about immigrant assimilation; Douthat misses a few crucial details when he briefly mentions it.  First, as Fischer and others have noted, waves of immigrants have continuously assimilated into the settled regional American cultures since the initial settlement – that is the point of DFES.  The first effective settlements set the regional cultures going forward, and new immigrants assimilate into those cultures.

Second, DFES predicts that today’s immigrants will assimilate into America’s regional cultures (unless almost all Americans quickly die and are replaced by immigrants).  The American regional cultures that immigrants are settling into are already set, so newcomers won’t be able to create persistent new regional cultures here.  America’s history with DFES is not a reason to worry about immigrant assimilation today and should supply comfort to those worried about it.

Immigrants and their children are assimilating well into American society.  We shouldn’t let assimilation issues in Europe overwhelm the vast empirical evidence that it’s proceeding as it always has in the United States.

Just when you thought the Syrian civil war couldn’t get any messier, developments last week proved that it could.  For the first time in the armed conflict that has raged for nearly five years, militia fighters from the Assyrian Christian community in northern Iraq clashed with Kurdish troops.  What made that incident especially puzzling was that both the Assyrians and the Kurds are vehement adversaries of ISIS—which is also a major player in that region of Syria.  Logically, they should be allies who cooperate on military moves against the terrorist organization.

But in Syria, very little is simple or straightforward.   Unfortunately, that is a point completely lost on the Western (especially American) news media.  From the beginning, Western journalists have portrayed the Syrian conflict as a simplistic melodrama, with dictator Bashar al-Assad playing the role of designated villain and the insurgents playing the role of plucky proponents of liberty.  Even a cursory examination of the situation should have discredited that narrative, but it continues largely intact to this day.

There are several layers to the Syrian conflict.  One involves an effort by the United States and its allies to weaken Assad as a way to undermine Iran by depriving Tehran of its most significant regional ally.  Another layer is a bitter Sunni-Shiite contest for regional dominance.  Syria is just one theater in that contest.  We see other manifestations in Bahrain, where Iran backs a seething majority Shiite population against a repressive Sunni royal family that is kept in power largely by Saudi Arabia’s military support.  Saudi Arabia and other Gulf powers backed Sunni tribes in western Iraq against the Shiite-dominated government in Baghdad.  Some of those groups later coalesced to become ISIS.  In Yemen, Saudi Arabia and its smaller Sunni Gulf allies have intervened militarily, determined to prevent a victory by the Iranian-backed Houthis.

The war in Syria is yet another theater in that regional power struggle.  It is no accident that the Syrian insurgency is overwhelmingly Sunni in composition and receives strong backing from major Sunni powers, including Saudi Arabia, Qatar, and Turkey.  Assad leads an opposing “coalition of religious minorities,” which includes his Alawite base (a Shiite offshoot), various Christian sects, and the Druze.  But there is an added element of complexity.  The Kurds form yet a third faction, seeking to create a self-governing (quasi-independent) region in northern and northeastern Syria inhabited by their ethnic brethren.  In other words, Syrian Kurds are trying to emulate what Iraqi Kurds have enjoyed for many years in Iraqi Kurdistan, where Baghdad’s authority is little more than a legal fiction.  That explains the clash between Assyrian Christians and Kurds.  Both hate ISIS, but the former supports an intact Syria (presumably with Assad or someone else acceptable to the coalition in charge), while the latter does not.

Such incidents underscore just how complex the Syrian struggle is and how vulnerable to manipulation well-meaning U.S. mediation efforts might become.  Our news media need to do a far better job of conveying what is actually taking place in that part of the world, not what wannabe American nation builders wish were the case.

Surprise! Venezuela, the world’s most miserable country (according to my misery index) has just released an annualized inflation estimate for the quarter that ended September 2015. This is late on two counts. First, it has been nine months since the last estimate was released. Second, September 2015 is not January 2016. So, the newly released inflation estimate of 141.5% is out of date.

I estimate that the current implied annual inflation rate in Venezuela is 392%. That’s nearly three times the latest official estimate.

Venezuela’s notoriously incompetent central bank is producing lying statistics – just as the Soviets used to.  In the Soviet days, we approximated reality by developing lie coefficients.  We would apply these coefficients to the official data in an attempt to reach reality.  The formula is: (official data) X (lie coefficient) = reality estimate.  At present, the lie coefficient for the Central Bank of Venezuela’s official inflation estimate is 3.0.
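To make the arithmetic concrete, here is a minimal sketch, using only the figures quoted above, of how the lie coefficient relates the official estimate to the implied rate:

```python
# Lie-coefficient arithmetic: (official data) x (lie coefficient) = reality estimate.
# Figures are those quoted above for Venezuela.
official_inflation = 141.5   # %, last official annualized estimate (Sept. 2015)
implied_inflation = 392.0    # %, current implied annual inflation rate

# Solving the formula for the coefficient:
lie_coefficient = implied_inflation / official_inflation
print(round(lie_coefficient, 1))  # 2.8, i.e. roughly the 3.0 stated above
```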

Some constitutional conservatives, including Texas Gov. Greg Abbott and Rob Natelson for the American Legislative Exchange Council, have been promoting the idea of getting two-thirds of the states to call for an Article V convention to propose amendments to the U.S. Constitution. Florida senator and presidential candidate Marco Rubio recently made headlines by endorsing the notion. But I fear that it’s not a sound one under present conditions, as I argue in a new piece this week (originally published at The Daily Beast, now reprinted at Cato).  It begins:

In his quest to catch the Road Runner, the Coyote in the old Warner Brothers cartoons would always order supplies from the ACME Corporation, but they never performed as advertised. Either they didn’t work at all, or they blew up in his face.

Which brings us to the idea of a so-called Article V convention assembled for the purpose of proposing amendments to the U.S. Constitution, an idea currently enjoying some vogue at both ends of the political spectrum.

Jacob Sullum at Reason offers a quick tour of some of the better and worse planks in Gov. Abbott’s “Texas Plan” (as distinct from the question of whether a convention is the best way of pursuing them).  In using the phrase “Texas Plan,”  Gov. Abbott recognizes that in a convention scenario where any and all ideas for amendments are on the table, other states would be countering with their own plans; one can readily imagine a “California Plan” prescribing limits on campaign speech and affirmative constitutional rights to health and education, a “New Jersey Plan” to narrow the Second Amendment and broaden the General Welfare clause, and so forth. Much more on the convention idea in this Congressional Research Service report from 2014 (post adapted and expanded from Overlawyered).

Cato has published often in the past on the difficulties with, and inefficiencies of, the constitutional amendment process, including Tim Lynch’s 2011 call for amending the amendment process itself and Michael Rappaport’s Policy Analysis No. 691 in 2012, with proposals of similar intent. This past December’s Cato Unbound discussion, led by Prof. Sanford Levinson, included a response essay by Richard Albert describing the founding document as “constructively unamendable” at present, although as a consequence of current political conditions and “not [as] a permanent feature of the Constitution.” And to be fair, I should also note that Ilya Shapiro had a 2011 post in this space with a perspective (or at least a choice of emphasis) different from mine.

I’m not known for my clairvoyance – it would be impossible to make a living predicting what the Supreme Court will do – but as the latest round of birtherism continues into successive news cycles, I do have an odd sense of “deja vu all over again.” Two and a half years ago, I looked into Ted Cruz’s presidential eligibility and rather easily came to the conclusion that, to paraphrase a recent campaign slogan, “yes, he can.” Here’s the legal analysis in a nutshell:

In other words, anyone who is a citizen at birth — as opposed to someone who becomes a citizen later (“naturalizes”) or who isn’t a citizen at all — can be president.

So the one remaining question is whether Ted Cruz was a citizen at birth. That’s an easy one. The Nationality Act of 1940 outlines which children become “nationals and citizens of the United States at birth.” In addition to those who are born in the United States or born outside the country to parents who were both citizens — or, interestingly, found in the United States without parents and no proof of birth elsewhere — citizenship goes to babies born to one American parent who has spent a certain number of years here.

That single-parent requirement has been amended several times, but under the law in effect between 1952 and 1986 — Cruz was born in 1970 — someone had to have a citizen parent who had resided in the United States for at least 10 years, including five after the age of 14, in order to be considered a natural-born citizen. Cruz’s mother, Eleanor Darragh, was born in Delaware, lived most of her life in the United States, and gave birth to little Rafael Edward Cruz in her 30s. Q.E.D.

We all know that this wouldn’t even be a story if it weren’t being pushed by the current Republican frontrunner (though Cruz is beating Trump in the latest Iowa polls). Nevertheless, here we are.

For more analysis and a comprehensive set of links regarding this debate, see Jonathan Adler’s excellent coverage at the Volokh Conspiracy.

Of course we’re referring to Hurricane Alex here, which blew up in far eastern Atlantic waters thought to be way too cold to spin up such a storm.  Textbook meteorology says hurricanes, which feed off the heat of the ocean, won’t form over waters cooler than about 80°F.  On the morning of January 14, Alex exploded over waters that were a chilly 68°F.

Alex is (at least) the third hurricane observed in January, with others in 1938 and 1955.  The latter one, the second Hurricane Alice, was actually alive on New Year’s Day.

The generation of Alex was very complex.  First, a garden-variety low pressure system formed over the Bahamas late last week and slowly drifted eastward.  It was derived from the complicated, but well-understood processes associated with the jet stream and a cold front, and that certainly had nothing to do with global warming.

The further south cold fronts go into the tropical Atlantic, the more likely they are to simply dissipate, and that’s what happened last week, too.  Normally the associated low pressure would also wash away.  But after it initially formed near the Bahamas and drifted eastward, it was in a region where sea-surface temperatures (SSTs) are running about 3°F above the long-term average, consistent with a warmer world.  This may have been just enough to fuel the persistent remnant cluster of thunderstorms that meandered in the direction of Spain.

Over time, the National Hurricane Center named this collection “Alex” as a “subtropical” cyclone, which is what we call a tropical low pressure system that doesn’t have the characteristic warm core of a hurricane.

(Trivia note:  the vast majority of cyclones in temperate latitudes have a cold core at their center.  Hurricanes have a warm core.  There was once a move to call the subtropical hybrids “himicanes” (we vote for that!), then “neutercanes” (not bad, either) but the community simply adopted the name “subtropical.”)

In the early hours of January 14, thanks to a cold low pressure system propagating through the upper atmosphere, temperatures above the storm plummeted to a rather astounding -76°F.  So even though the SSTs were a mere 68°F, far too cold to promote a hurricane, the difference between the surface and the upper atmosphere was a phenomenal 144°F, so large that a hurricane could form anyway.

Vertical motion, which is what causes the big storm clouds that form the core of a hurricane, is greatest when the difference in temperature between the surface and the upper atmosphere is largest.  That 144° differential exploded the storms within subtropical Alex, quickly creating a warm core and a hurricane eyewall.

A far-south invasion of such cold air over the Atlantic subtropics is less likely in a warmer world, as the pole-to-equator temperature contrast lessens.  Everything else being equal, that would tend to confine such an event to higher latitudes.

So, yes, warmer surface temperatures may have kept the progenitor storms of Alex alive, but warmer temperatures would have made the necessary outbreak of extremely cold air over the storm less likely.

Consequently, it’s really not right to blame global warming for Hurricane Alex, though it may have contributed to subtropical storm Alex.

On December 1, 2015, the Bank of England released the results of its second round of annual stress tests, which aim to measure the capital adequacy of the UK banking system. This exercise is intended to function as a financial health check for the major UK banks, and purports to test their ability to withstand a severe adverse shock and still come out in good financial shape.

The stress tests were billed as severe. Here are some of the headlines:

“Bank of England stress tests to include feared global crash”

“Bank of England puts global recession at heart of doomsday scenario”

“Banks brace for new doomsday tests”

This all sounds pretty scary. Yet the stress tests appeared to produce a comforting result: despite one or two small problems, the UK banking system as a whole came out of the process rather well. As the next batch of headlines put it:

“UK banks pass stress tests as Britain’s ‘post-crisis period’ ends”

“Bank shares rise after Bank of England stress tests”

“Bank of England’s Carney says UK banks’ job almost done on capital”

At the press conference announcing the stress test results, Bank of England Governor Mark Carney struck an even more reassuring note:

The key point to take is that this [UK banking] system has built capital steadily since the crisis. It’s within sight of [its] resting point, of what the judgement of the FPC is, how much capital the system needs. And that resting point — we’re on a transition path to 2019, and we would really like to underscore the point that a lot has been done, this is a resilient system, you see it through the stress tests.[1] [italics added]

But is this really the case? Let’s consider the Bank’s headline stress test results for the seven financial institutions involved: Barclays, HSBC, Lloyds, the Nationwide Building Society, the Royal Bank of Scotland, Santander UK and Standard Chartered.

In this test, the Bank sets its minimum pass standard equal to 4.5%: a bank passes the test if its capital ratio as measured by the CET1 ratio — the ratio of Common Equity Tier 1 capital to Risk-Weighted Assets (RWAs) — is at least 4.5% after the stress scenario is accounted for; it fails the test otherwise.
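The pass/fail rule itself is simple arithmetic; here is a minimal sketch of it, using hypothetical bank names and post-stress ratios rather than the Bank’s actual figures:

```python
def passes_stress_test(post_stress_cet1_pct, pass_standard_pct=4.5):
    """A bank passes if its post-stress CET1 ratio (as a percent of
    risk-weighted assets) meets or exceeds the pass standard."""
    return post_stress_cet1_pct >= pass_standard_pct

# Hypothetical post-stress CET1 ratios, in percent of RWAs
post_stress_ratios = {"Bank A": 7.0, "Bank B": 5.3, "Bank C": 4.3}

for bank, ratio in post_stress_ratios.items():
    verdict = "pass" if passes_stress_test(ratio) else "fail"
    margin = ratio - 4.5  # surplus (or deficit) over the standard
    print(f"{bank}: {ratio:.1f}% -> {verdict} (margin {margin:+.1f} pp)")
```

A bank at exactly 4.5% passes; the interesting question, as the charts below show, is by how thin a margin.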

The outcomes are shown in Chart 1:

Chart 1: Stress Test Outcomes for the CET1 Ratio with a 4.5% Pass Standard

Note: The data are obtained from Annex 1 of the Bank’s stress test report (Bank of England, December 2015).

Based solely on this test, the UK banking system might indeed look to be in reasonable shape. Every bank passes the test, although one (Standard Chartered) does so by a slim margin of under 100 basis points and another (RBS) does not perform much better. Nonetheless, according to this test, the UK banking system looks broadly healthy overall.

Unfortunately, that is not the whole story.

One concern is that the RWA measure used by the Bank is essentially nonsense — as its own (now) chief economist demonstrated a few years back. So it is important to consider the second set of stress tests reported by the Bank, which are based on the leverage ratio. This is defined by the Bank as the ratio of Tier 1 capital to leverage exposure, where the leverage exposure attempts to measure the total amount at risk. We can think of this measure as similar to total assets.

In this test, the pass standard is set at 3% — the bare minimum leverage ratio under Basel III.

The outcomes for this stress test are given in the next chart:

Chart 2: Stress Test Outcomes Using the Tier 1 Leverage Ratio with a 3% Pass Standard

Based on this test, the UK banking system does not look so healthy after all. The average post-stress leverage ratio across the banks is 3.5%, making for an average surplus of 0.5%. The best performing institution (Nationwide) has a surplus (that is, the outcome minus the pass standard) of only 1.1%, while four banks (Barclays, HSBC, Lloyds and Santander) have surpluses of less than one hundred basis points, and the remaining two don’t have any surpluses at all — their post-stress leverage ratios are exactly 3%.

To make matters worse, this stress test also used a soft measure of core capital — Tier 1 capital — which includes various soft capital instruments (known as additional Tier 1 capital) that are of questionable usefulness to a bank in a crisis.

The stress test would have been more convincing had the Bank used a harder capital measure. And, in fact, the ideal such measure would have been the CET1 capital measure it used in the first stress test. So what happens if we repeat the Bank’s leverage stress test but with CET1 instead of Tier 1 in the numerator of the leverage ratio?

Chart 3: Stress Test Outcomes Using the CET1 Leverage Ratio with a 3% Pass Standard

In this test, one bank fails, four have wafer-thin surpluses and only two banks are more than insignificantly over the pass standard.

Moreover, this 3% pass standard is itself very low. A bank with a 3% leverage ratio will still be rendered insolvent if it makes a loss of 3% of its assets.
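A back-of-the-envelope sketch makes the point, with stylized balance-sheet numbers rather than any particular bank’s:

```python
# Stylized balance sheet: equity is roughly (leverage ratio) x (total assets).
assets = 100.0
leverage_ratio = 0.03           # the 3% Basel III minimum
equity = leverage_ratio * assets

# A loss equal to 3% of assets consumes the entire equity cushion.
loss = 0.03 * assets
remaining_equity = equity - loss
print(remaining_equity)  # 0.0 -- any larger loss renders the bank insolvent
```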

The 3% minimum is also well below the potential minimum that will be applied in the UK when Basel III is fully implemented — about 4.2% by my calculations — let alone the 6% minimum leverage ratio that the Federal Reserve is due to impose in 2018 on the federally insured subsidiaries of the eight globally systemically important banks in the United States.

Here is what we would get if the Bank of England had carried out the leverage stress test using both the CET1 capital measure and the Fed’s forthcoming minimum standard of 6%:

Chart 4: Stress Test Outcomes for the CET1 Leverage Ratio with a 6% Pass Standard

Oh my! Every bank now fails and the average deficit is nearly 3 percentage points.

Nevertheless, I leave the last word to Governor Carney: “a lot has been done, this is a resilient system, you see it through the stress tests.”


[1] Bank of England Financial Stability Report Q&A, 1st December 2015, p. 11.
