Cato Op-Eds

Individual Liberty, Free Markets, and Peace
Subscribe to Cato Op-Eds feed

Just before last weekend’s Democratic debate, Bernie Sanders finally released the long-awaited plan for his health care proposal, which would fundamentally transform the health care sector by replacing all health insurance with a single program administered by the federal government. Michael Cannon has ably explained how Obamacare was really the big loser of the back and forth at the debate, but it’s worth looking further into Sanders’ outline of a plan. At just seven pages of text, it leaves most of the major questions unanswered. It does list a bevy of tax increases that it says will finance the needed $1.38 trillion in new federal spending each year, although even this is a significant underestimate. Bernie’s plan promises universal coverage and savings for families and businesses without delving into the necessary, and often messy, trade-offs.

While he calls the plan ‘Medicare for all,’ the plan would actually cover even more services than Medicare and do away with the program’s cost-sharing components like co-payments, deductibles, and premiums. Giving people comprehensive coverage of “the entire continuum” at little cost to themselves would seem to significantly increase utilization, which would strain the system’s capacity while also rendering it unaffordable. The plan makes no effort to answer fundamentally important questions: How would the new system determine payment rates for health care providers? What, if anything, would it do to try to rein in the growth of health care costs?

The “Getting Health Care Spending Under Control” section of the plan is one paragraph long and offers little beyond assurances that “creating a single public insurance system will go a long way towards getting health care spending under control” and that under Berniecare “government will finally be able to stand up to drug companies.” This is hardly a comprehensive plan, and it gives the impression that in this system, cost control measures would somehow be painless.

The topline estimate in the plan is $1.38 trillion per year, and a memo provided by Professor Gerald Friedman gives some additional details, claiming the plan would need $13.77 trillion in new public spending from 2017-2026. As Avik Roy has pointed out, that estimate fails to account for the trillions in government spending at the state and local level that will have to be replaced under Berniecare. The figure below gives some sense of how much spending comes from sources other than the federal government. Even after Obamacare significantly expanded the federal government’s role in health care, spending by the federal government only accounts for about 28 percent of national health expenditures. Under Berniecare, the federal government would be the only payer, and thus would have to replace much of this other spending, including the 17 percent that is currently financed at the state and local level.

Estimated National Health Expenditures by Source of Funds, 2016


Source: Centers for Medicare & Medicaid Services, “NHE Projections 2014-2024.”

Professor Friedman’s memo also reduces the tab of Berniecare by simply assuming massive savings that would reduce costs below the current baseline: $4 trillion from the assumption that the recent health care cost slowdown will continue and another $6.31 trillion in additional savings from moving to single-payer. Given the lack of details in the cost savings section, these are far from certain, and it seems more plausible that comprehensive coverage with minimal cost sharing would increase utilization and expenditures. Berniecare would cost significantly more than the roughly $13.8 trillion cited in the outline.

The plan gets into more detail in an area where Bernie is more comfortable, proposing significant tax increases on businesses and high earners: a 6.2 percent income-based health premium paid by employers, a 2.2 percent income-based premium on households, raising the marginal income tax rates for high earners, and taxing capital gains and dividends. These are just some of the increases he would propose. As discussed above, even this raft of significant tax increases would only finance part of the increased federal spending needed, so Berniecare would still add trillions to the debt.

This plan, like most proposals put out by presidential candidates, oversells the benefits and avoids wading into details that would reveal any trade-offs or costs, leaving the most difficult questions unanswered. Even under optimistic assumptions, the trillions in new taxes would not come close to covering the increased expenditures. If a plan promises to cover everything for everyone without any kind of trade-off, it probably can’t.

In this morning’s 6-3 ruling in Campbell-Ewald v. Gomez, the Supreme Court, with Justice Ruth Bader Ginsburg writing for the majority, ruled that a defendant’s offer to settle in full the claim of a named plaintiff did not in itself suffice to moot the claim and thus (its goal) knock out the associated class action. The case, which John Elwood and Conor McEvily previewed in their contribution to the latest Cato Supreme Court Review, is the latest in a series–notably Genesis Healthcare Corp. v. Symczyk three years ago–raising the question of when and whether defendants can end a group action by “picking off” named plaintiffs. While this case on its face is a win for the liberal side and embraces the analysis argued previously by Justice Elena Kagan in her Genesis dissent, it still leaves important elements of the wider question unresolved, while giving Justice Clarence Thomas the chance to write an interesting concurrence asking whether either camp of justices is asking the right questions.

Dissenting Chief Justice John Roberts (joined by Justices Antonin Scalia and Samuel Alito) argues that an individual lawsuit that has been met with a fully adequate offer of settlement has ceased to be a “case or controversy,” the only sorts of disputes our courts may adjudicate. (Because the federal law that underlies the suit – the Telephone Consumer Protection Act, or TCPA – has a statutory maximum for damages, it is reasonably knowable what constitutes full relief for plaintiff Gomez.) By contrast, the majority points out with some force that a valid claim countered with a full offer of settlement is not in quite the same posture as a grievance that never became a valid claim in the first place. Ginsburg, Kagan, et al. would apply principles of contract to an offer of judgment made under federal Rule 68 and, under such principles, a contract offer–handsome or otherwise–need not be accepted.

Justice Clarence Thomas, concurring separately, disagrees with both sides’ approach. He is not satisfied with the conservatives’ somewhat Legal Realist approach (if one may call it that) as to when a case or controversy has ceased, but is equally wary of the liberals’ resort to contract principles (laying a legal controversy to rest is not quite the same thing as contract-making, even if they have much in common). Instead, he would look to the early common law of tenders, which preceded (and led up to) what is now Federal Rule 68 on offers of settlement. Thomas concludes that in this particular case common law analysis would lead to the same destination as reached by the majority.

While this morning’s outcome is being hailed in some quarters as a huge victory for class actions, note well the narrowing language on pages 11 and 12 of Justice Ginsburg’s opinion, which suggests a concern to keep courts rather than the parties or their lawyers in final control: 

We need not, and do not, now decide whether the result would be different if a defendant deposits the full amount of the plaintiff’s individual claim in an account payable to the plaintiff, and the court then enters judgment for the plaintiff in that amount. That question is appropriately reserved for a case in which it is not hypothetical. 

Australian Prime Minister Malcolm Turnbull is in DC, and one of the things he is talking about is the Trans-Pacific Partnership (TPP).  Addressing the U.S. Chamber of Commerce, he said this:

So, when I’m speaking to some of your legislators later today I‘ll be encouraging them to support the TPP. Not to lose sight of the wood for the trees, not to get lost in this detail or that detail or that compromise, because the big picture is: the rules-based international order, which America has underwritten for generations, which has underwritten the prosperity and the economic growth from which we have all benefitted, the TPP is a key element in that.

Along the same lines, this is from a conversation he had with President Obama:

… And can I say, as I’ve just said to the U.S. Chamber of Commerce, encouraging them to encourage their congressmen and senators to support it, that the TPP is much more than a trade deal.  The prosperity of the world, the security of the world has been founded on the peace and order in the Asia Pacific, which has been delivered underwritten by the United States and its allies, including Australia.  

And what we’ve been able to do there is deliver a period of peace, a long period of peace from which everybody has benefited.  And America’s case – its proposition – is more than simply security.  It is standing up for, as you said, the rules-based international order, an order where might is not right, where the law must prevail, where there is real transparency, where people can invest with confidence.  

And the TPP is lifting those standards.  And so it is much more than a trade deal.  And I think when people try to analyze it in terms of what it adds to this amount of GDP or that, that’s important.  But the critical thing is the way it promotes the continued integration of those economies, because that is as important an element in our security in the maintenance of the values which both our countries share as all of our other efforts – whether they are in defense or whether they are in traditional diplomacy.

There’s lots of vague talk here, with the specifics glossed over.  He says we should not “get lost in this detail or that detail,” but for me, the TPP is all about the details. As he notes, the TPP is “more than a trade deal.”  So what else is it?  In terms of its economic impact, that’s what we in Cato’s trade policy center are looking at right now, and we will offer our assessment in the coming months.

On the other hand, when you start hearing about “security,” and “peace,” and “order,” and how the TPP might contribute, I would be a little skeptical about what exactly the TPP can deliver here. That’s not to say it can offer nothing; but this kind of benefit is very hard to measure.

One of the most promising recent developments in education policy has been the widespread interest in education savings accounts (ESAs). Five states have already enacted ESA laws, and several states are considering ESA legislation this year. Whereas traditional school vouchers empower families to choose among numerous private schools, ESAs give parents the flexibility to customize their child’s education using a variety of educational expenditures, including private school tuition, tutoring, textbooks, online courses, educational therapies, and more.

Today the Cato Institute released a new report, “Taking Credit for Education: How to Fund Education Savings Accounts through Tax Credits.” The report, which I coauthored with Jonathan Butcher of the Goldwater Institute and Clint Bolick (then of Goldwater, now an Arizona Supreme Court justice), draws from the experiences of educational choice policies in three states and offers suggestions to policymakers for how to design a tax-credit-funded ESA. Tax-credit ESAs combine the best aspects of existing ESA policies with the best aspects of scholarship tax credit (STC) policies. Like other ESA policies, tax-credit ESAs empower families to customize their child’s education. And like STC policies, tax-credit ESAs rely on voluntary, private contributions for funding, making them more resistant to legal challenges and expanding liberty for donors.

Here’s how it would work: individuals and corporations would receive tax credits in return for donations to nonprofit scholarship organizations that would set up, fund, and oversee the education savings accounts. There’s already precedent for this sort of arrangement. In Florida, the very same nonprofit organizations that grant scholarships under the state’s STC law also administer the state’s publicly funded ESA. Moreover, New Hampshire’s STC law allows scholarship organizations to help homeschoolers cover a variety of educational expenses, similar to ESA policies in other states. 

For more details on how to design tax-credit ESAs, how they would work, and the constitutional issues involved, you can read the full report here. You can also find a summary of the report at Education Next.

An early trope about Bitcoin was that it was ‘non-political’ money. That’s a tantalizing notion, given the ugliness of politics. But a monetary system is a social system, technology is people, and open source software development requires intensive collaboration—particularly around a protocol with strong network effects. When the group is large enough and the subject matter important enough, human relations become politics. I think that is true even when it’s not governmental (read: coercive) power at stake.

Bitcoin’s politics burst into public consciousness last week with the “whiny ragequit” of developer Mike Hearn. In a Medium post published ahead of a New York Times article on his disillusionment and departure from the Bitcoin scene, Mike said Bitcoin has “failed,” and he discussed some of the reasons he thinks that.

Like most people responding to the news, I like Mike and I think he’s right to be frustrated. But he’s not right on the merits of Bitcoin, and his exit says more about one smart, impatient man than it does about this fascinating protocol.

But there is much to discover about how governance of a project like Bitcoin will proceed so that politics (in the derogatory sense) can be minimized. Stable governance will help Bitcoin compete with governmental monetary and record-keeping systems. Chaotic governance will retard it. We just need to figure out what “stable governance” is.

If you’re just tuning in, usage of Bitcoin has been steadily rising, to over 150,000 transactions per day. That is arguably putting pressure on the capacity of the network to process transactions. (And it undercuts thin, opportunistic arguments that Bitcoin is dead.)

Anticipating that growth, last May developer Gavin Andresen began pushing for an expansion of the network’s capacity through an increase in the size of “blocks,” or pages on the Bitcoin global public ledger. The current limit, 1 MB about every 10 minutes, supports about three transactions per second.
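That throughput figure can be sanity-checked with simple arithmetic. Here is a minimal sketch; the ~500-byte average transaction size is an illustrative assumption, not a figure from the post:

```python
# Rough throughput estimate for Bitcoin under a 1 MB block size limit.
# AVG_TX_SIZE_BYTES is an assumed average; actual transaction sizes vary.
BLOCK_SIZE_BYTES = 1_000_000      # 1 MB cap per block
AVG_TX_SIZE_BYTES = 500           # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # one block roughly every 10 minutes

tx_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
tx_per_second = tx_per_block / BLOCK_INTERVAL_SECONDS

print(tx_per_block)               # 2000 transactions per block
print(round(tx_per_second, 1))    # 3.3 transactions per second
```

With those assumptions, capacity works out to roughly three transactions per second, consistent with the figure above; the ~150,000 transactions per day the network was already processing amounts to about 1.7 per second, which is why the limit was starting to bind.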

The following month, Gavin also stepped down as Bitcoin’s lead developer to focus on broader issues. He handed the reins of “Bitcoin Core” to a group that—it later became clear—doesn’t share his vision. And over the summer and fall last year, the arguments in the blocksize debate grew stronger and more intense.

In August, Gavin and Mike introduced a competing version of the Bitcoin software called Bitcoin XT, which, among other things, would increase the blocksize to 8 MB. Their fork of the software included a built-in 75 percent super-majority vote for adoption, which made it fun to discuss as “A Bitcoin Constitutional Amendment.”
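The built-in vote worked by counting recent blocks that signaled support for the new rules: XT’s larger blocks would activate only once 750 of the last 1,000 mined blocks advertised the new version. A minimal sketch of that style of activation check follows; the function and variable names are illustrative, not taken from the actual client code:

```python
# Sketch of a supermajority activation rule in the style of Bitcoin XT:
# the new rules activate only when 750 of the last 1,000 blocks (75%)
# signal support by advertising the new block version number.
WINDOW = 1000       # number of recent blocks examined
THRESHOLD = 750     # 75 percent supermajority

def supermajority_reached(recent_block_versions, new_version):
    """Return True if enough recent blocks signal the new version."""
    window = recent_block_versions[-WINDOW:]
    signaling = sum(1 for v in window if v >= new_version)
    return signaling >= THRESHOLD

# Illustration: 800 of the last 1,000 blocks signal version 4.
versions = [4] * 800 + [3] * 200
print(supermajority_reached(versions, 4))  # True
```

The design choice here is the interesting part: rather than a flag-day switchover, miners effectively cast rolling ballots with every block they mine, and the rule change carries only if a durable supermajority emerges.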

This move catalyzed discussion, to be sure, but also deepened animosity in some quarters. Notably, the controller(s) of various fora for discussing Bitcoin on the web began censoring discussion of XT on the premise that this alternative was no longer Bitcoin. Nodes running XT were DDOSed (that is, attacked by floods of data coming from compromised computers), presumably by defenders of Core.

A pair of conferences entitled “Scaling Bitcoin” brought developers together to address the issues, and the conferences did a lot of good things, but they did not resolve the blocksize debate. The Bitcoin community is in full politics mode and the worst of politics are on display.

Well, actually, not the worst. Politics is at its worst when the winners can force all others to use their protocol or ban open discussion of competing ideas entirely.

Competing ideas. Competing software. To my mind, these seem to be the formative solution to Bitcoin’s current governance challenge. The relatively small Bitcoin community had fallen into the habit of using a small number of web sites to interact. Those sites betrayed the open ethos of the community, which prompted competing alternatives to spring up.

The community has likewise fallen into the habit of relying on a small number of developers–of necessity, in part, because Bitcoin coding talent is so rare. Now, though set back by the censorship and DDOS attacks, Bitcoin XT is joined by Bitcoin Unlimited and Bitcoin Classic as competitors to Bitcoin Core.

The developers of each version of the Bitcoin software must convince the community that their version is the best. That’s hard to do. And it’s supposed to be hard. Competition is great for everybody but the competitors.

The coin of the realm in these competitions–as in all debates–is credibility. Each software team must share the full sweep of their vision, and how their software advances the vision. They must convince the community of users that they have thought through the many technical threats to Bitcoin’s success.

I’ll confess that the Core team’s vision remains relatively opaque to me. I gather that they weight mining centralization as a greater concern than others do and thus resist the centralizing influence of a larger block size. As a technical layman, the best articulation for Core I’ve found is a response to Mike Hearn from BitFury’s Valery Vavilov. In it, one can at least see the reflection of the vision. Core’s recent statement and a December discussion of capacity increases don’t overcome the need for more sense of where they see Bitcoin going and why it’s good. I’m certain that they intend the best, and I’m pretty sure they feel that they’ve already explained their plans until they’re blue in the face. (Or, at least, blue in the hair…) But the community might benefit from more, and Peter R’s presentation in Montreal–though needlessly peppery at the end–is the clearest and thus most plausible explanation of blocksize economics I’ve found. (Much in this paragraph may be evidence of my ignorance.)

The reason Mike Hearn could ragequit is that he no longer wants a place in the Bitcoin community. He set a match to all his political capital. Everyone else in the Bitcoin community, and especially the developers, must do everything they can to build their political capital. They must explain the merits of their ideas and–in the fairest possible terms–the demerits of others. They should back up their ideas with supportive evidence, which–happily–an open technical system allows. And they should turn away “allies” who censor discussion forums or sponsor DDOS attacks. They should avoid impugning the motives of others, and, when they lose, lose gracefully.

All these behaviors cultivate credibility and the ability to persuade over the long haul. They offer the prospect of long-term success in the Bitcoin world and success for the Bitcoin ecosystem. Good behavior is good “politics,” which is something this non-political money needs.

As 2015 came to an end, so perhaps did a central tenet of resolving failed companies: the notion that “similarly situated” creditors ought to be treated equally, or, as the lawyers like to say, “pari passu” (Latin for “on the same footing”).*  The turning point was Portugal’s treatment of creditors of Novo Banco SA.

Until its failure in August of 2014, Banco Espirito Santo SA had been Portugal’s second largest bank.  When it failed, the Banco de Portugal, acting as receiver, divided the failed bank into  “good” and “bad” components, as the FDIC commonly does in the event of a large U.S. bank failure.  Banco Espirito Santo SA continued as the “bad bank,” which was to be liquidated in an orderly process.  The “good bank” became Novo Banco SA, which would stay in business.

In such “good bank-bad bank” resolutions, all equity holders usually remain with the bad bank, while more senior creditors are transferred to the good bank.  In any event all creditors of the same class are treated alike.  Creditors assigned to the good bank are much more likely to recover some part of their investment.

In the case of Novo Banco, the usual practice was at first followed.  All creditors within certain classes were transferred to it in August 2014.  Those who weren’t transferred took losses instead of taxpayers, which was also the generally correct approach (would that it had been our approach during the financial crisis!).  But last month, something odd happened: a small number of bonds were re-assigned to Banco Espirito Santo SA.  The holders of those bonds were likely to recover less than if they had remained with the good bank.  This was done to reduce leverage at Novo Banco SA.  One can read the listing of bonds and the justification here.  The problem is that other bonds of similar seniority remained with Novo Banco.  That meant that the pari passu principle was violated.  Some bondholders would recover considerably more than others, despite holding bonds having the same priority.

So far as I can tell, what Portugal did was perfectly legal (but I’m not a lawyer, keep that in mind).  And one could even justify it, if the alternative would have been to have the taxpayers take a hit.  Still, there are good reasons for regretting Portugal’s action.  The whole point of bankruptcy law and its administrative cousin, receivership, is to establish a chain of priority in the event of insolvency.  Basically, where you stand in line is predetermined.  You generally have the ability to contract as to where you stand in line, and generally your expected return reflects that risk (the farther back in line you are, the less likely you are to get paid).  Pari passu dictates that everyone who contracted for a particular spot in line is treated the same.  While pari passu seems to have arisen originally as contractual boilerplate, it has taken on something of the status of an implied contractual term.  If the recovery is insufficient, the proceeds are shared pro rata.  If I hold bond A and you hold bond A, we both get the same pay-off.  If I get 50 cents on the dollar, you get 50 cents on the dollar.  Both a decent respect for equality under the law and the rule of law demand as much.
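The pro-rata rule is easy to make concrete. A minimal sketch, with illustrative numbers not drawn from the Novo Banco case: if the estate can cover only half of what a class of equally ranked bondholders is owed, every holder in the class recovers the same 50 cents on the dollar.

```python
# Pro-rata (pari passu) distribution among same-priority creditors:
# every claim in the class recovers the same fraction of face value.
def distribute_pari_passu(claims, recoverable):
    """Split `recoverable` across `claims` in proportion to claim size."""
    total = sum(claims.values())
    fraction = min(1.0, recoverable / total)   # cannot pay more than 100%
    return {holder: amount * fraction for holder, amount in claims.items()}

# Illustration: $150 available against $300 of equally ranked bonds.
claims = {"bond_A": 100.0, "bond_B": 200.0}
payouts = distribute_pari_passu(claims, 150.0)
print(payouts)  # {'bond_A': 50.0, 'bond_B': 100.0} -- both recover 50 cents on the dollar
```

What Portugal did is the equivalent of moving bond_A out of this pool entirely, so that identically ranked claims no longer recover the same fraction.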

If pari passu no longer holds, the ability to estimate default recoveries is greatly reduced, increasing uncertainty in the debt market.  Particular groups of creditors are also more likely to become playthings of politics.  Witness the treatment of certain pension funds in the auto bankruptcies, which were harmed in order to benefit the auto unions.  Deviations from pari passu risk turning the resolution process into a political game, rather than a legal proceeding.

Unless you’re an investor in either Banco Espirito Santo SA or Novo Banco SA, why should you care about this?  You should care because thanks to Dodd-Frank’s Title II resolution process, the same thing is now a lot more likely to happen in the good-ol’ U. S. of A.  That’s because Dodd-Frank’s Title II resolution process explicitly allows for exceptions to pari passu.  Given how the recent financial crisis response played out, one could easily envision, under a Title II resolution, creditors in a Florida pension fund being treated differently than those in a California pension fund, especially in an election year.  One could also envision differing treatment depending upon whether the creditors were domestic or foreign, as was the case with Novo Banco SA.

Section 210 of Dodd-Frank is loosely modeled on Section 11 of the Federal Deposit Insurance Act (FDIA), which calls for strict adherence to the pari passu principle.  But while Dodd-Frank suggests that pari passu generally be followed, Section 210(b)(4) allows for various exceptions.  Pari passu may be set aside when the receiver determines that doing so serves, according to the language of the statute:

(i) to maximize the value of the assets of the covered financial company;

(ii) to initiate and continue operations essential to implementation of the receivership or any bridge financial company;

(iii) to maximize the present value return from the sale or other disposition of the assets of the covered financial company; or

(iv) to minimize the amount of any loss realized upon the sale or other disposition of the assets of the covered financial company.

Although a further clause states that these exceptions can be made only provided that “all claimants that are similarly situated under paragraph (1) receive not less than the amount provided in paragraphs (2) and (3) of subsection (d),” this clause merely requires that a creditor get at least what he would have gotten in a liquidation, allowing the receiver to disregard any going-concern value, including goodwill.  In practice, this is unlikely to be a constraint at all.

In short, I think it is fair to say that Dodd-Frank, far from enforcing pari passu, allows almost anything to happen, especially in a Chevron deference world.  In fact the protections for a receiver are tighter than in the Chevron case (see Section 210(e) of Dodd-Frank and its limit on judicial review).

As depositors have historically been the dominant, and sometimes the only creditors in bank resolutions, the discretion that Dodd-Frank allows may not matter much in such cases.  But Dodd-Frank’s application to non-banks raises a whole new set of disturbing possibilities for the extra-judicial treatment of creditors.

Congress, at the suggestion of the FDIC, included similar flexibility in the resolution procedures for Fannie Mae and Freddie Mac.  That whole process has, of course, gone swimmingly.


*Some additional legal background on pari passu, particularly in the case of sovereign defaults, is here. For a more skeptical legal take, read this.


Hillary Clinton and Sen. Bernie Sanders participate in a Democratic primary debate in Charleston, South Carolina, on Jan. 17, 2016.

In their final debate before they face Democratic primary voters, Hillary Clinton and Bernie Sanders traded sharp jabs on health care. Pundits focused on how the barbs would affect the horse race, whether Democrats should be bold and idealistic (Sanders) or shrewd and practical (Clinton), and how Sanders’ “Medicare for All” scheme would raise taxes by a cool $1.4 trillion. (Per. Year.) Almost no one noticed the obvious: the Clinton-Sanders spat shows that not even Democrats like the Affordable Care Act, and that the law remains very much in danger of repeal.

Hours before the debate, Sanders unveiled an ambitious plan to put all Americans in Medicare. According to his web site, “Creating a single, public insurance system will go a long way towards getting health care spending under control.” Funny, Medicare has had the exact opposite effect on health spending for seniors. But no matter. Sanders assures us, “The typical middle class family would save over $5,000 under this plan.” Remember how President Obama promised ObamaCare would reduce family premiums by $2,500? It’s like that, only twice as ridiculous.

Clinton portrayed herself as the protector of ObamaCare. She warned that Sanders would “tear [ObamaCare] up…pushing our country back into that kind of a contentious debate.” She proposed instead to “build on” the law by imposing limits on ObamaCare’s rising copayments, and by imposing price controls on prescription drugs. Sanders countered, “No one is tearing this up, we’re going to go forward,” and so on.

Such rhetoric obscured the fact that the candidates’ differences are purely tactical. Clinton doesn’t oppose Medicare for All. Indeed, her approach would probably reach that goal much sooner. Since ObamaCare literally punishes whatever insurers provide the highest-quality coverage, it forces health insurers into a race to the bottom, where they compete not to provide quality coverage to the sick.  That’s terrible if you or a family member has a high-cost, chronic health condition—or even just an ounce of humanity. But if you want to discredit “private” health insurance in the service of Medicare for All, it’s an absolute boon. After a decade of such misery, voters will beg President (Chelsea) Clinton for a federal takeover. But if President Sanders demands a $1.4 trillion tax hike without first making voters suffer under ObamaCare, he will over-play his hand and set back his cause.

The rhetoric obscured something much larger, too. Clinton and Sanders inadvertently revealed that not even Democrats like ObamaCare all that much, and Democrats know there’s a real chance the law may not be around in four years.

During the debate, Sanders repeatedly noted ObamaCare’s failings: “29 million people still have no health insurance. We are paying the highest prices in the world for prescription drugs, getting ripped off…even more are underinsured with huge copayments and deductibles…we are spending almost three times more than the British, who guarantee health care to all of their people…Fifty percent more than the French, more than the Canadians.”

Sure, he also boasted, repeatedly, that he helped write and voted for the ACA. Nonetheless, Sanders was indicting ObamaCare for failing to achieve universal coverage, contain prices, reduce barriers to care, or eliminate wasteful spending. At least one of the problems he lamented—“even more [people] are underinsured with huge copayments and deductibles”—ObamaCare has made worse. (See “race to the bottom” above, and here.)

When Sanders criticized the U.S. health care system, he was criticizing ObamaCare. His call for immediate adoption of Medicare for All shows that the Democratic party’s left wing is simply not that impressed with ObamaCare, which they have always (correctly) viewed as a giveaway to private insurers and drug companies.

Clinton’s proposals to outlaw some copayments and impose price controls on prescription drugs are likewise an implicit acknowledgement that ObamaCare has not made health care affordable. In addition, her attacks on Sanders reveal that she and many other Democrats know ObamaCare’s future remains in jeopardy.

Seriously, does anyone really think Clinton is worried that something might “push[] our country back into that kind of a contentious debate” over health care? America has been stuck in a nasty, tribal health care debate every day of the six years since Democrats passed ObamaCare despite public disapproval. Or that Republicans would be able to repeal ObamaCare over President Sanders’ veto?

Clinton knows that if the next president is a Republican, all the wonderful, magical powers that ObamaCare bestows upon the elites in Washington, D.C., might disappear.

If we elect a Republican, they’ll roll back all of the progress we’ve made on expanding health coverage. #DemDebate

— The Briefing (@TheBriefing2016) January 18, 2016

“I don’t want to see us start over again. I want us to defend and build on the Affordable Care Act and improve it.” —Hillary #DemDebate

— Hillary Clinton (@HillaryClinton) January 18, 2016

And she wants Democratic primary voters to believe she is the only Democrat who can win the White House. “The Republicans just voted last week to repeal the Affordable Care Act,” she warned, “and thank goodness, President Obama vetoed it.”

Clinton’s attacks on Sanders’ health care plan—her warning about “pushing our country back into that kind of a contentious debate”—are just a sly way of warning Democratic voters: Bernie can’t win. Nominate me and I will protect ObamaCare. Nominate him, and ObamaCare dies.

We can’t afford to undo @POTUS’ progress. Health care for millions of Americans is too important.

— Hillary Clinton (@HillaryClinton) January 18, 2016

Health care should be a right for every American. We should build on the progress we’ve made with the ACA—not go back to square one.

— Hillary Clinton (@HillaryClinton) January 14, 2016

Perhaps that prediction is correct. Perhaps it isn’t. But it’s plausible.

Either way, ObamaCare was the biggest loser in this Democratic presidential debate.

Ross Douthat and Reihan Salam, two of the smartest conservative thinkers today, have spilled much ink worrying over immigrant assimilation.  Salam is more pessimistic, choosing titles like “The Melting Pot is Broken” and “Republicans Need a New Approach to Immigration” (with the descriptive URL “Immigration-New-Culture-War”) while relying on a handful of academic papers for support.  Douthat presents a more nuanced, Burkean think-piece reacting to assimilation’s supposed decline, relying mostly on Salam for evidence. 

Their worries fly in the face of recent evidence that immigrant assimilation is proceeding quickly in the United States.  Timely, expert quantitative research showing that immigrants are still assimilating has never been more abundant.

The first piece of research is the National Academy of Sciences’ (NAS) September 2015 book titled The Integration of Immigrants into American Society.  At 520 pages, it’s a thorough, brilliant summation of the relevant academic literature on immigrant assimilation that ties the different strands of research into a coherent story.  Bottom line:  Assimilation is never perfect and always takes time, but it’s going very well. 

One portion of the NAS book finds that much assimilation occurs through a process called ethnic attrition, which is caused by immigrant inter-marriage with natives either of the same or different ethnic groups.  Assimilation also quickens when second- or third-generation Americans marry those from other, longer-settled ethnic or racial groups.  The children of these intermarriages are much less likely to identify ethnically with their more recent immigrant ancestors and, due to spousal self-selection, tend to be more economically and educationally integrated as well.  Ethnic attrition is one reason why the much-hyped decline of the white majority is greatly exaggerated.

In an earlier piece, Salam focuses on ethnic attrition but exaggerates the degree to which it has declined by confusing stocks of ethnics in the United States with the flow of new immigrants.  He also emphasizes the decrease in immigrant inter-marriage caused by the 1990-2000 influx of Hispanic and Asian immigrants.  That decrease is less dire than he reports.  According to another 2007 paper, 32 percent of Mexican-American men married outside of their race or ethnicity while 33 percent of women did (I write about this in more detail here).  That’s close to the 1990 rate of intermarriage reported for all Hispanics in the study Salam favored.  The “problem” disappeared.  

The second piece of research is a July 2015 book entitled Indicators of Immigrant Integration 2015 that analyzes immigrant and second-generation integration on 27 measurable indicators across the OECD and EU countries.  This report finds more problems with immigrant assimilation in Europe, especially for immigrants from outside of the EU, but the findings for the United States are quite positive.

The third work, by University of Washington economist Jacob Vigdor, offers a historical perspective.  He compares modern immigrant civic and cultural assimilation to that of immigrants from the early 20th century (an earlier draft of his book chapter is here; the published version is available in this collection).  For those of us who think early 20th century immigrants from Italy, Russia, Poland, Eastern Europe, and elsewhere assimilated successfully, Vigdor’s conclusion is reassuring:

“While there are reasons to think of contemporary migration from Spanish-speaking nations as distinct from earlier waves of immigration, evidence does not support the notion that this wave of migration poses a true threat to the institutions that withstood those earlier waves.  Basic indicators of assimilation, from naturalization to English ability, are if anything stronger now than they were a century ago [emphasis added].”

American identity in the United States (similar to Australia, Canada, and New Zealand) is not based on nationality or race nearly as much as it is in the old nation states of Europe, likely explaining some of the better assimilation and integration outcomes here.       

Besides ignoring the huge and positive new research on immigrant assimilation, there are a few other issues with Douthat’s piece.

Douthat switches back and forth between Europe and the United States when discussing assimilation, giving the impression that the two face similar challenges.  They do not: assimilation is a vitally important outcome for immigrants and their descendants, but Europe and the United States have had vastly different experiences with it.  Treating the two as similar adds confusion, not clarity, and cherry-picking outcomes from Europe to support skepticism about assimilation in the United States misleads. 

Douthat also argues that immigrant cultural differences can persist just like the various regional cultures have done so in the United States.  That idea, used most memorably in David Hackett Fischer’s Albion’s Seed, is called the Doctrine of First Effective Settlement (DFES).  Under that theory, the creation and persistence of regional cultural differences requires the near-total displacement of the local population by a foreign one, as happened in the early settlement of the United States. 

However, DFES actually gives reasons to be optimistic about immigrant assimilation; Douthat misses a few crucial details when he briefly mentions it.  First, as Fischer and others have noted, waves of immigrants have continuously assimilated into the settled regional American cultures since the initial settlement – that is the point of DFES.  The first effective settlements set the regional cultures going forward, and new immigrants assimilate into those cultures. 

Second, DFES predicts that today’s immigrants will assimilate into America’s regional cultures (unless almost all Americans quickly die and are replaced by immigrants).  The American regional cultures that immigrants are settling into are already set so they won’t be able to create persistent new regional cultures here.  America’s history with DFES is not a reason to worry about immigrant assimilation today and should supply comfort to those worried about it.

Immigrants and their children are assimilating well into American society.  We shouldn’t let assimilation issues in Europe overwhelm the vast empirical evidence that it’s proceeding as it always has in the United States.

Just when you thought the Syrian civil war couldn’t get any messier, developments last week proved that it could.  For the first time in the armed conflict that has raged for nearly five years, militia fighters from the Assyrian Christian community in northern Iraq clashed with Kurdish troops. What made that incident especially puzzling is that both the Assyrians and the Kurds are vehement adversaries of ISIS—which is also a major player in that region of Syria.  Logically, they should be allies who cooperate regarding military moves against the terrorist organization.

But in Syria, very little is simple or straightforward.   Unfortunately, that is a point completely lost on the Western (especially American) news media.  From the beginning, Western journalists have portrayed the Syrian conflict as a simplistic melodrama, with dictator Bashar al-Assad playing the role of designated villain and the insurgents playing the role of plucky proponents of liberty.  Even a cursory examination of the situation should have discredited that narrative, but it continues largely intact to this day.

There are several layers to the Syrian conflict.  One involves an effort by the United States and its allies to weaken Assad as a way to undermine Iran by depriving Tehran of its most significant regional ally.  Another layer is a bitter Sunni-Shiite contest for regional dominance, of which Syria is just one theater.  We see other manifestations in Bahrain, where Iran backs a seething majority Shiite population against a repressive Sunni royal family that is kept in power largely by Saudi Arabia’s military support.  Saudi Arabia and other Gulf powers backed Sunni tribes in western Iraq against the Shiite-dominated government in Baghdad; some of those groups later coalesced to become ISIS.  In Yemen, Saudi Arabia and Riyadh’s smaller Sunni Gulf allies have intervened militarily to prevent a victory by the Iranian-backed Houthis.

The war in Syria is yet another theater in that regional power struggle.  It is no accident that the Syrian insurgency is overwhelmingly Sunni in composition and receives strong backing from major Sunni powers, including Saudi Arabia, Qatar, and Turkey.  Assad leads an opposing “coalition of religious minorities,” which includes his Alawite base (a Shiite offshoot), various Christian sects, and the Druze.  But there is an added element of complexity.  The Kurds form yet a third faction, seeking to create a self-governing (quasi-independent) region in northern and northeastern Syria inhabited by their ethnic brethren.  In other words, Syrian Kurds are trying to emulate what Iraqi Kurds have enjoyed for many years in Iraqi Kurdistan, where Baghdad’s authority is little more than a legal fiction.  That explains the clash between Assyrian Christians and Kurds.  Both hate ISIS, but the former supports an intact Syria (presumably with Assad or someone else acceptable to the coalition in charge), while the latter does not.

Such incidents underscore just how complex the Syrian struggle is and how vulnerable to manipulation well-meaning U.S. mediation efforts might become.  Our news media need to do a far better job of conveying what is actually taking place in that part of the world, not what wannabe American nation builders wish were the case.

Surprise! Venezuela, the world’s most miserable country (according to my misery index) has just released an annualized inflation estimate for the quarter that ended September 2015. This is late on two counts. First, it has been nine months since the last estimate was released. Second, September 2015 is not January 2016. So, the newly released inflation estimate of 141.5% is out of date.

I estimate that the current implied annual inflation rate in Venezuela is 392%. That’s almost three times the latest official estimate.

Venezuela’s notoriously incompetent central bank is producing lying statistics – just like the ones the Soviets used to fabricate. In the Soviet days, we approximated reality by developing lie coefficients. We would apply these coefficients to the official data in an attempt to reach reality. The formula is: (official data) X (lie coefficient) = reality estimate. At present, the lie coefficient for the Central Bank of Venezuela’s official inflation estimate is 3.0.
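As a back-of-the-envelope check, the lie-coefficient arithmetic can be written out. The 141.5% official figure and the 392% implied rate come from this post; the 3.0 coefficient is simply their ratio, rounded up.

```python
# Lie coefficient: (official data) x (lie coefficient) = reality estimate,
# so the coefficient is the implied rate divided by the official rate.
official_inflation = 141.5  # central bank's annualized estimate, percent
implied_inflation = 392.0   # implied annual rate from this post, percent

lie_coefficient = implied_inflation / official_inflation
print(round(lie_coefficient, 2))  # -> 2.77, i.e. roughly 3.0
```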

Some constitutional conservatives, including Texas Gov. Greg Abbott and Rob Natelson for the American Legislative Exchange Council, have been promoting the idea of getting two-thirds of the states to call for an Article V convention to propose amendments to the U.S. Constitution. Florida senator and presidential candidate Marco Rubio recently made headlines by endorsing the notion. But I fear that it’s not a sound one under present conditions, as I argue in a new piece this week (originally published at The Daily Beast, now reprinted at Cato).  It begins:

In his quest to catch the Road Runner, the Coyote in the old Warner Brothers cartoons would always order supplies from the ACME Corporation, but they never performed as advertised. Either they didn’t work at all, or they blew up in his face.

Which brings us to the idea of a so-called Article V convention assembled for the purpose of proposing amendments to the U.S. Constitution, an idea currently enjoying some vogue at both ends of the political spectrum.

Jacob Sullum at Reason offers a quick tour of some of the better and worse planks in Gov. Abbott’s “Texas Plan” (as distinct from the question of whether a convention is the best way of pursuing them).  In using the phrase “Texas Plan,”  Gov. Abbott recognizes that in a convention scenario where any and all ideas for amendments are on the table, other states would be countering with their own plans; one can readily imagine a “California Plan” prescribing limits on campaign speech and affirmative constitutional rights to health and education, a “New Jersey Plan” to narrow the Second Amendment and broaden the General Welfare clause, and so forth. Much more on the convention idea in this Congressional Research Service report from 2014 (post adapted and expanded from Overlawyered).

Cato has published often in the past on the difficulties with and inefficiencies of the constitutional amendment process, including Tim Lynch’s 2011 call for amending the amendment process itself and Michael Rappaport’s Policy Analysis No. 691 in 2012 with proposals of similar intent. This past December’s Cato Unbound discussion led by Prof. Sanford Levinson included a response essay by Richard Albert describing the founding document as “constructively unamendable” at present, although as a consequence of current political conditions and “not [as] a permanent feature of the Constitution.” And to be fair I should note also that Ilya Shapiro had a 2011 post in this space with a perspective (or at least a choice of emphasis) different from mine.

I’m not known for my clairvoyance – it would be impossible to make a living predicting what the Supreme Court will do – but as the latest round of birtherism continues into successive news cycles, I do have an odd sense of “deja vu all over again.” Two and a half years ago, I looked into Ted Cruz’s presidential eligibility and rather easily came to the conclusion that, to paraphrase a recent campaign slogan, “yes, he can.” Here’s the legal analysis in a nutshell:

In other words, anyone who is a citizen at birth — as opposed to someone who becomes a citizen later (“naturalizes”) or who isn’t a citizen at all — can be president.

So the one remaining question is whether Ted Cruz was a citizen at birth. That’s an easy one. The Nationality Act of 1940 outlines which children become “nationals and citizens of the United States at birth.” In addition to those who are born in the United States or born outside the country to parents who were both citizens — or, interestingly, found in the United States without parents and no proof of birth elsewhere — citizenship goes to babies born to one American parent who has spent a certain number of years here.

That single-parent requirement has been amended several times, but under the law in effect between 1952 and 1986 — Cruz was born in 1970 — someone must have a citizen parent who resided in the United States for at least 10 years, including five after the age of 14, in order to be considered a natural-born citizen. Cruz’s mother, Eleanor Darragh, was born in Delaware, lived most of her life in the United States, and gave birth to little Rafael Edward Cruz in her 30s. Q.E.D.

We all know that this wouldn’t even be a story if it weren’t being pushed by the current Republican frontrunner (though Cruz is beating Trump in the latest Iowa polls). Nevertheless, here we are. 

For more analysis and a comprehensive set of links regarding this debate, see Jonathan Adler’s excellent coverage at the Volokh Conspiracy.

Of course we’re referring to Hurricane Alex here, which blew up in far eastern Atlantic waters thought to be way too cold to spin up such a storm.  Textbook meteorology says hurricanes, which feed off the heat of the ocean, won’t form over waters cooler than about  80°F.  On the morning of January 14, Alex exploded over waters that were a chilly 68°.

Alex is (at least) the third hurricane observed in January, joining others in 1938 and 1955.  The latter one, Hurricane Alice, was actually alive on New Year’s Day.

The generation of Alex was very complex.  First, a garden-variety low pressure system formed over the Bahamas late last week and slowly drifted eastward.  It was derived from the complicated, but well-understood processes associated with the jet stream and a cold front, and that certainly had nothing to do with global warming.

The further south cold fronts go into the tropical Atlantic, the more likely they are to simply dissipate, and that’s what happened last week, too.  Normally the associated low-pressure system would also wash away.  But after it initially formed near the Bahamas and drifted eastward, it sat in a region where sea-surface temperatures (SSTs) are running about 3°F above the long-term average, consistent with a warmer world.  This may have been just enough to fuel the persistent remnant cluster of thunderstorms that meandered in the direction of Spain.

Over time, the National Hurricane Center named this collection “Alex” as a “subtropical” cyclone, which is what we call a tropical low pressure system that doesn’t have the characteristic warm core of a hurricane.

(Trivia note:  the vast majority of cyclones in temperate latitudes have a cold core at their center.  Hurricanes have a warm core.  There was once a move to call the subtropical hybrids “himicanes” (we vote for that!), then “neutercanes” (not bad, either) but the community simply adopted the name “subtropical.”)

In the early hours of January 14, thanks to a cold low pressure system propagating through the upper atmosphere, temperatures above the storm plummeted to a rather astounding -76°F.  So even though the SSTs were a mere 68°F, far too cold to promote a hurricane, the difference between the surface and high altitudes was a phenomenal 144°F – so large that a hurricane could form.

Vertical motion, which is what causes the big storm clouds that form the core of a hurricane, is greatest when the change in temperature between the surface and the upper atmosphere is largest, and that 144°F differential exploded the storms that were in subtropical Alex, quickly creating a warm core and a hurricane eyewall. 
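For concreteness, the differential cited here is just the spread between the 68°F sea surface and the -76°F air aloft, both figures from this post:

```python
# Surface-to-upper-atmosphere temperature spread that energized Alex.
sea_surface_temp_f = 68       # SST beneath the storm, degrees F
upper_atmosphere_temp_f = -76 # temperature above the storm, degrees F

differential = sea_surface_temp_f - upper_atmosphere_temp_f
print(differential)  # -> 144
```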

A far-south invasion of such cold air over the Atlantic subtropics is less likely in a warmer world, as the pole-to-equator temperature contrast lessens.  Everything else being equal, that would tend to confine such an event to higher latitudes.

So, yes, warmer surface temperatures may have kept the progenitor storms of Alex alive, but warmer temperatures would have made the necessary outbreak of extremely cold air over the storm less likely.

Consequently, it’s really not right to blame global warming for Hurricane Alex, though it may have contributed to subtropical storm Alex.

On December 1, 2015, the Bank of England released the results of its second round of annual stress tests, which aim to measure the capital adequacy of the UK banking system. This exercise is intended to function as a financial health check for the major UK banks, and purports to test their ability to withstand a severe adverse shock and still come out in good financial shape.

The stress tests were billed as severe. Here are some of the headlines:

“Bank of England stress tests to include feared global crash” “Bank of England puts global recession at heart of doomsday scenario” “Banks brace for new doomsday tests”

This all sounds pretty scary. Yet the stress tests appeared to produce a comforting result: despite one or two small problems, the UK banking system as a whole came out of the process rather well. As the next batch of headlines put it:

“UK banks pass stress tests as Britain’s ‘post-crisis period’ ends” “Bank shares rise after Bank of England stress tests” “Bank of England’s Carney says UK banks’ job almost done on capital”

At the press conference announcing the stress test results, Bank of England Governor Mark Carney struck an even more reassuring note:

The key point to take is that this [UK banking] system has built capital steadily since the crisis. It’s within sight of [its] resting point, of what the judgement of the FPC is, how much capital the system needs. And that resting point — we’re on a transition path to 2019, and we would really like to underscore the point that a lot has been done, this is a resilient system, you see it through the stress tests.[1] [italics added]

But is this really the case? Let’s consider the Bank’s headline stress test results for the seven financial institutions involved: Barclays, HSBC, Lloyds, the Nationwide Building Society, the Royal Bank of Scotland, Santander UK and Standard Chartered.

In this test, the Bank sets its minimum pass standard equal to 4.5%: a bank passes the test if its capital ratio as measured by the CET1 ratio — the ratio of Common Equity Tier 1 capital to Risk-Weighted Assets (RWAs) — is at least 4.5% after the stress scenario is accounted for; it fails the test otherwise.
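The pass rule just described can be sketched in a few lines; the balance-sheet figures below are hypothetical, not drawn from the Bank’s results.

```python
# Sketch of the Bank's first pass/fail rule: a bank passes if its post-stress
# CET1 ratio (CET1 capital / risk-weighted assets) is at least 4.5%.
PASS_STANDARD = 4.5  # percent

def cet1_ratio(cet1_capital, risk_weighted_assets):
    """Post-stress CET1 ratio, in percent."""
    return 100.0 * cet1_capital / risk_weighted_assets

def passes_stress_test(cet1_capital, risk_weighted_assets):
    return cet1_ratio(cet1_capital, risk_weighted_assets) >= PASS_STANDARD

# Hypothetical bank: 13bn of post-stress CET1 capital against 250bn of RWAs.
print(passes_stress_test(13.0, 250.0))  # 5.2% ratio -> True
```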

The outcomes are shown in Chart 1:

Chart 1: Stress Test Outcomes for the CET1 Ratio with a 4.5% Pass Standard

Note: The data are obtained from Annex 1 of the Bank’s stress test report (Bank of England, December 2015).

Based solely on this test, the UK banking system might indeed look to be in reasonable shape. Every bank passes the test, although one (Standard Chartered) does so by a slim margin of under 100 basis points and another (RBS) does not perform much better. Nonetheless, according to this test, the UK banking system looks broadly healthy overall.

Unfortunately, that is not the whole story.

One concern is that the RWA measure used by the Bank is essentially nonsense — as its own (now) chief economist demonstrated a few years back. So it is important to consider the second set of stress tests reported by the Bank, which are based on the leverage ratio. This is defined by the Bank as the ratio of Tier 1 capital to leverage exposure, where the leverage exposure attempts to measure the total amount at risk. We can think of this measure as similar to total assets.

In this test, the pass standard is set at 3% — the bare minimum leverage ratio under Basel III.

The outcomes for this stress test are given in the next chart:

Chart 2: Stress Test Outcomes Using the Tier 1 Leverage Ratio with a 3% Pass Standard

Based on this test, the UK banking system does not look so healthy after all. The average post-stress leverage ratio across the banks is 3.5%, making for an average surplus of 0.5%. The best performing institution (Nationwide) has a surplus (that is, the outcome minus the pass standard) of only 1.1%, while four banks (Barclays, HSBC, Lloyds and Santander) have surpluses of less than one hundred basis points, and the remaining two don’t have any surpluses at all — their post-stress leverage ratios are exactly 3%.
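The surplus arithmetic here is simple: surplus equals the post-stress leverage ratio minus the 3% pass standard. The ratios below are illustrative values chosen to match the surpluses reported above (Nationwide’s 1.1%, two banks at exactly 3%, a 3.5% average), not figures taken from the Bank’s annex.

```python
# Surplus = post-stress leverage ratio minus the 3% pass standard.
# Ratios are illustrative, reverse-engineered from the surpluses in the text.
PASS_STANDARD = 3.0  # percent, the bare Basel III minimum leverage ratio

post_stress_ratios = {
    "Nationwide": 4.1,          # surplus of 1.1%
    "Barclays": 3.6,
    "HSBC": 3.7,
    "Lloyds": 3.5,
    "Santander": 3.6,
    "RBS": 3.0,                 # no surplus at all
    "Standard Chartered": 3.0,  # no surplus at all
}

surpluses = {bank: round(ratio - PASS_STANDARD, 1)
             for bank, ratio in post_stress_ratios.items()}
print(surpluses["Nationwide"])  # -> 1.1

average_ratio = sum(post_stress_ratios.values()) / len(post_stress_ratios)
print(round(average_ratio, 1))  # -> 3.5, an average surplus of just 0.5%
```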

To make matters worse, this stress test also used a soft measure of core capital — Tier 1 capital — which includes various soft capital instruments (known as additional Tier 1 capital) that are of questionable usefulness to a bank in a crisis.

The stress test would have been more convincing had the Bank used a harder capital measure. And, in fact, the ideal such measure would have been the CET1 capital measure it used in the first stress test. So what happens if we repeat the Bank’s leverage stress test but with CET1 instead of Tier 1 in the numerator of the leverage ratio?

Chart 3: Stress Test Outcomes Using the CET1 Leverage Ratio with a 3% Pass Standard

In this test, one bank fails, four have wafer-thin surpluses and only two banks are more than insignificantly over the pass standard.

Moreover, this 3% pass standard is itself very low. A bank with a 3% leverage ratio will still be rendered insolvent if it makes a loss of 3% of its assets.

The 3% minimum is also well below the potential minimum that will be applied in the UK when Basel III is fully implemented — about 4.2% by my calculations — let alone the 6% minimum leverage ratio that the Federal Reserve is due to impose in 2018 on the federally insured subsidiaries of the eight globally systemically important banks in the United States.

Here is what we would get if the Bank of England had carried out the leverage stress test using both the CET1 capital measure and the Fed’s forthcoming minimum standard of 6%:

Chart 4: Stress Test Outcomes for the CET1 Leverage Ratio with a 6% Pass Standard

Oh my! Every bank now fails and the average deficit is nearly 3 percentage points.

Nevertheless, I leave the last word to Governor Carney: “a lot has been done, this is a resilient system, you see it through the stress tests.”


[1] Bank of England Financial Stability Report Q&A, 1st December 2015, p. 11.

[Cross-posted from]

As part of his 2017 budget proposal, Secretary of Transportation Anthony Foxx proposes to spend $4 billion on self-driving vehicle technology. This proposal comes late to the game, as private companies and university researchers have already developed that technology without government help. Moreover, the technology Foxx proposes is both unnecessary and an intrusion on people’s privacy.

In 2009, President Obama said he wanted to be remembered for promoting a new transportation network the way President Eisenhower was remembered for the Interstate Highway System. Unfortunately, Obama chose high-speed rail, a 50-year-old technology that has only been successful in places where most travel was by low-speed trains. In contrast with interstate highways, which cost taxpayers nothing (because they were paid for out of gas taxes and other user fees) and carry 20 percent of all passenger and freight travel in the country, high-speed rail would have cost taxpayers close to a trillion dollars and carry no more than 1 percent of passengers and virtually no freight.

The Obama administration has also promoted a 120-year-old technology, streetcars, as some sort of panacea for urban transportation. When first developed in the 1880s, streetcars averaged 8 miles per hour. Between 1910 and 1966, all but six American cities replaced streetcars with buses that were faster, cost half as much to operate, and cost almost nothing to start up on new routes. Streetcars funded by the Obama administration average 7.3 miles an hour (see p. 40), cost twice as much to operate as buses, and typically cost $50 million per mile to start up.

The point is that this administration, if not government in general, has been very poor at choosing transportation technologies for the twenty-first century. While I’ve been a proponent of self-driving cars since 2010, I believe the administration is making as big a mistake with its latest $4 billion proposal as it made with high-speed rail and streetcars.

The problem is that the technology the government wants is very different from the technology being developed by Google, Volkswagen, Ford, and other companies. The cars designed by these private companies rely on GPS, on-board sensors, and extremely precise maps of existing roadways and other infrastructure. A company called HERE, which was started by Nokia but recently purchased by BMW, Daimler, and Volkswagen, has already mapped about two-thirds of the paved roads in the United States and makes millions of updates to its maps every day.

Foxx proposes to spend most of the $4 billion on a very different technology called “connected vehicle” or vehicle-to-infrastructure communications. In this system, the government would have to install new electronic infrastructure in all streets and highways to help guide self-driving cars. But states and cities today can’t fill potholes or keep traffic lights coordinated, so they are unlikely to be able to install an entirely new infrastructure system in any reasonable amount of time.

Moreover, the fixed infrastructure used for connected corridors will quickly become obsolete. Your self-driving car will be able to download software upgrades while sitting in your garage overnight–Teslas already do so. However, upgrading the hardware for a connected vehicle system could take years and might never happen due to the expense of converting from one technology to another. Thus, Foxx’s plan would lock us into a system that will be obsolete long before it is fully implemented.

Privacy advocates should also worry that connected roads would also connect cars to government command centers. The government will be able to monitor everyone’s travel and even, if you drive more than some planner thinks is the appropriate amount, remotely turn your car off to “save the planet.” Of course, Foxx will deny that this is his goal. Yet the Washington legislature has passed a law mandating a 50 percent reduction in per capita driving by 2050, and California and Oregon have similar if not quite-so-draconian rules, and it is easy to imagine that the states, if not the feds, will take advantage of Foxx’s technology to enforce their targets. No such monitoring or control is possible in the Google-like self-driving cars.

Foxx’s infrastructure is entirely unnecessary for self-driving cars, as Google, Audi, Delphi, and other companies have all proven that their cars can work without it. Not to worry: Foxx also promises that his department will write national rules that all self-driving cars must follow. No doubt these rules will mandate that the cars work on connected streets, whether they need to or not.

Some press reports suggest that Foxx’s plan will make Google happy, but it is more likely to disappoint. Google is already disappointed with self-driving car rules written by the California Department of Motor Vehicles. But what are the chances that federal rules will be any better–especially if the federal government is dead-set on its own technology that is very different from Google’s? At least if the states come up with 50 different sets of rules, some of them are likely to be better than others, and the rest can follow the best examples.

If Congress approves Foxx’s program, the best we can hope for is that Google and other private companies are able to ignore the new technology. The worst case is that the department’s new rules not only mandate that cars be able to use connected streets, but that they work in self-driving mode only on roads that have connected-streets technology. In that case, the benefits of self-driving cars will be delayed for the decades it takes to install that technology–and may never happen at all if people won’t pay the extra cost for cars that can drive themselves only on a few selected roads and streets.

All government needs to do for the next transportation revolution to happen is keep the potholes filled, the stripes painted, and otherwise get out of the road. In contrast, Foxx’s plan is a costly way of doing more harm than good.

Americans often move between different income brackets over the course of their lives. As covered in an earlier blog post, over 50 percent of Americans find themselves among the top 10 percent of income-earners for at least one year during their working lives, and over 11 percent of Americans will be counted among the top 1 percent of income-earners for at least one year.   

Fortunately, a great deal of what explains this income mobility is choices that are largely within an individual’s control. While people tend to earn more in their “prime earning years” than in their youth or old age, other key factors that explain income differences are education level, marital status, and number of earners per household.  As Advisory Board member Mark Perry recently wrote: 

The good news is that the key demographic factors that explain differences in household income are not fixed over our lifetimes and are largely under our control (e.g. staying in school and graduating, getting and staying married, etc.), which means that individuals and households are not destined to remain in a single income quintile forever.  

According to the U.S. economist Thomas Sowell, whom Perry cites, “Most working Americans, who were initially in the bottom 20% of income-earners, rise out of that bottom 20%. More of them end up in the top 20% than remain in the bottom 20%.”  

While people move between income groups over their lifetimes, many worry that income inequality between those groups is increasing. Growing income inequality is real, but its causes are more complex than demagogues make them out to be.

Consider, for example, the effect of “power couples,” or people with high levels of education marrying one another and forming dual-earner households. In a free society, people can marry whoever they want, even if it does contribute to widening income disparities. 

Or consider the effects of regressive government regulations on exacerbating income inequality. These include barriers to entry that protect incumbent businesses and stifle competition. To name one extreme example, Louisiana recently required a government-issued license to become a florist. Lifting more of these regressive regulations would aid income mobility and help to reduce income inequality, while also furthering economic growth. 

Chaos and conflict have become constants in the Middle East. Frustrated U.S. policymakers tend to blame ancient history. As President Barack Obama said in his State of the Union address, the region’s ongoing transformation is “rooted in conflicts that date back millennia.”

Of course, war is a constant of human history. But while today’s most important religious divisions go back thousands of years, bitter sectarian conflict does not. The Christian Crusades and Muslim conquests into Europe ended long ago.

All was not always calm within the region, of course. Sectarian antagonism existed. Yet religious divisions rarely caused the sort of hateful slaughter we see today.

Tolerance lived on even under political tyranny. The Baath Party, which long ruled Iraq and still rules Syria, was founded by a Christian. Christians played a leading role in the Palestinian movement.

The fundamental problem today is politics. Religion has become a means to forge political identities and rally political support.

As I point out in Time: “Blame is widely shared. Artificial line-drawing by the victorious allies after World War I, notably the Sykes-Picot agreement, created artificial nation states for the benefit of Europeans, not Arabs. Dynasties were created with barely a nod to the desires of subject peoples.”

Lebanon’s government was created as a confessional system, which exacerbated political reliance on religion. The British/American-backed overthrow of Iran’s democratic government in 1953 empowered the Shah, an authoritarian, secular-minded modernizer. His rule was overturned by the Islamic Revolution.

This seminal event greatly sharpened the sectarian divide, which worsened through the Iran-Iraq war and after America’s invasion of Iraq. Out of the latter emerged the Islamic State. Syria’s collapse into civil war has provided another political opportunity for radical movements.

Nothing about the history of the Middle East makes conflict inevitable. To reverse the process both Shiites and Sunnis must reject the attempt of extremists to misuse their faith for political advantage. And Western nations, especially the United States, must stay out of Middle East conflicts.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

We realize that we are 180° out of sync with the news cycle when we discuss heat-related death in the middle of Northern Hemisphere winter, but we’ve come across a recent paper that can’t wait for the heat and hype of next summer.

The paper, by Arizona State University’s David Hondula and colleagues, is a review of the recent scientific literature on “human health impacts of observed and projected increases in summer temperature.”

This topic is near and dear to our hearts, as we have ourselves contributed many papers to the scientific literature on this matter (see here). We are especially interested in seeing how the literature has evolved over the past several years, and the paper by Hondula and colleagues, which specifically looked at findings published from 2012 to 2015, serves that interest nicely.

Here’s how they summed up their analysis:

We find that studies based on projected changes in climate indicate substantial increases in heat-related mortality and morbidity in the future, while observational studies based on historical climate and health records show a decrease in negative impacts during recent warming. The discrepancy between the two groups of studies generally involves how well and how quickly humans can adapt to changes in climate via physiological, behavioral, infrastructural, and/or technological adaptation, and how such adaptation is quantified.

Did you get that? When assessing what actually happens to heat-related mortality rates in the face of rising temperatures, researchers find that “negative impacts” decline. But, when researchers attempt to project the impacts of rising temperature in the future on heat-related mortality, they predict “substantial increases.”

In other words, in the real world, people adapt to changing climate conditions (e.g., rising temperatures), but in the modeled world of the future, adaptation can’t keep up. 
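A toy calculation of our own, with entirely made-up numbers, illustrates why the two kinds of studies diverge. Projection-style studies in effect hold the historical temperature-mortality relationship fixed, so warming mechanically drives deaths upward; in the observed world, the threshold for “dangerous” heat itself rises as people adapt, so deaths can fall even as summers warm. Every value below (warming rate, threshold, adaptation rate, the 100-deaths-per-degree scaling) is a hypothetical assumption, not anything from Hondula et al.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2050)
# hypothetical summer temperatures: modest warming trend plus year-to-year noise
temps = 30 + 0.04 * (years - 2000) + rng.normal(0, 0.5, years.size)

def deaths(temps, thresholds):
    # simple excess-mortality model: deaths scale with degrees above a "dangerous heat" threshold
    return np.maximum(temps - thresholds, 0) * 100

# "model world": the threshold is frozen at its year-2000 value
fixed = deaths(temps, 31.0)

# "real world": the threshold creeps upward as populations adapt (AC, warnings, behavior)
adaptive = deaths(temps, 31.0 + 0.05 * (years - 2000))

print(fixed[-10:].mean(), adaptive[-10:].mean())
```

Under these assumptions, the fixed-relationship series climbs steadily while the adaptive series stays near zero, even though both face the same warming; the entire difference is whether adaptation is allowed to keep pace.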

But rather than treat this as a problem with model-world behavior that needs serious attention, most assessments of the projected impacts of climate change (such as the one produced by our federal government as a foundation for its greenhouse gas mitigation policies) embrace the model-world forecasts and run with storylines like “global warming set to greatly increase deaths from heat waves.”

We’ve been railing against this practice for years. But it never seems to gain any traction with federal climatologists.

Interestingly, in all the literature surveyed by Hondula’s group, they cite only one study suggesting that climate change itself may be aiding and abetting the adaptive processes. The idea advanced in that study was that since people adapt to heat waves, and since global warming may be partly responsible for more heat waves, global warming itself may be helping to drive the adaptive response. Rather than leading to more heat-related deaths, global warming may actually be leading to fewer.

Who were the authors of that study? Perhaps they are familiar to you: Chip Knappenberger, Pat Michaels, and Anthony Watts.

While Hondula and colleagues seem to be amenable to our premise, they point out that putting an actual magnitude on this effect is difficult:

If changing climate is itself a modifier of the relationship between temperature and mortality (e.g., increasing heat wave frequency or severity leads to increasing public awareness and preventative measures), a quantitative approach for disentangling these effects has yet to be established.

We concur with this, but, as we point out in our paper using the history of heat-related mortality in Stockholm as an example, it doesn’t take much of a positive influence from climate change to offset any negatives:

[R]aised awareness from climate change need only be responsible for 288 out of 2,304 (~13%) deaths saved through adaptation to have completely offset the climate-related increase in heat-related mortality [there].  For any greater contribution, climate change would have resulted in an overall decline in heat-related mortality in Stockholm County despite an increase in the frequency of extreme-heat events.

We went on to say (in somewhat of an understatement):

Our analysis highlights one of the many often overlooked intricacies of the human response to climate change.

Hondula’s team adds this, from their conclusion:

By directing our research efforts to best understand how reduction in heat mortality and morbidity can be achieved, we have the opportunity to improve societal welfare and eliminate unnecessary health consequences of extreme weather—even in a hotter future.

Well said.


Hondula, D. M., R. C. Balling, J. K. Vanos, and M. Georgescu, 2015. Rising temperatures, human health, and the role of adaptation. Current Climate Change Reports, 1, 144-154.

Knappenberger, P. C., P. J. Michaels, and A. Watts, 2014. Adaptation to extreme heat in Stockholm County, Sweden. Nature Climate Change, 4, 302–303.

On January 14th, the White House announced that Gen. Joseph Votel, the current head of U.S. Special Operations Command, will take over as the head of U.S. Central Command, a position that will place him in charge of America’s wars in Iraq, Syria, and Afghanistan. The symbolism of the appointment could not be clearer. As Foreign Policy noted,

“With 3,000 special operations troops currently hunting down Taliban militants in Afghanistan, and another 200 having just arrived on the ground in Iraq to take part in kill or capture missions against Islamic State leadership, Votel’s nomination underscores the central role that the elite troops play in the wars that President Barack Obama is preparing to hand off to the next administration.”

The growing use of special operations forces has been a hallmark of the Obama administration’s foreign policy, an attempt to thread the needle between growing public opposition to large-scale troop deployments and public demands for the United States to ‘do more’ against terrorist threats, all while dancing around the definition of the phrase ‘boots on the ground.’ But the increasing use of such non-traditional forces – particularly since the start of the Global War on Terror – is also reshaping how we think about U.S. military intervention overseas.

It’s not just the growing use of special operations forces. New technologies like drones permit America’s military to strike terrorist training camps and high-value targets abroad with limited risk to operators. The diffusion of terrorist groups and non-state actors across the globe enables terrorist groups and their affiliates to be present in many states. And the breadth of the 2001 Authorization for Use of Military Force (AUMF), which authorizes attacks on any forces ‘associated’ with Al Qaeda, has permitted the executive branch to engage in numerous small military interventions around the globe without congressional approval or much public debate.

The result has been a series of conflicts that are effectively invisible to the public. Indeed, depending on your definition, America is currently fighting between three and nine wars. Iraq, Syria, and Afghanistan are obvious. But U.S. troops are also actively engaged in counterterrorism operations in Somalia, Nigeria, and Uganda. The United States is conducting drone strikes in Pakistan, Libya, and Somalia. And our commitment to the Saudi-led campaign in Yemen is even more ambiguous: though the U.S. is not doing the fighting, it is certainly providing material support in the form of logistics and intelligence.

On January 25th, Cato is hosting a panel discussion on the issues raised by the growth of these small, ‘invisible’ wars, and by the growing ubiquity of U.S. military intervention around the world. Moderated by Mark Mazzetti of the New York Times, and featuring Bronwyn Bruton of the Atlantic Council, Charles Schmitz of Towson University, and Moeed Yusuf of the United States Institute of Peace, the event will explore three key ‘invisible wars’ (Yemen, Pakistan, and Somalia) and the broader questions they raise. What is the nature and scope of America’s involvement in such conflicts? Does lack of public awareness impact U.S. national security debates? And does U.S. involvement actually serve U.S. interests?

The event will be held on January 25th at 11am. You can register here.

Parker and Ollier (2015) set the tone for their new paper on sea level change along the coastline of India in the very first sentence of their abstract: “global mean sea level (GMSL) changes derived from modelling do not match actual measurements of sea level and should not be trusted” (emphasis added). In contrast, it is their position that “much more reliable information” can be obtained from analyses of individual tide gauges of sufficient quality and length. Thus, they set out to obtain such “reliable information” for the coast of India, a neglected region in many sea level studies, due in large measure to its lack of stations with continuous data of sufficient quality.

A total of eleven stations were selected by Parker and Ollier for their analysis, eight of which are archived in the PSMSL database (PSMSL, 2013) and ten in a NOAA sea level database (NOAA, 2012). The average record length of the eight PSMSL stations was 54 years, quite similar to the average record length of 53 years for the ten NOAA stations.

Results indicated an average relative rate of sea level rise of 1.07 mm/year for all eleven Indian stations, with an average record length of 51 years. However, the two Australian researchers report this value is likely “overrated because of the short record length and the multi-decadal and interannual oscillations” of several of the stations comprising their Indian database. Indeed, as they further report, “the phase of the 60-year oscillation found in the tide gauge records is such that sea level in the North Atlantic, western North Pacific, Indian Ocean and western South Pacific has been increasing since 1985-1990,” an increase that almost certainly skews the trends computed from the shorter records above the actual long-term rate of rise.
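A simple synthetic example (our own construction, with invented numbers, not the authors’ data) shows how a 60-year oscillation in its rising phase can inflate a trend computed from a short record. We generate a hypothetical century-long sea level series with a true trend of 1 mm/year plus a 60-year cycle that bottoms out around 1985, then fit a straight line to the full record and to only the last 25 years.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2011)
true_trend = 1.0  # mm/yr, the assumed underlying rate of rise
# hypothetical record: linear trend + 60-year oscillation (trough near 1985,
# so it is rising through the recent decades) + measurement noise
level = (true_trend * (years - 1900)
         - 40 * np.cos(2 * np.pi * (years - 1985) / 60)
         + rng.normal(0, 10, years.size))

def fitted_trend(x, y):
    return np.polyfit(x, y, 1)[0]  # least-squares slope, mm/yr

full = fitted_trend(years, level)            # full 111-year record
recent = fitted_trend(years[-25:], level[-25:])  # only 1986-2010

print(round(full, 2), round(recent, 2))
```

With these assumptions, the full record recovers a slope close to the true 1 mm/year, while the 25-year window returns a rate several times larger, purely because it samples the rising limb of the oscillation. This is the sense in which short records are “overrated” relative to the actual long-term rate.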

One additional important finding of the study was gleaned from the longer records in the database, which revealed that rates of sea level rise along the Indian coastline have been “decreasing since 1955.” This observed deceleration stands in direct opposition to model-based claims that sea level rise should be accelerating in recent decades in response to CO2-induced global warming.

In comparing their findings to those reported elsewhere, Parker and Ollier note there is a striking similarity between the trends they found for the Indian coastline and for other tide gauge stations across the globe. Specifically, they cite Parker (2014), who calculated a 1.04 ± 0.45 mm/year average relative rate of sea level rise from 560 tide gauges comprising the PSMSL global database. And when that database is restricted in analysis to the 170 tide gauges with a length of more than 60 years at the present time, the average relative rate of rise declines to a paltry 0.25 ± 0.19 mm/year, without any sign of positive or negative acceleration.

The significance of Parker and Ollier’s work lies in the “sharp contrast” they draw between the rates of sea level rise computed from tide gauge data and model-based sea level reconstructions produced from satellites, such as the 3.2 mm/year value reported by the CU Sea Level Research Group (2014), which Parker and Ollier emphatically claim “cannot be trusted because it is so far from observed data.” Furthermore, it is clear from the observational tide gauge data that there is nothing unusual, unnatural, or unprecedented about current rates of sea level rise, except that they appear to be decelerating rather than accelerating, despite a period of modern warmth that climate alarmists contend is unequaled over the past millennium and should be melting away the polar ice caps and rapidly raising sea levels.



CU Sea Level Research Group. 2014. Global Mean Sea Level. (retrieved May 30, 2014).

National Oceanic and Atmospheric Administration (NOAA). 2012. MSL global trend table, (retrieved May 30, 2014).

Parker, A. 2014. Accuracy and reliability issues in computing absolute sea level rises. Submitted paper.

Parker, A. and Ollier, C.D. 2015. Sea level rise for India since the start of tide gauge records. Arabian Journal of Geosciences 8: 6483-6495.

Permanent Service for Mean Sea Level (PSMSL). 2013. Data (retrieved October 1, 2013).