Policy Institutes

Over at his “Bench Memos” blog at NRO, Ed Whelan has taken exception to my Cato@Liberty post of Friday last wherein I called into question his critique of George Will’s column of Thursday last, which had defended Randy Barnett’s recent speech at Berkeley, drawn from his 2008 B. Kenneth Simon Lecture at Cato, arguing that the Constitution is libertarian and that judges should actively enforce its protection not only of enumerated but of unenumerated rights as well, pursuant to the Ninth Amendment. Got that? Now let’s get to the substance of the matter.

Whelan’s latest, entitled “More Ninth Amendment Confusion,” is mercifully brief. I had argued, among other things, that conservatives’ long-standing (and often understandable) fear of what they see as “judicial activism” has led them to read the Ninth Amendment (“The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people.”) as a mere rule of construction, not as an affirmation of unenumerated rights. Thus they are wary of judges finding rights that are not fairly clearly “in” the Constitution. (There is some wiggle room there: thus, for example, most would allow freedom of speech to entail the right to burn the flag.)

But a core problem with that view, I wrote, is that it implies that “prior to the ratification of the Bill of Rights, two years after the ratification of the Constitution, we enjoyed almost no rights against congressional majorities—save for those few mentioned in the original document.” Whelan responds:

The “rights against congressional majorities” that existed before the Bill of Rights was ratified arose from the Constitution’s limitations on Congress’s powers. In Madison’s words: “If a line can be drawn between the powers granted and the rights retained, it would seem to be the same thing, whether the latter be secured by declaring that they shall not be abridged, or that the former shall not be extended.”

Just so! Where there is no power—by virtue of the doctrine of enumerated powers—there is a right. In other words, prior to the ratification of the Bill of Rights we had a vast sea of rights within which there were islands of federal power. But as I noted in an exchange with Randy over the weekend, way back in 1991 I had written:

Indeed, if the Framers intended unenumerated rights to be protected without a bill of rights, how can we imagine that those rights were meant to be any less secure with a bill of rights?

The addition of the Bill of Rights, in short, did not reduce the number of rights we enjoy, limiting them to those fairly clearly “in” that document. It simply enumerated some of the rights in that vast sea of previously unenumerated rights—all of which, enumerated and unenumerated alike, were later incorporated against the states through the Fourteenth Amendment, properly read.

Whether judges discover those unenumerated rights expressly—as when they discover a right to sell and use contraceptives (Griswold v. Connecticut) or a right of fit parents to control access to their children (Troxel v. Granville) or many other such rights—or do so only implicitly by finding no power is not the issue, since either method comes to the same thing—as Madison said. The Ninth Amendment simply affirms that we “retain” all the unenumerated rights we held prior to the ratification of the Bill of Rights. In expressly stating that, it can be said to be a font of rights, even though the actual font is the theory of natural rights, which rights we retained when we reconstituted ourselves in 1787.

Ever since two econometric studies (CEPR and Bertelsmann) purporting to estimate the gains from a successful Transatlantic Trade and Investment Partnership agreement were published in 2013 revealing positive – but vastly disparate – outcomes, TTIP opponents have been on the offensive, dismissing economic modelling as a subjective and politically motivated exercise.  The fact that the studies were both commissioned by paying clients (the European Commission and the German government, both presumably interested in obtaining affirmative talking points to promote TTIP) served to reinforce that perspective.

Although estimating the benefits and costs of a massive trade agreement (for which the terms remain unknown) can hardly be considered an exact science, there is value to the public and to policymakers in understanding the range of possibilities. In other words, perhaps it is not the production of econometric estimates but the manner in which those estimates can be misused or misinterpreted that should concern us.

At the Cato TTIP conference earlier this month, there was a whole session devoted to the topic: Understanding the Economic Models and the Estimates They Produce.  That discussion is fleshed out a bit in two Cato Online Forum essays, which I want to bring to your attention.

The first is a critique of the models from University of Manchester economics professor Gabriel Siles-Brugge, who articulates his perception of the problem and suggests some remedies.  The second is a defense and broader explanation of the models from University of Munich professor and director of the Ifo Center for International Economics Gabriel Felbermayr, who is the primary modeller/author of the Bertelsmann study.

Other conference-related essays, including a couple more on econometric models (from Laura Baughman and Dan Pearson), can be found here.

Markets once again are waiting breathlessly for a decision on short-term interest rates by the FOMC, the Federal Reserve’s monetary policymaking arm. All signs point to no change in interest rates. More interesting is a possible change in how members of the FOMC are thinking about the economy.

For years, most members of the FOMC have used the Phillips Curve framework in setting monetary policy. This is done against the backdrop of the Fed’s so-called dual mandate to promote maximum employment and low inflation. The Phillips Curve postulates a negative relationship between unemployment and inflation. Thus, a falling unemployment rate foretells higher inflation in the future.
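In textbook form (a standard expectations-augmented specification, offered here only as illustration, not as the particular model any FOMC member uses), the relationship can be written as

$$\pi_t = \pi_t^{e} - \beta\,(u_t - u_n), \qquad \beta > 0,$$

where $\pi_t$ is inflation, $\pi_t^{e}$ is expected inflation, $u_t$ is the unemployment rate, and $u_n$ is the “natural” rate: when unemployment falls below $u_n$, the equation pushes inflation above expectations.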

Now two Fed Governors (members of the FOMC) have questioned the relevance of the Phillips Curve in separate speeches recently. The one-two punch was delivered by Lael Brainard and Daniel Tarullo. In Brainard’s words, “I do not view the improvement in the labor market as a sufficient statistic for judging the outlook for inflation.”

At the Kansas City Fed’s annual Jackson Hole conference last August, two former Fed economists questioned the Phillips Curve. They argued the relationship between inflation and unemployment has never been tight.

The Fed Chair, Janet Yellen, who has been largely absent from public view, remains wedded to the Phillips Curve. It is unusual for two Governors to so publicly deviate not only from the Chair’s policy guidance but also from the policymaking framework. Is Janet Yellen losing control of the FOMC?

In reality, labor markets are not so tight and there is just no sign of higher inflation in the near-term. I made those points in an August 24th op-ed in the Wall Street Journal.

Adding to the dilemma facing the FOMC, markets are signaling that short-term interest rates should be lower, not higher. Recent short-term Treasury bills have been auctioned at a zero interest rate (though the most recent auction produced mildly positive rates). In secondary markets, bills have traded at mildly negative interest rates. Moreover, short-term interest rates are negative in around 20 countries, mostly in the European Union (HT: Walker Todd).

In sum, we have a Fed divided and markets signaling a move down, not up, in interest rates. That makes for enhanced uncertainty in financial markets.

[Cross-posted from Alt-M.org]

Earlier this year, the National Committee for Responsive Philanthropy (NCRP) issued a report titled “Philamplify Poll Results: Nonprofits Don’t Criticize Foundations Because of Funding Fears.”

The report seeks to explain a phenomenon whose existence it does not bother to establish. Rather than presenting evidence that non-profits are in fact intimidated into silence by grantors, the report instead simply assumes that they are. Its first paragraph declares that “Given the power imbalance between foundations and grantees, grantees are often wary of providing foundations with constructive criticism.”

No evidence is presented to substantiate or quantify this claim. How often? How wary? Says who?

Assumption in hand, the NCRP is off to the races, asking its website visitors to speculate on a hypothetical question: “What is the top reason why a nonprofit would choose not to openly criticize a foundation?”

Of course those speculations would be of dubious value even if the report had bothered to establish this phenomenon’s existence. The poll answers would not tell us if even one actual non-profit had held its tongue for the reason alleged, merely that some anonymous website visitor(s) thought it plausible.

Even if we grant that it might be difficult to collect hard evidence of non-profits refusing to criticize prospective donors, that does not excuse publishing a “report” devoid of relevant facts.

Consider, too, that it might be comparatively easy to collect data on non-profits that have criticized foundations. An advantage of this flip-side approach to the question is that the critics themselves can be asked why they published their criticisms. I say this as the author of an empirical study whose findings were deeply unflattering to philanthropies seeking to scale up charter school networks.

Why did I do it? It’s my job. I study comparative education policy, seeking to understand which policies are most effective in delivering the outcomes that families value. A key question within that field is to determine which policies lead most consistently to the “scaling-up” of educational excellence—which is to say the replication and/or imitation of best practices. Since that has long been a goal of donors to charter school networks, I felt it important to determine empirically the extent to which their efforts were proving effective. Because it was an empirical study based on a large dataset (all the charter networks operating in the state of California), there was no way to predict the outcome prior to crunching the numbers. Nor was there any need for such a prediction.

Contrary to the speculations of NCRP’s website visitors, my highest priority as a think tank researcher is not avoiding antagonizing potential donors; it is maintaining my personal integrity and guarding my reputation, and that of my employer, for producing reliable, useful empirical research. I am certainly not alone in holding these priorities among think tank scholars. With that observation in mind, dear reader, please contact me if you have another example in which a non-profit published work critical of foundations or potential donors. I will relay the results to NCRP in the hope that they may wish to make amends for their earlier baseless, question-begging speculations.

For the past two or more years, the question of whether (and when) the Fed might “raise interest rates” has been a constant feature in the news. With the FOMC, the Fed’s rate-setting body, meeting next week, this is especially true at the moment. But almost no one seems to understand the nitty-gritty of just how interest rates are raised (or lowered), and very few non-economists have any idea of how it happens.

At times, the news reports seem to suggest that somewhere within the bowels of the New York Fed there is an interest-rate machine, and that the monetary authorities have only to push a button and, voila!, interest rates will be raised. No one asks just how much they might be increased, or what happens then, although some accounts suggest that such an action would signal an end to the stagnant economy and better times ahead.

The Fed has no interest-rate machine. To “hike rates,” the Federal Open Market Committee (FOMC) must use its power to diminish the economy’s quantity of spending money through its control over the Monetary Base (MB), which is the accounted sum of the monetary obligations of the 12 Fed Banks. This Base includes two liabilities of the Fed Banks: the stock of hand-to-hand currency and the reserve deposits of almost all of the commercial banks. To hike rates, the FOMC must decide to SELL government securities in the financial markets from its huge stockpile of security holdings—think “national debt.” The Fed Banks accumulated these securities through previous purchases that lowered rates and increased the economy’s stock of money. Now, however, the call is for a “rate hike,” so the FOMC would have to sell off some of the Fed’s securities. It would make the sale through the offices of the Fed Bank of NY, by offering the securities on the financial market at attractive (low) prices relative to the annual dollar payments they promise their holders.
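To make the mechanics concrete, here is a minimal sketch in Python, using entirely hypothetical dollar figures, of how such a sale shrinks the monetary base described above (currency plus bank reserve deposits):

```python
# Minimal sketch with hypothetical figures (in $bn), illustrating the
# accounting of an open market sale; this is not a model of actual
# Fed balance sheets or of how rates are administered today.

fed_securities = 4_000   # Fed's holdings of government securities
bank_reserves  = 2_500   # commercial banks' reserve deposits at the Fed
currency       = 1_300   # hand-to-hand currency in circulation

base_before = currency + bank_reserves
print(f"Monetary base before sale: ${base_before}bn")

# The FOMC directs the New York Fed to SELL $100bn of securities.
# Buyers (largely banks) pay with reserve balances, so reserves fall
# dollar-for-dollar with the sale, shrinking the base.
sale = 100
fed_securities -= sale
bank_reserves  -= sale

base_after = currency + bank_reserves
print(f"Monetary base after sale:  ${base_after}bn")  # $100bn smaller
```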

A large fraction of the sales would be to commercial banks, thereby reducing their reserves, which would ordinarily force them, in turn, to reduce their lending to business firms. However, the banks at the moment have a huge volume of excess reserves on which they get one-fourth of one percent (0.25%) interest from the Fed Banks. These reserves are “excess” because they are not being utilized to “back” the checkbook deposits that all households and businesses count (properly) as money. If the reserves had been used to finance the investments of private business, industry, and households, the economy’s accounted stock of money would be perhaps double what it is now. But a large fraction of the reserves has been “sterilized” by the device of this trivial interest rate paid on them.

With the FOMC selling, however, market rates might rise enough that commercial banks would use up these sequestered reserves, and the depressing effect of a “rate hike” might be avoided. But no one knows for sure. That’s why the officers in the Fed system are procrastinating.

Another important factor is also in the picture: the huge annual government deficit of more than a trillion dollars that must be funded every year. A “rate hike” would make these annual borrowings much more difficult for the U.S. Treasury to finance. And if the fiscal problem becomes unstable—more deficit to finance than security markets will absorb—the Fed will obey its political masters and finance the deficit by a hyper-inflation, or hyper-tax, as a burgeoning inflation simply taxes all fixed-dollar wealth (bonds, dollars, life insurance values, etc.) by the rate of price-level increase.

Finally, we might ask: “Why are rates so pervasively low? And why haven’t previous Fed and Treasury spending policies provided a path back to a healthy economy?”

The answer is that the Fed controls the monetary system, but it does not control the real system—the production of goods, services, and capital. The real system also has an interest rate—the real interest rate. At the moment, the real rate is effectively zero, because its determinants—the real rate of gross private domestic investment and the real savings to finance that investment—are near zero. Taxes, regulations (the “Ten Thousand Commandments”), litigation, and harassment of venture capital have severely discouraged real investment. Consequently, no matter what the Fed does or does not do, real values, including real incomes, are going to remain depressed into the foreseeable future. No amount of money and no monetary policy is going to change the government-imposed real depression.

[Cross-posted from Alt-M.org]

As predictable as the sun’s rising in the East is NRO’s Ed Whelan’s rush to the barricades when George Will (or many others, for that matter) is found defending a judiciary “actively” engaged in defending a right not expressly found in the Constitution.

The occasion this time was Will’s piece in yesterday’s Washington Post, “The false promise of ‘judicial restraint’ in America.” In it, Will notes that, given the advanced age of several Supreme Court justices, a supremely important issue is being generally neglected in the presidential debates, namely, the criteria by which a candidate would select judicial nominees. And that is “because Democrats have nothing interesting to say about it and Republicans differ among themselves about it.” Drawing on a speech that Randy Barnett recently gave at UC Berkeley, Will defends what we at Cato have long defended, namely, a judiciary actively engaged in reading and applying the Constitution as written. And that includes accurately reading the Ninth Amendment: “The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people.”

It’s on that matter, especially, that Whelan leaps to the defense of “judicial restraint.” When the Constitution’s text “fails to yield a sufficiently clear answer to a constitutional question,” he writes, judges should not be inventing rights “that are not in the Constitution” but instead should defer to the people—to democratic majorities.

But when one reads not only the Ninth Amendment itself but about why it was written—to make it clear that the rights we “retain” when we establish government are far more numerous than could ever be enumerated in a constitution—then it becomes clear that the discovery and protection of those rights cannot be left to the very political majorities against which they are most likely being invoked in a legal action before a court. Indeed, if Whelan were right—that we enjoy only those rights that are expressly stated in the Bill of Rights—then prior to the ratification of the Bill of Rights, two years after the ratification of the Constitution, we enjoyed almost no rights against congressional majorities—save for those few mentioned in the original document.

To make good, then, on the Ninth Amendment’s plain text, to say nothing of the Constitution’s larger promise, judges have to actively discover the rights we retained—enumerated and unenumerated alike—when we created a limited government and later incorporated those limits against the states through the Civil War Amendments. That takes an engaged judiciary, one that is cognizant of the basic theory of the Constitution—a document that secures liberty as our first principle, majoritarian democracy as merely the means for securing that liberty.

I’ll give Whelan this much: He raises a serious practical problem with Will’s proposal:

I confess that I don’t understand what Will expects Republican presidential candidates to do with Barnett’s vocabulary. I don’t think that we’ll see candidates accusing each other of being Hobbesians. And if they try to go deeper into the Lockean propositions, there are lots of traps that await them. Do we want to make it easy for Hillary Clinton, or whoever the Democratic candidate will be, to allege that the Republican candidate will appoint justices who will repeal the New Deal and strike down civil-rights laws?

We’ve reached a point in our public discourse at which constitutional understanding is at its nadir. When so many Americans believe that the purpose of government is to provide them with all manner of goods and services, as demanded by democratic majorities, it’s difficult to explain the proper role of the courts under a Constitution for limited government.  But neither will it help to feed that appetite for public goods by justifying the majoritarian force that satisfies it. Better it would be to appeal to the liberty we all want and the constitutional means for securing it.

Addendum: As I send this to be posted, I note that Whelan has just sent around a new post responding to Barnett’s response to him, none of which I’ve yet read.

Here is yet another installment in the series on incremental change. As ever, check out data on the improving state of the world at www.humanprogress.org.

Prawn Sex-Change Boosts Yields   

Male prawns grow faster and get to be 60% larger than female prawns. As such, they are more economically valuable. Scientists from Ben-Gurion University found that, by silencing certain genes in the prawn genome, it is possible to generate all-male populations of prawns. In trials, female prawns were injected with a molecule that silenced those genes, thus allowing for all-male prawn yields. This method eliminates the need for chemicals or hormones, which have been used to increase prawn yields in the past. The breakthrough in prawn yields could also be used in the fight against bilharzia in Africa. Bilharzia is carried by snails, and prawns are snails’ natural predators. By increasing prawn yields, snail populations could be controlled more easily.

Artificial Skin that Mimics Touch             

Currently, prosthetic limbs allow patients to complete physical activities, but the sense of “feeling” is nonexistent. Researchers at Stanford University have recently developed a way to create artificial skin that can mimic the sense of touch. It works like this: small pyramids made of a carbon nanotube-elastomer composite change their conductivity when pressure changes. The pressure signals are then routed through organic circuits, where they are converted into electrical pulses and sent to the nerves.

Is Reversal of Parkinson’s Disease Possible?             

Alan Hoffman has Parkinson’s disease, which makes it very difficult for him to complete even simple physical tasks or to read short articles. He has tried a variety of medications and even surgery to reduce the symptoms of the disease, but none of them really worked. So he agreed to a six-month clinical trial of nilotinib, a drug that helps rid cells of a protein implicated in Parkinson’s disease. Five weeks later, Alan’s cognition had improved. He was able to complete various physical activities and was even able to read a book.

Update on the Malaria Vaccine             

Malaria kills 500,000 African children under the age of 5 each year, and now the World Health Organization has thrown its weight behind a vaccine that could help to fight the disease. In a trial recently completed in seven African countries, malaria vaccination consisted of three doses and an additional booster injected 18 months later. Children under 5 who received the full treatment experienced a 36 percent decrease in malaria cases over the following 4 years. However, children who received only a partial treatment were more likely to contract malaria two years later than children who received the full course.

In her Cato Online Forum essay about the strategic dimensions of the Transatlantic Trade and Investment Partnership, Fran Burwell of the Atlantic Council sees both opportunity and necessity in its successful conclusion.  The opportunity comes from – among other things – combining the strength of the transatlantic economies (which currently account for 46% of global GDP) through greater economic integration, which will provide the leverage necessary for the United States and Europe to continue to exert dominance over global trade rulemaking and standards setting.

The necessity of TTIP’s success stems from the threat to Europe (and, thus, to the transatlantic relationship) posed by Vladimir Putin, who is working to subvert the deal.  “[F]ailure of the negotiations,” Burwell writes, “would be one of the best indications possible to Vladimir Putin and others that the U.S.-European partnership is just rhetoric without the capacity for action.”

Read Fran’s essay here.

Read the other essays published in conjunction with Cato’s TTIP conference last week here.

This morning, the Washington Post ran an article titled, “How free markets make us fatter, poorer and less happy.” Actually, the data suggest the exact opposite: free markets make us healthier, richer and happier. 

Free markets make us healthier 

First, the authors argue that free markets result in an abundance of temptations, such as candy and fattening food, and that encourages obesity. Obesity is a problem, but let’s put matters in proper perspective. The best proximate measure of the health of a nation is life expectancy. That is increasing. In fact, Americans have never lived longer. 

Moreover, a ban on fatty foods raises questions about personal freedom and responsibility. We allow people to buy alcohol, but discourage them from drinking and driving. Why not allow the sale of fatty foodstuffs, while discouraging gluttony through, for example, increased medical insurance premiums?

The free market has been amazingly successful in increasing food production across the globe. In 1962, people in 51 countries consumed fewer than 2,000 calories per person per day. By 2011 that number fell to one (Zambia). All the while, life expectancy around the world has increased. 

Free markets make us richer

The authors of the Washington Post article note that the average U.S. household does not save very much money. This, they say, is evidence of increasing poverty. This argument confuses two separate issues: spending and earning. If we want to encourage more saving, we may want to switch to a consumption tax, instead of an income tax. But regardless of what tax system we opt for, the data clearly show that the U.S. is becoming richer. In 2015, average income, adjusted for inflation, reached an all-time high.

Free markets make us happier 

Economically freer countries are wealthier, and wealthier countries tend to be happier.  One early study suggested that greater wealth only correlates with happiness up to a certain point. That correlation came to be known as the “Easterlin Paradox.” Subsequent research suggests that wealthier people are indeed, on average, happier.  

As HumanProgress.org advisory board member Matt Ridley summarized in his book The Rational Optimist:

Rich people are happier than poor people; rich countries have happier people than poor countries; and people get happier as they get richer. The earlier study simply had samples too small to find significant differences. In all three categories of comparison—within countries, between countries and between times—extra income does indeed buy general well-being. That is to say, on average, across the board, on the whole, all other things being equal, more money does make you happier.

Find out more about improvements in health, wealth, happiness, and other areas of human wellbeing on HumanProgress.org.

As I’ve written before, the case for “free” college is decrepit, and Bernie Sanders’s op-ed in today’s Washington Post does nothing to bolster it. It sounds wonderful to say “everyone, go get a free education!” but of course it wouldn’t be free – taxpayers would have to foot the bill – and more importantly, it would spur even more wasteful over-consumption of higher ed than we have now.

Because I’ve rehearsed the broad argument against free college quite often, I’m not going to go over it again. But Sen. Sanders’ op-ed does furnish some “evidence” worth looking at: the notion that the post-World War II GI Bill was a huge economic catalyst. Writes Sanders:

After World War II, the GI Bill gave free education to more than 2 million veterans, many of whom would otherwise never have been able to go to college. This benefited them, and it was good for the economy and the country, too. In fact, scholars say that this investment was a major reason for the high productivity and economic growth our nation enjoyed during the postwar years.

I’ve seen this sort of argument before, as I’ve seen for government provision of education generally, and have always found it wanting, especially since we have good evidence that people will seek out the education they need in the absence of government provision, and will get it more efficiently. Since Sanders links to two sources that presumably support his GI Bill assertion, however, I figured I’d better give them a look.

Surprisingly, not only does neither source illustrate that the GI Bill spurred economic growth, neither even contends that it did. They say it spurred some college enrollment growth, and one says veterans ended up being better students than some high-profile college presidents expected them to be, but neither makes Sanders’ growth claim. Indeed, in line with what we’ve seen broadly in education, one says that at least 80 percent of veterans who went to college on the Bill would likely have gone anyway, and in seemingly direct opposition to what Sanders would like to see, the other notes that the Bill disproportionately helped the well-to-do, not the working class. As the Stanley study says right in its abstract: “The impacts of both programs [the World War II and Korean War GI Bills] on college attainment were apparently concentrated among veterans from families in the upper half of the distribution of socioeconomic status.”

If we really want to do what’s best for the nation – not just what sounds or feels best – we need to ground our policies in reality. In education, as in Sanders’ op-ed, that often doesn’t happen.  

Providing the rationale for their work, Roberts et al. (2015) write that “the short and sparse instrumental record in the high latitudes of the Southern Hemisphere means investigating long-term precipitation variability in this region is difficult without access to appropriate proxy records.” It was therefore the objective of this team of nine researchers to extend the duration of the Law Dome, East Antarctica, snowfall accumulation record back in time an additional 750 years so that it would cover over two millennia.

The resultant 2,035-year-long proxy (22 BC to 2012 AD) is presented in the figure below. As reported by the authors, the average long-term snow accumulation rate was calculated as 0.686 m yr-1 (27 inches) ice equivalent, a rate they say “is in agreement with previous estimates, and further supports the notion that there is no long-term trend in snow accumulation rates, or that any trend is constant and linear over the [2035-year] period of measurement.”

If this number seems low for such an icy continent, the fact is that most high-latitude locations in both hemispheres would qualify as deserts based upon annual precipitation. In many places, it is literally “too cold to snow” as the frigid air can hold only tiny amounts of moisture.

There were several decadal-scale oscillations in the record, described by the authors as “common,” with “74 events (33 positive and 41 negative) of at least a 10-year duration in the record.”  The three longest periods of above average integrated snowfall occurred over the intervals 380-442, 727-783, and 1970-2009, while the three longest periods of below average integrated snowfall occurred during 663-704, 933-975, and 1429-1468.
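As an illustration of how such event counts might be tallied, here is a minimal Python sketch (not the authors’ code); the `accum` series below is random stand-in data, not the actual Law Dome record:

```python
import numpy as np

# Stand-in annual accumulation series (m/yr ice equivalent); the real
# analysis would use the 2,035-year Law Dome record instead.
rng = np.random.default_rng(0)
accum = rng.normal(loc=0.686, scale=0.1, size=2035)

# Mark each year as above (+1) or below (-1) the long-term mean, then
# count runs of at least 10 consecutive years with the same sign.
anomaly = np.sign(accum - accum.mean())
events = []
start = 0
for i in range(1, len(anomaly) + 1):
    if i == len(anomaly) or anomaly[i] != anomaly[start]:
        if i - start >= 10:
            events.append((start, i, int(anomaly[start])))
        start = i

positive = sum(1 for e in events if e[2] > 0)
negative = sum(1 for e in events if e[2] < 0)
print(f"{len(events)} decadal-scale events: "
      f"{positive} positive, {negative} negative")
```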

Annual (grey) and smoothed (green) snow accumulation rate history for Law Dome, East Antarctica over the period 22 BC–2012 AD. Adapted from Roberts et al. (2015).

With respect to the cause of the interannual and decadal variability in the record, Roberts et al. report they found no significant correlation between snowfall accumulation and (1) the Southern Oscillation Index, (2) volcanic activity, (3) the Southern Annular Mode or (4) the Law Dome CO2 record. Spectral analysis, however, revealed periodicities that “may be related to El Niño-Southern Oscillation (ENSO) and Interdecadal Pacific Oscillation (IPO) frequencies.”

In considering the above, it is abundantly clear there is nothing unusual, unnatural or unprecedented about present snowfall accumulation rates in this region of the Antarctic. Furthermore, there is nothing to suggest a global warming/CO2-induced influence on this record across the entire time span of the Industrial Revolution, despite a 40% increase in atmospheric CO2. And, lastly, there is no indication the great millennial-scale temperature oscillations that brought the Roman Warm Period, the Dark Ages Cold Period, the Medieval Warm Period, the Little Ice Age, and the Current Warm Period, had any effect on the snowfall record either. Thus, it seems safe to conclude that, in the foreseeable future, annual snowfall accumulation totals will likely continue much as they have for the past 2,035 years, never straying too far (or too long) from their long-term mean, unaffected by changes in CO2 or temperature.


Roberts, J., Plummer, C., Vance, T., van Ommen, T., Moy, A., Poynter, S., Treverrow, A., Curran, M. and George, S. 2015. A 2000-year annual record of snow accumulation rates for Law Dome, East Antarctica. Climate of the Past 11: 697-707.

Hundreds of thousands of people will lose their insurance plans as a raft of health insurance cooperatives (CO-OPs) created by the Affordable Care Act cease operations. Just last week, CO-OPs in Oregon, Colorado, Tennessee and Kentucky announced that they would be winding down operations due to lower-than-expected enrollment and solvency concerns (although the one in Colorado is suing the state over the shutdown order).  They join four other CO-OPs that have announced that they will be closing their doors. In total, only 15 of the 23 CO-OPs created by the law remain. These closures reveal how ill-advised this aspect of the ACA was, both in terms of the money lost and the turmoil for the people who enrolled. The eight CO-OPs that have failed received almost $1 billion in loans, and overall CO-OPs received loans totaling $2.4 billion that may never be paid back. In addition, roughly 400,000 people will lose their plans.

Sources:  Sabrina Corlette et al. “The Affordable Care Act CO-OP Program: Facing Both Barriers and Opportunities for More Competitive Health Insurance Markets,” The Commonwealth Fund, March 12, 2015; Erin Marshal, “8 Things to Know About Insurance CO-OP Closures,” Becker’s Hospital Review, October 20, 2015. Created using Tableau.

Notes: Hawaii and Alaska not shown. Neither state had a CO-OP. CoOportunity Health served both Iowa and Nebraska.

Proponents of the CO-OPs believed that they would be able to offer lower premiums than for-profit insurers because they did not have the same profit motive, but even non-profit insurers cannot operate at a financial loss indefinitely. When they were created, these CO-OPs had no customers, no experience in setting premiums, no networks, and limited capital. The government tried to subsidize the early period of uncertainty by disbursing loans to help with startup and solvency issues, and money from other provisions, like risk corridors, was supposed to dampen losses in the initial years. Lower-than-expected payments from the risk corridors have exacerbated the problems facing some of these CO-OPs, which were counting on substantial payments to stay afloat.

But this is hardly the only factor contributing to their struggles, some of which are the product of other government policies, like delaying employer mandate penalties and giving states the option to allow transitional policies through 2017. Some of these later developments could not have been anticipated, but many analysts, including Cato scholars, were skeptical about the prospects of CO-OPs from the beginning.  Even some ACA supporters recognized the flaws inherent in the CO-OP design: Paul Krugman derided them as a “sham,” and in a 2009 interview Professor Timothy Jost said he could not see how a CO-OP “does anything to control costs.”

There have been multiple warning signs that many CO-OPs were in trouble.  Earlier this year, the Centers for Medicare and Medicaid Services sent letters to 11 CO-OPs placing them on “enhanced oversight” due to financial concerns, and a 2014 report from the HHS Office of Inspector General found that “most of the 23 CO-OPs we reviewed had not met their initial program enrollment and profitability projections,” and that the government “had not established guidance or criteria to assess whether a CO-OP was viable or sustainable.”

These CO-OPs were not a good idea at inception and were always going to face many obstacles to success.  Multiple changes to the law since they were established have exacerbated these problems, and already struggling CO-OPs have folded. Competition is indeed vital in health insurance markets, but the CO-OPs were a bad way to try to foster this competition. With these closures, billions of taxpayer dollars could be lost and hundreds of thousands of people will discover that the “if you like your plan, you can keep it” promise does not apply to them.

I have never been entirely satisfied with how either economists or historians identify and date past U.S. recessions and banking crises. Economists, as their studies go further back in time, have a tendency to rely on highly unreliable data series that exaggerate the number of recessions and panics, something most strikingly but not exclusively documented in the notable work of Christina Romer (1986b, 1989, 2009). Historians, on the other hand, relying on more anecdotal and less quantitative evidence, tend to exaggerate the duration and severity of recessions. So I have created a revised chronology in the table below. From the nineteenth century to the present, it distinguishes between three types of events: major recessions, bank panics, and periods of bank failures. I have tried to integrate the best of the approaches of both economists and historians, using them to cross check each other. My chronology therefore differs in important ways from prior lists.

One of the table’s benefits is that it gives a visual presentation of which recessions were accompanied by bank panics and which were not. Equally important, it distinguishes between bank panics and periods of significant numbers of bank failures. These two categories are often confused or conflated, and yet this distinction is critical. Not all bank panics (periods of contagious runs and sometimes bank suspensions) were accompanied by numerous bank failures, nor were all periods of numerous failures accompanied by panics.

Among other advantages, the table helps highlight how sui generis the Great Depression was. Not only does it have the longest downturn (43 months), but it also is one of the few depressions accompanied by both bank panics and numerous bank failures. Once the Great Depression is thrown out as a statistical outlier, we observe no significant change in the frequency, duration, or magnitude of recessions between the period before and the period after that unique downturn. Given that the Great Depression witnessed the initiation of extensive government policies to alleviate depressions and that the Federal Reserve had been created fifteen years earlier explicitly to prevent such crises, this overall historical continuity with a single exception indicates that government intervention and central banking have done little, if anything, to dampen the business cycle.

There has been a dramatic elimination of bank panics, at least until the financial crisis of 2007-2008, but the timing suggests that deposit insurance more than the Federal Reserve deserves the credit. Furthermore, note that more outbreaks of numerous bank failures occurred in the hundred years after the Federal Reserve was created than the hundred years before, with the Federal Reserve presiding over the most serious case of all: the Great Depression.

Because my table departs from previous lists and dating, in what follows, I explain the most important differences for each of the three categories. At the end of the post is a list of the most useful references I consulted.


Major Recessions

I have almost entirely confined the list of major recessions to those constituting part of a standard business cycle, omitting periods of economic dislocation resulting from U.S. wars or government embargoes. For the number and dating of recessions from 1948 forward, I have exclusively followed the National Bureau of Economic Research (NBER). But prior to 1929, the NBER notoriously exaggerates the volatility of the U.S. economy. Moreover, between 1929 and 1948, the NBER reports a post-World War II recession lasting from February to October 1945 that no one was aware of at the time, as easily confirmed by looking at the unemployment data as well as contemporary writings. Richard K. Vedder and Lowell E. Gallaway (1993) pointed out in their neglected study of U.S. unemployment that this alleged postwar recession is a statistical artifact that varies in severity with the regular comprehensive revisions of GNP/GDP estimates by the Bureau of Economic Analysis (BEA). The BEA’s original estimates showed only a minor downturn, subsequent revisions converted it into a major downturn, and the latest comprehensive revision, in 2013, has reduced its magnitude, although not to the level of the BEA’s original estimates.

For the pre-1929 period, therefore, I have only listed recessions that can be documented with unemployment data or more traditional historical evidence. The unemployment data I have employed are the revisions of both J. R. Vernon (1994) and Romer (1986b). I have still accepted NBER dating, which only goes back to 1857, for those pre-1929 recessions that I consider genuine, with the notable exception of 1873. In that case, the NBER dating (based on the Kuznets-Kendrick series) of a 65-month recession is so inconsistent with other evidence that it was even questioned by Milton Friedman and Anna Jacobson Schwartz in their Monetary History (1963). This is one of the most striking cases in which some observers at the time and many economists today have confused mild secular deflation with a depression – a confusion exposed by George Selgin in Less than Zero (1997). Even the Kuznets-Kendrick estimates show no decline in real net national product during this recession, and an acceleration of its growth after 1875. I have therefore accepted Joseph H. Davis’s (2006) revised dating, shortening this recession to not more than 27 months, and probably less if he had attempted a monthly rather than just an annual revision of the depression’s end point.

Estimates of U.S. GDP prior to the Civil War are even more problematic, making precise monthly dating of recessions impossible. So I have relied upon the consensus of standard historical accounts along with the GDP statistics in Historical Statistics: Millennial Edition (Carter 2006) to determine what qualifies as an actual recession and its annual dating. The one case where I diverge from some (but not all) mainstream historical accounts is the alleged recession during the banking crisis that began in 1839, after the recovery from the 1837 recession. As Friedman and Schwartz (1963), Douglass C. North (1961), and Peter Temin (1969) have all noted, estimates of real GDP growth over the next four years are quite robust. Thus, 1839-1843 appears to be another case where deflation (in this case, quite severe) is confused with depression.

Bank Panics

The number of bank panics is also often exaggerated. For the post-Civil War period, many authors follow the enumeration first compiled by O. M. W. Sprague (1910), and some even add in a few more. But Elmus Wicker (2000) has persuasively demonstrated that the alleged Panics of 1884 and 1890 were really only incipient financial crises nipped in the bud by the actions of bank clearinghouses. For the pre-Civil War period, especially egregious in its listing of panics is the widely cited work of Willard Long Thorp (1926), which even mistakenly attributes to the United States panics that affected only England (those in 1825 and 1847).

I have confined my own list to those panics that Andrew J. Jalil (2015) in his comprehensive survey of previous literature defines as “major,” with two exceptions. First, I have omitted the very minor economic contraction of 1833, following Andrew Jackson’s phased withdrawal of government deposits from the Second Bank of the United States, since the impact on banks was almost entirely confined to the Second Bank and its branches. Second, I have included the more pronounced global financial crisis at the outbreak of World War I, in which the U.S. stock market was shut down for four months, although the emergency currency authorized under the Aldrich-Vreeland Act prevented bank suspensions. The monthly dating of other panics listed is confined to the period during which major suspensions or runs occurred and does not always reflect how long banks suspended, which for the War of 1812 was until January 1817.

Bank Failures

Bank panics, even when accompanied by numerous suspensions (or what Friedman and Schwartz prefer to call “restrictions on cash payments” to distinguish them from government suspensions of redeemability), do not always result in a major number of bank failures.

For instance, Calomiris and Gorton report the failure of only six national banks out of a total of 6,412 during the Panic of 1907, or less than 0.1 percent. Of course, the Panic of 1907 was concentrated among state banks and trust companies. Unfortunately, as far as I can tell, there are no good time series on the failures of state banks for the period prior to the creation of the Federal Reserve. Yet there were over 12,000 state banks at the outset of the Panic of 1907. One very fragmentary and incomplete estimate of total bank suspensions (rather than failures) in Historical Statistics (1975), including both state and national banks, puts the number during that panic at 153. Even if all suspensions had resulted in failures, which of course did not happen, we would still have a failure rate of only 0.7 percent for all commercial banks.

Confusion of bank suspensions with bank failures can even infect serious scholarly work. For example, in Michael D. Bordo and David C. Wheelock (1998), charts meant to show bank failures are instead clearly depicting statistics on the annual number of bank suspensions. Similarly, periods of numerous bank failures do not always coincide with bank panics, as the S&L crisis dramatically illustrates. So it is crucial to distinguish between periods of panics and failures, although specifying the latter requires judgment calls. For the monthly number of national bank failures prior to the Fed’s creation, I have depended heavily on Comptroller of the Currency (1915), v. 2, Table 35, pp. 66-103.


To be sure, banking in the United States has never been fully deregulated. Even from 1846 until 1861, under the Independent Treasury during the alleged free-banking period, when there was almost no significant national regulation of the financial system, state governments still imposed extensive, counter-productive banking regulation. This fact obviously complicates any comparison of the periods before and after the creation of the Federal Reserve System in 1914. Nonetheless, such a comparison offers more than a prima facie case against the Fed’s success at either stabilizing the U.S. economy or preventing banking crises. In short, the widespread belief among economists, historians, and journalists that the Federal Reserve was an essential, major improvement appears to be no more than unreflective faith in government economic management, with little foundation in the historical evidence.

Acknowledgments: I would like to thank Graham Newell and Kurt Schuler for their invaluable assistance and comments while preparing this table. Any remaining errors or oversights are my responsibility.

[Cross-posted from Alt-M.org]

South Korean President Park Geun-hye met President Barack Obama in Washington last week. Nominally, it was a meeting between equals. But Park reaffirmed her nation’s continuing dependence on America.

The U.S.-ROK alliance was created in the aftermath of the Korean War, in a world that no longer exists. America’s security commitment is an anachronism.

There’s no doubt why Seoul continues to support the security relationship. The South saves money relying on the global superpower for protection.

As Scott Snyder of the Council on Foreign Relations put it, the alliance lessens “South Korea’s vulnerability to North Korea and rising Asian rivalries.” A related point was made by Van Jackson of the Center for a New American Security, who argued that America’s defense commitment helps deter the North from attacking the ROK.

Of course, that is the usual point of defense. But the U.S. guarantee acts as a deterrent for the South, not America. The alliance serves South Korea, not the United States.

Like most of America’s alliances, the U.S.-ROK treaty is entirely one-sided. Americans do the defending. South Koreans get defended.

If there were no cost to the United States, there would be little complaint about Washington’s policy. Alas, military spending is the price of America’s foreign policy. Most of the Pentagon’s efforts are devoted to protecting other nations rather than the United States.

Moreover, Washington’s constant promise to go to war creates a greater risk of conflict involving America. Deterrence frequently fails.

Those protected also are more likely to be confrontational, creating a greater risk of conflict. If deterrence fails, the alliance ensures that the United States will be drawn into an otherwise avoidable conflict.

And if things go wrong in Korea, they could go really wrong. Kim Jong-un almost certainly doesn’t want war, but he may not be prudent enough to avoid it.

Of course, America has plenty of interests around the world, including in the Korean Peninsula, but most are not worth the risk of war. Charity is no basis for foreign policy.

After all, Washington could seek to deter all war by scattering American garrisons even more widely. Doing so presumably would enhance deterrence, as Jackson wishes, but for other nations and at great cost to American taxpayers.

South Korea is capable of protecting itself, both by deterring and, if necessary, winning any conflict. Indeed, as I point out in Forbes online: “it beggars the imagination that a nation with a 40-1 economic edge, 2-1 population advantage, significant technological lead, dramatically larger industrial base, more resilient infrastructure, and vastly stronger international position could not build the military necessary to deter its far weaker antagonist.”

Jackson also worried about U.S. credibility should Washington restructure a defense relationship a mere 62 years after forging it. Actually, credibility is at risk when one fails to keep promises, not when one changes old promises to fit new circumstances.

More interesting is Jackson’s concern that insisting South Korea act on its own behalf might cause it to revive its nuclear program. Nonproliferation is an important goal, but not one of unlimited value.

Today, Northeast Asia demonstrates the ill-effects of an international version of gun control: only criminals have guns. China, Russia, and North Korea possess the ultimate weapon. None of America’s democratic allies are so armed. So America is expected to risk Los Angeles to protect Seoul, Taipei, Tokyo, and who knows where else.

Washington needs to consider whether the second best of a South Korean nuclear weapon is worse than the second best of an American nuclear umbrella. Especially since the possibility of proliferation to U.S. friends might cause Beijing to more seriously pressure North Korea to halt the nuclear parade.

Unfortunately, President Park’s visit was wasted. The two governments should discuss how to transform the alliance into a relationship of equals. South Korea has reached the forefront of nations. It should act the part.

After Germany, the Netherlands, and Great Britain, it is now Italy’s turn to privatize its postal service. It seems that even “old Europe” welfare states are more reform-minded on some economic matters than Congress and the current U.S. administration.

The Financial Times reports:

Italy will this week launch its biggest privatisation in more than a decade with the partial sale of Poste Italiane — an initial public offering on which the government of prime minister Matteo Renzi has staked its reformist reputation.

The government is planning to sell up to a 40 percent stake in Poste Italiane, Italy’s national post office, and raise a maximum of €3.9bn in proceeds. The IPO is due to have a price range of €6 to €7.5 a share, giving the company an equity value of up to €9.8bn.

Poste Italiane is a 153-year-old behemoth: it generates €28.5bn in annual group revenue, holds €420bn in postal savings deposits, and has 32m customers.

Why is the government selling Poste Italiane? “Because of the decline in its letter business caused by email, and the rise of e-commerce,” notes the FT. That is one of the same concerns that prompted the 2013 sell-off of Britain’s Royal Mail, and it is also central to the downward spiral of the U.S. Postal Service (USPS). If Italy can sell its 153-year-old behemoth, and Britain can sell its 500-year-old behemoth, then America can sell its dinosaur, USPS.

On the Italian sale, Reuters notes, “A successful listing of Poste Italiane will then open the way for the sale of air traffic control operator Enav in the first half of 2016, while the listing of the national railway company is scheduled for the second half of next year.”

Americans look to Italy for the best in fashion. Our government-run postal service, air traffic control, and national railway company are looking very drab and old-fashioned. It is time for an Italian-style privatization makeover.

For more on postal privatization, see here, here, and here.

Yesterday, in a move being described as “a major shift,” the American Cancer Society changed its guidelines on when and how often women should undergo professional physical exams and mammograms for breast cancer.

Under previous guidelines that the organization had trumpeted for years, women “of average risk” were to begin both at age 40 and repeat them every year. Now the ACS recommends that annual mammography start at age 45 and cut back to once every two years at age 55, and that screening be eliminated altogether when a woman’s remaining life expectancy falls below 10 years. As for the physical exam, the ACS no longer recommends it at all.

The reason for the change is that both screens provide so many stressful false positives that the ACS doesn’t believe regular testing passes a cost-benefit test unless the woman is of “higher than average risk.”

The shift should be welcome news for women. Mammograms and doctor breast exams are charitably described as “uncomfortable,” and probably more accurately described as “painful and embarrassing.” But the ACS change could become painful and embarrassing for the architects of the 2010 Patient Protection and Affordable Care Act (ACA).

One of the most scrutinized provisions of the ACA is the creation of the Independent Payment Advisory Board (IPAB), whose ostensible job is to recommend cost-containment measures if Medicare expenditure projections begin to outpace a previously determined growth rate. In reality, IPAB is to monitor the cost and effectiveness of various types of care to determine which will be covered by Medicare, with the expectation that those decisions will serve as a template for private health insurers and other third-party payers. The hope is that IPAB’s decisions will eliminate coverage of procedures that don’t measure up, thereby “bending the cost curve”—that is, reducing the nation’s overall spending on health care.

IPAB has been derided by critics as a “death panel” that could eliminate crucial care, and criticized by more thoughtful scholars as an unaccountable rationing board that will inject itself in decisions that ought to be private. In contrast, I’ve argued that IPAB is more likely to be a paper tiger that may occasionally block some treatment or another, but will usually cave to political pressure and approve popularly appealing procedures and treatments that pass no reasonable cost-benefit test. Those decisions will then pressure third party payers to also cover the care. That way, IPAB will bend the cost curve—just in the opposite direction from what the ACA writers intended.

So think of the ACS shift as a looming test of IPAB, as not-recommended breast cancer screenings are exactly the sort of Medicare expenditure the board should identify for elimination. So far, the “projected expenditures” provision for the board (or the secretary of health and human services, acting in IPAB’s stead) has not been triggered, so no cost-containment recommendations are currently forthcoming. Thus give IPAB an “incomplete” on this test for now—but don’t expect a good grade later.

BEIRUT—Lebanon is the Middle East’s only melting pot. Never has the region more needed a peaceful oasis.

However, the country is a sectarian volcano. If the country crashes, so will the only Middle Eastern model for tolerant coexistence. Lebanon desperately needs statesmen willing to look beyond their personal and group interests.

Full-scale civil war erupted in 1975. That conflict ended in 1990. Since then, the country has suffered through conflict with Israel, spasms of sectarian violence, and now Syria’s implosion.

Despite all this, Lebanon remains generally free and uniquely diverse. But politics systematically undermines the country’s economic potential.

The implosion of Syria poses an even bigger threat to Lebanese stability. The Shia Hezbollah movement has directly intervened on the side of the Assad regime, while the Sunni party Future Current has backed the Syrian opposition. Tensions also have risen between Sunnis and both Alawites, who support the Syrian government, and Christians, who criticize the Islamic State.

With Syria to the north and east, Lebanon also is vulnerable to an influx of violence. Military leaders with whom I spoke, generally not for attribution, acknowledged the challenging security environment. “We work hard not to have spillover from Syria,” one general told me.

Directly responsible for internal security is Interior Minister Nuhad Mashnouq, who emphasized the importance of cooperation with the military. While refusing to discuss details, he said the government had successfully thwarted a number of terrorist attacks.

However, as I point out in Forbes online: “Lebanon can blame no one else for its political crisis. The confessional system emphasizes consensus and effectively grants major factions veto power. Stasis is the natural result and today pervades the entire political system.”

The president is to be a Christian, the prime minister a Sunni, and the speaker of the National Assembly a Shiite. But the system rests on compromise, which has been sorely lacking lately.

The prime minister, nominated in 2013, took months to form a government, and succeeded only by appointing members of all factions. The president’s term ended in May 2014. With two Maronite Christians contending for the position, backed by different Muslim factions, the National Assembly has deadlocked in choosing his successor.

Parliament was elected in 2009, but divisions over election law reform caused legislators to postpone the ballot from June 2013 to November 2014, and then to June 2017. The government has been unable even to resolve a trash crisis, giving rise to a youth-led protest movement known as “You Stink.”

No solution is forthcoming. The divided, superannuated government staggers on. Druze leader Walid Jumblatt told me: “Lebanon is crumbling under the garbage.”

Out of desperation, many people are looking outside Lebanon for a solution. For instance, Beirut Governor Ziad Chebib argued that “different domestic agendas make the political crisis impossible to resolve within Lebanon.” Proposals include having Washington use its new influence with Tehran to encourage discussions between Iran and Saudi Arabia, and having Oman, an independent but respected Gulf state, host an international conference on Lebanon in Muscat.

Washington should be concerned about Lebanon. The country combines a mix of tolerance, economic openness, pluralism, human rights, democracy, cultural freedom, and liberty unusual for the Middle East.

All of these are under threat. If these trappings of liberalism disappear in Lebanon, the Arab world will lose its only example of a humane political and social order.

However, Washington will not solve Lebanon’s problems. Military intervention is inconceivable and the Obama administration is unlikely to find the time and resources to devote to another Middle Eastern state.

Moreover, the Lebanese political system remains the basic problem. Those who benefit from the country’s fragile stability must cooperate to ensure the system’s survival.

Many people rely on Lebanon’s oft-demonstrated resilience. One person called Lebanon’s survival “a miracle.”

However, today the entire system appears poised on the precipice amid worsening regional chaos. The price of failure would be catastrophic. Only the Lebanese can ensure that their nation survives and thrives. 

The Senate Judiciary Committee held a hearing this week on sentencing reform.  One of the witnesses was Debi Campbell, who was sentenced to 19 years in a federal penitentiary for selling meth.  One of her colleagues in the meth trade was busted by the police first, so that person cut a deal and agreed to testify against Campbell.  Federal authorities rewarded that drug offender with zero jail time.

Conservatives sometimes argue that longer prison sentences will “send a message” to the community and deter drug offenders.  Ms. Campbell’s testimony offers a dose of reality on that one.  She says she knew selling meth was against the law, but had no idea she could face close to 20 years in prison for what she was doing.  She was hooked on meth and was selling to support her addiction.  She was not reading the Congressional Record to see what messages Congress was sending. 

Another conservative argument is that long prison sentences will incapacitate the offender.  That holds true for a rapist, but one wonders how the community was made safer by keeping Ms. Campbell locked up for so many years.  Yet there’s a former federal prosecutor out there who probably thinks he did good work on Ms. Campbell’s case.

Check out Ms. Campbell’s testimony.  Only 7 minutes, but powerful.

Related items here and here.

This week, the New York Times editorial board wrote in support of greater taxes on sweetened drinks, citing new research from a team of Mexican and American researchers. The editors praise the novel design of the tax, which is levied on drink distributors rather than directly on consumers; as a result, the tax shows up in shelf prices, making the increase in total cost clear to shoppers. The research found that soda consumption fell 12 percent in a year, and 17 percent among the poorest Mexicans.

The Times admits that we do not know whether any health benefits will actually result from soda taxes.  In this article in Regulation, the University of Pennsylvania’s Jonathan Klick and Claremont McKenna’s Eric Helland examined the effects of soda taxes.  They conclude that a one percent increase in soda taxes led to a five percent reduction in soda consumption among young people, but consumers substituted other beverages.  A 6-calorie reduction in soda consumption was accompanied by an 8-calorie increase in milk consumption and a 2-calorie increase in juice consumption.  Thus the soda tax produced a net increase in overall calorie consumption, more than offsetting the benefit of falling soda consumption.  Moreover, there was “no statistically significant effect of soda taxes on body weight or the likelihood of being obese or overweight.”
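The arithmetic behind that conclusion is simple but worth making explicit. Here is a minimal sketch using Klick and Helland’s reported figures; the variable names are ours:

```python
# Calorie changes among young people, per the Klick-Helland findings above
soda_change = -6    # calories: less soda consumed
milk_change = +8    # calories: substitution toward milk
juice_change = +2   # calories: substitution toward juice

net_change = soda_change + milk_change + juice_change
print(net_change)   # +4: the substitution more than offsets the soda reduction
```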

Five years ago, Bob Poole and I wrote that Canada’s privatized air traffic control (ATC) system would be “a very good reform model for the United States.” U.S. policymakers—including the chairman of the House committee that oversees ATC—are now coming around to that view. 

The Wall Street Journal reports:

The headquarters of Canada’s air traffic control corporation is becoming a busy destination for U.S. transportation officials and airline executives looking for a model to privatize U.S. airspace management.

John Crichton, chief executive of Nav Canada, has hosted more than a dozen U.S. delegations in the past 18 months as Congress considers stripping U.S. air-traffic control from the Federal Aviation Administration—much as Ottawa did 19 years ago.

U.S. admirers—including Rep. Bill Shuster (R., Pa.), chairman of the House Transportation and Infrastructure Committee—advocate similarly extricating air-traffic control from the FAA and its parent, the Transportation Department. They say that would assure more reliable funding than the current mix of taxes and congressional appropriations, and could help advance NextGen, a $40 billion FAA air-traffic modernization program criticized by government watchdogs for being delayed and over budget.

Mr. Shuster has been preparing a bill that could establish a Nav Canada-like corporate structure for the U.S. He may introduce it as soon as November, people familiar with the matter said.

What are the advantages of air traffic control privatization? The head of Canada’s system, John Crichton, described some of them to the Journal:

Nav Canada operates through user fees that Mr. Crichton says are 35% lower than the government formerly levied in ticket taxes, not adjusted for inflation. The operation is safer while handling more traffic with fewer people, he says. It can sell bonds to fund upgrades, unlike the FAA, and airlines save fuel through more efficient altitudes and routes, Nav Canada says.

“This business of ours has evolved long past the time when government should be in it,” Mr. Crichton argues. “Governments are not suited to run … dynamic, high-tech, 24-hour businesses. When they try, they mess up.”

Exactly. Imagine if the federal government, with all its red tape, tried to run Apple Computer. It would be a total screw-up, and that is the direction the government’s ATC system is headed without privatization reform. While the FAA has long struggled with technology upgrades, Nav Canada is now a leader in developing ATC technologies and exporting them to the world.

Some critics claim that Canada’s successful reform model could not be scaled up to the larger U.S. market. That claim is ridiculous, and it goes against everything we know about private enterprise and government. Private enterprise scales successfully from the smallest sole proprietorship to the largest multinational corporation. Government, by contrast, grows increasingly bureaucratic and dysfunctional as it expands in scope from local and state governments, to the federal government, to the United Nations.

Kudos to Chairman Shuster for seizing the initiative and pursuing a reform that would be a boon to one of the nation’s most important industries. America pioneered the aviation industry, but it will fall behind if it keeps critical parts of that industry locked inside old-fashioned government bureaucracies.

For more, see here, here, and here.