Cato Op-Eds

Individual Liberty, Free Markets, and Peace
Subscribe to Cato Op-Eds feed

People criticize business subsidies because they harm taxpayers. But there is another group harmed by business subsidies: the recipients. Government welfare for low-income families induces unproductive behaviors, but the same is true for companies taking corporate welfare.

When subsidized, businesses get lazy and less agile. Their cost structures get bloated, and they make decisions divorced from market realities. Lobbying replaces innovation.

Corporate leaders get paid the big bucks for their decisionmaking skills, yet many of them get duped by politicians promoting faddish subsidy schemes.

From the Wall Street Journal yesterday:

A Mississippi power plant intended as a showcase for clean-coal technology has turned into a costly mess for utility Southern Co., which is now facing an investigation by the Securities and Exchange Commission, a lawsuit from unhappy customers and a price tag that has more than doubled to $6.6 billion.

The power plant, financed in part with federal subsidies, aims to take locally mined coal, convert it into a flammable gas and use it to make electricity.

Conceived as a first-of-a-kind plant, it currently looks to be the last of its kind in the U.S., though China and other nations have expressed interest in the technology. Kemper costs have swelled to $6.6 billion, far above the $3 billion forecast in 2010.

A former Kemper project manager said he was let go after complaining to top company officials that public estimates for the project’s completion were unrealistic and misleading.

Brett Wingo, who was a project manager for the gasification portion of the plant, said he thinks the company put a positive spin on construction so it wouldn’t have to acknowledge to investors it was likely to lose federal subsidies due to delays.

To date, Southern has paid back $368 million in federal tax credits for missing deadlines, but believes it will be able to keep $407 million in grants from the Energy Department.

My advice to corporate leaders: don’t take corporate welfare. Grabbing hand-outs will undermine your cost control, induce you to make bad investments, and distract you from serving your customers. Subsidies will make you weaker.

For more on Southern’s subsidized coal blunder, see here.

Under our criminal justice system, ignorance of the law is no defense.  But what if the law is undefined?  Or what if it seems to change with every new case that’s brought?  What if unelected judges (with life tenure) started to invent crimes, piece by piece, case by case?  Holding people accountable for knowing the law is just only if the law is knowable, and only if those creating the law are accountable to the people. 

On Friday, Cato filed an amicus brief in Salman v. U.S. that is aimed at limiting the reach of just such an ill-defined, judicially created law. “Insider trading” is a crime that can put a person away for more than a decade, and yet this crime is judge-made and, as such, is ever-changing. Although individuals may know generally what is prohibited, the exact contours of the crime have remained shrouded, creating traps for the unwary.

The courts, in creating this crime, have relied on a section of the securities laws that prohibits the use of “any manipulative or deceptive device or contrivance” in connection with the purchase or sale of a security. The courts’ rationale has been that by trading on information belonging to the company, and in violation of a position of trust, the trader has committed a fraud.  The law, however, does not mention “insiders” or “insider trading.”  And yet, in 2015 alone, the Securities and Exchange Commission (SEC) charged 87 individuals with insider trading violations.  

Broadly speaking, insider trading occurs when someone uses a position of trust to gain information about a company and later trades on that information, without permission, to receive a personal benefit.  But what constitutes a “benefit”?  The law doesn’t say.

Left to its own devices, the SEC has pushed the boundaries of what constitutes a “benefit,” making it more and more difficult for people to know when they are breaking the law.  In the case currently before the Court, Bassam Salman was charged with trading on information he received from his future brother-in-law, Mounir Kara, who had, in turn, received the information from his own brother, Maher.  The government has never alleged that Maher Kara received anything at all from either his brother or Salman in exchange for the information.  The government has instead claimed that the simple familial affection the men feel for each other is the “benefit.”  Salman’s trade was illegal because he happens to love the brothers-in-law who gave him the inside information.

Under this rationale, a person who trades on information received while making idle talk in a grocery line would be safe from prosecution while the same person trading on the same information heard at a family meal would be guilty of a felony.  Or maybe not.  After all, if we construe “benefit” this broadly, why not say that whiling away time chit-chatting in line is a “benefit”?    

No one should stumble blindly into a felony.  We hope the Court will take this opportunity to clarify the law and return it to its legislative foundation.  Anything else courts tyranny. 

In January, the International Monetary Fund (IMF) told us that Venezuela’s annual inflation rate would hit 720 percent by the end of the year. The IMF’s World Economic Outlook, which was published in April, stuck with the 720 percent inflation forecast. What the IMF failed to do is tell us how it arrived at the forecast. Never mind. The press has repeated the 720 percent inflation forecast ad nauseam.

Since the IMF’s 720 percent forecast has been elevated to the status of a factoid, it is worth a bit of reflection and analysis. We can reverse engineer the IMF’s inflation forecast to determine the Bolivar to U.S. greenback exchange rate implied by the inflation forecast.

When we conduct that exercise, we calculate that the VEF/USD rate moves from today’s black market (read: free market) rate of 1,110 to 6,699 by year’s end. So, the IMF is forecasting that the bolivar will shed 83 percent of its current value against the greenback by New Year’s Day, 2017. The following chart shows the dramatic plunge anticipated by the IMF.
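As a back-of-the-envelope check on the consistency of those two figures: 1,110 divided by 6,699 is roughly 0.17, so at the implied year-end rate the bolivar would retain only about 17 percent of its current dollar value, which is the roughly 83 percent loss just noted.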

Changes in the general level of prices are capable, as we’ve seen, of eliminating shortages or surpluses of money, by adding to or subtracting from the purchasing power of existing money holdings.  But because such changes place an extra burden on the price system, increasing the likelihood that individual prices will fail to accurately reflect the true scarcity of different goods and services at any moment, the less they have to be relied upon, the better.  A better alternative, if only it can somehow be achieved, or at least approximated, is a monetary system that adjusts the stock of money in response to changes in the demand for money balances, thereby reducing the need for changes in the general level of prices.

Please note that saying this is not saying that we need to have a centrally-planned money supply, let alone one that’s managed by a committee that’s unconstrained by any explicit rules or commitments.  Whether such a committee would in fact come closer to the ideal I’m defending than some alternative arrangement is a crucial question we must come to later on.  For now I will merely observe that, although it’s true that unconstrained central monetary planners might manage the money stock according to some ideal, that’s only so because there’s nothing that such planners might not do.

The claim that an ideal monetary regime is one that reduces the extent to which changes in the general level of prices are required to keep the quantity of money supplied in agreement with the quantity demanded might be understood to imply that what’s needed to avoid monetary troubles is a monetary system that avoids all changes to the general level of prices, or one that allows that level to change only at a steady and predictable rate.  We might trust a committee of central bankers to adopt such a policy.  But then again, we could also insist on it, by eliminating their discretionary powers in favor of having them abide by a strict stable price level (or inflation rate) mandate.

Monetary and Non-Monetary Causes of Price-Level Movements

But things aren’t quite so simple.  For while changes in the general price level are often both unnecessary and undesirable, they aren’t always so.  Whether they’re desirable or not depends on the reason for the change.

This often overlooked point is best brought home with the help of the famous “equation of exchange,” MV = Py.   Here, M is the money stock, V is its “velocity” of circulation, P is the price level, and y is the economy’s real output of goods and services.  Since output is a flow, the equation necessarily refers to an interval of time.  Velocity can then be understood as representing how often a typical unit of money is traded for output during that interval.  If the interval is a year, then both Py and MV stand for the money value of output produced during that year or, alternatively, for that year’s total spending.

From this equation, it’s apparent that changes in the general price level may be due to any one of three underlying causes: a change in the money stock, a change in money’s velocity, or a change in real output.
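One way to make this concrete is to restate the equation of exchange in (approximate) growth-rate form: %ΔM + %ΔV ≈ %ΔP + %Δy, or equivalently %ΔP ≈ %ΔM + %ΔV − %Δy. In a simple hypothetical where the money stock and velocity are unchanged while real output grows by 3 percent, the price level must fall by roughly 3 percent; if instead output falls by 3 percent with total spending unchanged, the price level must rise by about as much.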

Once upon a time, economists (or some of them, at least) distinguished between changes in the price level made necessary by developments in the “goods” side of the economy, that is, by changes in real output that occur independently of changes in the flow of spending, and those made necessary by changes in that flow, that is, in either the stock of money or its velocity.   Deflation — a decline in the equilibrium price level — might, for example, be due to a decline in the stock of money, or in its velocity, either of which would mean less spending on output.  But it could also be due to a greater abundance of goods that, with spending unchanged, must command lower prices.  It turns out that, while the first sort of deflation is something to be regretted, and therefore something that an ideal monetary system should avoid, the second isn’t.  What’s more, attempts to avoid the second, supply-driven sort of deflation can actually end up doing harm.  The same goes for attempts to keep prices from rising when the underlying cause is, not increased spending, but reduced real output of goods and services.  In short, what a good monetary system ought to avoid is, not fluctuations in the general price level or inflation rate per se, but fluctuations in the level or growth rate of total spending.

Prices Adjust Readily to Changes in Costs

But what about those “sticky” prices?  Aren’t they a reason to avoid any need for changes in the price level, and not just those changes made necessary by underlying changes in spending?  It turns out that they aren’t, for a number of reasons.[1]

First of all, whether a price is “sticky” or not depends on why it has to adjust.  When, for example, there’s a general decline in spending, sellers have all sorts of reasons to resist lowering their prices.  If the decline might be temporary, sellers would be wise to wait and see before incurring price-adjustment costs.  Also, sellers will generally not profit by lowering their prices until their own costs have also been lowered, creating what Leland Yeager calls a “who goes first” problem.  Because the costs that must themselves adjust downwards in order for sellers to have a strong motive to follow suit include very sticky labor costs, the general price level may take a long time “groping” its way (another Yeager expression) to its new, equilibrium level.  In the meantime, goods and services, being overpriced, go unsold.

When downward pressure on prices comes from an increase in the supply of goods, and especially when the increase reflects productivity gains, the situation is utterly different.  For gains in productivity are another name for falling unit costs of production; and for competing sellers to reduce their products’ prices in response to reduced costs is relatively easy.  It is, indeed, something of a no-brainer, because it promises to bring a greater market share, with no loss in per-unit profits.  Heck, companies devote all sorts of effort to being able to lower their costs precisely so that they can take advantage of such opportunities to profitably lower their prices.  By the same token, there is little reason for sellers to resist raising prices in response to adverse supply shocks. The widespread practice of “mark-up pricing” supplies ample proof of these claims.  Macroeconomic theories and models (and there are plenty of them, alas) that simply assign a certain “stickiness” parameter to prices, without allowing for the possibility that they respond more readily to some underlying changes than to others, lead policymakers astray by overlooking this important fact.

A Changing Price Level May be Less “Noisy” Than a Constant One

Because prices tend to respond relatively quickly to productivity gains and setbacks, there’s little to be gained by employing monetary policy to prevent their movements related to such gains or setbacks.  On the contrary: there’s much to lose, because productivity gains and losses tend to be uneven across firms and industries, making any resulting change to the general price level a mere average of quite different changes to different equilibrium prices.  Economists’ tendency — and it is hard to avoid — to conflate a “general” movement in prices, in the sense of a change in their average level, with a general movement in the across-the-board sense, is in this regard a source of great mischief.  A policy aimed at avoiding what is merely a change in the average, stemming from productivity innovations, increases, rather than reduces, the overall burden of price adjustment, introducing that much more “noise” into the price system.

Nor is it the case that a general decline or increase in prices stemming from productivity gains or setbacks itself conveys a noisy signal.  On the contrary: if things are generally getting cheaper to produce, a falling price level conveys that fact of reality in the most straightforward manner possible.  Likewise, if productivity suffers — if there is a war or a harvest failure or OPEC-inspired restriction in oil output or some other calamity — what better way to let people know, and to encourage them to act economically, than by letting prices generally go up?  Would it really help matters if, instead of doing that, the monetary powers-that-be decided to shrink the money stock, and thereby MV, for the sake of keeping the price level constant?  Yet that is what a policy of strict price-level stability would require.

Reflection on such scenarios ought to be enough to make even the most die-hard champion of price-level or inflation targeting reconsider.  But in case it isn’t, allow me to take still another tack, by observing that, when policymakers speak of stabilizing the price level or the rate of inflation, they mean stabilizing some measure of the level of output prices, such as the Consumer Price Index, or the GDP deflator, or the current Fed favorite, the PCE (“Personal Consumption Expenditure”) price-index.  So long as changes in total spending (“aggregate demand”) are the only source of changes in the overall level of prices,  those changes will tend to affect input as well as output prices, so policies that stabilize output prices will also tend to stabilize input prices.   General changes in productivity, in contrast, necessarily imply changes in the relation of input to output prices: general productivity gains (meaning gains in numerous industries that outweigh setbacks in others) mean that output prices must decline relative to input prices; while general productivity setbacks mean that output prices must increase relative to input prices.  In such cases, to stabilize output prices is to destabilize input prices, and vice versa.

So, which?  Appeal to menu costs supplies a ready answer: if a burden of price adjustment there must be, let the burden fall on the least sticky prices.  Since “input” prices include wages and salaries, that alone makes a policy that would impose the burden on them a poor choice, and a dangerous one at that.  As we’ve seen, it means adding insult to injury during productivity setbacks, when wage earners would have to take cuts (or settle for smaller or less frequent raises).  It also increases the risk of productivity gains being associated with asset-price bubbles, because those gains will inspire corresponding boosts to aggregate demand which, in the presence of sticky input prices, can cause profits to swell temporarily.  Unless the temporary nature of the extraordinary profits is recognized, asset prices will be bid up, but only for as long as it takes for costs to clamber their way upwards in response to the overall increase in spending.

What About Debtor-Creditor Transfers?

But if the price level is allowed to vary, and to vary unexpectedly, doesn’t that mean that the terms of fixed-interest rate contracts will be distorted, with creditors gaining at debtors’ expense when prices decline, and the opposite happening when they rise?

Usually it does; but, when price-level movements reflect underlying changes in productivity, it doesn’t.  That’s because productivity changes tend to be associated with like changes in  “neutral” or “full information” interest rates.  Suppose that, with each of us anticipating a real return on capital of four percent, and zero inflation, I’d happily lend you, and you’d happily borrow, $1000 at four percent interest.  The anticipated real interest rate is of course also four percent.  Now suppose that productivity rises unexpectedly, raising the actual real return on capital by two percentage points, to six percent rather than four percent.  In that case, other things equal, were I able to go back and renegotiate the contract, I’d want to earn a real rate of six percent, to reflect the higher opportunity cost of lending.  You, on the other hand, can also employ your borrowings more productively, or are otherwise going to be able (as one of the beneficiaries of the all-around gain in productivity) to bear a greater real interest-rate burden, other things equal, and so should be willing to pay the higher rate.

Of course, we can’t go back in time and renegotiate the loan.  So what’s the next best thing?  It is to let the productivity gains be reflected in proportionately lower output prices — that is, in a two percent decline in the price level over the course of the loan period — and thus in an increase, of two percentage points, in the real interest rate corresponding to the four percent nominal rate we negotiated.
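Put in terms of the familiar Fisher relation (realized real rate ≈ nominal rate minus inflation), the arithmetic of this hypothetical runs: 4% − (−2%) = 6%, so the two percent deflation delivers exactly the higher real return that both lender and borrower would have agreed to had renegotiation been possible.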

The same reasoning applies, mutatis mutandis, to the case of unexpected, adverse changes in productivity.  Only the argument for letting the price level change in this case, so that an unexpected increase in prices itself compensates for the unexpected decline in productivity, is even more compelling.  Why is that?  Because, as we’ve seen, to keep the price level from rising when productivity declines, the authorities would have to shrink the flow of spending.  Ask yourself whether doing that will make life easier or harder for debtors with fixed nominal debt contracts, and you’ll see my point.

Next: The Supply of Money

_____________________________

[1] What follows is a brief summary of arguments I develop at greater length in my 1997 IEA pamphlet, Less Than Zero.  In that pamphlet I specifically make the case for a rate of deflation equal to an economy’s (varying) rate of total factor productivity growth.  But the arguments may just as well be read as supplying grounds for preferring a varying yet generally positive inflation rate to a constant rate.

[Cross-posted from Alt-M.org]

May 16, 1966, is regarded as the beginning of Mao Zedong’s Cultural Revolution in China. Post-Maoist China has never quite come to terms with Mao’s legacy and especially the disastrous Cultural Revolution.

Many countries have a founding myth that inspires and sustains a national culture. South Africa celebrates the accomplishments of Nelson Mandela, the founder of that nation’s modern, multi-racial democracy. In the United States, we look to the American Revolution and especially to the ideas in the Declaration of Independence of July 4, 1776. 

The Declaration of Independence, written by Thomas Jefferson, is the most eloquent libertarian essay in history, especially its philosophical core:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.–That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.

The ideas of the Declaration, given legal form in the Constitution, took the United States of America from a small frontier outpost on the edge of the developed world to the richest country in the world in scarcely a century. The country failed in many ways to live up to the vision of the Declaration, notably in the institution of chattel slavery. But over the next two centuries, that vision inspired Americans to extend the promises of the Declaration—life, liberty, and the pursuit of happiness—to more and more people.

China, of course, followed a different vision, the vision of Mao Zedong. Take Mao’s speech on July 1, 1949, as his Communist armies neared victory. The speech was titled, “On the People’s Democratic Dictatorship.” Instead of life, liberty, and the pursuit of happiness, it spoke of “the extinction of classes, state power and parties,” of “a socialist and communist society,” of the nationalization of private enterprise and the socialization of agriculture, of a “great and splendid socialist state” in Russia, and especially of “a powerful state apparatus” in the hands of a “people’s democratic dictatorship.”

Tragically, and unbelievably, this vision appealed not only to many Chinese but even to Americans and Europeans, some of them prominent. But from the beginning, it went terribly wrong, as should have been predicted. Communism created desperate poverty in China. The “Great Leap Forward” led to mass starvation. The Cultural Revolution unleashed “an extended paroxysm of revolutionary madness” in which “tens of millions of innocent victims were persecuted, professionally ruined, mentally deranged, physically maimed and even killed.” Estimates of the number of unnatural deaths during Mao’s tenure range from 15 million to 80 million. This is so monstrous that we can’t really comprehend it. What inspired many American and European leftists was that Mao really seemed to believe in the communist vision. And the attempt to actually implement communism leads to disaster and death.

When Mao died in 1976, China changed rapidly. His old comrade Deng Xiaoping, a victim of the Cultural Revolution, had learned something from the 30 years of calamity. He began to implement policies he called “socialism with Chinese characteristics,” which looked a lot like freer markets: decollectivization and the “responsibility system” in agriculture, privatization of enterprises, international trade, liberalization of residency requirements.

The changes in China over the past generation are the greatest story in the world—more than a billion people brought from totalitarianism to a largely capitalist economic system that is eroding the continuing authoritarianism of the political system. On its 90th birthday, the CCP still rules China with an iron fist. There is no open political opposition, and no independent judges or media. And yet the economic changes are undermining the party’s control, a challenge of which the party is well aware. In 2008, Howard W. French reported in the New York Times:

Political change, however gradual and inconsistent, has made China a significantly more open place for average people than it was a generation ago.

Much remains unfree here. The rights of public expression and assembly are sharply limited; minorities, especially in Tibet and Xinjiang Province, are repressed; and the party exercises a nearly complete monopoly on political decision making.

But Chinese people also increasingly live where they want to live. They travel abroad in ever larger numbers. Property rights have found broader support in the courts. Within well-defined limits, people also enjoy the fruits of the technological revolution, from cellphones to the Internet, and can communicate or find information with an ease that has few parallels in authoritarian countries of the past.

The Chinese Communist Party remains in control. And there’s a resurgence of Maoism under the increasingly authoritarian rule of Xi Jinping, as my former colleague Jude Blanchette is writing about. But at least one study finds ideological groupings in China divided between statists who are both socialist and culturally conservative, and liberals who tend toward “constitutional democracy and individual liberty, … market-oriented reform … modern science and values such as sexual freedom.” 

Xi’s government struggles to protect its people from acquiring information, routinely battling with Google, Star TV, and other media. Howard French noted that “the country now has 165,000 registered lawyers, a five-fold increase since 1990, and average people have hired them to press for enforcement of rights inscribed in the Chinese Constitution.” People get used to making their own decisions in many areas of life and wonder why they are restricted in other ways. I am hopeful that the 100th anniversary of the CCP in 2021 will be of interest mainly to historians of China’s past and that the Chinese people will by then enjoy life, liberty, and the pursuit of happiness under a government that derives its powers from the consent of the governed. 

Computer modeling plays an important role in all of the sciences, but there can be too much of a good thing. A simple semantic analysis indicates that climate science has become dominated by modeling. This is a bad thing.

What we did

We found two pairs of surprising statistics. To do this we first searched the entire literature of science for the last ten years, using Google Scholar, looking for modeling. There are roughly 900,000 peer reviewed journal articles that use at least one of the words model, modeled or modeling. This shows that there is indeed a widespread use of models in science. No surprise in this.

However, when we filter these results to only include items that also use the term climate change, something strange happens. The number of articles is only reduced to roughly 55% of the total.

In other words it looks like climate change science accounts for fully 55% of the modeling done in all of science. This is a tremendous concentration, because climate change science is just a tiny fraction of the whole of science. In the U.S. Federal research budget climate science is just 4% of the whole and not all climate science is about climate change.

In short it looks like less than 4% of the science, the climate change part, is doing about 55% of the modeling done in the whole of science. Again, this is a tremendous concentration, unlike anything else in science.

We next find that when we search just on the term climate change, we get only slightly more articles than we found before. In fact, the number of climate change articles that include one of the three modeling terms is 97% of the number that include climate change alone. This is further evidence that modeling completely dominates climate change research.

To summarize, it looks like something like 55% of the modeling done in all of science is done in climate change science, even though it is a tiny fraction of the whole of science. Moreover, within climate change science almost all the research (97%) refers to modeling in some way.
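For readers who want to see how the two percentages fit together, here is a minimal sketch of the arithmetic in Python, using placeholder counts consistent with the rounded figures above rather than fresh Google Scholar results:

    # Placeholder counts, chosen to match the rounded figures reported above;
    # they are illustrative, not fresh Google Scholar query results.
    n_modeling = 900_000          # articles using "model", "modeled", or "modeling"
    n_modeling_and_cc = 495_000   # of those, articles that also contain "climate change"
    n_cc = 510_000                # all articles containing "climate change"

    share_of_modeling_in_cc = n_modeling_and_cc / n_modeling   # about 0.55
    share_of_cc_with_modeling = n_modeling_and_cc / n_cc       # about 0.97

    print(f"Modeling articles that also mention climate change: {share_of_modeling_in_cc:.0%}")
    print(f"Climate change articles that use a modeling term:   {share_of_cc_with_modeling:.0%}")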

This simple analysis could be greatly refined, but given the hugely lopsided magnitude of the results it is unlikely that they would change much.

What it means

Climate science appears to be obsessively focused on modeling. Modeling can be a useful tool, a way of playing with hypotheses to explore their implications or test them against observations. That is how modeling is used in most sciences.

But in climate change science modeling appears to have become an end in itself. In fact it seems to have become virtually the sole point of the research. The modelers’ oft stated goal is to do climate forecasting, along the lines of weather forecasting, at local and regional scales.

Here the problem is that the scientific understanding of climate processes is far from adequate to support any kind of meaningful forecasting. Climate change research should be focused on improving our understanding, not modeling from ignorance. This is especially true when it comes to recent long term natural variability, the attribution problem, which the modelers generally ignore. It seems that the modeling cart has gotten far ahead of the scientific horse.

Climate modeling is not climate science. Moreover, the climate science research that is done appears to be largely focused on improving the models. In doing this it assumes that the models are basically correct, that the basic science is settled. This is far from true.

The models basically assume the hypothesis of human-caused climate change. Natural variability only comes in as a short term influence that is negligible in the long run. But there is abundant evidence that long term natural variability plays a major role in climate change. We seem to recall that we have only very recently emerged from the latest Pleistocene glaciation, around 11,000 years ago.

Billions of research dollars are being spent in this single minded process. In the meantime the central scientific question – the proper attribution of climate change to natural versus human factors – is largely being ignored.

I spent the latter part of last week on a too-short trip to Alicante, Spain, to present some of my latest work on the reuse of former defense facilities in the United States. The occasion was a conference on “Defence Heritage” – the third since 2012 – hosted by the Wessex Institute (.pdf) in which scholars from more than a dozen countries shared their findings about how various defense installations around the world have been repurposed for everything from recreational parks to educational institutions to centers of business and enterprise.

This sort of research is sorely needed as Congress appears poised to deny the Pentagon’s request to close unneeded or excess bases. It is the fifth time that Congress has told the military that it must carry surplus infrastructure, and continue to misallocate resources where they aren’t needed, in order to protect narrow parochial interests in a handful of congressional districts that might house an endangered facility.

In a cover letter to a new Pentagon report that provides ample justification for the need to close bases, Deputy Secretary of Defense Robert Work explained:

Under current fiscal restraints, local communities will experience economic impacts regardless of a congressional decision regarding BRAC authorization. This has the harmful and unintended consequence of forcing the Military Departments to consider cuts at all installations, without regard to military value. A better alternative is to close or realign installations with the lowest military value. Without BRAC, local communities’ ability to plan and adapt to these changes is less robust and offers fewer protections than under BRAC law.

Work is almost certainly correct. But in my latest post at The National Interest’s The Skeptics, I urge him “and other advocates for another BRAC round” not to “limit themselves to green-eyeshade talk of cost savings and greater efficiency. They must also show how former defense sites don’t all become vast, barren wastelands devoid of jobs and people.”

It obviously isn’t enough to stress the potential savings, even though the savings are substantial. The DoD report estimates that the five BRAC rounds, plus the consolidation of bases in Europe, have generated annual recurring savings of $12.5 billion, and that a new BRAC round would save an additional $2 billion per year, after a six-year implementation period. A GAO study conducted in 2002 concluded that “BRAC savings are real and substantial and are related to cost reduction in key operational areas.”

Members of Congress who are uninterested in such facts, and who remain adamantly opposed to any base closures, anywhere, should consider what has actually happened to many of the bases dealt with during the five BRAC rounds, and the hundreds of other bases closed in the 1950s and 60s, before there was a BRAC. 

They don’t have to go far. They could start by speaking with the Association of Defense Communities and the Pentagon’s Office of Economic Adjustment, who keep track of these stories.

House Armed Services Committee Chairman Mac Thornberry (R-TX) could visit Austin-Bergstrom International Airport in Austin, Texas. He probably has, many times. The closure of Bergstrom Air Force Base was a thinly disguised blessing for a city that had struggled for years to find an alternative for its inadequate regional airport. Austin-Bergstrom today services millions of passengers, and has won awards for its design and customer service.

Sen. Kelly Ayotte (R-NH), Chair of the Senate Armed Services Readiness Subcommittee, might stop by the former Pease Air Force Base in Portsmouth, New Hampshire, during one of her trips home. One of the very first bases closed under the BRAC process, the sprawling site still hosts several massive runways, and the 157th Air Refueling Wing of the Air National Guard. But the base has chiefly been reborn as the Pease International Tradeport, which is now home to over 250 businesses that employ more than 10,000 people.

And both would benefit from a visit to the former Brunswick Naval Air Station in my home state of Maine. The base used to launch P-3 submarine-hunting airplanes; now it hosts dozens of businesses, including 28 start-ups in a new business incubator, TechPlace, that opened 14 months ago.

It’s particularly lovely in the summer time, if you don’t mind all the tourists. If they go, Thornberry and Ayotte should talk to some of the people who are responsible for its rapid turnaround, including Steve Levesque, the Executive Director of the Midcoast Regional Redevelopment Authority (MRRA), who contributed a chapter in this forthcoming volume on the renovation and reuse of former military sites, and Jeffrey Jordan, the MRRA’s Deputy Director, who I interviewed in 2014. I’m sure they’d be happy to show HASC and SASC members around Brunswick Landing.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

Badges? Do we need these stinking badges?

Need, perhaps not, but apparently some of us actually want them and will go to some lengths to get them. We’re not talking about badges for, say, being a Federal Agent At-Large for the Bureau of Narcotics and Dangerous Drugs:

 

[Image: a Bureau of Narcotics and Dangerous Drugs badge; source: Smithsonianmag.org]

But rather badges like the ones being given out by the editors of Psychological Science for being a good data sharer and playing well with others.

A new paper, authored by Mallory Kidwell and colleagues, examined the impact of Psychological Science’s badge/award system and found it to be quite effective at getting authors to make their data and material available to others via an open access repository. Compared with four “comparison journals,” the implementation of the badge system at Psychological Science led to a rapidly rising rate of participation and level of research transparency (Figure 1).

Figure 1. Percentage of articles reporting open data by half year by journal. Darker line indicates Psychological Science, and dotted red line indicates when badges were introduced in Psychological Science and none of the comparison journals. (Source: Kidwell et al., 2016).

Why is this important? The authors explain:

Transparency of methods and data is a core value of science and is presumed to help increase the reproducibility of scientific evidence. However, sharing of research materials, data, and supporting analytic code is the exception rather than the rule. In fact, even when data sharing is required by journal policy or society ethical standards, data access requests are frequently unfulfilled, or available data are incomplete or unusable. Moreover, data and materials become less accessible over time. These difficulties exist in contrast to the value of openness in general and to the move toward promoting or requiring openness by federal agencies, funders, and other stakeholders in the outcomes of scientific research.

For an example of data protectionism taken to the extreme, we remind you of the Climategate email tranche, where you’ll find gems like this:

“We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”
-Phil Jones email Feb. 21, 2005

This type of attitude, on display throughout the Climategate emails, makes the need for a push for more transparency plainly evident.

Kidwell et al. conclude that the badge system, as silly as it may seem, actually works quite well:

Badges may seem more appropriate for scouts than scientists, and some have suggested that badges are not needed. However, actual evidence suggests that this very simple intervention is sufficient to overcome some barriers to sharing data and materials. Badges signal a valued behavior, and the specifications for earning the badges offer simple guides for enacting that behavior. Moreover, the mere fact that the journal engages authors with the possibility of promoting transparency by earning a badge may spur authors to act on their scientific values. Whatever the mechanism, the present results suggest that offering badges can increase sharing by up to an order of magnitude or more. With high return coupled with comparatively little cost, risk, or bureaucratic requirements, what’s not to like?

The entire findings of Kidwell et al. are to be found here, in the open access journal PLOS Biology—and yes, they’ve made all their material readily available!

Another article that caught our eye this week provides further indication of why transparency in science is more necessary than ever. In a column in Nature magazine, Daniel Sarewitz suggests that the pressure to publish tends to result in lower quality papers through what he describes as “a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.” He cites the example of a contaminated cancer cell line that gave rise to hundreds of (wrong) published studies, which receive over 10,000 citations per year. Sarewitz attributes the hyper-citation to the internet and search engines, which make doing literature searches vastly easier than in the old days, when a search required a trip to the library stacks and lots of flipping through journals. Now, not so much.

Sarewitz doesn’t see this rapid expansion of the scientific literature and citation numbers as a good trend, and offers an interesting way out:

More than 50 years ago, [it was] predicted that the scientific enterprise would soon have to go through a transition from exponential growth to “something radically different”, unknown and potentially threatening. Today, the interrelated problems of scientific quantity and quality are a frightening manifestation of what [was foreseen]. It seems extraordinarily unlikely that these problems will be resolved through the home remedies of better statistics and lab practice, as important as they may be. Rather, they would seem …to announce that the enterprise of science is evolving towards something different and as yet only dimly seen.

Current trajectories threaten science with drowning in the noise of its own rising productivity… Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often…

With the publish-or-perish culture securely ingrained in our universities, coupled with evaluation systems based on how often your research is cited by others, it’s hard to see Sarewitz’s suggestion taking hold anytime soon, as good as it may be.

In the same vein is this article by Paula Stephan and colleagues titled “Bias against novelty in science: A cautionary tale for users of bibliometric indicators.” Here the authors make the point that:

There is growing concern that funding agencies that support scientific research are increasingly risk-averse and that their competitive selection procedures encourage relatively safe and exploitative projects at the expense of novel projects exploring untested approaches. At the same time, funding agencies increasingly rely on bibliometric indicators to aid in decision making and performance evaluation.

This situation, the authors argue, depresses novel research and instead encourages safe research that supports the status quo:

Research underpinning scientific breakthroughs is often driven by taking a novel approach, which has a higher potential for major impact but also a higher risk of failure.  It may also take longer for novel research to have a major impact, because of resistance from incumbent scientific paradigms or because of the longer time-frame required to incorporate the findings of novel research into follow-on research…

The finding of delayed recognition for novel research suggests that standard bibliometric indicators which use short citation time-windows (typically two or three years) are biased against novelty, since novel papers need a sufficiently long citation time window before reaching the status of being a big hit.

Stephan’s team ultimately concludes that this “bias against novelty imperils scientific progress.”

Does any of this sound familiar?

The American presidency has accumulated an unprecedented set of institutional advantages in the conduct of foreign policy. Unlike on the domestic side, where presidents face an activist and troublesome Congress, in foreign affairs the Constitution, the bureaucratic and legal legacies of previous wars, the overreaction to 9/11, and years of assiduous executive branch privilege-claiming now afford the White House great latitude to act without interference from Congress.

But one of the most tragic reasons for this situation stems from the abject failure of the marketplace of ideas to check the growth in executive power. In theory, the marketplace of ideas consists of free-wheeling debate over the ends and means of foreign policy and critical analysis of the ongoing execution of foreign policy that help the public and its political leaders to distinguish good ideas from poor ones. Philosophers since Immanuel Kant and John Stuart Mill have championed this dynamic. The Founding Fathers enshrined its logic in the First Amendment. Recent scholarship argues that the marketplace of ideas is central to the democratic peace and the ability of democracies to conduct smarter foreign policies than other nations.

In practice, however, today’s marketplace of ideas falls terribly short of this ideal.

The most famous recent example is the run up to the 2003 Iraq War. The Bush administration used an assortment of half-baked intelligence, exaggerations, and flat-out lies about Iraqi WMD programs to push the public into supporting the war. Shockingly, however, the national debate over the war was muted. Though false and based on flimsy evidence, the Bush administration’s claims received surprisingly little criticism. Reality reasserted itself, of course, as the failure to find any evidence of such programs made it clear that the administration had waged war under false pretenses. Where was the vaunted marketplace of ideas?

In an influential article in International Security written just after the 2003 Iraq War, political scientist Chaim Kaufmann argued that a good deal of the reason for the Bush administration’s ability to sell the war lay in the president’s institutional advantages. As president and Commander in Chief, Bush not only controlled the flow of critical intelligence information, he also enjoyed greater authority in the debate than his critics, allowing him to (falsely) frame the operation as part of the war on terrorism, thus taking advantage of the public’s outrage over the 9/11 attacks.

But Kaufmann (among many others) made another argument about why Bush succeeded: the news media simply failed to do its job. Indeed, after a review of its coverage of the run up to war, the New York Times editorial board took the unusual step of acknowledging it had failed in its core mission: “Looking back, we wish we had been more aggressive in re-examining the claims [about Iraqi WMD] as new evidence emerged – or failed to emerge.”

At this point, one might assume that more than a decade of intervention, chaos, and terrorism in the Middle East would have provided the news media with a powerful set of lessons. These lessons might include things like: scrutinize the basis for intervention; ask hard questions about the plans for what happens after the initial military operation ends; and work to appreciate how U.S. actions affect the attitudes and actions of other people, groups, and nations around the world.

Unfortunately it does not appear that the news media has learned much if anything. President Obama has spent eight years talking about withdrawing the United States from the Middle East but has in fact expanded the military footprint of the United States. He has done so without much real debate in the mainstream news about the wisdom of his actions. Tellingly, what debate has occurred has focused on erroneous claims that Obama has appeased our enemies by withdrawing too much.

The figure below reveals some telling evidence of this sad state of affairs. Libya, Yemen, Afghanistan, and Syria all represent countries in which the United States is engaged in combat in various ways and at varying levels of intensity. Crucially, each represents a situation with the potential to involve the United States military in a bigger and messier conflict.

Figure 1. Stories per Newspaper per Month Mentioning Obama & U.S. Military & Country Name

 Source: New York Times, Washington Post, Wall Street Journal via Factiva

The figure indicates how often each month the three most important newspapers in the country (the New York Times, Washington Post, and Wall Street Journal) mention President Obama, the U.S. military, and the name of the country in the same story. In each case the newspapers are writing fewer stories per month in 2016 than they did in 2015, and the numbers overall are quite low. The average reader of one of these newspapers would have read just three stories about U.S. military involvement with Yemen, for example, assuming he or she was diligent enough to have caught all three stories over the past four and a half months.

This is no doubt an imperfect measure of the national debate about Obama’s foreign policy; it certainly fails to capture some of the other sources of information and debate. That said, it is difficult to imagine that the marketplace of ideas could be very robust without a good number of such stories in the mainstream news media. Moreover, this is something of a best-case metric for the marketplace since this figure only includes data from three newspapers that cover foreign affairs far more intensively than almost all other American news outlets.

For those who were hopeful that the American marketplace of ideas on foreign policy would improve as 9/11 receded into history, this comes as bad news. It suggests that the challenges to free-wheeling debate do not lie simply in emotional overreactions to terrorism, or to temporary Congressional obsequiousness to the White House. Recent concerns about presidential foreign policy narratives aside, it also suggests that the problem isn’t simply political spin. The problem is deeper than that. At root is a failure of the marketplace of ideas in at least one if not both of its most fundamental elements. The first possibility is that the news media in its current form – dominated by big corporations and yet weakened economically by the Internet, audience fragmentation, and increasing partisanship – is incapable of doing the job the marketplace of ideas requires of it. The second, even darker possibility, is that the public, the ultimate arbiter of what the news must look like, is simply uninterested in having the necessary debate required to force the White House to be honest and transparent about foreign policy.

The American Association of Motor Vehicle Administrators is the umbrella group for DMV bureaucrats across the nation. It’s a non-profit group, but it does more than earnestly educate government officials and the public about the nuances of driver licensing. Since the 1930s, it has advocated for increased government spending on licensing bureaucracy—and it has advocated against driver’s rights. (It’s all discussed in my book Identity Crisis.) That doesn’t mean AAMVA can’t have fun. Indeed, AAMVA’s social season gets underway next week.

You see, AAMVA is a growing business. A decade ago, when the Capitol Hill staffer with lead responsibility for the REAL ID Act came through AAMVA’s revolving door, I noted the dollar-per-driver fee it collects in the Commercial Driver Licensing system. That $13 million in revenue has surely grown since then.

AAMVA’s revenues will grow far more when it runs the back-end of the REAL ID system, potentially pulling in from three-and-a-quarter cents to five cents per driver in the United States. At 210 million licensed drivers, AAMVA could make upwards of ten million dollars per year.
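For what it is worth, the arithmetic behind that estimate: 210 million drivers times 3.25 cents comes to roughly $6.8 million a year, and times 5 cents comes to roughly $10.5 million a year, hence “upwards of ten million dollars” at the top of the range.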

To help that business flow, every year AAMVA holds not one, but five lavish conferences, each of which has its own awards ceremonies aimed at saluting DMV officials and workers. There, AAMVA leadership, vendors, and officials from government agencies both state and federal gather to toast their successes in advancing their cause, including progress in implementing our national ID law, the REAL ID Act.

The first conference and ceremony, for “Region IV” (roughly, everything west of Texas), starts this coming Monday, May 16, 2016, in Portland, Oregon. Several awards will be distributed. Last year’s Region IV winners included Washington State’s Department of Licensing (“Excellence in Government Partnership” for a “Use Tax Valuation Project”) and California’s Department of Motor Vehicles (“USC Freshman Orientation Project”). Stay tuned to find out who will win prizes at this year’s taxpayer subsidized extravaganza!

AAMVA is doing everything right to cultivate friendship with its membership and to advance the aims of the driver licensing industry. Department of motor vehicle officials, after all, are the ones whom elected legislators turn to first when they have questions about policy.

It helps AAMVA a lot if DMV officials sing from the industry songbook. Heaven forfend if a DMV official were to tell his or her legislature that implementing REAL ID is unnecessary because the costs are disproportionate to the benefits, that REAL ID allows tracking of race, and that the federal government will always back down if a state declines to implement.

AAMVA regional conferences occur monthly between now and August, when its international conference kicks off in Colonial Williamsburg, “a location that is ideal to bring the entire family”! We will be taking a close look at awardees and top DMV officials who are close to AAMVA. There is a distinct possibility that they represent the interests of AAMVA to the legislature when called upon, rather than giving dispassionate advice about what’s best for taxpayers and the people of their states. That inclination is helped along by AAMVA’s busy national ID social calendar.

The federal district court sitting in D.C. yesterday handed a victory to those who believe in following statutory text, potentially halting the payment of billions of dollars to insurers under the Affordable Care Act’s entitlement “cost-sharing” provisions.

Since January 14, 2014, the Treasury Department has been authorizing payments of reimbursements to insurers providing Obamacare coverage. The problem is that Congress never appropriated the funds for those expenditures, so the transfers constitute yet another executive overreach.

Article I of the Constitution provides quite clearly that “No Money shall be drawn from the Treasury but in Consequence of Appropriations made by Law.” The “power of the purse” resides in Congress, a principle that implements the overall constitutional structure of the separation of powers and that was noted as an important bulwark against tyranny by Alexander Hamilton in the Federalist 78.

It’s a basic rule that bears repeating: the executive branch cannot disburse funds that Congress has not appropriated.

Accordingly, in a win for constitutional governance, Judge Rosemary Collyer held in House of Representatives v. Burwell that the cost-sharing reimbursements authorized under the ACA’s section 1402 must be appropriated by Congress annually, and are not assumed to be appropriated.

Judge Collyer gave a biting review of the federal government’s argument in the case: “It is a most curious and convoluted argument whose mother was undoubtedly necessity.” The Department of Health and Human Services claimed that another part of the ACA that is a permanent appropriation—section 1401, which provides tax credits—also somehow included a permanent appropriation for Section 1402. Hearkening to the late Justice Scalia’s lyrical prose, Collyer explained that the government was trying to “squeeze the elephant of Section 1402 reimbursements into the mousehole of Section 1401(d)(1).”

Indeed, this ruling is a bit of a feather in Cato’s cap as well. The legal argument that prevailed here—that the section 1402 funds cannot be disbursed without congressional appropriation—first was discussed publicly at a 2014 Cato policy forum. The lawyer who came up with the idea, David Rivkin of BakerHostetler, refined it in conjunction with his colleague Andrew Grossman, also a Cato adjunct scholar who spoke at the forum. After BakerHostetler had to withdraw from the case due to a conflict, George Washington University law professor Jonathan Turley (who also spoke at the forum) took over the case.

Judge Collyer stayed her injunction against the Treasury Department pending appeal before the U.S. Court of Appeals for the D.C. Circuit. Regardless of how that court decides – as in King v. Burwell, even if there’s a favorable panel, President Obama has stacked the overall deck – the case is likely to end up before the Supreme Court. If Chief Justice Roberts sees this as a technical case (like Hobby Lobby or Zubik/Little Sisters) rather than an existential one (like NFIB v. Sebelius or King), the challengers have a shot. But because Democrat-appointed justices simply will not interpret clear law in a way that hurts Obamacare, this case, like so much else, turns on the presidential election and the nominee who fills the current high-court vacancy.

Whatever happens down that line, Judge Collyer’s succinct ruling makes a powerful statement in favor of constitutional separation of powers as a bulwark for liberty and the rule of law.

“Obama said ‘so sue me.’ The House did, and Obama just lost.” That’s how the Wall Street Journal sub-heads its lead editorial this morning discussing the president’s latest court loss, nailing this most arrogant of presidents who believes he can rule “by pen and phone,” ignoring Congress in the process. With an unmatched record of losses before the Supreme Court, this onetime constitutional law instructor persists in ignoring the Constitution, even when the language is crystal clear.

Article I, section 9, clause 7 of the Constitution provides that “No Money shall be drawn from the Treasury but in Consequence of Appropriations made by Law.” Not much wiggle room there. So what did the president do? He committed billions of dollars from the Treasury without the approval of Congress. In her opinion yesterday Judge Rosemary Collyer noted, the Journal reports, “that Congress had expressly not appropriated money to reimburse health insurers under Section 1402 of the Affordable Care Act. The Administration spent money on those reimbursements anyway.”

George Washington Law’s Jonathan Turley, lead counsel for the House in this case, House v. Burwell, called yesterday’s decision “a resounding victory not just for Congress but for our constitutional system as a whole. We remain a system based on the principle of the separation of powers and the guarantee that no branch or person can govern alone.”

But don’t expect the president to be any more chastened by this decision than by his many previous losses in the courts. Indeed, as he was smarting from yesterday’s loss he was preparing, the Washington Post reports, to release a letter this morning “directing schools across the nation to provide transgender students with access to suitable facilities—including bathrooms and locker rooms—that match their chosen gender identity.” And where did he get his authority for that? Not from Congress. It’s based on his reading of Title IX of the Education Amendments of 1972 that for more than four decades no one else has seen, doubtless because Title IX prohibits discrimination on the basis of sex, not chosen sex. Reading Title IX as we want it to be is of a piece with reading the Constitution that way too. Thus do objectivity and the rule of law fade into the rule of man.

The day before yesterday, The Washington Post ran a piece with the alarming headline, “The middle class is shrinking just about everywhere in America.” Although you wouldn’t know it from the first few paragraphs, a shrinking middle class isn’t necessarily a bad thing. As HumanProgress.org Advisory Board member Mark Perry has pointed out, America’s middle class is disappearing primarily because people are moving into higher income groups, not falling into poverty. Data from the U.S. Census Bureau shows that after adjusting for inflation, households with an annual income of $100,000 or more rose from a mere 8% of households in 1967 to a quarter of households in 2014.

According to the Pew Research Center, the share of Americans in the middle class was 11 percentage points lower in 2015 than in 1971, with 7 points of that shift reflecting moves into higher income groups and 4 points reflecting moves into lower income groups. The share of Americans in the upper middle and highest income tiers rose from 14% in 1971 to 21% in 2015.

One has to read fairly far into the Washington Post’s coverage before seeing any mention of the fact that a shrinking middle class can mean growing incomes: 

“[In many] places, the shrinking middle class is actually a sign of economic gains, as more people who were once middle class have joined the ranks at the top. [For example, in] the Washington, D.C. metropolitan area, the share of adults living in lower-income households has actually held steady [from 2000 to 2014]. The households disappearing from the middle-class, rather, are reflected in the growing numbers at the top.”

Other cities with a shrinking middle class, a growing upper class and very little change in the lower class include New York, San Francisco and New Orleans. So the next time you hear someone bemoan the “shrinking middle class,” take a closer look at the data and keep in mind that it may actually be a sign of growing prosperity. 

The 2016 Milton Friedman Prize for Advancing Liberty has been awarded to Flemming Rose and will be formally presented at a dinner in New York on May 25. (Tickets still available!)

Flemming Rose is a Danish journalist. In the 1980s and 1990s he was the Moscow correspondent for Danish newspapers. He saw the last years of Soviet communism, with all its poverty, dictatorship, and censorship, and the fall of communism, only to be disappointed again with the advance of Russian authoritarianism. After also spending time in the United States, he became an editor at the Danish newspaper Jyllands-Posten. In 2005 he noticed “a series of disturbing instances of self-censorship” in Europe. In particular, “a Danish children’s writer had trouble finding an illustrator for a book about the life of Muhammad. Three people turned down the job for fear of consequences. The person who finally accepted insisted on anonymity, which in my book is a form of self-censorship.”

Rose decided to take a stand for free speech and the open society. He asked 25 Danish cartoonists “to draw Muhammad as you see him.” Later, he explained that 

We [Danes] have a tradition of satire when dealing with the royal family and other public figures, and that was reflected in the cartoons. The cartoonists treated Islam the same way they treat Christianity, Buddhism, Hinduism and other religions. And by treating Muslims in Denmark as equals they made a point: We are integrating you into the Danish tradition of satire because you are part of our society, not strangers. The cartoons are including, rather than excluding, Muslims.

Rose promised to publish all the cartoons he received. He got 12. They were by turns funny, provocative, insightful, and offensive. One implied that the children’s book author was a publicity seeker.  One mocked the anti-immigration Danish People’s Party. One portrayed the editors of Jyllands-Posten as a bunch of reactionary provocateurs. The most notorious depicted the prophet with a bomb in his turban.

A firestorm erupted. Protests followed. Western embassies in some Muslim countries were attacked. As many as 200 people were killed in violent protests. Rose and the turban cartoonist were the subjects of death threats. To this day Rose travels with security.

Is Rose in fact a provocateur or anti-Muslim? No. When we discovered that his book A Tyranny of Silence had not been published in English, that was the first question we asked. From reading the manuscript, and from talking to contacts in Denmark and Europe, we became confident that Rose was a genuine liberal with a strong anti-authoritarian bent, sharpened during his years as a reporter in the Soviet Union. His book, recently reissued with a new afterword, confirms that. Chapter 10, “A Victimless Crime,” traces the history of religious freedom from the Protestant Reformation to the challenges faced today by Muslims of different religious and political views.

Through it all, and through later attacks such as those at the French magazine Charlie Hebdo, Rose has continued to speak out for free speech and liberal values. He has made clear that his concern has always been – in the Soviet Union, in Europe, in the United States, and in Muslim countries – for individual dignity, freedom of religion, and freedom of thought. But he has insisted that there is no “right not to be offended.” He has become a leading public intellectual in a time when free speech is threatened in many ways by many factions. Today, in Politico Europe, he deplores a proposed law that would deny admission to Denmark to Islamists and criminalize anti-democratic speech. He worries:

What’s at stake in this controversy, and visible in similar developments across Europe, is the success of the Continent’s struggle to manage cultural and religious diversity. Most politicians believe we need to promote a diversity of opinions and beliefs, but manage that diversity with more tightly-controlled speech. That is wrong. A more diverse society needs more free speech, not less. This will be the key challenge for Denmark and Europe in the years ahead. The prospects do not look bright.

The prospects are brighter as long as free speech has defenders such as Flemming Rose.

The first few recipients of the Milton Friedman Prize were economists. Later came a young man who stopped Hugo Chavez’s referendum to create a socialist dictatorship, and a writer who spent 6 years in Iranian jails, followed by economic reformers from China and Poland.

I think the diversity of the recipients reflects the many ways in which liberty must be defended and advanced. People can play a role in the struggle for freedom as scholars, writers, activists, organizers, or elected officials, and in many other ways. Some may be surprised that a Prize named for a great scholar, a winner of the Nobel Prize in Economics, might go to a political official, a student activist, or a newspaper editor. But Milton Friedman was not just a world-class scholar. He was also a world-class communicator and someone who worked for liberty on issues ranging from monetary policy to conscription to drug prohibition to school choice. When he discussed the creation of the Prize with Cato president Ed Crane, he said that he didn’t want it to go just to great scholars. The Prize is awarded every other year “to an individual who has made a significant contribution to advance human freedom.” Friedman specifically cited the man who stood in front of the tank in Tiananmen Square as someone who would qualify for the Prize by striking a blow for liberty. Flemming Rose did not shy away from danger when he encountered it. He kept on advocating for a free and open society. Milton Friedman would be proud.

For more than a century, America has been the global leader of the aviation industry. But these days, the government-run parts of the industry are inefficient and falling behind, including airports, security screening, and air traffic control (ATC). International experience shows that these activities can be better run outside of government bureaucracies.

House Transportation Committee chairman Bill Shuster introduced legislation to shake up our moribund ATC system and move it out of the government. Shuster modeled his bill on highly successful Canadian reforms that established ATC as a self-funded nonprofit corporation, Nav Canada. For America, Canadian-style reforms could reduce airspace congestion, improve efficiency, benefit the environment, and save taxpayer money.

Such reforms should appeal to conservatives and Republicans, but there is resistance. Some Republicans are carrying water for the general aviation industry, which opposes the bill for apparently short-sighted financial reasons. And some conservative wonks oppose the bill because it does not reform the labor union structure of the ATC workforce. That objection is also short-sighted.

Economist Diana Furchtgott-Roth opposes Shuster’s legislation over labor issues. Diana is an expert on labor unions, but she is letting the perfect be the enemy of the good here. Our ATC system—run by the Federal Aviation Administration (FAA)—is being held back by government bureaucracy and congressional micromanagement, not so much by unionization.

The reason why the FAA has a long history of cost overruns, mismanaged technology projects, and other failures is the bad incentive structure that exists within all federal agencies. The more complex the task, the more that government bureaucracies fail, and ATC is becoming increasingly complex. Bob Poole has described the FAA’s bureaucracy problems in this study, and I have discussed federal bureaucratic failure more generally in this study.

Marc Scribner at CEI does a fantastic job of countering Diana’s arguments, and Bill Shuster responds to Diana’s labor-related complaints here. Personally, I would repeal “collective bargaining” (monopoly unionism) completely in the public and private sectors, for both economic and freedom reasons. But until that happens, I would take a private-sector unionized company any day over a unionized federal bureaucracy. Diana would apparently prefer the latter, which I find perplexing.

Private ATC managers would be more likely to push back against unreasonable union demands than government managers. And even as a nonprofit entity, a self-funded and unsubsidized ATC company would have a bottom line to meet. In Canada’s case, that structure has driven major improvements in productivity and created perhaps the best ATC system in the world. Nav Canada has more freedom to innovate than the FAA, and it has a strong incentive to do so because foreign sales of its technologies help the company meet its bottom line.

Nav Canada has won three International Air Transport Association “Eagle” Awards as the world’s best ATC provider. The system is handling 50 percent more traffic than before privatization, but with 30 percent fewer employees. And, as Marc Scribner notes, “Since the Canadian reforms 20 years ago, the fees charged to aircraft operators are now more than 30 percent lower than the taxes they replaced.”

And that progress in Canada was achieved with a unionized workforce and collective bargaining. In the long run, I favor freedom of association for ATC workers, but the top priority today is to overhaul the institutional structure of the system and bring in private management.

More on ATC reform here.

More on labor union reform here.

For the Wall Street Journal’s comparison of U.S. and Canadian ATC, see here.

Marc Scribner provides more excellent analysis here.

Bob Poole weighs in on these issues here.

Kudos to Marc and Bob, who both deserve “Eagle” awards for their top-class ATC analyses.

Copepods are small crustaceans that constitute a major group of secondary producers in the planktonic food web, often serving as a key food source for fish. And in the words of Isari et al. (2015), these organisms “have generally been found resilient to ocean acidification levels projected about a century ahead, so that they appear as potential ‘winners’ under the near-future CO2 emission scenarios.” However, many copepod species remain under-represented in ocean acidification studies. Thus, it was the goal of Isari et al. to expand the knowledge base of copepod responses to reduced levels of seawater pH that are predicted to occur over the coming century.

To accomplish this objective, the team of five researchers conducted a short (5-day) experiment in which they subjected adults of two copepod species (the calanoid Acartia grani and the cyclopoid Oithona davisae) to normal (8.18) and reduced (7.77) pH levels in order to assess the impacts of ocean acidification (OA) on copepod vital rates, including feeding, respiration, egg production and egg hatching success. At a pH value of 7.77, the simulated ocean acidification level is considered to be “at the more pessimistic end of the range of atmospheric CO2 projections.” And what did their experiment reveal?

In the words of the authors, they “did not find evidence of OA effects on the reproductive output (egg production, hatching success) of A. grani or O. davisae, consistent with the numerous studies demonstrating generally high resistance of copepod reproductive performance to the OA projected for the end of the century,” citing the works of Zhang et al. (2011), McConville et al. (2013), Vehmaa et al. (2013), Zervoudaki et al. (2014) and Pedersen et al. (2014). Additionally, they found no differences among pH treatments in copepod respiration or feeding activity for either species. As a result, Isari et al. say their study “shows neither energy constraints nor decrease in fitness components for two representative species, of major groups of marine planktonic copepods (i.e. Calanoida and Cyclopoida), incubated in the OA scenario projected for 2100.” Thus, this study adds to the growing body of evidence that copepods will not be harmed by, or may even benefit from, even the worst-case projections of future ocean acidification.

 

References

Isari, S., Zervoudaki, S., Saiz, E., Pelejero, C. and Peters, J. 2015. Copepod vital rates under CO2-induced acidification: a calanoid species and a cyclopoid species under short-term exposures. Journal of Plankton Research 37: 912-922.

McConville, K., Halsband, C., Fileman, E.S., Somerfield, P.J., Findlay, H.S. and Spicer, J.I. 2013. Effects of elevated CO2 on the reproduction of two calanoid copepods. Marine Pollution Bulletin 73: 428-434.

Pedersen, S.A., Håkedal, O.J., Salaberria, I., Tagliati, A., Gustavson, L.M., Jenssen, B.M., Olsen, A.J. and Altin, D. 2014. Multigenerational exposure to ocean acidification during food limitation reveals consequences for copepod scope for growth and vital rates. Environmental Science & Technology 48: 12,275-12,284.

Vehmaa, A., Hogfors, H., Gorokhova, E., Brutemark, A., Holmborn, T. and Engström-Öst, J. 2013. Projected marine climate change: Effects on copepod oxidative status and reproduction. Ecology and Evolution 3: 4548-4557.

Zervoudaki, S., Frangoulis, C., Giannoudi, L. and Krasakopoulou, E. 2014. Effects of low pH and raised temperature on egg production, hatching and metabolic rates of a Mediterranean copepod species (Acartia clausi) under oligotrophic conditions. Mediterranean Marine Science 15: 74-83.

Zhang, D., Li, S., Wang, G. and Guo, D. 2011. Impacts of CO2-driven seawater acidification on survival, egg production rate and hatching success of four marine copepods. Acta Oceanologica Sinica 30: 86-94.

Last year, I mentioned a Canadian court case that could help promote free trade within Canada. Well, a lower court has now ruled for free trade, finding that the Canadian constitution does, in fact, guarantee free trade among the provinces. Here are the basic facts, from the Toronto Globe and Mail:

In 2013, Gérard Comeau was caught in what is likely the lamest sting operation in Canadian police history. Mr. Comeau drove into Quebec, bought 14 cases of beer and three bottles of liquor, and headed home. The Mounties were waiting in ambush. They pulled him over, along with 17 other drivers, and fined him $292.50 under a clause in the New Brunswick Liquor Control Act that obliges New Brunswick residents to buy all their booze, with minor exceptions set out in regulations, from the provincial Liquor Corporation.

And here’s how the court ruled:

Mr. Comeau went to court and challenged the law on the basis of Section 121 of the Constitution: “All articles of the growth, produce or manufacture of any of the provinces shall, from and after the Union, be admitted free into each of the other provinces.”

The judge said Friday that the wording of Section 121 is clear, and that the provincial law violates its intention. The Fathers of Confederation wanted Canada to be one economic union, a mari usque ad mare. That’s why they wrote the clause.

If you are into this sort of thing, I highly recommend reading the judge’s decision, which looks deeply into the historical background of the Canadian constitutional provision at issue.

This is all very good for beer lovers, but also has much broader implications for internal Canadian trade in general. This is from an op-ed by Marni Soupcoff of the Canadian Constitution Foundation, which assisted with the case: 

But most Canadians aren’t interested in precisely how many more six packs will now be flowing through New Brunswick’s borders. They want to know what the Comeau decision means for them. They want to know whether there will be a lasting and far-reaching impact from one brave Maritimer’s constitutional challenge. And so, as the executive director of the Canadian Constitution Foundation (CCF), the organization that supported Comeau’s case, I’d like to answer that.

Canada is rife with protectionist laws and regulations that prevent the free flow of goods from one province to another. These laws affect Canadians’ ability to buy and sell milk, chickens, eggs, cheese and many other things, including some that neither you nor I have ever even thought about. And that is the beauty of this decision. It will open up a national market in everything. Yes, the CCF, Comeau and Comeau’s pro bono defence lawyers Mikael Bernard, Arnold Schwisberg and Ian Blue can all be proud that we have “freed the beer.” But we’ve done more than that — we’ve revived the idea that Canada should have free trade within its borders, which is what the framers of our Constitution intended. That means that the Supreme Court will likely have to revisit the constitutionality of this country’s marketing boards and other internal trade restrictions. In other words, this is a big deal.

As always with lower court decisions, there is the possibility of appeal. I haven’t heard anything definitive yet as to whether that will happen here. But whatever happens down the road, this lower court decision is something to be celebrated.

In a previous blog posting, I suggested that there is no case for capital adequacy regulation in an unregulated banking system.  In this ‘first-best’ environment, a bank’s capital policy would be just another aspect of its business model, comparable to its lending or reserving policies, say.  Banks’ capital adequacy standards would then be determined by competition and banks with inadequate capital would be driven out of business.

Nonetheless, it does not follow that there is no case for capital adequacy regulation in a ‘second-best’ world in which pre-existing state interventions — such as deposit insurance, the lender of last resort and Too-Big-to-Fail — create incentives for banks to take excessive risks.  By excessive risks, I refer to the risks that banks take but would not take if they had to bear the downsides of those risks themselves.

My point is that in this ‘second-best’ world there is a ‘second-best’ case for capital adequacy regulation to offset the incentives toward excessive risk-taking created by deposit insurance and so forth.  This posting examines what form such capital adequacy regulation might take.

At the heart of any system of capital adequacy regulation is a set of minimum required capital ratios, which were traditionally taken to be the ratios of core capital[1] to some measure of bank assets.

Under the international Basel capital regime, the centerpiece capital ratios involve a denominator measure known as Risk-Weighted Assets (RWAs).  The RWA approach gives each asset an arbitrary fixed weight between 0 percent and 100 percent, with OECD government debt given a weight of zero.  The RWA measure itself is then the sum of the individual risk-weighted assets on a bank’s balance sheet.

The incentives created by the RWA approach turned Basel into a game in which the banks loaded up on low risk-weighted assets and most of the risks they took became invisible to the Basel risk measurement system.
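
To make the gaming concrete, here is a minimal sketch in Python, using a hypothetical balance sheet and stylized risk weights of my own choosing (not any actual bank or the full Basel weighting schedule), of how a bank loaded up on zero-weighted assets can report a comfortable risk-weighted capital ratio while its plain leverage ratio signals high risk:

# Illustrative only: hypothetical balance sheet with stylized Basel-style risk weights.
assets = {
    "oecd_government_debt": 800.0,   # risk weight 0.0
    "residential_mortgages": 100.0,  # risk weight 0.5
    "corporate_loans": 100.0,        # risk weight 1.0
}
risk_weights = {
    "oecd_government_debt": 0.0,
    "residential_mortgages": 0.5,
    "corporate_loans": 1.0,
}
capital = 30.0  # core capital

total_assets = sum(assets.values())                       # 1000
rwa = sum(assets[k] * risk_weights[k] for k in assets)    # 150

print(f"RWA-based capital ratio:   {capital / rwa:.1%}")           # 20.0% -- looks strong
print(f"Leverage ratio:            {capital / total_assets:.1%}")  # 3.0%
print(f"Leverage (assets/capital): {total_assets / capital:.0f}x") # 33x

With these illustrative numbers the bank holds just 3 cents of capital per dollar of assets, yet its risk-weighted ratio comes out at a reassuring 20 percent, simply because most of the balance sheet carries a zero weight.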

The unreliability of the RWA measure is apparent from the following chart due to Andy Haldane:

Figure 1: Average Risk Weights and Leverage

This chart shows average Basel risk weights and leverage for a sample of international banks over the period 1994–2011.  Over this period, average risk weights show a clear downward trend, falling from just over 70 percent to about 40 percent.  Over the same period, bank leverage or assets divided by capital — a simple measure of bank riskiness — moved in the opposite direction, rising from about 20 to well over 30 at the start of the crisis.  The only difference is that while the latter then reversed itself, the average risk weight went on falling during the crisis, extending its earlier trend.  “While the risk traffic lights were flashing bright red for leverage [as the crisis approached], for risk weights they were signaling ever-deeper green,” as Haldane put it: the risk weights were a contrarian indicator for risk, indicating that risk was falling when it was, in fact, increasing sharply.[2]  The implication is that the RWA is a highly unreliable risk measure.[3]

Long before Basel, the preferred capital ratio was core capital to total assets, with no adjustment in the denominator for any risk-weights.  The inverse of this ratio, the bank leverage measure mentioned earlier, was regarded as the best available indicator of bank riskiness: the higher the leverage, the riskier the bank.

These older metrics then went out of fashion.  Over 30 years ago, it became fashionable to base regulatory capital ratios on RWAs because of their supposedly greater ‘risk sensitivity.’  Later the risk models came along, which were believed to provide even greater risk sensitivity.  The old capital/assets ratio was now passé, dismissed as primitive because of its risk insensitivity.  However, as RWAs and risk models have themselves become discredited, this risk insensitivity is no longer the disadvantage it once seemed to be.

On the contrary.

The old capital to assets ratio is making a comeback under a new name, the leverage ratio:[4] what is old is new again.  The introduction of a minimum leverage ratio is one of the key principles of the Basel III international capital regime.  Under this regime, there is to be a minimum required leverage ratio of 3 percent to supplement the various RWA-based capital requirements that are, unfortunately, its centerpieces.

The banking lobby hate the leverage ratio because it is less easy to game than RWA-based or model-based capital rules.  They and their Basel allies then argue that we all know that the RWA measure is flawed, but we shouldn’t throw out the baby with the bathwater.  (What baby? I ask. RWA is a pretend number and it’s as simple as that.)  They then assert that the leverage ratio is also flawed and conclude that we need the RWA to offset the flaws in the leverage ratio.

The flaw they now emphasize is the following: a minimum required leverage ratio would encourage banks to load up on the riskiest assets because the leverage ratio ignores the riskiness of individual assets.  This argument is commonly made and one could give many examples.  To give just one, a Financial Times editorial — ironically entitled “In praise of bank leverage ratios” — published on July 10, 2013 stated flatly:

Leverage ratios …  encourage lenders to load up on the riskiest assets available, which offer higher returns for the same capital.

Hold on right there!  Those who make such claims should think them through: if the banks were to load up on the riskiest assets, we first need to consider who would bear those higher risks.

The FT statement is not true as a general proposition and it is false in the circumstances that matter, i.e., where what is being proposed is a high minimum leverage ratio that would internalize the consequences of bank risk-taking.  And it is false in those circumstances precisely because it would internalize such risk-taking.

Consider the following cases:

In the first, imagine a bank with an infinitesimal capital ratio.  This bank benefits from the upside of its risk-taking but does not bear the downside.  If the risks pay off, it gets the profit; but if it makes a loss, it goes bankrupt and the loss is passed to its creditors.  Because the bank does not bear the downside, it has an incentive to load up on the riskiest assets available in order to maximize its expected profit.  In this case, the FT statement is correct.

In the second case, imagine a bank with a high capital-to-assets ratio.  This bank benefits from the upside of its risk-taking but also bears the downside if it makes a loss.  Because the bank bears the downside, it no longer has an incentive to load up on the riskiest assets.  Instead, it would select a mix of low-risk and high-risk assets that reflected its own risk appetite, i.e., its preferred trade-off between risk and expected return.  In this case, the FT statement is false.
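
A small numerical sketch, with payoffs I have made up purely for illustration, makes the contrast between these two cases explicit. Shareholders keep the upside but, thanks to limited liability, cannot lose more than their capital:

# Illustrative only: a bank with assets of 100, funded by capital plus debt of (100 - capital).
# The "safe" asset pays 105 for sure; the "risky" asset pays 130 or 70 with equal probability,
# so the risky asset has a LOWER expected total payoff (100 versus 105).

def expected_equity_payoff(capital, outcomes):
    # Shareholders receive whatever is left after repaying creditors, but never less than zero.
    debt = 100.0 - capital
    return sum(prob * max(value - debt, 0.0) for prob, value in outcomes)

safe = [(1.0, 105.0)]
risky = [(0.5, 130.0), (0.5, 70.0)]

for capital in (2.0, 30.0):
    print(f"capital = {capital:>4}: safe -> {expected_equity_payoff(capital, safe):5.1f}, "
          f"risky -> {expected_equity_payoff(capital, risky):5.1f}")

# capital =  2.0: safe ->   7.0, risky ->  16.0  (the thinly capitalized bank prefers the gamble)
# capital = 30.0: safe ->  35.0, risky ->  30.0  (the well-capitalized bank does not)

Even though the risky asset destroys value in expectation, the thinly capitalized bank’s shareholders prefer it because creditors bear most of the downside; once shareholders have substantial capital at stake, the preference reverses.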

My point is that the impact of a minimum required leverage ratio on bank risk-taking depends on the leverage ratio itself, and that it is only in the case of a very low leverage ratio that banks will load up on the riskiest assets.  However, if a bank is very thinly capitalized then it shouldn’t operate at all.  In a free-banking system, such a bank would lose creditors’ confidence and be run out of business.  Even in the contemporary United States, such a bank would fall foul of the Prompt Corrective Action statutes and the relevant authorities would be required to close it down.

In short, far from encouraging excessive risk-taking as is widely believed, a high minimum leverage ratio would internalize risk-taking incentives and lead to healthy rather than excessive risk-taking.

Then there is the question of how high ‘high’ should be.  There is of course no single magic number, but there is a remarkable degree of expert consensus on the broad order of magnitude involved.  For example, in an important 2010 letter to the Financial Times drafted by Anat Admati, she and 19 other renowned experts suggested a minimum required leverage ratio of at least 15 percent — at least five times greater than under Basel III — and some advocate much higher minima.  Independently, John Allison, Martin Hutchinson, Allan Meltzer and yours truly have also advocated minimum leverage ratios of at least 15 percent.  By a curious coincidence, 15 percent is about the average leverage ratio of U.S. banks at the time the Fed was founded.

There is one further and much under-appreciated benefit from a leverage ratio.  Suppose we had a leverage ratio whose denominator was not total assets or some similar measure.  Suppose instead that its denominator was the total amount at risk: one would take each position, establish the potential maximum loss on that position, and take the denominator to be the sum of these potential losses.  A leverage-ratio capital requirement based on a total-amount-at-risk denominator would give each position a capital requirement that was proportional to its riskiness, where its riskiness would be measured by its potential maximum loss.

Now consider any two positions with the same fair value.  With a total asset denominator, they would attract the same capital requirement, independently of their riskiness.  But now suppose that one position is a conventional bank asset such as a commercial loan, where the most that could be lost is the value of the loan itself.  The other position is a long position in a Credit Default Swap (i.e., a position in which the bank sells credit insurance).  If the reference credit in the CDS should sharply deteriorate, the long position could lose much more than its current value.  Remember AIG! Therefore, the CDS position is much riskier and would attract a much greater capital requirement under a total-amount-at-risk denominator.
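
Here is a minimal sketch of that comparison, using the hypothetical 15 percent minimum ratio discussed above and a made-up written-CDS position whose assumed maximum loss is its notional amount; all the numbers are purely illustrative:

# Illustrative only: two positions with the same fair value but very different maximum losses.
MIN_RATIO = 0.15  # an assumed 15 percent minimum leverage ratio

positions = [
    # (description, fair value, assumed maximum possible loss)
    ("commercial loan, fair value 10",                  10.0,  10.0),  # can lose at most the loan itself
    ("written CDS protection, notional 100, value 10",  10.0, 100.0),  # could lose up to the notional
]

for name, fair_value, max_loss in positions:
    charge_on_assets = MIN_RATIO * fair_value  # denominator = accounting value
    charge_on_risk = MIN_RATIO * max_loss      # denominator = potential maximum loss
    print(f"{name}: charge on assets = {charge_on_assets:.1f}, "
          f"charge on amount-at-risk = {charge_on_risk:.1f}")

# commercial loan:        1.5 either way
# written CDS protection: 1.5 on assets, but 15.0 on amount-at-risk

Under the amount-at-risk denominator the written CDS attracts ten times the capital charge of the loan, even though the two positions carry the same accounting value.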

The really toxic positions would be revealed to be the capital-hungry monsters that they are.  Their higher capital requirements would make many of them unattractive once the banks themselves were made to bear the risks involved.  Much of the toxicity in banks’ positions would soon disappear.

The trick here is to get the denominator right.  Instead of measuring positions by their accounting fair values as under, e.g., U.S. Generally Accepted Accounting Principles, one should measure those positions by how much they might lose.

Nonetheless, even the best-designed leverage ratio regime can only ever be a second-best reform: it is not a panacea for all the ills that afflict the banking system.  Nor is it even clear that it would be the best ‘second-best’ reform: re-establishing some form of unlimited liability might be a better choice.

However, short of free banking, under which no capital regulation would be required in the first place, a high minimum leverage ratio would be a step in the right direction.

_____________

[1] By core capital, I refer to the ‘fire-resistant’ capital available to support the bank in the heat of a crisis.  Core capital would include, e.g., tangible common equity and some retained earnings and disclosed reserves.  Core capital would exclude certain ‘softer’ capital items that cannot be relied upon in a crisis.  An example of the latter would be Deferred Tax Assets (DTAs).  DTAs allow a bank to claim back tax on previously incurred losses in the event it subsequently returns to profitability, but are useless to a bank in a solvency crisis.

[2] A. G. Haldane, “Constraining discretion in bank regulation.” Paper given at the Federal Reserve Bank of Atlanta Conference on ‘Maintaining Financial Stability: Holding a Tiger by the Tail(s)’, Federal Reserve Bank of Atlanta, 9 April 2013, p. 10.

[3] The unreliability of the RWA measure is confirmed by a number of other studies.  These include, e.g.: A. Demirgüç-Kunt, E. Detragiache, and O. Merrouche, “Bank Capital: Lessons from the Financial Crisis,” World Bank Policy Research Working Paper Series No. 5473 (2010); A. N. Berger and C. H. S. Bouwman, “How Does Capital Affect Bank Performance during Financial Crises?” Journal of Financial Economics 109 (2013): 146–76; A. Blundell-Wignall and C. Roulet, “Business Models of Banks, Leverage and the Distance-to-Default,” OECD Journal: Financial Market Trends 2012, no. 2 (2014); T. L. Hogan, N. Meredith and X. Pan, “Evaluating Risk-Based Capital Regulation,” Mercatus Center Working Paper Series No. 13-02 (2013); and V. V. Acharya and S. Steffen, “Falling short of expectation — stress testing the Eurozone banking system,” CEPS Policy Brief No. 315, January 2014.

[4] Strictly speaking, Basel III does not give the old capital-to-assets ratio a new name.  Instead, it creates a new leverage ratio measure in which the old denominator, total assets, is replaced by a new denominator measure called the leverage exposure.  The leverage exposure is meant to take account of the off-balance-sheet positions that the total assets measure fails to include.  However, in practice, the leverage exposure is not much different from the total assets measure, and for present purposes one can ignore the difference between the two denominators.  See Basel Committee on Banking Supervision, “Basel III: A global regulatory framework for more resilient banks and banking systems.”  Basel: Bank for International Settlements, June 2011, pp. 62-63.

[Cross-posted from Alt-M.org]

There are a great many reasons to support educational choice: maximizing freedom, respecting pluralism, reducing social conflict, empowering the poor, and so on. One reason is simply this: it works.

This week, researchers Patrick J. Wolf, M. Danish Shakeel, and Kaitlin P. Anderson of the University of Arkansas released the results of their painstaking meta-analysis of the international, gold-standard research on school choice programs, which concluded that, on average, such programs have a statistically significant positive impact on student performance on reading and math tests. Moreover, the magnitude of the positive impact increased the longer students participated in the program.

As Wolf observed in a blog post explaining the findings, the “clarity of the results… contrasts with the fog of dispute that often surrounds discussions of the effectiveness of private school choice.”

That’s So Meta

One of the main advantages of a meta-analysis is that it can overcome the limitations of individual studies (e.g., small sample sizes) by pooling the results of numerous studies. This meta-analysis is especially important because it includes all random-assignment studies on school choice programs (the gold standard for social science research), while excluding studies that employed less rigorous methods. The analysis included 19 studies on 11 school choice programs (including government-funded voucher programs as well as privately funded scholarship programs) in Colombia, India, and the United States. Each study compared the performance of students who had applied for and randomly won a voucher to a “control group” of students who had applied for a voucher but randomly did not receive one. As Wolf explained, previous meta-analyses and research reviews omitted some gold-standard studies and/or included less rigorous research:

The most commonly cited school choice review, by economists Cecilia Rouse and Lisa Barrow, declares that it will focus on the evidence from existing experimental studies but then leaves out four such studies (three of which reported positive choice effects) and includes one study that was non-experimental (and found no significant effect of choice).  A more recent summary, by Epple, Romano, and Urquiola, selectively included only 48% of the empirical private school choice studies available in the research literature.  Greg Forster’s Win-Win report from 2013 is a welcome exception and gets the award for the school choice review closest to covering all of the studies that fit his inclusion criteria – 93.3%.

Survey Says: School Choice Improves Student Performance

The meta-analysis found that, on average, participating in a school choice program improves student test scores by about 0.27 standard deviations in reading and 0.15 standard deviations in math. In layman’s terms, these are “highly statistically significant, educationally meaningful achievement gains of several months of additional learning from school choice.”
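
For readers curious what “pooling” looks like in practice, here is a minimal sketch of the standard fixed-effect, inverse-variance approach to combining study results. The effect sizes and standard errors below are made up for illustration; they are not the actual data from the Wolf, Shakeel, and Anderson analysis:

# Illustrative only: fixed-effect (inverse-variance) pooling of made-up study results.
studies = [
    # (effect size in standard deviations, standard error)
    (0.35, 0.15),
    (0.10, 0.08),
    (0.25, 0.12),
]

weights = [1.0 / se ** 2 for _, se in studies]  # more precise studies get more weight
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} SD (standard error {pooled_se:.2f})")

Because larger, more precise studies receive more weight, the pooled estimate can detect effects that each small sample, taken alone, could not distinguish from noise.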

Interestingly, the positive results appeared to be larger for the programs in developing countries than for those in the United States, especially in reading. That might stem from a larger gap in quality between government-run and private schools in the developing world. In addition, American students who lost the voucher lotteries “often found other ways to access school choices.” For example, in Washington, D.C., 12% of students who lost the voucher lottery still managed to enroll in a private school, and 35% enrolled in a charter school, meaning barely more than half of the “control group” (the remaining 53%) attended their assigned district school.

The meta-analysis also found larger positive results from publicly funded rather than privately funded programs. The authors note that public funding “could be a proxy for the voucher amount” because the publicly funded vouchers were worth significantly more, on average, than the privately funded scholarships. The authors suggest that parents who are “relieved of an additional financial burden… might therefore be more likely to keep their child enrolled in a private school long enough to realize the larger academic benefits that emerge after three or more years of private schooling.” Moreover, the higher-value vouchers are more likely to “motivate a higher-quality population of private schools to participate in the voucher program.” The authors also note that differences in accountability regulations may play a role.

The Benefits of Choice and Competition

The benefits of school choice are not limited to participating students. Last month, Wolf and Anna J. Egalite of North Carolina State University released a review of the research on the impact of competition on district schools. Although it is impossible to conduct a random-assignment study on the effects of competition (as much as some researchers would love to force different states to randomly adopt different policies in order to measure the difference in effects, neither the voters nor their elected representatives are so keen on the idea), there have been dozens of high-quality studies addressing this question, and a significant majority find that increased competition has a positive impact on district school performance: 

Thirty of the 42 evaluations of the effects of school-choice competition on the performance of affected public schools report that the test scores of all or some public school students increase when schools are faced with competition. Improvement in the performance of district schools appear to be especially large when competition spikes but otherwise, is quite modest in scale.

In other words, the evidence suggests that when district schools know that their students have other options, they take steps to improve. This is exactly what economic theory would predict. Monopolists are slow to change while organizations operating in a competitive environment must learn to adapt or they will perish.

On Designing School Choice Policies

Of course, not all school choice programs are created equal. Wolf and Egalite offer several wise suggestions to policymakers based on their research. Policymakers should “encourage innovative and thematically diverse schools” by crafting legislation that is “flexible and thoughtful enough to facilitate new models of schooling that have not been widely implemented yet.” We don’t know what education will look like in the future, so our laws should be platforms for innovations rather than constraints molded to the current system.

That means policymakers should resist the urge to over-regulate. The authors argue that private schools “should be allowed to maintain a reasonable degree of autonomy over instructional practices, pedagogy, and general day-to-day operations” and that, beyond a background check, “school leaders should be the ones determining teacher qualifications in line with their mission.” We don’t know the “one best way” to teach students, and it’s likely that no “one best way” even exists. For that matter, we have not yet figured out a way to determine in advance whether a would-be teacher will be effective or not. Indeed, as this Brookings Institution chart shows (see page 8), there is practically no difference in effectiveness between traditionally certified teachers and their alternatively certified or even uncertified peers:

In other words, if in the name of “quality control,” the government mandated that voucher-accepting schools only hire traditionally certified teachers, not only would such a regulation fail to prevent the hiring of less-effective teachers, it would also prevent private schools from hiring lots of effective teachers. Sadly, too many policymakers never tire of crafting new ways to “ensure quality” that fall flat or even have the opposite impact.

School choice policies benefit both participating and nonparticipating students. Students who use vouchers or tax-credit scholarships to attend the school of their choice benefit by gaining access to schools that better fit their needs. Students who do not avail themselves of those options still benefit because the very access to alternatives spurs district schools to improve. These are great reasons to expand educational choice, but policymakers should be careful not to undermine the market mechanisms that foster competition and innovation.

For more on the impact of regulations on school choice policies, watch our recent Cato Institute event: “School Choice Regulations: Friend or Foe?”

 

Fresh off his resounding victory in the West Virginia primary, Senator Bernie Sanders has intimated that he has no intention of dropping out of the race any time soon, even though he trails his rival Hillary Clinton significantly in pledged delegates. One of the cornerstones of the Sanders campaign has been his health care plan, which would replace the entirety of the current health care system with a more generous version of Medicare. His campaign has claimed the plan would cost a little more than $13.8 trillion over the next decade, and he has proposed to fund these new expenditures with a clutch of tax increases, many of them levied on higher-income households. When the plan was released, analysts at Cato and elsewhere expressed skepticism that the campaign’s cost estimates accurately accounted for all the increases in federal health expenditures the plan would require and argued that its cost-savings assumptions were overly optimistic. Now, a new study from the left-leaning Urban Institute corroborates many of these concerns, finding that Berniecare would cost twice as much as the $13.8 trillion price tag touted by the Sanders campaign.

The authors from the Urban Institute estimate that Berniecare would increase federal expenditures by $32 trillion (an increase of 233 percent) over the next decade. The $15 trillion in additional taxes proposed by Sanders would fail to cover even half of the health care proposal’s price tag, leaving a funding gap of $16.6 trillion. In the first year, federal spending would increase by $2.34 trillion. To give some context, total national health expenditures in the United States were $3 trillion in 2014.

Sanders was initially able to restrict most of the tax increases needed to higher-income households through income-based premiums, significantly increasing taxes on capital gains and dividends, and hiking marginal tax rates on high earners. Sanders cannot squeeze blood from the same stone twice, and there’s likely not much more he could do to propose higher taxes on these households, which means if he were to actually have to find ways to finance Berniecare, he’d have to turn to large tax increases on the middle class.

There are different reasons Berniecare would increase federal health spending so significantly. The most straightforward is that it would replace all other forms of health care, from employer sponsored insurance to state and local programs, with one federal program. The second factor is that the actual program would be significantly more generous than Medicare (and the European health systems Sanders so often praises), while also removing even cursory cost-sharing requirements. In addition, this proposal would add new benefits, like a comprehensive long-term services and support (LTSS) component that the Urban Institute estimates would cost $308 billion in its first year and $4.14 trillion over the next decade. These estimates focus on annual cash flows over a relatively short time period, so the study doesn’t delve into the longer-term sustainability issues that might develop from this new component, although they do note that “after this 10-year window, we would anticipate that costs would grow faster than in previous years as baby boomers reach age 80 and older, when rates of severe disability and LTSS use are much higher. Revenues would correspondingly need to grow rapidly over the ensuing 20 years.”

Even at twice the initial price tag claimed by the Sanders campaign, these cost estimates from the Urban Institute might actually underestimate the total costs. As the authors themselves point out, they do not incorporate estimates for the higher utilization of health care services that would almost certainly occur when people move from the current system to the first-dollar coverage of the more generous version of Medicare they would have under this proposal. They also chose not to incorporate higher provider payment rates for acute care services that might be necessary, and include “assumptions about reductions in drug prices [that] are particularly aggressive and may fall well short of political feasibility.”

Berniecare would increase federal government spending by $32 trillion over the next decade, more than twice as much as the revenue from the trillions in taxes Sanders has proposed. And even that figure may undersell the actual price tag, since it considers only short-term cash flows; there could be even greater sustainability problems over a longer time horizon. One thing is for certain: the plan would require even more trillions in additional tax hikes.
