Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Copepods are small crustaceans that constitute a major group of secondary producers in the planktonic food web, often serving as a key food source for fish. And in the words of Isari et al. (2015), these organisms “have generally been found resilient to ocean acidification levels projected about a century ahead, so that they appear as potential ‘winners’ under the near-future CO2 emission scenarios.” However, many copepod species remain under-represented in ocean acidification studies. Thus, it was the goal of Isari et al. to expand the knowledge base of copepod responses to reduced levels of seawater pH that are predicted to occur over the coming century.

To accomplish this objective, the team of five researchers conducted a short (5-day) experiment in which they subjected adults of two copepod species (the calanoid Acartia grani and the cyclopoid Oithona davisae) to normal (8.18) and reduced (7.77) pH levels in order to assess the impacts of ocean acidification (OA) on copepod vital rates, including feeding, respiration, egg production and egg hatching success. At a pH value of 7.77, the simulated ocean acidification level is considered to be “at the more pessimistic end of the range of atmospheric CO2 projections.” And what did their experiment reveal?

In the words of the authors, they “did not find evidence of OA effects on the reproductive output (egg production, hatching success) of A. grani or O. davisae, consistent with the numerous studies demonstrating generally high resistance of copepod reproductive performance to the OA projected for the end of the century,” citing the works of Zhang et al. (2011), McConville et al. (2013), Vehmaa et al. (2013), Zervoudaki et al. (2014) and Pedersen et al. (2014). Additionally, they found no differences among pH treatments in copepod respiration or feeding activity for either species. As a result, Isari et al. say their study “shows neither energy constraints nor decrease in fitness components for two representative species, of major groups of marine planktonic copepods (i.e. Calanoida and Cyclopoida), incubated in the OA scenario projected for 2100.” Thus, this study adds to the growing body of evidence that copepods will not be harmed by, or may even benefit from, even the worst-case projections of future ocean acidification.



Isari, S., Zervoudaki, S., Saiz, E., Pelejero, C. and Peters, J. 2015. Copepod vital rates under CO2-induced acidification: a calanoid species and a cyclopoid species under short-term exposures. Journal of Plankton Research 37: 912-922.

McConville, K., Halsband, C., Fileman, E.S., Somerfield, P.J., Findlay, H.S. and Spicer, J.I. 2013. Effects of elevated CO2 on the reproduction of two calanoid copepods. Marine Pollution Bulletin 73: 428-434.

Pedersen, S.A., Håkedal, O.J., Salaberria, I., Tagliati, A., Gustavson, L.M., Jenssen, B.M., Olsen, A.J. and Altin, D. 2014. Multigenerational exposure to ocean acidification during food limitation reveals consequences for copepod scope for growth and vital rates. Environmental Science & Technology 48: 12275-12284.

Vehmaa, A., Hogfors, H., Gorokhova, E., Brutemark, A., Holmborn, T. and Engström-Öst, J. 2013. Projected marine climate change: Effects on copepod oxidative status and reproduction. Ecology and Evolution 3: 4548-4557.

Zervoudaki, S., Frangoulis, C., Giannoudi, L. and Krasakopoulou, E. 2014. Effects of low pH and raised temperature on egg production, hatching and metabolic rates of a Mediterranean copepod species (Acartia clausi) under oligotrophic conditions. Mediterranean Marine Science 15: 74-83.

Zhang, D., Li, S., Wang, G. and Guo, D. 2011. Impacts of CO2-driven seawater acidification on survival, egg production rate and hatching success of four marine copepods. Acta Oceanologica Sinica 30: 86-94.

Last year, I mentioned a Canadian court case that could help promote free trade within Canada. Well, a lower court has now ruled for free trade, finding that the Canadian constitution does, in fact, guarantee free trade among the provinces. Here are the basic facts, from the Toronto Globe and Mail:

In 2013, Gérard Comeau was caught in what is likely the lamest sting operation in Canadian police history. Mr. Comeau drove into Quebec, bought 14 cases of beer and three bottles of liquor, and headed home. The Mounties were waiting in ambush. They pulled him over, along with 17 other drivers, and fined him $292.50 under a clause in the New Brunswick Liquor Control Act that obliges New Brunswick residents to buy all their booze, with minor exceptions set out in regulations, from the provincial Liquor Corporation.

And here’s how the court ruled:

Mr. Comeau went to court and challenged the law on the basis of Section 121 of the Constitution: “All articles of the growth, produce or manufacture of any of the provinces shall, from and after the Union, be admitted free into each of the other provinces.”

The judge said Friday that the wording of Section 121 is clear, and that the provincial law violates its intention. The Fathers of Confederation wanted Canada to be one economic union, a mari usque ad mare. That’s why they wrote the clause.

If you are into this sort of thing, I highly recommend reading the judge’s decision, which looks deeply into the historical background of the Canadian constitutional provision at issue.

This is all very good for beer lovers, but also has much broader implications for internal Canadian trade in general. This is from an op-ed by Marni Soupcoff of the Canadian Constitution Foundation, which assisted with the case: 

But most Canadians aren’t interested in precisely how many more six packs will now be flowing through New Brunswick’s borders. They want to know what the Comeau decision means for them. They want to know whether there will be a lasting and far-reaching impact from one brave Maritimer’s constitutional challenge. And so, as the executive director of the Canadian Constitution Foundation (CCF), the organization that supported Comeau’s case, I’d like to answer that.

Canada is rife with protectionist laws and regulations that prevent the free flow of goods from one province to another. These laws affect Canadians’ ability to buy and sell milk, chickens, eggs, cheese and many other things, including some that neither you nor I have ever even thought about. And that is the beauty of this decision. It will open up a national market in everything. Yes, the CCF, Comeau and Comeau’s pro bono defence lawyers Mikael Bernard, Arnold Schwisberg and Ian Blue can all be proud that we have “freed the beer.” But we’ve done more than that — we’ve revived the idea that Canada should have free trade within its borders, which is what the framers of our Constitution intended. That means that the Supreme Court will likely have to revisit the constitutionality of this country’s marketing boards and other internal trade restrictions. In other words, this is a big deal.

As always with lower court decisions, there is the possibility of appeal. I haven’t heard anything definitive yet as to whether that will happen here. But whatever happens down the road, this lower court decision is something to be celebrated.

In a previous blog posting, I suggested that there is no case for capital adequacy regulation in an unregulated banking system.  In this ‘first-best’ environment, a bank’s capital policy would be just another aspect of its business model, comparable to its lending or reserving policies, say.  Banks’ capital adequacy standards would then be determined by competition and banks with inadequate capital would be driven out of business.

Nonetheless, it does not follow that there is no case for capital adequacy regulation in a ‘second-best’ world in which pre-existing state interventions — such as deposit insurance, the lender of last resort and Too-Big-to-Fail — create incentives for banks to take excessive risks.  By excessive risks, I refer to the risks that banks take but would not take if they had to bear the downsides of those risks themselves.

My point is that in this ‘second-best’ world there is a ‘second-best’ case for capital adequacy regulation to offset the incentives toward excessive risk-taking created by deposit insurance and so forth.  This posting examines what form such capital adequacy regulation might take.

At the heart of any system of capital adequacy regulation is a set of minimum required capital ratios, which were traditionally taken to be the ratios of core capital[1] to some measure of bank assets.

Under the international Basel capital regime, the centerpiece capital ratios involve a denominator measure known as Risk-Weighted Assets (RWAs).  The RWA approach gives each asset an arbitrary fixed weight between 0 percent and 100 percent, with OECD government debt given a weight of zero.  The RWA measure itself is then the sum of the individual risk-weighted assets on a bank’s balance sheet.
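The mechanics can be sketched in a few lines of Python. The balance sheet, risk weights and capital figure below are purely illustrative assumptions, not the actual Basel schedule, but they show how the two ratios can tell opposite stories:

```python
# Illustrative RWA calculation; the balance sheet, risk weights and
# capital figure are hypothetical, not the actual Basel schedule.
balance_sheet = {
    "OECD government debt":  (400.0, 0.00),  # (value, risk weight)
    "residential mortgages": (300.0, 0.50),
    "commercial loans":      (300.0, 1.00),
}

total_assets = sum(value for value, _ in balance_sheet.values())
rwa = sum(value * weight for value, weight in balance_sheet.values())
capital = 40.0

print(f"RWA-based capital ratio: {capital / rwa:.1%}")             # 8.9%
print(f"Leverage (assets/capital): {total_assets / capital:.0f}")  # 25
```

With 40 percent of its balance sheet in zero-weighted sovereign debt, this hypothetical bank reports an RWA-based capital ratio above the traditional 8 percent Basel minimum, even though its leverage, assets divided by capital, is 25.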

The incentives created by the RWA approach turned Basel into a game in which the banks loaded up on low risk-weighted assets and most of the risks they took became invisible to the Basel risk measurement system.

The unreliability of the RWA measure is apparent from the following chart due to Andy Haldane:

Figure 1: Average Risk Weights and Leverage

This chart shows average Basel risk weights and leverage for a sample of international banks over the period 1994–2011.  Over this period, average risk weights show a clear downward trend, falling from just over 70 percent to about 40 percent.  Over the same period, bank leverage or assets divided by capital — a simple measure of bank riskiness — moved in the opposite direction, rising from about 20 to well over 30 at the start of the crisis.  The only difference is that while the latter then reversed itself, the average risk weight continued to fall during the crisis, continuing its earlier trend.  “While the risk traffic lights were flashing bright red for leverage [as the crisis approached], for risk weights they were signaling ever-deeper green,” as Haldane put it: the risk weights were a contrarian indicator for risk, indicating that risk was falling when it was, in fact, increasing sharply.[2]  The implication is that the RWA is a highly unreliable risk measure.[3]

Long before Basel, the preferred capital ratio was core capital to total assets, with no adjustment in the denominator for any risk-weights.  The inverse of this ratio, the bank leverage measure mentioned earlier, was regarded as the best available indicator of bank riskiness: the higher the leverage, the riskier the bank.

These older metrics then went out of fashion.  Over 30 years ago, it became fashionable to base regulatory capital ratios on RWAs because of their supposedly greater ‘risk sensitivity.’  Later the risk models came along, which were believed to provide even greater risk sensitivity.  The old capital/assets ratio was now passé, dismissed as primitive because of its risk insensitivity.  However, as RWAs and risk models have themselves become discredited, this risk insensitivity is no longer the disadvantage it once seemed to be.

On the contrary.

The old capital to assets ratio is making a comeback under a new name, the leverage ratio:[4] what is old is new again.  The introduction of a minimum leverage ratio is one of the key principles of the Basel III international capital regime.  Under this regime, there is to be a minimum required leverage ratio of 3 percent to supplement the various RWA-based capital requirements that are, unfortunately, its centerpieces.

The banking lobby hates the leverage ratio because it is harder to game than RWA-based or model-based capital rules.  Its members and their Basel allies argue that we all know the RWA measure is flawed, but that we shouldn’t throw out the baby with the bathwater.  (What baby? I ask. The RWA is a pretend number, and it’s as simple as that.)  They then assert that the leverage ratio is also flawed and conclude that we need the RWA to offset the flaws in the leverage ratio.

The flaw they now emphasize is the following: a minimum required leverage ratio would encourage banks to load up on the riskiest assets because the leverage ratio ignores the riskiness of individual assets.  This argument is commonly made and one could give many examples.  To give just one, a Financial Times editorial — ironically entitled “In praise of bank leverage ratios” — published on July 10, 2013 stated flatly:

Leverage ratios …  encourage lenders to load up on the riskiest assets available, which offer higher returns for the same capital.

Hold on right there!  Those who make such claims should think them through: if the banks were to load up on the riskiest assets, we first need to consider who would bear those higher risks.

The FT statement is not true as a general proposition and it is false in the circumstances that matter, i.e., where what is being proposed is a high minimum leverage ratio that would internalize the consequences of bank risk-taking.  And it is false in those circumstances precisely because it would internalize such risk-taking.

Consider the following cases:

In the first, imagine a bank with an infinitesimal capital ratio.  This bank benefits from the upside of its risk-taking but does not bear the downside.  If the risks pay off, it gets the profit; but if it makes a loss, it goes bankrupt and the loss is passed to its creditors.  Because the bank does not bear the downside, it has an incentive to load up on the riskiest assets available in order to maximize its expected profit.  In this case, the FT statement is correct.

In the second case, imagine a bank with a high capital-to-assets ratio.  This bank benefits from the upside of its risk-taking but also bears the downside if it makes a loss.  Because the bank bears the downside, it no longer has an incentive to load up on the riskiest assets.  Instead, it would select a mix of low-risk and high-risk assets that reflected its own risk appetite, i.e., its preferred trade-off between risk and expected return.  In this case, the FT statement is false.
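The two cases can be captured in a stylized expected-payoff calculation (all numbers here are illustrative assumptions, not calibrated data). Under limited liability, shareholders receive max(capital + profit, 0), so losses beyond the bank’s capital fall on creditors:

```python
def expected_equity(capital, outcomes):
    # Expected value of shareholders' stake under limited liability.
    # outcomes is a list of (probability, profit_or_loss) pairs; equity
    # cannot fall below zero, so losses beyond the bank's capital are
    # borne by creditors rather than shareholders.
    return sum(p * max(capital + pnl, 0.0) for p, pnl in outcomes)

safe  = [(1.0, 5.0)]                  # certain +5 profit
risky = [(0.5, 40.0), (0.5, -40.0)]   # gamble with zero expected return

for capital in (2.0, 50.0):
    prefers = "risky" if expected_equity(capital, risky) > expected_equity(capital, safe) else "safe"
    print(f"capital {capital}: prefers the {prefers} asset")
```

The thinly capitalized bank prefers the gamble (expected equity 21 versus 7), because most of the downside is passed to creditors; the well-capitalized bank prefers the safe asset (50 versus 55), because it now bears its own losses.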

My point is that the impact of a minimum required leverage ratio on bank risk-taking depends on the leverage ratio itself, and that it is only in the case of a very low leverage ratio that banks will load up on the riskiest assets.  However, if a bank is very thinly capitalized then it shouldn’t operate at all.  In a free-banking system, such a bank would lose creditors’ confidence and be run out of business.  Even in the contemporary United States, such a bank would fall foul of the Prompt Corrective Action statutes and the relevant authorities would be required to close it down.

In short, far from encouraging excessive risk-taking as is widely believed, a high minimum leverage ratio would internalize risk-taking incentives and lead to healthy rather than excessive risk-taking.

Then there is the question of how high ‘high’ should be.  There is of course no single magic number, but there is a remarkable degree of expert consensus on the broad order of magnitude involved.  For example, in an important 2010 letter to the Financial Times drafted by Anat Admati, she and 19 other renowned experts suggested a minimum required leverage ratio of at least 15 percent — at least five times greater than under Basel III — and some advocate much higher minima.  Independently, John Allison, Martin Hutchinson, Allan Meltzer and yours truly have also advocated minimum leverage ratios of at least 15 percent.  By a curious coincidence, 15 percent is about the average leverage ratio of U.S. banks at the time the Fed was founded.

There is one further and much under-appreciated benefit from a leverage ratio.  Suppose we had a leverage ratio whose denominator was not total assets or some similar measure.  Suppose instead that its denominator was the total amount at risk: one would take each position, establish the potential maximum loss on that position, and take the denominator to be the sum of these potential losses.  A leverage-ratio capital requirement based on a total-amount-at-risk denominator would give each position a capital requirement that was proportional to its riskiness, where its riskiness would be measured by its potential maximum loss.

Now consider any two positions with the same fair value.  With a total asset denominator, they would attract the same capital requirement, independently of their riskiness.  But now suppose that one position is a conventional bank asset such as a commercial loan, where the most that could be lost is the value of the loan itself.  The other position is a long position in a Credit Default Swap (i.e., a position in which the bank sells credit insurance).  If the reference credit in the CDS should sharply deteriorate, the long position could lose much more than its current value.  Remember AIG! Therefore, the CDS position is much riskier and would attract a much greater capital requirement under a total-amount-at-risk denominator.
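A toy comparison makes the point. The positions and maximum-loss figures below are hypothetical illustrations, with the 15 percent minimum ratio mentioned earlier used for concreteness:

```python
# Two positions with the same fair value; the maximum-loss figures are
# hypothetical (a loan can lose at most its own value, while a sold CDS
# can lose a multiple of its current value).
positions = {
    "commercial loan": {"fair_value": 100.0, "max_loss": 100.0},
    "sold CDS":        {"fair_value": 100.0, "max_loss": 600.0},
}

MIN_RATIO = 0.15  # a 15 percent minimum leverage ratio

capital_required = {
    name: {
        "total-assets denominator":   MIN_RATIO * p["fair_value"],
        "amount-at-risk denominator": MIN_RATIO * p["max_loss"],
    }
    for name, p in positions.items()
}

for name, reqs in capital_required.items():
    print(name, reqs)
```

With a total-assets denominator, both positions attract the same capital requirement of 15; with an amount-at-risk denominator, the loan still requires 15 but the sold CDS requires 90, in proportion to its much greater potential loss.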

The really toxic positions would be revealed to be the capital-hungry monsters that they are.  Their higher capital requirements would make many of them unattractive once the banks themselves were made to bear the risks involved.  Much of the toxicity in banks’ positions would soon disappear.

The trick here is to get the denominator right.  Instead of measuring positions by their accounting fair values as under, e.g., U.S. Generally Accepted Accounting Principles, one should measure those positions by how much they might lose.

Nonetheless, even the best-designed leverage ratio regime can only ever be a second-best reform: it is not a panacea for all the ills that afflict the banking system.  Nor is it even clear that it would be the best ‘second-best’ reform: re-establishing some form of unlimited liability might be a better choice.

However, short of free banking, under which no capital regulation would be required in the first place, a high minimum leverage ratio would be a step in the right direction.


[1] By core capital, I refer to the ‘fire-resistant’ capital available to support the bank in the heat of a crisis.  Core capital would include, e.g., tangible common equity and some retained earnings and disclosed reserves.  Core capital would exclude certain ‘softer’ capital items that cannot be relied upon in a crisis.  An example of the latter would be Deferred Tax Assets (DTAs).  DTAs allow a bank to claim back tax on previously incurred losses in the event it subsequently returns to profitability, but are useless to a bank in a solvency crisis.

[2] A. G. Haldane, “Constraining discretion in bank regulation.” Paper given at the Federal Reserve Bank of Atlanta Conference on ‘Maintaining Financial Stability: Holding a Tiger by the Tail(s)’, Federal Reserve Bank of Atlanta 9 April 2013, p. 10.

[3] The unreliability of the RWA measure is confirmed by a number of other studies.  These include, e.g.: A. Demirgüç-Kunt, E. Detragiache, and O. Merrouche, “Bank Capital: Lessons from the Financial Crisis,” World Bank Policy Research Working Paper Series No. 5473 (2010); A. N. Berger and C. H. S. Bouwman, “How Does Capital Affect Bank Performance during Financial Crises?” Journal of Financial Economics 109 (2013): 146–76; A. Blundell-Wignall and C. Roulet, “Business Models of Banks, Leverage and the Distance-to-Default,” OECD Journal: Financial Market Trends 2012, no. 2 (2014); T. L. Hogan, N. Meredith and X. Pan, “Evaluating Risk-Based Capital Regulation,” Mercatus Center Working Paper Series No. 13-02 (2013); and V. V. Acharya and S. Steffen, “Falling short of expectation — stress testing the Eurozone banking system,” CEPS Policy Brief No. 315, January 2014.

[4] Strictly speaking, Basel III does not give the old capital-to-assets ratio a new name.  Instead, it creates a new leverage ratio measure in which the old denominator, total assets, is replaced by a new denominator measure called the leverage exposure.  The leverage exposure is meant to take account of the off-balance-sheet positions that the total assets measure fails to include.  However, in practice, the leverage exposure is not much different from the total assets measure, and for present purposes one can ignore the difference between the two denominators.  See Basel Committee on Banking Supervision, “Basel III: A global regulatory framework for more resilient banks and banking systems.”  Basel: Bank for International Settlements, June 2011, pp. 62-63.


There are a great many reasons to support educational choice: maximizing freedom, respecting pluralism, reducing social conflict, empowering the poor, and so on. One reason is simply this: it works.

This week, researchers Patrick J. Wolf, M. Danish Shakeel, and Kaitlin P. Anderson of the University of Arkansas released the results of their painstaking meta-analysis of the international, gold-standard research on school choice programs, which concluded that, on average, such programs have a statistically significant positive impact on student performance on reading and math tests. Moreover, the magnitude of the positive impact increased the longer students participated in the program.

As Wolf observed in a blog post explaining the findings, the “clarity of the results… contrasts with the fog of dispute that often surrounds discussions of the effectiveness of private school choice.”

That’s So Meta

One of the main advantages of a meta-analysis is that it can overcome the limitations of individual studies (e.g., small sample sizes) by pooling the results of numerous studies. This meta-analysis is especially important because it includes all random-assignment studies on school choice programs (the gold standard for social science research), while excluding studies that employed less rigorous methods. The analysis included 19 studies on 11 school choice programs (including government-funded voucher programs as well as privately funded scholarship programs) in Colombia, Indiana, and the United States. Each study compared the performance of students who had applied for and randomly won a voucher to a “control group” of students who had applied for a voucher but randomly did not receive one. As Wolf explained, previous meta-analyses and research reviews omitted some gold-standard studies and/or included less rigorous research:

The most commonly cited school choice review, by economists Cecilia Rouse and Lisa Barrow, declares that it will focus on the evidence from existing experimental studies but then leaves out four such studies (three of which reported positive choice effects) and includes one study that was non-experimental (and found no significant effect of choice).  A more recent summary, by Epple, Romano, and Urquiola, selectively included only 48% of the empirical private school choice studies available in the research literature.  Greg Forster’s Win-Win report from 2013 is a welcome exception and gets the award for the school choice review closest to covering all of the studies that fit his inclusion criteria – 93.3%.

Survey Says: School Choice Improves Student Performance

The meta-analysis found that, on average, participating in a school choice program improves student test scores by about 0.27 standard deviations in reading and 0.15 standard deviations in math. In layman’s terms, these are “highly statistically significant, educationally meaningful achievement gains of several months of additional learning from school choice.”

Interestingly, the positive results appeared to be larger for programs in developing countries than for those in the United States, especially in reading. That might stem from a larger gap in quality between government-run and private schools in the developing world. In addition, American students who lost the voucher lotteries “often found other ways to access school choices.” For example, in Washington, D.C., 12% of students who lost the voucher lottery still managed to enroll in a private school, and 35% enrolled in a charter school, meaning barely more than half of the “control group” attended their assigned district school.

The meta-analysis also found larger positive results from publicly funded rather than privately funded programs. The authors note that public funding “could be a proxy for the voucher amount” because the publicly funded vouchers were worth significantly more, on average, than the privately funded scholarships. The authors suggest that parents who are “relieved of an additional financial burden… might therefore be more likely to keep their child enrolled in a private school long enough to realize the larger academic benefits that emerge after three or more years of private schooling.” Moreover, the higher-value vouchers are more likely to “motivate a higher-quality population of private schools to participate in the voucher program.” The authors also note that differences in accountability regulations may play a role.

The Benefits of Choice and Competition

The benefits of school choice are not limited to participating students. Last month, Wolf and Anna J. Egalite of North Carolina State University released a review of the research on the impact of competition on district schools. Although it is impossible to conduct a random-assignment study on the effects of competition (as much as some researchers would love to force different states to randomly adopt different policies in order to measure the difference in effects, neither the voters nor their elected representatives are so keen on the idea), there have been dozens of high-quality studies addressing this question, and a significant majority find that increased competition has a positive impact on district school performance: 

Thirty of the 42 evaluations of the effects of school-choice competition on the performance of affected public schools report that the test scores of all or some public school students increase when schools are faced with competition. Improvement in the performance of district schools appears to be especially large when competition spikes but otherwise is quite modest in scale.

In other words, the evidence suggests that when district schools know that their students have other options, they take steps to improve. This is exactly what economic theory would predict. Monopolists are slow to change while organizations operating in a competitive environment must learn to adapt or they will perish.

On Designing School Choice Policies

Of course, not all school choice programs are created equal. Wolf and Egalite offer several wise suggestions to policymakers based on their research. Policymakers should “encourage innovative and thematically diverse schools” by crafting legislation that is “flexible and thoughtful enough to facilitate new models of schooling that have not been widely implemented yet.” We don’t know what education will look like in the future, so our laws should be platforms for innovations rather than constraints molded to the current system.

That means policymakers should resist the urge to over-regulate. The authors argue that private schools “should be allowed to maintain a reasonable degree of autonomy over instructional practices, pedagogy, and general day-to-day operations” and that, beyond a background check, “school leaders should be the ones determining teacher qualifications in line with their mission.” We don’t know the “one best way” to teach students, and it’s likely that no “one best way” even exists. For that matter, we have not yet figured out a way to determine in advance whether a would-be teacher will be effective or not. Indeed, as this Brookings Institution chart shows (see page 8), there is practically no difference in effectiveness between traditionally certified teachers and their alternatively certified or even uncertified peers:

In other words, if in the name of “quality control,” the government mandated that voucher-accepting schools only hire traditionally certified teachers, not only would such a regulation fail to prevent the hiring of less-effective teachers, it would also prevent private schools from hiring lots of effective teachers. Sadly, too many policymakers never tire of crafting new ways to “ensure quality” that fall flat or even have the opposite impact.

School choice policies benefit both participating and nonparticipating students. Students who use vouchers or tax-credit scholarships to attend the school of their choice benefit by gaining access to schools that better fit their needs. Students who do not avail themselves of those options still benefit because the very access to alternatives spurs district schools to improve. These are great reasons to expand educational choice, but policymakers should be careful not to undermine the market mechanisms that foster competition and innovation.

For more on the impact of regulations on school choice policies, watch our recent Cato Institute event: “School Choice Regulations: Friend or Foe?”