Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The Tax Foundation released its inaugural “International Tax Competitiveness Index” (ITCI) on September 15th, 2014. The United States ranked an abysmal 32nd out of the 34 OECD member countries for 2014. (See accompanying Table 1.) European welfare states such as Norway, Sweden, and Denmark, with their large social welfare systems, still managed to impose less burdensome tax systems on local businesses than the U.S. The U.S. is even ranked below Italy, a country with such a pervasive tax-evasion problem that the head of its Agency of Revenue (roughly equivalent to the Internal Revenue Service in the United States) recently joked that Italians don’t pay taxes because they are Catholic and hence used to “gaining absolution.” In fact, according to the ranking, only France and Portugal have the dubious honor of operating less competitive tax systems than the United States.

The ITCI measures “the extent to which a country’s tax system adheres to two important principles of tax policy: competitiveness and neutrality.” The competitiveness of a tax system can be measured by the overall tax rates faced by domestic businesses operating within the country. In the words of the Tax Foundation, when tax rates are too high, it “drives investment elsewhere, leading to slower economic growth.” Tax competitiveness is measured using 40 variables across five categories: consumption taxes, individual taxes, corporate income taxes, property taxes, and the treatment of foreign earnings. Tax neutrality, the other principle behind the ITCI, refers to a “tax code that seeks to raise the most revenue with the fewest economic distortions.” This means a tax system that is fair and applied equally to all firms and industries, with no tax breaks for any specific business activity. A neutral tax system would also limit the rates of, among other things, capital gains and dividend taxes, which encourage consumption at the expense of savings and investment.

Even the two countries with less competitive tax regimes than the U.S. – France and Portugal – have lower corporate tax rates, at 34.4% and 31.5%, respectively. The U.S. corporate rate, averaged across states, is 39.1%. This is the highest rate in the OECD, which has an average corporate tax rate of 24.8% across its 34 member countries. According to a report by KPMG, if the United Arab Emirates’ severance tax on oil companies were ignored, the U.S. average corporate tax rate would be the world’s highest.

Table 1.

The poor showing of the U.S. resulted from other countries recognizing the need to improve their competitive position in an increasingly globalized world. Indeed, the only OECD member countries not to have cut their corporate tax rates since the onset of the new millennium are Chile, Norway, and, yes, the United States. The high U.S. corporate tax rate raises the cost of doing business not only in the U.S. but also overseas. The U.S., along with just five other OECD countries, imposes a “global tax” on profits earned overseas by domestically owned businesses. In contrast, Estonia, ranked 1st in the ITCI, does not tax any profit earned internationally. Since profits earned overseas by U.S.-domiciled companies are already taxed in the countries where they are earned, there is a clear incentive for American companies to try to avoid double taxation. Indeed, many of the largest American multinational corporations have established corporate centers overseas, where tax codes are less stringent, to avoid this additional tax.

The ITCI also cited myriad other reasons for the low ranking of the U.S., including poorly structured property taxes and onerously high income taxes on individuals. One major reason the U.S. lags so far behind most of the industrialized world is simply the lack of serious tax code reform since the Tax Reform Act of 1986.

The annual Doing Business report published by the World Bank offers an even more expansive analysis, assessing tax competitiveness in 189 economies, and it provides an equally sobering look at the heavy taxes faced by business in the United States. (See accompanying Table 2.) One of the metrics it incorporates is the “total tax rate.” The Doing Business report defines the total tax rate as “the taxes and mandatory contributions that a medium-size company must pay in the second year of operation as well as measures of the administrative burden of paying taxes and contributions.”

According to the rankings in the most recent Doing Business 2015 report (which reported total tax rates for the calendar year 2013), Macedonia had the lowest total tax rate in the world at 7.4% and was followed closely by Vanuatu at 8.5%. The United States, as in previous years, appears near the bottom of the list, at 126th out of 189, with a total tax rate of 43.8%.

Table 2.

The fact that both the ITCI and the Doing Business report, whose methodologies and calculations were developed independently of one another, rank the United States very low shows that the tax rates in this country are non-neutral and uncompetitive no matter how they are measured. The message is clear and very simple: taxes on corporations raise costs and squeeze margins, leading to higher prices on goods that ultimately hurt consumers and the development of any country.

As proposed in “Policy Priorities for the 114th Congress,” published by the Cato Institute, the U.S. would have to drastically reduce its corporate tax rate to strengthen the incentives of domestic firms to go into business and compete globally.

The Wall Street Journal just offered two articles in one day touting Robert Shiller’s cyclically adjusted price/earnings ratio (CAPE). One of them, “Smart Moves in a Pricey Stock Market” by Jonathan Clements, concludes that “U.S. shares arguably have been overpriced for much of the past 25 years.” Identical warnings keep appearing, year after year, despite being endlessly wrong.

The Shiller CAPE assumes the P/E ratio must revert to some heroic 1881-2014 average of 16.6 (or, in Clements’ account, a 1946-1990 average of 15). That assumption is completely inconsistent with the so-called “Fed model” observation that the inverted P/E ratio (the E/P ratio, or earnings yield) normally tracks the 10-year bond yield surprisingly closely. From 1970 to 2014, the average E/P ratio was 6.62 percent and the average 10-year bond yield was 6.77 percent.

When I first introduced this “Fed model” relationship to Wall Street consulting clients in “The Stock Market Like Bonds,” March 1991, I suggested bond yields were about to fall, because a falling E/P commonly preceded falling bond yields. And when the E/P turned up in 1993, bond yields obligingly jumped in 1994.

Since 2010, the E/P ratio has been unusually high relative to bond yields, which means the P/E ratio has been unusually low. The gap between the earnings yield and the bond yield rose from 2.8 percentage points in 2010 to a peak of 4.4 in 2012. Recycling my 1991 analysis, the wide 2012 gap suggested the stock market thought bond yields would rise, as they did – from 1.8% in 2012 to 2.35% in 2013 and 2.54% in 2014.

On May 1, the trailing P/E ratio for the S&P 500 was 20.61, which translates into an E/P ratio of 4.85 (1 divided by 20.61). That is still high relative to a 10-year bond yield of 2.12%. If the P/E fell to 15, as Shiller fans always predict, the E/P ratio would be 6.7, which would indeed get us close to the Shiller “buy” signal of 6.47 in 1990. But the 10-year bond yield in 1990 was 8.4%. And the P/E ratio was so depressed because Texas crude jumped from $16 in late June 1990 to nearly $40 after Iraq invaded Kuwait. Oil price spikes always end in recession, as in 2008.
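Since the argument turns on inverting P/E ratios, here is a minimal Python sketch of that arithmetic, using only the figures quoted above; the function and variable names are mine, for illustration only:

```python
# Illustrative sketch of the "Fed model" arithmetic discussed above.
# All figures are the ones quoted in the text, not live market data.

def earnings_yield(pe_ratio: float) -> float:
    """Invert a P/E ratio into an E/P ratio (earnings yield), in percent."""
    return 100.0 / pe_ratio

trailing_pe = 20.61   # S&P 500 trailing P/E on May 1
bond_yield = 2.12     # 10-year Treasury yield, in percent

ep = earnings_yield(trailing_pe)   # ~4.85
gap = ep - bond_yield              # ~2.7 percentage points

print(f"E/P = {ep:.2f}%, gap over 10-year yield = {gap:.2f} points")

# The Shiller-style scenario: if the P/E reverted to 15,
# the earnings yield would rise to roughly 6.7%.
print(f"E/P at a P/E of 15 = {earnings_yield(15):.2f}%")
```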

Today’s wide 2.7 point gap between the high E/P ratio and low bond yield will not be closed by shoving the P/E ratio back down to Mr. Shiller’s idyllic level of the 1990 recession.  It is far more likely that the gap will be narrowed by bond yields rising. 

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

As Pope Francis this week focused on the moral issues of climate change (while largely ignoring the bigger moral issues that accompany fossil fuel restrictions), he pretty much took as a given that climate change is “a scientific reality” requiring “decisive mitigation.” Concurrently, unfolding scientific events during the week were revealing a different story.

First and foremost, Roy Spencer, John Christy and William Braswell of the University of Alabama-Huntsville (UAH)—developers and curators of the original satellite-derived compilation of the temperature history of the earth’s atmosphere—released a new and improved version of their iconic data set. Bottom line: the temperature trend in the lower atmosphere from the start of the data (1979) through the present came in at 0.114°C/decade (compared with 0.14°C/decade in the previous data version). The new warming trend is less than half what climate models run with increasing atmospheric carbon dioxide emissions project to have occurred.
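For readers curious how a number like 0.114°C/decade is derived from a monthly series, here is a minimal sketch under stated assumptions: the anomaly array below is a random placeholder, not the actual UAH record, and the trend is simply an ordinary least-squares slope rescaled from per-month to per-decade.

```python
# Minimal sketch: estimating a decadal temperature trend from monthly
# anomalies via ordinary least squares. The data are placeholders,
# NOT the actual UAH series.
import numpy as np

rng = np.random.default_rng(0)
n_months = 436                      # Dec '78 through Mar '15 is ~436 months
anomalies = 0.0001 * np.arange(n_months) + rng.normal(0, 0.2, n_months)

months = np.arange(n_months)
slope_per_month, _ = np.polyfit(months, anomalies, 1)  # degree-1 fit; slope comes first
trend_per_decade = slope_per_month * 120               # 120 months per decade

print(f"Estimated trend: {trend_per_decade:+.3f} °C/decade")
```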

While the discrepancy between real world observations and climate model projections of temperature rise in the lower atmosphere has been recognized for a number of years, the question has remained as to whether the “problem” lies within the climate models or the observations. With this new data release, the trend in the UAH data now matches very closely with the trend through an independent compilation of the satellite-temperature observations maintained by a team of researchers at Remote Sensing Systems (RSS). The convergence of the observed data sets is an indication the climate models are the odd man out.

As with most long-term, real-world observations, the data are covered in warts. The challenge posed to Spencer et al. was how to splice together remotely sensed data collected from a variety of instruments carried aboard a variety of satellites in unstable orbits—and produce a product robust enough for use in climate studies. The details as to how they did it are explained as clearly as possible in this post over at Spencer’s website (although still quite a technical post). The post provides good insight as to why raw data sets need to be “adjusted”—a lesson that should be kept in mind when considering the surface temperature compilations as well. In most cases, using raw data “as is” is an inherently improper thing to do, and the types of adjustments that are applied may vary based upon the objective.

Here is a summary of the new data set and what was involved in producing it:

Version 6 of the UAH MSU/AMSU global satellite temperature data set is by far the most extensive revision of the procedures and computer code we have ever produced in over 25 years of global temperature monitoring. The two most significant changes from an end-user perspective are (1) a decrease in the global-average lower tropospheric (LT) temperature trend from +0.140 C/decade to +0.114 C/decade (Dec. ’78 through Mar. ’15); and (2) the geographic distribution of the LT trends, including higher spatial resolution. We describe the major changes in processing strategy, including a new method for monthly gridpoint averaging; a new multi-channel (rather than multi-angle) method for computing the lower tropospheric (LT) temperature product; and a new empirical method for diurnal drift correction… The 0.026 C/decade reduction in the global LT trend is due to lesser sensitivity of the new LT to land surface skin temperature (est. 0.010 C/decade), with the remainder of the reduction (0.016 C/decade) due to the new diurnal drift adjustment, the more robust method of LT calculation, and other changes in processing procedures.

Figure 1 shows a comparison of the data using the new procedures with that derived from the old procedures. Notice that in the new dataset, the temperature anomalies since about 2003 are less than those from the previous version. This has the overall effect of reducing the trend when computed over the entirety of the record.


Figure 1. Monthly global-average temperature anomalies for the lower troposphere from Jan. 1979 through March 2015 for both the old and new versions of LT. (Source: www.drroyspencer.com)

While this new version, admittedly, is not perfect, Spencer, Christy, and Braswell see it as an improvement over the old version. Note that this is not the official release, but rather a version the authors have released for researchers to examine and see if they can find anything that looks irregular that may raise questions as to the procedures employed. Spencer et al. expect a scientific paper on the new data version to be published sometime in 2016.

But unless something major comes up, the new satellite data are further evidence the earth is not warming as expected.  That means that, before rushing into “moral obligations” to attempt to alter the climate’s future course by restricting energy production, we perhaps ought to spend more time trying to better understand what it is we should be expecting in the first place.

One of the things we are told by the more alarmist crowd that we should expect from our fossil fuel burning is a large and rapid sea level rise, primarily a result of a melting of the ice sheets that rest atop Greenland and Antarctica. All too frequently we see news stories telling tales of how the melting in these locations is “worse than we expected.” Some soothsayers even attack the United Nations’ Intergovernmental Panel on Climate Change (IPCC) for being too conservative (of all things) when it comes to projecting future sea level rise. While the IPCC projects a sea level rise of about 18–20 inches from its mid-range emissions scenario over the course of this century, a vocal minority clamor that the rise will be upwards of 3 feet and quite possibly (or probably) greater. All the while, the sea level rise over the past quarter-century has been about 3 inches.

But as recent observations do little to dissuade the hardcore believers, perhaps model results (which they are seemingly more comfortable with) will be more convincing.

A new study available this week in the journal Geophysical Research Letters is described by author Miren Vizcaino and colleagues as “a first step towards fully-coupled higher resolution simulations with more advanced physics”—basically, a detailed ice sheet model coupled with a global climate model.

They ran this model combination with the standard IPCC emissions scenarios to assess Greenland’s contribution to future sea level rise. Here’s what they found:

The [Greenland ice sheet] volume change at year 2100 with respect to year 2000 is equivalent to 27 mm (RCP 2.6), 34 mm (RCP 4.5) and 58 mm (RCP 8.5) of global mean SLR.

Translating millimeters (mm) into inches gives this answer: a projected 21st century sea level rise of 1.1 in. (for the low emissions scenario, RCP 2.6), 1.3 in. (for the low/mid scenario, RCP 4.5), and 2.3 in. (for the IPCC’s high-end emissions scenario, RCP 8.5). Some disaster.
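The unit conversion is simple enough to show directly; here is a small Python sketch reproducing it (the scenario labels and millimeter values are just the three figures quoted from the study):

```python
# The conversion behind the figures above: millimeters of projected
# sea level rise to inches (1 inch = 25.4 mm).
MM_PER_INCH = 25.4

projections_mm = {"RCP 2.6": 27, "RCP 4.5": 34, "RCP 8.5": 58}

for scenario, mm in projections_mm.items():
    print(f"{scenario}: {mm} mm = {mm / MM_PER_INCH:.1f} in")
# Output: RCP 2.6 -> 1.1 in, RCP 4.5 -> 1.3 in, RCP 8.5 -> 2.3 in
```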

As with any study, the authors attach some caveats:

The study presented here must be regarded as a necessary first step towards more advanced coupling of ice sheet and climate models at higher resolution, for instance with improved surface-atmosphere coupling (e.g., explicit representation of snow albedo evolution), less simplified ice sheet flow dynamics, and the inclusion of ocean forcing to Greenland outlet glaciers.

Even if they are off by 3–4 times, Greenland ice loss doesn’t seem to be much of a threat. Seems like it’s time to close the book on this imagined scare scenario.

And while imagination runs wild when it comes to linking carbon dioxide emissions to calamitous climate changes and extreme weather events (or even war and earthquakes),  imagination runs dry when it comes to explaining non-events (except when non-events string together to produce some sort of negative outcome [e.g., drought]).

Case in point, a new study looking into the record-long absence of major hurricane (category 3 or higher) strikes on the U.S. mainland—an absence that exceeds nine years (the last major hurricane to hit the U.S. was Hurricane Wilma in late October 2005). The authors of the study, Timothy Hall of NASA’s Goddard Institute for Space Studies and Kelly Hereid of ACE Tempest Reinsurance, concluded that while a streak this long is rare, their results suggest “there is nothing unusual underlying the current hurricane drought. There’s no extraordinary lack of hurricane activity.” Basically, they concluded that it’s “a case of good luck” rather than “any shift in hurricane climate.”

That is all well and good, and almost certainly the case. Of course, the same was true a decade ago when the United States was hit by seven major hurricanes over the course of two hurricane seasons (2004 and 2005)—an occurrence that spawned several prominent papers and endless discussion pointing the finger squarely at anthropogenic climate change. And the same is true for every hurricane that hits the United States, although this doesn’t stop someone, somewhere, from speculating to the media that the storm’s occurrence was “consistent with” expectations from a changing climate.

What struck us as odd about the Hall and Hereid paper is the lack of speculation as to how the ongoing record “drought” of major hurricane landfalls in the United States could be tied in with anthropogenic climate change. You can rest assured—and history will confirm—that if we had been experiencing a record run of hurricane landfalls, researchers would be falling all over themselves to draw a connection to human-caused global warming.

But the lack of anything bad happening? No way anyone wants to suggest that is “consistent with” expectations. According to Hall and Hereid:

A hurricane-climate shift protecting the US during active years, even while ravaging nearby Caribbean nations, would require creativity to formulate. We conclude instead that the admittedly unusual 9-year US Cat3+ landfall drought is a matter of luck. [emphasis added]

Right! A good string of weather is “a matter of luck” while bad weather is “consistent with” climate change.

It’s not as if it’s very hard, or (despite the authors’ claim) requires much “creativity,” to come up with ways to construe a lack of major hurricane strikes on U.S. soil as “consistent with” anthropogenic climate change. In fact, there is plenty of material in the scientific literature that could be used to construct an argument that, under global warming, the United States should experience fewer hurricane landfalls. For a rundown, see p. 30 of our comments on the government’s National Assessment on Climate Change, or check out our piece titled “Global Savings: Billion-Dollar Weather Events Averted by Global Warming.”

It is not lack of material but lack of desire that keeps folks from drawing a potential link between human-caused climate change and good things occurring in the world.

References:

Hall, T., and K. Hereid. 2015. “The Frequency and Duration of US Hurricane Droughts.” Geophysical Research Letters, doi:10.1002/2015GL063652

Vizcaino, M. et al. 2015. “Coupled Simulations of Greenland Ice Sheet and Climate Change up to AD 2300.” Geophysical Research Letters, doi: 10.1002/2014GL061142

While the Obama administration has focused on tax increases over the years, Canada has focused on tax cuts. The new Canadian budget, released a couple of weeks ago, summarized some of the progress that Canada has made.

The budget says,

The government’s low-tax plan is also giving businesses strong incentives to invest in Canada. This helps the economy grow, spurs job creation, and raises Canada’s standard of living.

That is a refreshing attitude. While the U.S. government’s approach has been to penalize businesses and treat them as a cash box to be raided, Canada’s approach has been to reduce tax burdens and spur growth to the benefit of everybody.

A chart in the new budget—reproduced below the jump—shows that Canada now has the lowest marginal effective tax rate on business investment among major economies. It also shows that the U.S. tax rate of 34.7 percent is almost twice the Canadian rate of 17.5 percent.

These “effective” tax rates take into account stated or statutory rates, plus various tax base factors such as depreciation schedules. Skeptics of corporate tax rate cuts in this country often say that while the United States has a high statutory tax rate of 40 percent, we have so many loopholes that our effective rate is low. The new Canadian estimates show that is not true: the United States has both a high statutory rate (which spawns tax avoidance) and a high effective rate (which kills investment).

For the solution to this problem, see here.

For the people of China, there’s good news and bad news.

The good news, as illustrated by the chart below, is that economic freedom has increased dramatically since 1980. This liberalization has lifted hundreds of millions from abject poverty.


The bad news is that China still has a long way to go if it wants to become a rich, market-oriented nation. Notwithstanding big gains since 1980, it still ranks in the lower-third of nations for economic freedom.

Yes, there’s been impressive growth, but it started from a very low level. As a result, per-capita economic output is still just a fraction of American levels.

So let’s examine what’s needed to boost Chinese prosperity.

If you look at the Fraser Institute’s Economic Freedom of the World, there are five major policy categories. As you can see from the table below, China’s weakest category is “size of government.” I’ve circled the most relevant data point.

China could–and should–boost its overall ranking by improving its size-of-government score. That would reduce the burden of government spending and lower tax rates.

With this in mind, I was very interested to see that the International Monetary Fund just published a study entitled, “China: How Can Revenue Reforms Contribute to Inclusive and Sustainable Growth.”

Did this mean the IMF is recommending pro-growth tax reform? After reading the following sentence, I was hopeful:

We highlight tax policies that can facilitate economic transition to high income status, promote fiscal sustainability and make growth more inclusive.

After all, surely you make the “transition to high income status” through low tax rates rather than high tax rates, right?

Moreover, the study acknowledged that China’s tax burden already is fairly substantial:

Tax revenue has accounted for about 22 percent of GDP in 2013… The overall tax burden is similar to the tax-to-GDP ratio for other Asian economies such as Australia, Japan, and Korea.

So what did the IMF recommend? A flat tax? Elimination of certain taxes? Reductions in double taxation? Lowering the overall tax burden?

Hardly.

The bureaucrats want China to become more like France and Greece.

I’m not joking. The IMF study actually wants people to believe that making the income tax more punitive will somehow boost prosperity.

Increasing the de facto progressivity of the individual income tax would promote more inclusive growth.

Amazingly, the IMF wants more “progressivity” even though the folks in the top 20 percent are the only ones who pay any income tax under the current system.

Around 80 percent of urban wage earners are not subject to the individual income tax because of the high basic personal allowance.

But a more punitive income tax is just the beginning. The IMF wants further tax hikes.

Broadening the base and unifying rates would increase VAT revenue considerably. … Tax based on fossil fuel carbon emission rates can be introduced. … The current levies on local air pollutants such as [sulfur dioxide] and [nitrogen oxides] emissions and small particulates could be significantly increased.

What’s especially discouraging is that the IMF explicitly wants a higher tax burden to finance an increase in the burden of government spending.

According to the proposed reform scenario, China could potentially aim to increase public expenditures by around 1 percent of GDP for education, 2‒3 percent of GDP for health care, and another 3–4 percent of GDP to fully finance the basic old-age pension and to gradually meet the legacy costs of current obligations. These would add up to additional social expenditures of around 7‒8 percent of GDP by 2030… The size of additional social spending is large but affordable as part of a package of fiscal reforms.

The study even explicitly says China should become more like the failed European welfare states that dominate the OECD:

Compared to OECD economies, China has considerable scope to increase the redistributive role of fiscal policy. … These revenue reforms serve as a key part of a package of reforms to boost social spending.

You won’t be surprised to learn, by the way, that the study contains zero evidence (because there isn’t any) to back up the assertion that a more punitive tax system will lead to more growth. Likewise, there’s zero evidence (because there isn’t any) to support the claim that a higher burden of government spending will boost prosperity.

No wonder the IMF is sometimes referred to as the Dr. Kevorkian of the global economy.

P.S.: If you want to learn lessons from East Asia, look at the strong performance of Hong Kong, Taiwan, Singapore, and South Korea, all of which provide very impressive examples of sustained growth enabled by small government and free markets.

P.P.S.: I was greatly amused when the head of China’s sovereign wealth fund mocked the Europeans for destructive welfare state policies.

P.P.P.S.: Click here if you want some morbid humor about China’s pseudo-communist regime.

P.P.P.P.S.: Though I give China credit for trimming at least one of the special privileges provided to government bureaucrats.

At a hearing this week on mobile device security, law enforcement representatives argued that technology companies should weaken encryption, such as by installing back doors, so that the government can have easier access to communications. They even chastised companies like Apple and Google for moving to provide consumers better privacy protections.

As an Ars Technica report put it, “Lawmakers weren’t having it.” But a particular lawmaker’s response stands out. It’s the statement of Rep. Ted Lieu (D-CA), one of the few members of Congress with a computer science degree. He also “gets” the structure of power. Lieu articulated why the Fourth Amendment specifically disables government agents’ access to information, and how National Security Agency spying has undercut the interests of law enforcement with its overreaching domestic spying.

Give a listen to Lieu as he chastises the position taken by a district attorney from Suffolk County, MA:

The latest Commerce Department report and FOMC press release have, as usual, led to a flood of commentary concerning the various economic indicators that the Fed committee must have mulled over in reaching its decision to put off somewhat longer its plan to start selling off assets it accumulated during the course of several rounds of Quantitative Easing.

Those indicators also inspire me to put in my own two cents concerning things that should, and things that should not, bear on the FOMC’s monetary policy decisions. My thoughts, I hasten to say, pay no heed to the Fed’s dual mandate, which is itself deeply flawed. But then again, since that mandate allows the FOMC all sorts of leeway in making its decisions, I doubt that it would prevent that body from following my advice, assuming it had the least desire to do so.

I have a simple–some may call it quaint–way of deciding whether some information supplies reason for the Fed to either sell off or buy more assets. Here it is: does the information offer reliable evidence of either a shortage or a surplus of nominal money balances?

Why this criterion? Because of two more quaint ideas. The first is that, notwithstanding the contorted arguments that contemporary monetary economists resort to in order to avoid admitting it, monetary policy is fundamentally a matter of altering the nominal quantity of monetary balances of various sorts available to banks and the public, starting with the quantity of base money. The latter quantity is, in any case, the thing that the FOMC decides to expand, or to contract, by its deliberations, whether it expresses its decision in terms of “Quantitative Easing” or in terms of some interest rate target.

The last quaint idea is that, just as a dose of vitamin D can do a world of good to someone suffering from rickets, while too much can prove toxic, monetary expansion, though the best solution to problems that have their roots in a shortage of money, is the wrong medicine otherwise. Call it crazy if you like, but that’s my belief and I’m sticking to it.

So, those indicators. Real GDP growth, first of all, slowed to a miserable 0.2% during the last quarter, or less than one-tenth the previous quarter’s figure, and just one-25th of the quarter before that.

A bad number indeed. But considered alone, the number supplies no grounds for Fed easing, for the simple reason that it doesn’t tell us why real GDP growth is so low. If it’s low because of a money shortage, more money is called for; if it’s low for other reasons, it isn’t. More information, please.

Here’s some: it was a rough winter, labor disputes closed some West Coast ports, and oil has been dirt cheap. But what have such things got to do with monetary policy? Can a dose of monetary medicine make up for a winter storm, or a strike? Is cheap oil a reason for tightening money, so as to keep general prices from sagging, or one for loosening it to provide relief to domestic energy companies or to counter “weakness in the global economy”? Hard to tell, isn’t it? But that, I say, is because when one gets down to brass tacks such changes in the real economic circumstances ought in themselves to be none of the Fed’s concern.

What’s more, and though the claim may strike many readers as paradoxical, the same can be said about changes in the CPI. Although inflation is still below the Fed’s target, core CPI inflation has been inching up, and the 5-year forward expected inflation rate has been hovering just above the Fed’s 2-percent target for some months now. Surely that means that Fed policy is itself on track, right? Well, no, because these numbers could reflect either a revival of aggregate spending, which would indeed carry such an implication, or, despite the oil glut, a reduction in aggregate supply, which would not.

If all these bits of information shouldn’t shape the FOMC’s actions, what should? The statistics that come closest to serving as reliable guides to whether monetary policy is too tight, too loose, or at least roughly on track, are those that concern neither real developments nor prices but spending. When money balances are abundant, that fact is reflected in increased spending, because when people and businesses find themselves flush with money balances, they tend to dispose of those exceeding their needs by using them to buy goods or securities. If, on the other hand, people and businesses find themselves wanting larger money balances, they try to build them up by spending or investing less.

Slice it any way you like, Q1 spending was down. Way down. The annualized growth rate of consumer spending, which was 4.4% during the last quarter of 2014, or not far from its Great Moderation average, fell to just 1.9%, while business spending dropped from 4.7% to 3.4%. But the annualized growth rate of NGDP, a much broader measure of spending, experienced the sharpest decline, to just one-tenth of one percent, bringing the full-year forecast down from 3.8% to 3.6%. Some of this decline can be written off to winter doldrums, and hence as transient. But much of it can’t.
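For reference, “annualized” growth rates of this sort conventionally compound a quarterly change over four quarters. Here is a minimal sketch, with hypothetical levels chosen purely to illustrate the formula:

```python
# How an "annualized growth rate" of the kind quoted above is computed:
# a quarter-over-quarter change compounded over four quarters. The NGDP
# levels below are hypothetical, chosen only to show the arithmetic.
def annualized_rate(level_now: float, level_prev: float) -> float:
    """Quarterly change expressed at a compound annual rate, in percent."""
    return ((level_now / level_prev) ** 4 - 1) * 100

ngdp_prev = 17700.0   # hypothetical NGDP level, previous quarter
ngdp_now = 17704.5    # hypothetical NGDP level, latest quarter

print(f"Annualized NGDP growth: {annualized_rate(ngdp_now, ngdp_prev):.2f}%")
# A tiny quarterly change compounds to roughly one-tenth of one percent annualized.
```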

In short, the only facts that deserve to be considered approximate indicators of whether monetary policy has been too tight, too loose, or on track, suggest that it’s too tight. The others– whether they refer to the weather, or output, or dollar exchange rates, or the CPI and its variants, or stevedores’ discontent–are so many red herrings, and ought therefore to be considered perfectly irrelevant. Whenever FOMC members or any other monetary policymakers refer to such irrelevancies, they must do so either because the press expects them to, or because they are confusing things that the Fed should try to do something about with ones that shouldn’t be any of its business.

By saying all this, do I mean to say that, if the FOMC would just keep a sharp eye on spending, ignoring everything else, we would have sound monetary policy? Not for a minute. The reason, in part, is that spending statistics are themselves imperfect guides to the state of monetary policy, for too many reasons to go into here. More importantly, so long as the policymakers aren’t obliged to conduct policy according to a single, unambiguous target, their decisions will remain shrouded by uncertainty that is itself a big drag on prosperity.

But there’s more to it than that. Even if the Fed were somehow legally committed to target NGDP, or some other broad spending measure, from now on, and even if the measure were itself reliable, it wouldn’t solve our monetary troubles. And that’s because the monetary system itself is dysfunctional, and severely so. If it weren’t, it wouldn’t take more than $4.5 trillion in Fed assets to keep spending going at a reasonable clip. The defects are partly traceable to policies–including some of the Fed’s own–that discourage banks from making certain kinds of worthwhile loans, while encouraging them to hold massive excess reserves.

It’s owing to the crippled state of our monetary system, and not to any ambiguity in relevant indicators, that I myself have grave doubts concerning the gains to be expected from further Fed easing, or even from implementing a strict NGDP targeting rule, under present conditions. For if the experience of the last several years is any guide, it may require still more massive additions to the Fed’s balance sheet to achieve even very modest improvements in spending; and an NGDP-based monetary rule that would serve as a license for the Fed to become a still greater behemoth would not be my idea of an improvement upon the status quo.

You see, unlike some economists, although I’m happy to allow that an increase in the Fed’s nominal size, which is roughly equivalent to a like increase in the monetary base, is neutral in the long run, I don’t accept the doctrine of the neutrality of increases in the Fed’s relative size. I believe that Fed-based financial intermediation is a lousy substitute for private sector intermediation, and that as it takes over, economic growth suffers. The takeover is, in other words, financially repressive.

Which means that the level of spending is, after all, not the only relevant indicator of whether the Fed is or isn’t going in the right direction. Another is the real size of the Fed’s balance sheet relative to that of the economy as a whole, which measures the extent to which our central bank is commandeering savings that might otherwise be more productively employed. Other things equal, the smaller that ratio, the better.

And there, folks, is the rub. If you want to know the real dilemma facing the FOMC, forget about the CPI, oil prices, and last quarter’s weather. Here’s the real McCoy: NGDP growth is too low. But the Fed is too darn big.

Last September Kevin Dowd authored a dandy Policy Analysis called “Math Gone Mad: Regulatory Risk Modeling by the Federal Reserve.” In it Kevin pointed to the dangers inherent in the Federal Reserve’s “stress tests,” and the mathematical risk models on which those tests are often based, as devices for determining whether banks are holding enough capital or not.

Recently my Cato colleague Jeff Miron, who edits Cato’s Research Briefs in Economic Policy, alerted us to a new working paper, entitled “The Limits of Model-Based Regulation,” that independently reaches conclusions very similar to Kevin’s. The study, by Markus Behn, Rainer Haselmann, and Vikrant Vig, is summarized in this month’s Research Brief.

The authors conclude that, instead of limiting credit risk by linking bank capital more tightly to the riskiness of banks’ asset holdings, model-based regulation has actually increased credit risk. At the same time, because the model-based approach is relatively costly, large banks are much more likely to resort to it than smaller ones. Consequently, those banks have been able to expand their lending—and their risky lending especially—at the expense of their smaller rivals. In short, big banks gain, small banks lose, and we all are somewhat less safe than we might be otherwise.

Here is a link to the full working paper.

[Cross-posted from Alt-M.org]

Here’s a headline from today’s Washington Post: “Sexism in science: Peer editor tells female researchers their study needs a male author.” Peer review is the usually-anonymous process by which articles submitted to academic journals are reviewed for quality and relevance to determine whether or not they will be published. Over the past several years, numerous scandals have emerged, made possible by the anonymity at the heart of that process.

The justification for anonymity is that it is supposed to allow reviewers to write more freely than if they were forced to place their names on their reviews. But scientists are increasingly admitting, and the public is increasingly noticing, that the process is… imperfect. As the Guardian newspaper wrote last summer about a leading journal, Nature:

Nature […] has had to retract two papers it published in January after mistakes were spotted in the figures, some of the methods descriptions were found to be plagiarised and early attempts to replicate the work failed. This is the second time in recent weeks that the God-like omniscience that non-scientists often attribute to scientific journals was found to be exaggerated.

In the 1990s I sat on the peer review board of an academic journal, and over the years I have occasionally submitted to and been published by such journals. Peer reviews vary wildly in depth and quality. Some reviewers appear to have only skimmed the submitted paper, while others have clearly read it carefully. Some reviewers understand the submissions fully; others don’t. Some double-check numbers and sources; others don’t. It’s plausible that this variability (particularly on the weak end) is a side effect of reviewers’ anonymity. I have seen terse, badly argued reviews to which I doubt the reviewer would have voluntarily attached his or her name. Personally, I try never to write anything as a peer reviewer that I would not happily sign.

Six years ago, that inspired an idea: it occurred to me to found a journal, called Litmus, that would consist of signed peer reviews of already published papers, with authors’ responses when possible. My impression is that this would lead to a much higher average quality of reviews, and it would reveal to readers the extent of disagreement among scholars on the issues discussed, alternative evidence, and so on.

Alas, it would also be potentially dangerous for young scholars to contribute to such a journal, were they to rub a potential employer the wrong way. In the end, I was unable to interest enough top-notch scholars to flesh out a sufficiently large editorial board. One professor declined, saying:

This strikes me as an interesting idea, but one that is sufficiently outside of what is normal that you might have quite a difficult time getting a consensus that would lead to participation high enough to sustain the journal.  Some people would probably feel that signed reviews were not of the same quality as blind ones.  Others would feel that signed reviews required formality so much beyond that of blind reviews (which at their best are candid and informal but accurate) that they would be unwilling to participate for lack of time.  I am not saying that it is a bad idea, but I think that you’re in for an uphill battle to get the idea off the ground.

Eventually I abandoned the project. But as the failure of the status quo in journal peer reviewing becomes more evident, perhaps someone will rekindle the idea. Conventional journals would have to be on their toes if they knew there was a chance their articles would be held publicly under a microscope by other reviewers.

Well… there goes our trip to Baltimore. We’d been hoping to see the annual Kinetic Sculpture Race, but I see it’s been postponed sine die.

If you’re inclined, now is your chance to laugh. Get it out early.

Here’s a problem in describing how cities work: Any example I might pick to symbolize the decay of Baltimore can always be ridiculed: Weep, weep my friends for that lousy corporate CVS, the one that nobody really liked anyway!

See how easy that was?

The one direct effect I have experienced from the recent riots is that my daughter and I will possibly not be seeing a giant pink taffeta poodle pedaled down the streets of Baltimore by a bunch of probably inebriated art students. I’m unlikely to suffer any of the riots’ more troubling effects, like having to walk an extra half mile to get my asthma medication. Or like getting my car torched.

(And yes: Leading with the pink taffeta poodle might just be the definition of white privilege, but at least I’m, you know, aware of it.)

Cities are hard to explain. They’re made up of millions of tiny little things, and of the networks of trust and expectation that exist among them. Any one of those things – a CVS, a giant pink taffeta poodle, a population of inebriated art students – does not make a city. Almost any one of them can be laughed at, or just dismissed as trivial, in isolation. But good, functional cities are networks. They’re not isolated nodes. A city isn’t the big taffeta poodle, but it might be the expectation that there will be something fun, and free, to do in the streets on some warm spring afternoon. For which we can thank the art students.

And other expectations too: After we see the giant pink taffeta poodle, and when my daughter gets stung by a bee, there’s the CVS, and after that, when we decide we want dinner, we have several choices at hand. And if we want a room for the night, there it is. And if we want to relocate to Baltimore, we might just be able to find decent housing and jobs.

I think we can all agree that that’s what a city should look like. But how does it come into being?

I suspect that some significant trust has to be there first. Without it, few will venture to try new things. Restaurants won’t open. Parades won’t be held. Families won’t move in. Few will try adding new threads to the network. And when the old threads wear out, they will not be replaced.

For a very long time the networks of trust and expectation in the city of Baltimore have been fraying. But it’s not because of the rioting, which is only a symptom, if an advanced one, of an underlying condition. The well-documented culture of police brutality in Baltimore has meant that one of the bigger threads in the network – the ability to turn to police when you or your property are threatened – cannot be depended on. And when that thread goes, so go many others.

It’s long been known in Baltimore that the police can’t be counted on to perform their core functions, particularly in the poorer neighborhoods: In such places the police either can’t or won’t reliably protect persons and property from attack. Not without levels of collateral damage that any reasonable person would deplore. And when you don’t have security, you can forget all about community.

That’s part of why, paradoxically, the poor need property rights even more than the rich: What the poor possess is definitionally small. As a result, it’s all too easy to take everything that they have. Including their sense of dignity. Including their ability to trust. And, finally, including their sense of community, which has to start (and I do feel a bit pedantic saying it) with the understanding that community leaders and enforcers aren’t just out to squeeze them for cash. That the leaders and enforcers don’t see them merely as yet another home to be searched, another gun to seize, another dog to shoot, and another marijuana conviction waiting to happen.

The poor need security not just in their own property, but also in that of others. And these others aren’t necessarily poor. It’s a good thing whenever the owner of a grocery store franchise feels confident enough to get started in a neighborhood that maybe wasn’t so well-off, and that maybe lacked good choices beforehand. But that won’t happen without a measure of trust, and when the community has good reason not to trust, well, outsiders probably won’t trust either.

Contrast all this to the property rights of the rich. Paradoxically, the rich often barely need formal protections of their rights at all. Their property just isn’t threatened all that much, whether by the police or by anyone else. And when the property rights of the rich do get threatened, the rich can fight back. Definitionally, they have many more resources at hand, including non-financial ones: The rich have political influence, private security choices, and just… moving. The would-be owner of a grocery store franchise isn’t compelled to open in any particular neighborhood, or even to go into business at all. His money can always sit safely in a bank.

The rich also aren’t living so precariously: Even if all else fails, and if a rich person’s car does get torched, he can just buy another car. Yes, that’s bad, but it’s not going to ruin him. The same can’t necessarily be said of a poor person, for whom a car might be her single most valuable possession.

So while I’m complaining about the loss of a silly (but fun) kinetic sculpture race, let’s all remember just who depends the most on the networks of trust and expectation that can either live, or die, in our cities. Let’s also remember that those networks depend on protecting the all too fragile property rights of the poor.

Taking time out of his press conference with Japanese Prime Minister Shinzō Abe on Tuesday, President Obama addressed the chaos in Baltimore following the unexplained death in custody of Freddie Gray. 

While pleading for calm, President Obama lamented his lack of authority to fix the problem:

Now, the challenge for us as the federal government is, is that we don’t run these police forces.  I can’t federalize every police force in the country and force them to retrain.  But what I can do is to start working with them collaboratively so that they can begin this process of change themselves. 

Obama also lamented the lack of political momentum to address the poverty and violence afflicting communities like Baltimore:

That’s how I feel.  I think there are a lot of good-meaning people around the country that feel that way.  But that kind of political mobilization I think we haven’t seen in quite some time.  And what I’ve tried to do is to promote those ideas that would make a difference.  But I think we all understand that the politics of that are tough because it’s easy to ignore those problems or to treat them just as a law and order issue, as opposed to a broader social issue.

Both of those lamentations are misleading.

While it’s true that the federal government generally lacks the power to “force” local police departments to change their behavior, Obama’s comments completely omit his role in administering several federal policies that facilitate, and even incentivize, the abuses and tensions he condemned.

The federal drug war tears apart families through mass incarceration and violence and unjustly forces millions of (especially poor, minority) Americans to carry the stigma of being a convicted criminal. Prohibition, just as it did in the 1920s and 30s, has turned huge swaths of urban America into battlefields in the competition for black market real estate. President Obama has already demonstrated a willingness to ease federal drug enforcement in several states, and there is nothing keeping him from expanding that rollback.  He has also pardoned several non-violent drug offenders, even while federal prosecutors convict new ones every day.

The drug war and the federal war on terror also serve as vehicles for the distribution to local police of billions of tax dollars, military-grade weaponry and surveillance equipment, a warped mandate to think of themselves as the first and last lines of defense against terrorists and drug cartels, and a perverse incentive to compete for federal handouts through arrests and seizures.

Federal civil asset forfeiture laws allow state law enforcement agencies to circumvent local budget requirements.  They allow police to seize cash and property from citizens without ever charging them with any crime. This “policing for profit” is especially pernicious when directed at people in poor communities who lack the resources to contest the seizures and often desperately need the property (e.g. automobiles) being seized for their livelihoods.

All of these federal policies serve to undermine the protections of federalism, transparency, accountability, and respect for the rule of law.  They incentivize conflict between the police and the community.  They encourage police to view people as potential enemy combatants and sources of revenue rather than human beings with cherished rights to life, liberty, and property.

The federal government doesn’t “run these police forces,” but it does wield immense influence among them. It’s true that Barack Obama cannot wave a magic wand and make everything better, but he could do much more than his statements convey.  

Sen. Bernie Sanders, the independent socialist from Vermont, is running for president as a Democrat. Since he’s a self-proclaimed socialist, he’s surely to the left of all the Democrats in Congress, right? Well, a few years ago I checked into that, and I found that in fact plenty of Democratic senators have been known to spend the taxpayers’ money more enthusiastically than Sanders:

According to the National Taxpayers Union, 42 senators in 2008 voted to spend more tax dollars than socialist Bernie Sanders. They include his neighbor Pat Leahy; Californians Barbara Boxer and Dianne Feinstein, who just can’t understand why their home state is in fiscal trouble; and the Eastern Seaboard anti-taxpayer Murderers’ Row of Kerry, Dodd, Lieberman, Clinton, Schumer, Lautenberg, Menendez, Carper, Biden, Cardin, and Mikulski. Don’t carry cash on Amtrak! Not to mention Blanche Lambert Lincoln and Mark Pryor of Arkansas, who apparently think Arkansans don’t pay taxes so federal spending is free. [It turned out that Arkansans were not so clueless.] Sen. Barack Obama didn’t vote often enough to get a rating in 2008, but in 2007 he managed to be one of the 11 senators who voted for more spending than the socialist senator.

Meanwhile, the American Conservative Union rated 11 senators more liberal than Sanders in 2008, including Biden, Boxer, Feinstein, and again the geographically confused Mark Pryor. The Republican Liberty Caucus declared 14 senators, including Sanders, to have voted 100 percent anti-economic freedom in 2008, though Sanders voted better than 31 colleagues in support of personal liberties.

Now, I wrote that in January 2010, when 2008 ratings were the latest available. And it seems that 2008 was Sanders’s best year in the eyes of taxpayers, when he voted frugally a whopping 18 percent of the time. But as this lifetime chart shows, even in the past two years a dozen or so senators were more spendthrift than the socialist guy. In 2011, at an impressive 16 percent, Sanders was only the 55th spendiest senator. Spending interests will be glad to know that in the one year they served together that NTU has rated, Sanders spent a bit more of the taxpayers’ money than Sen. Elizabeth Warren.

Today, presidential candidate Hillary Rodham Clinton addressed criminal justice reform in a speech at Columbia University. Earlier in the week, the Brennan Center released a book with chapters from politicians across the political spectrum discussing the need for criminal justice reform, and Secretary Clinton contributed one of them. Now that the Democratic front-runner has joined Republican presidential aspirants in addressing reform, criminal justice appears to be a significant 2016 campaign issue.

Three of Clinton’s policy suggestions are problematic.

First, and perhaps the one that will get the most headlines, she called for making police body cameras “the norm everywhere” by using federal grants and matching funds. Putting aside the considerable price tag of subsidizing body cameras for the roughly 18,000 American law enforcement agencies, how officers use those cameras and how law enforcement uses the data they record must be of utmost concern. As my colleague Matthew Feeney noted in a blog post yesterday, the proposed body camera policy in Los Angeles would allow officers to review body camera footage before giving statements on use-of-force incidents. That policy would not serve transparency interests, but rather police officers’ self-interest.

Throwing money for cameras to local police departments as a solution to police transparency may sound good in theory, but making it work will be much more difficult in practice. 

Second, she argued that low-level offenders “must be some way registered in the criminal justice system.” The criminalization of drug consumption has been one of the primary drivers of incarceration. Diverting low-level offenses to drug courts, as Clinton suggests, could be an improvement over jailing offenders, but for many of these cases it’s not clear that the criminal justice system should be involved at all.

When implemented properly, diversion may be appropriate for petty crimes like shoplifting or nuisance offenses that may arise from addiction or abuse. But simple possession of most drugs is still a crime in all 50 states, adding to the criminal justice system thousands of Americans who have no business being there. Many of those people have the mental health and substance abuse problems Clinton addressed in her speech.

Society can discourage behavior by means other than the criminal law. Education, economic opportunity, and social norms can combine to deter substance abuse in the private sphere. While not ideal to libertarians, a system of taxes, fines, and regulations could be utilized by governments to discourage use without involving the criminal justice system at all. But any policy that continues to criminalize the effects and symptoms of underlying conditions like addiction will invariably lead to broken families and diminished employment prospects for our most vulnerable citizens.

Third, Clinton discussed the “quiet epidemic” of drug use in middle-class and suburban America. While the overall tone of the speech lamented racial disparities and disparate socio-economic outcomes, this passage reinforced the misconception that drug use is, or ever was, just an “inner city” problem. Yet whites and blacks use drugs at roughly the same rates, per capita. That enforcement has concentrated most heavily in minority communities does not mean drugs are new to the rest of the country.

Calling any drug use an “epidemic” harkens back to the overblown crack scare of the 1980s, which fueled the arguments for mandatory minimum sentences and the 100:1 crack-to-powder sentencing disparity. At a recent Heritage Foundation event on the future of marijuana policy, a medical expert frankly stated that only about 10 percent of drug (and alcohol) users become problem users. Unfortunately, most calls for diversion to treatment do not distinguish between the minority who develop drug problems and the 90 percent of adult users who do not experience significant negative health or life effects.

In all, what Secretary Clinton proposed today was a large amount of federal and state spending on tools that are already being misapplied throughout our country. Body cameras can improve police-public interactions, but it will take more than grants and matching funds to make them effective. Giving people the choice between jail and counseling when neither is appropriate is a fundamental misallocation of resources. Every seat in treatment taken by an otherwise functioning adult is one fewer seat available to the addict who is struggling with her addiction. Misunderstanding the very real problem of addiction is compounded by misusing the criminal justice system to address it.

As Jonathan Simon argues in his book Governing Through Crime, America needs a new approach to crime. Instead of being the first tool policymakers reach for to fix society’s problems, the criminal law should be the last resort, employed only after all other remedies have been exhausted. Secretary Clinton’s proposals, though well-intentioned, will be expensive and ultimately ineffective in fixing the disparities that inspired them.

The latest 8th grade U.S. history, civics, and geography results from the National Assessment of Educational Progress – the so-called Nation’s Report Card – have been released, and as usual, things seem bleak: only 18 percent of students scored proficient in U.S. history, 23 percent in civics, and 27 percent in geography. These kinds of results, however, should be taken with a few salt grains because we can’t see the full tests, and the setting of proficiency levels can be a bit arbitrary. Also, we don’t…

Oh, the heck with all that. As a fan of school choice, just tell me if private schools did better!

Based on the raw data, they did: 31 percent of private school students were proficient in U.S. history, versus 17 percent of public school students; 38 percent were proficient in civics, versus 22 percent; and 44 percent were proficient in geography, versus 25 percent. That said, to really know which broad swath of schools did better – and from a parent’s perspective, it is really only the individual schools from which they might choose that matter – you’d have to control for all sorts of student characteristics. From what I’ve seen, what was just released didn’t do that. Thankfully, others have.

What have they found? Controlling for various student characteristics and other factors, private schools beat traditional public schools in political knowledge, community voluntarism, and other socially desirable outcomes. Why?

There may be many possible reasons, but at least one seems intimately connected to choice: autonomous schools select their own curricula, and families willingly accept those curricula when they choose the schools. That means chosen schools can more easily teach coherent U.S. history and civics than can public schools, which often face serious pressure to teach lowest-common-denominator pabulum lest conflict break out among ideologically and politically diverse people. Perhaps ironically – though not if you understand how a free society works – by not being public, private schools may actually serve the public better.

So no, you can’t conclude a lot from the latest NAEP scores. But that doesn’t mean they can’t point you in the right direction.

Negotiators for the House of Representatives and the Senate are expected to announce a deal on the budget resolution as early as today. A budget resolution sets overall spending limits for the year. If adopted, it would be the first budget resolution in six years, but it would do little to fix the country’s long-term fiscal mess.

The original House and Senate budget proposals left much to be desired. Each proposal increased defense spending by using the Overseas Contingency Operations (OCO) account as a slush fund. This maneuver allowed each chamber to claim allegiance to the 2011 Budget Control Act spending caps while bypassing them to boost spending. The House budget included a version of Medicare reform, but delayed its start date until 2024. The Senate left Medicare reform off the table.

This week, budget negotiators appear to be combining the most disappointing parts of each chamber’s plan.

Defense Spending. Both chambers would provide $96 billion in OCO spending for fiscal 2016, up from $74 billion in fiscal 2015. But the Senate’s version added a parliamentary point of order that makes it a little tougher to increase spending: 60 senators would have to vote affirmatively to raise defense spending to the higher OCO levels without offsetting spending cuts. The Senate has more than 60 senators who would probably waive the point of order, but it would still be a modest added check on defense spending. Unfortunately, the negotiated deal removes the point of order for fiscal year 2016.

Medicare Reform. The joint budget resolution will follow the Senate’s lead and drop the House plan to reform Medicare. Medicare reform has been a mainstay of House budgets since 2011, when Republicans regained control of the chamber. Eliminating plans to reform Medicare sends a strong signal that Republicans are not serious about confronting entitlement issues.

Balancing the Budget. The budget resolution would balance the federal budget within ten years. That is a worthy goal, but nothing in the resolution would force Congress to follow through in future years. Recent actions signal that Republicans are not interested in real budget restraint. In April, Congress repealed Medicare’s sustainable growth rate formula – the source of the recurring “doc fix” – in a change that increased the deficit by $141 billion over the next decade. The new House-Senate budget resolution includes $141 billion in cuts to offset that cost, but Congress would still need to enact those cuts separately for the deficit reduction to actually occur. Congress should have paid for the “doc fix” in the original bill, not in the budget resolution.

Each chamber will likely approve the joint resolution this week and then switch focus to the appropriations process. Given Congress’ desire to boost defense spending and the White House’s desire to boost other discretionary spending, a summer deal increasing all types of spending seems likely.

This is a revised excerpt from White (2015), and the first item in our “What You Should Know” series offering essential background information on various alternative money themes.

Historical monetary systems properly classified as free banking systems have involved, in Kevin Dowd’s (1992b, p. 2) words, “at least a certain amount of bank freedom, multiple note issuers, and the absence of any government-sponsored ‘lender of last resort.’” There were 60-plus episodes of plural private currency issue around the world in the 19th century (Schuler 1992). Dowd (1992a) has compiled studies of nine of these episodes, and Ignacio Briones and Hugh Rockoff (2005) have surveyed economists’ assessments of six relatively well-studied episodes: Scotland, the United States, Canada, Sweden, Switzerland, and Chile. Because none of the six systems they review enjoyed complete freedom from legal restrictions, they suggest that “lightly regulated banking” is a more accurate label than “free banking.” All these nineteenth-century episodes shared another feature worth mentioning: banknotes and deposits were denominated in, and redeemable for, silver or gold coins.

When we look into these episodes, we find a record of innovation, improvement, and success in serving money-users. As with other goods and services, competition provided the public with improved products at better prices. The least regulated systems were not only the most competitive but also, by and large, the least crisis-prone.

Case Studies

Scotland. The Scottish free-banking system of 1716 to 1845 combined remarkable stability with competitive performance. To quote my own earlier work on it (White 1995, p. 32), there were “many competing banks, most of them were well capitalized,” while in its heyday after 1810 “none were disproportionately large, all but a few were extensively branched,” and “all offered a narrow spread between deposit and discount rates of interest.” Briones and Rockoff (2005, pp. 295–96) find “considerable agreement that lightly regulated banking was a success in Scotland.” They note that some writers have given at least partial credit to “unlimited liability, or the presence of large privileged banks acting as quasi-central banks.” After 1810, however, the three chartered banks (the only banks with limited liability) were no larger than the nonchartered banks (which had unlimited liability) and did not play any special supervisory roles, while the system continued to perform successfully. Scottish banking exhibited economies of scale but not natural monopoly. The banks mutually accepted one another’s notes at par. A few writers have expressed doubt that Scotland was a good example of free banking on the grounds that the Bank of England backstopped the system, but such claims are mistaken (White 1995, ch. 3).

United States. Banking restrictions differed dramatically among states in the antebellum United States. The least restricted, most openly competitive, and best-behaved system was in the New England states, where the Suffolk Bank of Boston, succeeded by the Bank for Mutual Redemption, operated a banknote clearinghouse that kept most notes at par throughout the region. Many other states, led by New York, enacted what were called “free banking” laws. These acts opened up entry to all qualifying comers (in contrast to chartering systems that required a special act of the state legislature), but also imposed collateral restrictions on note issue (banks had to buy and hold state government bonds or other approved assets to provide a redemption fund for their notes) and maintained geographical branching restrictions. Briones and Rockoff (2005, p. 302) reiterate a point that Rockoff emphasized in his own pioneering work on the state free-banking systems, namely that these legal restrictions were fairly heavy. The less successful experiences in some states “appear to have been the result of restrictions imposed on the American free banks—restrictions on branch banking and the peculiar bond security system—rather than the result of freedom of entry.” On the positive side, freer entry enhanced competition, and the “stories about wildcat banking” that some historians took to be the natural consequence “although not baseless, were exaggerated.” In New York and some other early-adopting states, the system “worked well,” which explains why it spread to more and more states.

Canada. The Canadian system, Briones and Rockoff (2005, p. 304) note, “like the Scottish system and parts of the American system, was clearly a successful case of lightly regulated banking.” Canada did not suffer the financial panics that the United States did in the late 19th century. Its banks did not even fail in the Great Depression. The Canadian banking system “did so well that a central bank was not established until 1935,” and even then the reason was not dissatisfaction with the existing banking system but some combination of nationalism and wishful thinking about what a central bank could do to end the Great Depression (see Bordo and Redish 1987).

Sweden. Sweden had a system of competitive private note issue by “Enskilda” banks while at the same time having the official Riksbank as banker to the state. The Enskilda banks’ record for safety was remarkable. Briones and Rockoff (2005, pp. 306–7) report: “Although one could debate the relative contributions of the Riksbank and the Enskilda banks, it is clear that the combination of the two maintained convertibility and provided an efficient means of payment for the Swedish economy.”

Switzerland. Switzerland’s system ended in a crisis, but Briones and Rockoff (2005, p. 310) doubt that this reflects poorly on lightly regulated banking because, “at least after the federal banking law of 1881, the Swiss experience seems to have been less free than other experiences in many important dimensions such as the existence of privileged cantonal banks and restrictive collateral requirements for private banks.” Moreover, the law diminished “the capacity of the public for differentiating notes,” which created a common-pool problem, weakening the effectiveness of the clearing system against overissue. (For a harsher assessment of Swiss free banking, see Neldner (1998); for a rebuttal to Neldner see Fink (2014).)

Chile. Briones and Rockoff (2005, p. 314) also consider Chile’s experience a poor test because the system was skewed by government favoritism: “With a small ruling elite and concentrated economic power, Chile had great difficulty creating note-issuing banks that were completely independent of the government.” Nonetheless, a free-banking law was “successful in developing the financial and banking industry.” A new volume of studies on Chile’s free-banking experience is under way by economic historians at the Universidad del Desarrollo in Santiago (Couyoumdjian forthcoming).

Australia. Operating with few restrictions, Australian banks were large, widely branched, and competitive, and they practiced mutual par acceptance, making the system resemble Scotland’s. The Australian episode is of special interest for suffering the worst financial crisis known under a free-banking system. After a decade-long real estate boom came to an end in 1891, some building societies and land banks failed, after which 13 of 26 trading banks suspended payments in early 1893. George Selgin (1992a) finds that the banks’ reserve ratios do not indicate any overexpansion of bank liabilities during the boom, though some banks clearly made bad loans. The boom was instead financed by British capital inflows, which suddenly stopped after the Baring crisis of 1890. Kevin Dowd (1992c) adds that the banks were not undercapitalized. He argues that “misguided government intervention” in the first failed institutions “needlessly undermined public confidence” in other banks, while other interventions boosted the number of suspensions (all but one of the suspended banks soon reopened) by providing favorable reorganization terms for banks in suspension. (For a different view, see Hickson and Turner (2002).)

Colombia. The free-banking era in Colombia lasted only 15 years, from 1871 to 1886, during the period of a classical liberal constitution. Thirty-nine banks were created, two of which did about half the business. The system survived a civil war in 1875 with only a few months’ suspension and appears to have been otherwise free of trouble. It ended when the government created its own bank and gave it a monopoly of note issue for seigniorage purposes (Meisel 1992).

Foochow, China. George Selgin (1992b) reports that the banking system in the city of Foochow (modern Fuzhou) in southeastern China operated under complete laissez faire in the 19th and early 20th centuries, being left alone by the national ruling dynasty. The successful results resembled those of free banking in Scotland or Sweden. Banknotes were widely used and circulated at par, bank failures were rare, and the system provided efficient intermediation of loanable funds.

Postrevolutionary France. The end of the French Revolution, the economist Jean-Gustave Courcelle-Seneuil later wrote, “left France under the regime of freedom for banks.” New banks began issuing redeemable banknotes in 1796. In Courcelle-Seneuil’s evaluation, the banks operated “freely, smoothly and to the high satisfaction of the public.” The free period lasted only seven years: Napoleon Bonaparte, having taken power in 1799, created the Bank of France and in 1803 gave it a monopoly of note issue to help finance his government (Nataf 1992).

Ireland. In 1824, after poor results with plural note issue by undersized banks, the British Parliament deliberately switched Ireland from the English set of banking restrictions (the limitation of banks to six or fewer partners) to the Scottish free-banking model (joint-stock banks with an unlimited number of shareholders, each bearing unlimited liability), and Ireland thereafter enjoyed results like Scotland’s. Howard Bodenhorn (1992) considers it “not surprising” that “free banking in Ireland should rival the success of the Scottish. After 1824, restrictions on banking were repealed, except unlimited liability, and joint-stock banks were formed based on the Scottish mould. Failures were infrequent, losses were minimal … and the country was allowed to develop a system of nationally branched banks.”

Why then did central banking triumph over free banking?

As Kevin Dowd (1992b, pp. 3–6) fairly summarizes the record of these historical free banking systems, “most if not all can be considered as reasonably successful, sometimes quite remarkably so.” In particular, he notes that they “were not prone to inflation,” did not show signs of natural monopoly, and boosted economic growth by delivering efficiency in payment practices and in intermediation between savers and borrowers. Those systems of plural note issue that were panic-prone, like those of the pre-1913 United States and pre-1832 England, were fragile not because of competition but because of legal restrictions that significantly weakened banks.

Where free banking was given a reasonable trial, for example in Scotland and Canada, it functioned well for the typical user of money and banking services. Why, then, did national governments adopt central banking? Free banking often ended because the imposition of heavy legal restrictions, or the creation of a privileged central bank, offered revenue advantages to politically influential interests. The legislature or the Treasury can tap a central bank for cheap credit, or (under a fiat standard) simply have the central bank pay the government’s bills by issuing new money. The economic historian Charles Kindleberger referred to a “strong revealed preference in history for a sole issuer.” As George Selgin and I have noted (Selgin and White 1999), the preference that history reveals is that of the fiscal authorities, not of money users. In some places (e.g., London) free banking never received a trial for the same reason. Central banks primarily arose, directly or indirectly, from legislation that created privileges to promote the fiscal interests of the state or the rent-seeking interests of privileged bankers, not from market forces.

References

Bodenhorn, Howard. 1992. “Free Banking in Ireland,” in Dowd 1992a.

Bordo, Michael D., and Angela Redish. 1987. “Why Did the Bank of Canada Emerge in 1935?,” Journal of Economic History 47 (June), 405–417.

Briones, Ignacio, and Hugh Rockoff. 2005. “Do Economists Reach a Conclusion on Free-Banking Episodes?,” Econ Journal Watch 2 (August), 279–324.

Couyoumdjian, Juan Pablo. Forthcoming. Editor, Instituciones Económicas en Chile: La banca libre durante el siglo XIX.

Dowd, Kevin. 1992a. Editor, The Experience of Free Banking. London: Routledge.

Dowd, Kevin. 1992b. “Introduction” to Dowd 1992a.

Dowd, Kevin. 1992c. “Free Banking in Australia,” in Dowd 1992a.

Fink, Alexander. 2014. “Free Banking as an Evolving System: The Case of Switzerland Reconsidered,” Review of Austrian Economics 27 (March), 57–69.

Hickson, Charles R., and John D. Turner. 2002. “Free Banking Gone Awry: The Australian Banking Crisis of 1893,” Financial History Review 9 (October), 147–67.

Meisel, Adolfo. 1992. “Free Banking in Colombia,” in Dowd 1992a.

Nataf, Philippe. 1992. “Free Banking in France (1796–1803),” in Dowd 1992a.

Neldner, Manfred. 1998. “Lessons from the Free Banking Era in Switzerland: The Law of Adverse Clearings and the Role of Non-issuing Credit Banks,” European Review of Economic History 2 (Dec.), 289–308.

Schuler, Kurt. 1992. “The World History of Free Banking,” in Dowd 1992a.

Selgin, George. 1992a. “Bank Lending ‘Manias’ in Theory and History,” Journal of Financial Services Research 6 (Aug.), 169–86.

Selgin, George. 1992b. “Free Banking in Foochow,” in Dowd 1992a.

Selgin, George A., and Lawrence H. White. 1999. “A Fiscal Theory of Government’s Role in Money,” Economic Inquiry 37 (Jan.), 154–65.

White, Lawrence H. 1995. Free Banking in Britain, 2nd ed. London: Institute of Economic Affairs.

White, Lawrence H. 2015. “Free Banking in Theory and History,” in Lawrence H. White, Viktor Vanberg, and Ekkehard Köhler, eds., Renewing the Search for a Monetary Constitution. Washington, DC: Cato Institute.

[Cross-posted from Alt-M.org]

Vindicating conventional wisdom, today’s argument suggested that the Supreme Court will find that states must both recognize and license same-sex marriage. That’s remarkable in and of itself, considering that a little over a decade ago we were still debating whether states could criminalize gay sex. But it’s not surprising, given that same-sex marriage has undergone the most rapid transformation in public opinion of any political issue.

What’s more noteworthy is the reason the Court is poised to rule this way. While it’s certainly possible that Justice Kennedy will wax metaphysical about the “sweet mystery of marriage,” the majority opinion is more likely to rest on the technical requirements of the Equal Protection Clause. Given that provision’s guarantee of “equality under the law,” states simply cannot come up with a reason to draw their marriage-licensing regimes in a way that distinguishes between heterosexual and homosexual couples.

Solicitor General Don Verrilli said it best – that’s possibly the only time I will use those words – when he asked the Court to secure “equal participation in a state-conferred status.” Moreover, the federal government was wise here – again, unprecedented words coming from me – in focusing on the narrow point of equality in the application of state laws.

In sum, the Supreme Court should – and likely will – stay away from pontificating about marriage or philosophizing on the nature of rights. The Fourteenth Amendment is silent as to marriage, as it is regarding all other possible objects of state regulation. What it speaks to instead is the equal protection of the laws. Accordingly, as Cato argued in our amicus brief, states need not license marriage at all, but if they issue licenses to anyone, they must issue them to gay and lesbian couples on the same terms.

“This is not justice. This is just people finding a way to steal stuff,” Carron Morgan, cousin of Freddie Gray, told Kevin Rector of the Baltimore Sun yesterday. That’s one of the most clear-headed interpretations of yesterday’s mob violence in Maryland’s largest city, which followed the convergence of hundreds of youths at 3 p.m. outside Mondawmin Mall, a shopping center on the city’s west side that also serves as a hub for bus service. In the resulting tumult, groups of rioters burned police vehicles, looted stores and restaurants, and injured more than a dozen Baltimore police officers with thrown projectiles.

More than twenty years ago in the Cato Journal, distinguished law and economics scholars David Haddock and Daniel Polsby published a paper entitled “Understanding Riots” that’s still highly relevant in making sense of events like these. Employing familiar economic concepts such as opportunity cost, coordination problems, and free-rider issues, Haddock and Polsby help explain why riots cluster around sports wins as well as assassinations, funerals, and jury verdicts; the group psychology of rioting, and why most crowds never turn riotous; the important role of focal points (often lightly policed commercial areas) and rock-throwing “entrepreneurs” of disorder; the tenuous relationship between riots and root causes or contemporary grievances; and why when a riot occurs the police (at least those in places like the United States and United Kingdom) seldom manage to be in enough places at once, more or less by definition.

The H&P paper helps explain why so many of the memes of the past 24 hours are off base: the “protests turn violent” headlines (yesterday’s riots broke out in different places from where there had been demonstrations), the “Freddie Gray’s family is horrified” stories (irrelevant since this riot, like most, had little to do with sending any message of protest), and, of course, the “what about sports riots?” meme (yes, the riots yesterday have a lot in common with English soccer riots, and it’s important to understand why.) “Authorities looking for ways to explain why trouble has broken out on their watch sometimes ascribe exaggerated organizational powers to ‘outside agitators,’” Haddock and Polsby write. (Check.) 

In reaction to yesterday’s events, pundits have tended to bark up a number of wrong trees, such as the supposed permission-giving remarks of the city’s Mayor, Stephanie Rawlings-Blake. (Not really, as Dave Weigel explains.) Others imagine that the technical advances of recent years – whether the availability of social media by which rioters can share intentions, or the ubiquity of surveillance cameras in public places – will fundamentally alter the balance between the riotous impulse and public order. Before you make up your mind on such questions, go read Haddock and Polsby.

This is from a Wall Street Journal article about President Obama’s push for the Trans-Pacific Partnership (TPP):

Mr. Obama also warned of rising anti-globalization sentiment in Washington, reflected in Democratic opposition to the trade agreement [TPP], Republican efforts to kill the Export-Import Bank, and congressional unwillingness to approve new rules for operation of the International Monetary Fund.

I agree that opposition to the TPP often, although not always, reflects anti-globalization sentiment; I’m not familiar with the IMF issue here. But on the Ex-Im Bank issue, I think the President has it backwards. His logic, I assume, is that subsidies from the Ex-Im Bank promote exports and are therefore pro-globalization. But this logic is flawed. The reality is that all export subsidies are a form of economic nationalism: they try to give domestic products an advantage over their foreign competition. This leads to escalating trade wars and international economic tension. The pro-globalization approach would be to negotiate an end to the subsidies provided by export credit agencies and let all products compete in the global marketplace without government support for domestic industry.

Under treaties dating back to the 19th century, the federal government has provided educational aid to American Indians. These days, the Bureau of Indian Education (BIE) owns about 180 Indian schools, which enroll about 41,000 students in Arizona, New Mexico, South Dakota, North Dakota, and other states.

I examined Indian schools in this study at Downsizing Government. The schools have long failed Indian children and seem to waste a great deal of money.

The Washington Post reported similar findings:

The U.S. Bureau of Indian Education spends nearly 56 percent more money than American public schools on each student, but many Native Americans learn in facilities that are languishing in poor condition, according to federal auditors.

A report this week from the Government Accountability Office said the agency has struggled to staff, manage and repair its schools, largely because of a broken bureaucracy.

… The bureau also suffers from high leadership turnover, inconsistent accountability, poor communication between offices, a lack of strategic planning and a dearth of financial experts to manage spending, auditors said. The “systemic management challenges,” as the report described them, have hindered the agency’s efforts to improve student achievement and sustain key initiatives, according to the report.

The problems have persisted for years, despite the bureau spending significantly more than U.S. public schools in general. A 2014 GAO analysis found that the agency spends an estimated $15,000 per pupil on average, while public schools nationwide spend an estimated average of about $9,900.

… Indian Education spokeswoman Nedra Darling said Thursday that the bureau is “deep into the process of fixing the problems that the GAO highlighted.”

… “The president has asked Congress for significant increases in the budget to accomplish many of these goals and to increase staff available to serve tribal schools and BIE-run schools,” Darling said.

The last two sentences are classic: Agency leaders using their own failings as an excuse to demand more taxpayer money.
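A quick back-of-the-envelope check conveys the scale of that spending gap (an approximation using only the rounded figures quoted above; the Post’s “nearly 56 percent” presumably rests on unrounded or more recent data):

$15,000 ÷ $9,900 ≈ 1.52

In other words, on these figures the bureau spends roughly half again as much per pupil as the average public school, with little to show for it.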

A better reform would be for Congress to advance Indian self-determination by ending federal ownership and operation of the schools and converting BIE funding into block grants. Tribal governments could then use the grants either to competitively source school management or to pass the funds through to Indian parents in the form of school vouchers.

The important thing is to get Washington out of the business of running schools because decades of experience reveal that it is not very good at it.
