Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Last month, the British government announced plans to spend 2 percent of GDP on defense through 2020, meeting the NATO-mandated level. This comes after months of nudging from the Obama administration, which feared that “if Britain doesn’t spend 2 percent on defense, then no one in Europe will.” The reasoning is bizarre given that few nations were meeting this spending threshold to begin with. As I wrote in June:

In 2014, only Greece, Estonia, the U.S. and the U.K. spent as much as 2 percent of GDP on defense. Excepting NATO member Iceland, which is exempted from the spending mandates, the 23 other NATO members failed to spend even two cents of every dollar to defend themselves from foreign threats. And Greece only met the 2 percent threshold because their economy is falling faster than their military spending.

Perhaps things are shifting a bit among the NATO nations. Fear of Russia has prompted some members to announce increases to their defense spending. Germany, which currently spends only 1.2 percent of its GDP on defense, pledged to increase its defense budget by 6.4 percent over the next five years. Latvia and Lithuania will also increase their defense spending, reaching two percent of GDP by 2018 and 2020, respectively.

It’s comforting to hear some Europeans talking about taking their security more seriously, but whether they will follow through remains to be seen. Either way, they will remain heavily subsidized by the United States for some time. Cato’s latest infographic demonstrates just how far ahead the United States is in all measures of defense spending, and also documents how American security guarantees allow European governments to devote a far larger share of their spending to dubious domestic priorities. Put simply, U.S. taxpayers are subsidizing Europe’s bloated welfare states.

According to the most recent NATO figures, the United States spent an estimated $654 billion, or 3.8 percent of its GDP, on defense (based on the NATO definition) in 2014. That amounts to $2,052 spent by every man, woman, and child in America, nearly four times the average in NATO’s founding states and seven times the average in those member states that have joined NATO since the end of the Cold War. As a share of the sum total of NATO member states’ defense spending, U.S. taxpayers contributed nearly 69 percent, even though the United States accounts for only 46 percent of NATO members’ total economic output. And the free-riding problem has only been getting worse: total military spending by NATO’s European members was lower in real terms in 2014 than in 1997, and there are 12 more member states in NATO today.
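As a back-of-the-envelope check, here is a minimal sketch of the arithmetic behind those figures (the implied population and the spending-to-GDP-share ratio are derived from the numbers above, not independently sourced):

```python
# Back-of-the-envelope check on the NATO burden-sharing figures above.
us_defense_spending = 654e9  # 2014 U.S. defense spending, NATO definition
us_per_capita = 2052         # dollars per U.S. resident, as cited

implied_population = us_defense_spending / us_per_capita
print(f"Implied U.S. population: {implied_population / 1e6:.0f} million")  # ~319

# U.S. share of NATO-wide defense spending versus its share of
# NATO-wide economic output, both as cited above.
spending_share, gdp_share = 0.69, 0.46
print(f"Spending share vs. GDP share: {spending_share / gdp_share:.1f}x")  # ~1.5x
```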

In fairness, one can hardly blame the Europeans for failing to spend more on defense. And one shouldn’t expect that they will willingly change course, despite faint signs that some European members are finally getting serious about their security. After all, people are disinclined to pay for something that someone else is willing to provide for free.

The free ride could come to an end if Washington signaled that it wanted it to. Alas, U.S. officials have been sending the opposite message. The modest spending restraint imposed on the Pentagon’s budget by the bipartisan Budget Control Act of 2011 has forced the military services to make some difficult choices, but the Pentagon’s civilian masters have so far refused to prioritize roles and missions. They could start by having an adult conversation with NATO members, and declare emphatically that our allies have primary responsibility for defending themselves and their interests. And, in order to back up words with actions, Washington should stop deploying the U.S. military in ways that discourage these other countries from doing more. It is unreasonable to expect U.S. taxpayers to shoulder these burdens indefinitely.

Infographic Sources:

NATO: Public Diplomacy Division. “Financial and Economic Data Relating to NATO Defence.” Brussels, Belgium, 2015.

Central Intelligence Agency. “The World Factbook 2014.” Washington, D.C., 2014.

The International Institute for Strategic Studies. The Military Balance 2015. Edited by James Hackett. London: Routledge, 2015.

In an effort to distinguish itself from competitors, poultry producer Perdue recently ran advertisements touting its “no antibiotics ever” line of chicken products. This is not just another corporate ad campaign; the story goes deeper than that, as the New York Times recently reported. At issue is the definition of what makes poultry “antibiotic free.”

Poultry companies like Perdue and its main competitors, Tyson and Foster Farms, have long used antibiotics important to humans in raising chickens. Many scientists have advocated a ban on routine (non-disease) use of antibiotics in raising food animals because of concerns that such use will hasten the evolution of antibiotic-resistant bacteria. In 2012 the Food and Drug Administration issued nonbinding regulatory guidance on using human-important antibiotics in livestock.

Companies other than Perdue continue to use animal-only antibiotics called ionophores. Even if the FDA regulatory guidance were instead a legally binding regulation, such antibiotics would not be banned because they are not “human-important”; hence Perdue’s move to use the term “no antibiotics ever” in marketing its products.

The debate over ionophores is reminiscent of the fights over what constitutes “organic” food. The popular perception of “organic” often differs from the government’s specific definition of the term. According to Henry Miller of the Hoover Institution at Stanford University, “so long as an organic farmer abides by his organic system (production) plan–a plan that an organic certifying agent must approve before granting the farmer organic status–the unintentional presence of GMOs (or, for that matter, prohibited synthetic pesticides) in any amount does not affect the organic status of the farmer’s products or farm.” Only 5 percent of organic operators are actually tested every year.

Each time the government defines the characteristics of an acceptable product (its safety, organic status, or antibiotic-free status), some competition in the market is lost. Only those who deeply understand the intricacies of the definition can both produce compliant products and work around some of the “in spirit” rules. Ionophores are one such workaround: they keep chickens healthy, reduce costs, and are technically compliant with the rules on human-important antibiotic use in poultry production.

Most command-and-control regulations have some anticompetitive aspects. They hinder innovation at the margins of regulatory definitions, innovation that otherwise would occur in a free market. The battle between Perdue, Tyson, and Foster Farms is an excellent case study of how competition rather than regulation can serve consumer and public health interests.

The judiciary has been described as the least dangerous branch. But that isn’t true. The Supreme Court has become a continuing constitutional convention, in which just five votes often turn the Constitution inside out.

The latest Supreme Court term was seen as a shift to the left. Its decisions set off a flurry of promises from Republican presidential candidates to confront the judiciary.

For instance, Jeb Bush said he would only appoint judges “with a proven record of judicial restraint.” Alas, previous presidents claiming to do the same chose Anthony Kennedy, David Souter, and John Roberts, among many other conservative disappointments.

Sen. Ted Cruz (R-Texas) called for judicial retention elections. Even more controversially, he suggested that only the parties before the justices had to respect Supreme Court rulings.

Extreme measures seem necessary because a simultaneously progressive and activist judiciary has joined the legislative and executive branches in forthrightly making public policy. The influence of judges has been magnified by their relative immunity from political pressure, including life tenure.

Lose the battle over filling a Supreme Court slot and you may suffer the consequences for decades.  Unelected, accidental President Gerald Ford merits little more than a historical footnote.  But his malign Supreme Court legacy long persisted through Justice John Paul Stevens, a judicial ideologue hostile to liberty in most forms.

Life tenure is enshrined in the Constitution and rooted in history.  The justification for lifetime appointment is to insulate the courts from transient political pressures. 

Yet judicial independence does not require lack of accountability. Judges are supposed to play a limited though vital role—interpreting, not transforming, the law.

The activism/restraint dichotomy is the wrong prism for viewing judges. They should be active in enforcing the law, including striking down legislation and vindicating rights when the Constitution requires it. They should be restrained in substituting their policy preferences for those of elected representatives.

When jurists violate this role, as so many judges do, including Republican appointees, they should be held accountable. Unfortunately, many of the proposed responses are more dangerous than the judges themselves, such as Ted Cruz’s idea that people should ignore the Supreme Court.

After all, as originally conceived the judiciary was tasked with the critical role of holding the executive and legislative branches accountable, limiting their propensity to exceed their bounds and abuse the people. For instance, Alexander Hamilton imagined that the judiciary would “guard the Constitution and the rights of individuals” from “the people themselves.”

Of course, all too often the judiciary fails to fulfill this role today, illustrating how unreviewable power is always dangerous.

Some 20 states have implemented Cruz’s second idea, retention elections. National judicial elections, however, would be far more problematic. Alas, Americans who today choose their president based on 30-second television spots are unlikely to pay serious attention to esoteric legal issues or to make fine legal and constitutional distinctions.

There is a better alternative. The Constitution should be amended to authorize fixed terms for federal judges, perhaps ten or twelve years for Supreme Court justices. Such an approach would offer several advantages. While every appointment would remain important, judicial nominations no longer would be as likely to become political Armageddon.

Term limits also would ensure a steady transformation of the Court’s membership. New additions at regular intervals would encourage intellectual as well as physical rejuvenation of the Court. Most important, fixed terms would establish judicial accountability. Justices still would be independent, largely immune to political retaliation for their decisions. Nevertheless, abusive judges no longer would serve for life. Elected officials could reassert control over the Court without destroying the judicial institution.

As I point out for The Freeman: “The Supreme Court has become as consequential as the presidency in making public policy. It is time to impose accountability while preserving independence. Appointing judges to fixed terms would simultaneously achieve both objectives.”

Yesterday, Ohio Governor and presidential candidate John Kasich appeared on Fox News. During his interview with host Chris Wallace, Kasich was asked about his “D” in our 2014 Governors Fiscal Report Card.

Here is a transcript of the exchange:

WALLACE: Unemployment down from 9.1 percent to 5.2 percent. And the top income tax rate has been lowered from 6.2 percent to 4.9 percent.

But the Cato Institute, a libertarian think tank, gave you a “D” on its government’s [sic] report card just last year, noting the budget grew 13.6 percent in 2014 and that over your time as governor, government jobs have increased 3 percent. A “D”, sir?

KASICH: Well, I don’t know who these folks are, Chris, another Washington group. But, look, we have the lowest number of state employees in 30 years and in addition to that, our budget overall is growing by about 2 percent or 3 percent, and our Medicaid growth has gone from 9 percent when I came in to less than 4 percent and no one has been left behind. We haven’t had to cut benefits or throw anybody off the rolls. So, we pay attention to the mentally ill and the drug addicted and the working poor.

But, you know, it’s Washington. And, Chris, here’s the thing – remember they said, “He won’t get in the race.” Then I did.

Then, they said, “OK, well, if he gets in, he won’t be able to raise the money.” Then I did.

Then, they said, “Well, he’s getting in too late.” Now they say, “What a brilliant move.”

So, I pay no attention to folks in Washington. I want to move a lot of the power and money and influence out of that town back to where we live like normal Americans, you know?

We seem to have two conflicting views here. In 2014 we gave Governor Kasich the worst score of any governor on spending, and he says his spending is growing by “2 percent or 3 percent.” Kasich did score well on the report card for his various tax cuts.

When I summarized Kasich’s record on July 21st, I said:

Data from the National Association of State Budget Officers illustrates the rapid growth in general fund spending. From fiscal year 2012, Kasich’s first full fiscal year, to fiscal year 2015, general fund spending increased in Ohio by 18 percent. Nationally, state general fund spending increased by 12 percent during that period. Kasich’s proposed budget for fiscal year 2016 increased spending further. It included a year-over-year increase of 11 percent. The average governor proposed a spending increase of 3 percent from fiscal year 2015 to fiscal year 2016.

In today’s Columbus Dispatch, Kasich’s team argues that I used the wrong measure of Ohio spending growth.

So who is right?

Every state has a different budget structure, so we use spending data from the National Association of State Budget Officers (NASBO) to compare the states. NASBO tries to create a consistent dataset, which allows us to make comparisons that are as neutral as possible.

Ohio budgets can be measured in at least three different ways. By any measure, spending is growing in Ohio. The question is by how much.

The first method looks at state-funded Ohio spending, or general fund spending. It ignores any contribution from the federal government. Spending from fiscal year 2012 to fiscal year 2015 increased by 12 percent. The newly enacted budget increases spending in fiscal year 2016 by 5 percent over fiscal year 2015. In fact, from 2013 to 2016, spending is increasing in Ohio at an increasing rate, as the table below shows.

Fiscal year:   2013   2014   2015   2016
Growth rate:   2.9%   4.1%   4.6%   5.0%
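Compounding those annual rates reproduces the cumulative figures cited above (a quick sketch; the published rates are rounded, so the products are approximate):

```python
# Compound Ohio's annual general fund growth rates (from the table above).
growth = {2013: 0.029, 2014: 0.041, 2015: 0.046, 2016: 0.050}

cumulative = 1.0
for year in (2013, 2014, 2015):  # FY2012 through FY2015
    cumulative *= 1 + growth[year]
print(f"FY2012-FY2015: {cumulative - 1:.1%}")  # ~12.0%, the figure cited above

cumulative *= 1 + growth[2016]   # extend through the newly enacted budget
print(f"FY2012-FY2016: {cumulative - 1:.1%}")  # ~17.6%
```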

The difference between the NASBO data, used in the Governors Report Card and my initial blog post, and the data just cited is that the NASBO data include federal Medicaid funds. Medicaid is jointly financed by the state and the federal government: for fiscal year 2015, Ohio receives 62 cents from the federal government for every dollar of Medicaid spending in the state.

Given Kasich’s record, this is an important inclusion. In 2013 Governor Kasich expanded Medicaid over strong objections within the legislature. For that newly eligible population, the federal government is currently paying 100 percent of the costs. Excluding the effect of Medicaid expansion from spending data minimizes the cost of Kasich’s policies.

According to the state’s Legislative Service Commission, Medicaid expansion is over budget. Eighteen months into the program, costs are 63 percent over original projections. The problem for Ohio is that the 100 percent federal reimbursement will not continue. Starting in 2017, Ohio will need to contribute to the costs; by 2020, Ohio taxpayers will bear 10 percent of them.

Kasich’s team prefers the third type of spending data, known as all-funds data, as they told the Columbus Dispatch. This measure includes all Ohio state spending and all of the federal contributions to Ohio’s budget, such as federally funded education and transportation spending. From fiscal year 2012 to fiscal year 2014, the last year for which data are available, spending increased by 6 percent.

By using the all-funds number, Kasich is trying to use federal spending to mask the rapid increase in general fund spending. Federal spending other than Medicaid is not increasing in Ohio that quickly. Kasich has little control over federal spending, but he is using it to hide how much Ohio’s state spending has grown during his tenure. Additionally, Kasich is campaigning on a promise to cut federal spending and balance the federal budget. Under those promises, federal aid to states would decrease; if that were the case today, Kasich’s record would look worse.

Outside of our dispute over how to measure state spending, Kasich criticized our figures on the growth of Ohio’s state workforce. The difference between our numbers and his comes down to when the measurements occurred. Our report card was released last October and used the most recent data then available. Since publication, Ohio has decreased the number of state employees.

This graph from the St. Louis Federal Reserve shows the initial growth and the subsequent decrease.

Kasich dismissed Cato’s research yesterday, suggesting that we overestimated spending growth during his tenure. Our data are correct, and so are his; Kasich simply picks the dataset that sheds the best light on him. Ohio spending has increased quickly when you look at the general fund.

The Wall Street Journal today discusses how the growth in federal subsidies for college has contributed to the growth in college costs for students. Cato scholars have been arguing for years that rising grants and loans are not so much helping students as causing bloat in college administration costs, including wages, benefits, and excess building construction.

It is a similar story in other policy areas. Federal subsidies cause unintended effects that undermine the stated purpose of interventions, and often end up lining the pockets of people not targeted. Farm subsidy advocates want you to believe that struggling farmers are aided by billions of dollars in annual subsidies, but the real beneficiaries are mainly wealthy landowners. Housing subsidies are supposed to reduce housing costs for people with low incomes, but, to an extent, programs such as Section 8 and the Low Income Housing Tax Credit fatten the wallets of landlords and developers.

Federal taxes and regulations often miss their targets as well because of unintended consequences. Corporate tax hikes, for example, mainly reduce worker wages, not corporate profits as targeted, at least in the long run. Minimum wage laws do not help the lowest-income, least-skilled workers; they harm them. And federal regulations, in general, often serve to protect big firms from competition, not make the marketplace more fair or efficient.

My new study, Why the Federal Government Fails, discusses the unintended consequences of federal intervention. In my experience, federal policymakers focus far too much on what programs are supposed to do in their idealistic dreams, and not enough on the actual effects in the real world.

As my colleague Jeff Milyo wrote recently, the national sport isn’t baseball; it’s politics. With Americans across the nation loyally cheering on either Team Red or Team Blue (or, for a growing few, Team Purple), the discussion around key political events can seem awfully shallow. That’s where the Cato Institute comes in.

Throughout the 2016 campaign season, Cato scholars will be injecting insightful commentary and hard-hitting policy analysis into the national conversation using the Twitter hashtag #Cato2016.

We’ll be off to a running start with not one, not two, but three major nationally televised events this week.

Tonight at 7 p.m. EDT, the Voters First Forum will be held at the Dana Center at Saint Anselm College and broadcast nationwide on C-SPAN. Featuring Ben Carson, Chris Christie, Ted Cruz, Carly Fiorina, Lindsey Graham, Bobby Jindal, John Kasich, Jeb Bush, Rick Perry, Scott Walker, Rand Paul, Marco Rubio, Rick Santorum, and George Pataki, the forum will be the first time a majority of the GOP presidential primary contenders share one stage. Tune in on Twitter for commentary from Emily Ekins, Jonathan Blanks, Adam Bates, and more. You can find a full list of participating scholars and follow their accounts here.

Then, on Thursday, August 6th, Fox News will host two nationally televised debates featuring candidates for the GOP nomination in the 2016 presidential election. The first debate, at 5 p.m. EDT, will be an opportunity to hear from some of the lesser-known contenders, while the second, at 9 p.m. EDT, will feature the candidates who place in the top 10 of an average of the five most recent national polls, as recognized by Fox News, leading up to the debate. Tune in on Twitter for commentary from Emma Ashford, Alex Nowrasteh, Patrick Eddington, Michael Cannon, Jason Bedrick, and more. You can find a full list of scholars participating in the 5 p.m. and 9 p.m. debates via the @CatoEvents Twitter account.

Tuning in to the debates (or simply wondering how they might impact the policy debate)? Join the conversation on Twitter with #Cato2016.

Mark Iannicelli has been charged with seven counts of jury tampering. He did not pressure jurors in a case to vote one way or the other. All he did was set up a booth near the courthouse and distribute pamphlets that contained information about jury nullification, the idea that jurors should be able to vote according to their conscience. Prosecutors were so outraged by this that they want Mr. Iannicelli imprisoned. Free speech is nice, but they apparently think the supreme law does not apply as you approach the, er, courthouse. Hmm.

Are the prosecutors aware that judges in other jurisdictions have dismissed charges in such circumstances?  If so, this could be just a thuggish attempt to intimidate people from exercising their right to talk about jury nullification.

To learn more about this subject, check out the Cato Institute book Jury Nullification by Clay Conrad. But Denver residents had best be careful about where they take the book, and about talking about it above a whisper.

This week marks the first anniversary of our latest war in the Middle East, but after some 5,000 airstrikes in two countries, and with 3,500 U.S. troops on the ground, we’ve yet to have an up-or-down vote in Congress on authorization for the use of military force against ISIS.

We’re recognizing—“celebrating” isn’t the right word—that unhappy anniversary at Cato with a talk by Senator Tim Kaine (D-VA), who holds the unfashionable view that Congress ought to vote on the wars we fight, and has been waging a (sometimes lonely) battle to get his colleagues to live up to their most important constitutional responsibility. The event runs from 9:00-10:00 AM on Thursday, August 6, so you can hear about the erosion of congressional war powers and grab your morning coffee without getting to work too late; RSVP here.  

President Obama announced the first wave of airstrikes in Iraq on August 7, 2014, and expanded the campaign against ISIS to Syrian territory in September. But it took him six months to send Congress a draft Authorization for the Use of Military Force—along with a message insisting that “existing statutes provide me with the authority I need” to wage war anyway.  Since then, as Senator Kaine recently noted, “Congress has said virtually nothing.” Recent headlines make that all too clear: “Congress avoids war debate as ISIL advances” (Politico, 5/28); “Islamic State War Authorization Goes Nowhere, Again” (Bloomberg, 6/9); “House kills measure to force debate on military force against ISIS” (The Hill, 6/11)…and so on. 

In the debate over the 2016 National Defense Authorization Act last month, Senator Kaine noted that, in the bill, Congress addresses military minutiae in “excruciating detail,” but, at the same time, “we don’t want to vote on whether the nation should be at war.” When Kaine cosponsored (with Senators Jeff Flake (R-AZ) and Joe Manchin (D-WV)) an amendment to the NDAA “expressing the sense of the Senate that we should have an authorization debate about whether we should be at war with ISIL,” it was ruled out of order: “so barracks mold, yes; vehicle rust, yes; the athletic programs at West Point, yes;” he sums up, but “whether we should be at war, nongermane to the Defense authorization act. Interestingly, we even took a vote on the floor of the Senate in the NDAA about whether we should arm the Kurds in a war that Congress has not authorized that we could debate and vote on; but whether we should be at war we have not debated and voted upon.”

The president’s claim that he already has all the authority he needs to wage war with ISIS is, as Senator Kaine put it in an earlier speech, “ridiculous.” Its principal basis is the AUMF that Congress passed three days after the 9/11 attacks, which was intended to be used against those who “planned, authorized, committed or aided” the September 11 attacks or “harbored” those who did. Its main targets were, obviously, Al Qaeda and the Taliban, yet now, nearly 14 years later, the administration insists it serves as legal justification for a war of at least three years, in at least two countries, against a group that is not only not a “cobelligerent” with Al Qaeda but is engaged in open warfare against it. Building on the Bush administration’s expansive interpretation of the 2001 authorization, the Obama administration has turned the 9/11 AUMF into an enabling statute for an open-ended, globe-spanning war. “This is unacceptable,” Senator Kaine argues, “and we should be having a debate to significantly narrow that authorization” as well.

The decision to go to war is among the gravest choices a constitutional democracy can make. The Framers erected firebreaks against hasty action, designed to force deliberation and consensus before the resort to deadly force. As James Wilson put it to the Pennsylvania ratifying convention, “this system will not hurry us into war; it is calculated to guard against it. It will not be in the power of a single man, or a single body of men, to involve us in such distress; for the important power in declaring war is vested in the legislature at large.” Join us Thursday as we explore how Congress can take that power back.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary. 

This week, the royal families of Clinton and Bush offered up their 2016 campaign insights on climate change.  People have been very interested in what they would say because, as Secretary of State, Clinton gave hints that she was even more aggressive on the issue than her boss, and Bush is the son of GHW Bush, who got us into this mess in the first place by going to Rio in 1992 and signing off on the Climate Treaty adopted there.*

Hillary Clinton unveiled her “climate plan” first. As feared, it’s a step up from Obama’s, with an impossibly large target for electricity production from renewable energy. While her fans were exuberant, noticeably absent from her plan were her thoughts on the Keystone XL pipeline and a carbon tax.

Manhattan Institute scholar Oren Cass (whose take on the carbon tax we’ve featured previously) was, overall, less than impressed, calling Hillary’s climate plan a “fake plan” in that it really would have no impact on the climate. Cass identifies what Hillary’s “real” plan is: pushing for a $100+ billion annual international “Green Climate Fund” (largely populated with U.S. dollars) to be available to developing countries to fight and prepare for climate change.

Here’s Cass’s take:

Hillary Clinton has a real climate change plan and a fake climate change plan. She released the fake plan earlier this week to predictably rapturous media applause for its “far-reaching” and “comprehensive” agenda.

…The plan is most obviously fake because it is not really a climate plan at all. Clinton offers no estimated reductions in carbon dioxide emissions or future temperatures, probably because her plan cannot achieve any meaningful ones. Her ultimate goal to generate 33 percent of U.S. electricity from renewable sources by 2027 would reduce global emissions by less than 2 percent annually, even if every new kilowatt-hour of renewable power managed to replace coal-fired power. That is only a [tiny-eds] fraction of the increase expected from China during the same period.

Instead of claiming any climate success, Clinton’s campaign material emphasizes health benefits from reducing air pollutants (not carbon dioxide). It promotes job creation (though job losses would be at least as large). And it promises to “make the United States the world’s clean energy superpower,” whatever that means.

The plan is most importantly fake because it obscures an actual climate plan that Clinton has no interest in discussing with voters. The real plan, simply put, is to pay for other countries to reduce their emissions through an unprecedented transfer of wealth from the developed world to the developing world. This plan emerged from the international climate negotiations in Copenhagen in 2009, at which then-Secretary Clinton pledged the United States would help create a Green Climate Fund of at least $100 billion in annual aid – a commitment comparable in scale to all existing development aid from OECD countries.

 Be sure to check out the whole thing, in which Cass concludes:

The silly gap in Clinton’s climate plan is the continuing no-comment on the Keystone XL pipeline. The surprising one is the absence of a price on carbon. But the dangerous one is the omission of what she actually wants to do.

Clearly, Hillary is more interested in influencing public opinion than the actual climate.

Jeb Bush then offered up his thoughts about climate change. In an interview with Bloomberg BNA, Bush said, among other things, that “the climate is changing” and that “human activity has contributed to it,” but that “we should not say the end is near.”

Sounds like a solid take!

Bush went on to offer his opinions on various aspects of energy regulation currently aimed at climate change. Keystone XL pipeline? “Yes.” Renewable fuel standard? “2022 is the law and is probably the good break point.” EPA’s Clean Power Plan? “[I]rresponsible and ineffective.”

You ought to have a look at the complete set of questions and answers; it is a refreshing and logical response to the various aspects of the issue.

For example, here’s his full answer to Bloomberg BNA’s question “Is climate change occurring? If so, does human activity significantly contribute to it?”:

The climate is changing; I don’t think anybody can argue it’s not. Human activity has contributed to it. I think we have a responsibility to adapt to what the possibilities are without destroying our economy, without hollowing out our industrial core.

I think it’s appropriate to recognize this and invest in the proper research to find solutions over the long haul but not be alarmists about it. We should not say the end is near, not deindustrialize the country, not create barriers for higher growth, not just totally obliterate family budgets, which some on the left advocate by saying we should raise the price of energy so high that renewables then become viable.

U.S. emissions of greenhouse gasses are down to the same levels emitted in the mid-1990s, even though we have 50 million more people. A big reason for this success is the energy revolution which was created by American ingenuity—not federal regulations.

This is an encouraging stance from a Republican presidential candidate. And one that we think should come to dominate the issue—from both sides. It serves no one to deny that humans are causing climate change, nor to cry that we’re all going to die. Actions should be appropriate to the magnitude of the issue—in other words, lukewarm.

— 

*Many people advised him not to go. But he did, anyway, probably thinking he would get yelled at if he didn’t, and lose votes in the upcoming Presidential election. How well did that work out for him?

Over the last couple of decades, reserve requirements all but vanished as a means of bank regulation and monetary control. But now a new variation on reserve requirements is being introduced through the liquidity regulations of the Basel Accords.

Canada, the UK, Sweden, Australia, New Zealand, and Hong Kong have all abolished traditional reserve requirements. In many other countries, reserve requirements have become a dead letter. In the U.S., for instance, the Fed under Alan Greenspan reduced all reserve requirements to zero except those on transactions deposits (checking accounts), while permitting banks to evade reserve requirements on transactions balances by using sophisticated computer software to regularly “sweep” those balances into money market deposit accounts, which carry no reserve requirement. In 2011 Congress went a step further by allowing the Fed to eliminate all reserve requirements if it so desired. The Eurozone, for its part, began with a reserve requirement of only 2 percent, which was reduced to 1 percent in January 2012.
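A stylized example of how sweeping hollowed out the requirement (hypothetical balances; the 10 percent figure approximates the marginal U.S. requirement on transaction deposits):

```python
# Stylized illustration of deposit "sweeping" (hypothetical numbers).
# Transaction (checking) deposits carried roughly a 10% marginal reserve
# requirement; money market deposit accounts (MMDAs) carried none.
CHECKING_RATIO = 0.10
MMDA_RATIO = 0.00

def required_reserves(checking: float, mmda: float) -> float:
    """Reserves a bank must hold against the two deposit categories."""
    return checking * CHECKING_RATIO + mmda * MMDA_RATIO

# Without sweeping: $100 million sits in checking accounts overnight.
print(required_reserves(checking=100e6, mmda=0.0))   # $10 million held idle

# With sweeping: software reclassifies $90 million as MMDA balances
# overnight, cutting required reserves by 90%.
print(required_reserves(checking=10e6, mmda=90e6))   # $1 million required
```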

There were good reasons for this deregulatory trend. Economists consider reserve requirements an implicit tax on banks, requiring them to hold non-interest-earning assets, while central banks considered changes in such requirements too blunt an instrument for monetary control. The Fed discovered the latter shortcoming when, in the midst of the Great Depression, having just gained control over member banks’ reserve requirements, it doubled them, contributing to the recession of 1937.

Ostensibly designed to keep banks more liquid, reserve requirements can prevent them from drawing on their liquidity when it is most needed. As Armen A. Alchian and William R. Allen point out in University Economics (1964): “To rely upon a reserve requirement for the meeting of cash-withdrawal demands of banks’ customers is analogous to trying to protect a community from fire by requiring that a large water tank be kept full at all times: the water is useless in case of emergency if it cannot be drawn from the tank.”

As reserve requirements became less fashionable, advocates of more stringent bank regulation resorted instead to risk-based capital requirements, as implemented through the international Basel Accords. More recently the increasingly widespread practice of paying interest on bank reserves has also given central banks an alternative and less burdensome means for inducing banks to hold more reserves.

But in Basel III, agreed upon in 2010-2011, there appeared a new kind of liquidity requirement that mimics reserve requirements in many respects. Known as the “Liquidity Coverage Ratio” or LCR, it requires banks to hold “high quality liquid assets” (HQLA) sufficient to cover potential net cash outflows over 30 days. In September 2014 the Fed, the Comptroller, and the FDIC finalized the rule implementing the Liquidity Coverage Ratio. The rule, which took effect at the beginning of 2015, must be fully complied with by January 2017.

Far from involving a simple ratio, as earlier reserve requirements did, the Liquidity Coverage Ratio is extremely complicated, filling 103 pages in the Federal Register. The rule does not apply to small community banks but instead to banks with more than $250 billion of assets, with a modified rule applying to the holding companies of both banks and savings institutions. The Fed also plans to impose a similar rule on non-bank financial institutions. But because a variant of the rule applies to bank holding companies on a “consolidated basis,” the Liquidity Coverage Ratio already affects most major investment banks, which are owned by bank holding companies.

Unlike traditional reserve requirements, the Liquidity Coverage Ratio does not call for any minimum quantity of cash reserves. Instead, it calls for a minimum quantity of various high quality liquid assets. Weighting bank assets according to their maturity, marketability, and riskiness, the LCR even counts as high quality some forms of corporate debt at half of face value. The LCR also differs in being applied, not just to bank deposits, but to nearly all bank liabilities, including large CDs, derivatives, and off-balance sheet loan commitments, according to their maturity.
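In stylized form, the calculation works something like the following (a minimal sketch; the haircuts and 30-day outflow rates below are illustrative stand-ins for the rule’s far more detailed schedules, and the only weight taken from the text is the half-of-face-value treatment of some corporate debt):

```python
# Simplified sketch of a Liquidity Coverage Ratio calculation.
# All weights below are illustrative, not the rule's actual schedules.
HQLA_WEIGHTS = {
    "reserves_and_treasuries": 1.00,  # highest-quality assets at full value
    "agency_securities": 0.85,        # haircut for second-tier assets
    "corporate_debt": 0.50,           # some corporate debt at half face value
}
OUTFLOW_RATES = {                     # assumed 30-day runoff rates
    "retail_deposits": 0.05,
    "large_cds": 0.40,
    "loan_commitments": 0.10,
}

def lcr(assets: dict, liabilities: dict) -> float:
    hqla = sum(amt * HQLA_WEIGHTS[k] for k, amt in assets.items())
    outflows = sum(amt * OUTFLOW_RATES[k] for k, amt in liabilities.items())
    return hqla / outflows  # the rule requires this ratio to be >= 1.0

assets = {"reserves_and_treasuries": 50, "agency_securities": 30,
          "corporate_debt": 20}                       # $ billions
liabilities = {"retail_deposits": 600, "large_cds": 80,
               "loan_commitments": 150}
print(f"LCR = {lcr(assets, liabilities):.2f}")        # 1.11: just compliant
```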

In short, the Liquidity Coverage Ratio is designed to reduce maturity mismatches for large financial institutions in order to protect against the kind of panics in the repo and asset-backed commercial paper markets that occurred during the financial crisis of 2007-2008. In any case, the rule will still require banks to hold more reserves or short-term Treasury securities than they otherwise might prefer. Since the rule was under discussion by 2010, it could be another reason—along with interest on reserves and capital requirements—why U.S. banks have continued to hold more than 100-percent reserves behind M1 deposits.

Every time there is a financial crisis, the proposal to force banks to hold higher reserve ratios, if not 100-percent reserves, resurfaces. During the Great Depression, this proposal went under the name of the Chicago Plan and even received support from Milton Friedman in his early writings. The proposal was called “narrow banking” during the savings and loan crisis. Since the recent crisis, it has been advocated in one form or another by such economists as Laurence Kotlikoff of Boston University, John Cochrane of the University of Chicago, and Martin Wolf of the Financial Times. All of these proposals hinge on the government paying interest on bank reserves.

The new Liquidity Coverage Ratio in one sense is less restrictive than these proposals but in another is more so. It is less restrictive in that it allows deposits to be covered by liquid securities other than cash equivalents, and in that sense is a bit reminiscent of the discredited real-bills doctrine, which insisted that banks should make only short-term, self-liquidating loans.

But the Liquidity Coverage Ratio is more restrictive than conventional reserve requirements in so far as it applies to a much broader range of bank liabilities. Unlike such requirements, it is striving to prevent banks from engaging in significant maturity transformation, which involves bundling and converting long-term securities into short-term securities. That makes it closest in spirit to Cochrane’s reform proposal, which combines a 100-percent reserve requirement for deposits with a 100-percent capital requirement for all other bank liabilities. Cochrane’s proposal really would eliminate all maturity mismatches; indeed, it would make all banks resemble combinations of safe-deposit businesses on the one hand and mutual funds or, for that matter, Islamic banks, on the other.

Will the Liquidity Coverage Ratio ultimately work? Although the question requires further thought and study, I doubt it. Several monetary economists, considering the rule’s implementation in Europe (here and here), are more optimistic than I am, and a few even think that it will not be restrictive enough. But they may be overlooking the long-term downsides.

As with so many past banking regulations, this one could ultimately end up being non-binding. Banks may find loopholes in the rule, or may innovate around it, and the rule’s very complexity and supposed flexibility is likely to make doing these things easier. On the other hand, when the next financial crisis hits, by hobbling a bank’s discretionary control over its balance sheet, the rule may well exacerbate the crisis. To the extent that the rule is binding, it changes the fundamental nature of banking in a way that may curtail efficient financial intermediation. Whatever happens, it definitely increases the government’s central planning of the allocation of savings. In the final analysis, it is another futile attempt to use prudential regulation to overcome the excessive risk taking resulting from the moral hazard created by deposit insurance and too-big-to-fail.

[Cross-posted from Alt-M.org]

News comes this morning that Beijing has been awarded the 2022 Winter Olympics, beating out Almaty, Kazakhstan. Which touches on a point I made in this morning’s Boston Herald:

Columnist Anne Applebaum predicted a year ago that future Olympics would likely be held only in “authoritarian countries where the voters’ views will not be taken into account” — such as the two bidders for the 2022 Winter Olympics, Beijing and Almaty, Kazakhstan.

Fortunately, Boston is not such a place. The voters’ views can be ignored and dismissed for only so long.

Indeed, Boston should be celebrating more than Beijing this week. A small band of opponents of Boston’s bid for the 2024 Summer Olympics beat the city’s elite – business leaders, construction companies, university presidents, the mayor and other establishment figures – because they knew what Olympic Games really mean for host cities and nations:

E.M. Swift, who covered the Olympics for Sports Illustrated for more than 30 years, wrote on the Cognoscenti blog a few years ago that Olympic budgets “always soar.”

“Montreal is the poster child for cost overruns, running a whopping 796 percent over budget in 1976, accumulating a deficit that took 30 years to repay. In 1996 the Atlanta Games came in 147 percent over budget. Sydney was 90 percent over its projected budget in 2000. And the Athens Games cost $12.8 billion, 60 percent over what the government projected.”

Bent Flyvbjerg of Oxford University, the world’s leading expert on megaprojects, and his co-author Allison Stewart found that Olympic Games differ from other such large projects in two ways: They always exceed their budgets, and the cost overruns are significantly larger than other megaprojects. Adjusted for inflation, the average cost overrun for an Olympics is 179 percent.
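For clarity, the overrun figures above follow the standard definition, actual cost relative to projected budget; here is a quick illustrative calculation (the Athens baseline budget is back-solved from the quoted numbers, not separately sourced):

```python
# Cost overrun as used in the figures above: (actual - budget) / budget.
def overrun_pct(actual: float, budget: float) -> float:
    return (actual / budget - 1) * 100

# Athens 2004: $12.8 billion actual cost, quoted as 60 percent over
# projection, implying a projected budget of roughly $8 billion.
implied_athens_budget = 12.8e9 / 1.60
print(f"Implied budget: ${implied_athens_budget / 1e9:.0f} billion")  # $8
print(f"Overrun: {overrun_pct(12.8e9, implied_athens_budget):.0f}%")  # 60%
```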

Bostonians, of course, had memories of the Big Dig, a huge and hugely disruptive highway and tunnel project that over the course of 15 years produced a cost overrun of 190 percent.

Read the whole thing.

It isn’t every day that a person can go to his or her job, work, not participate in any criminal activity, and still get a prison sentence. At least, that used to be the case: the overcriminalization of regulatory violations has unfortunately led to a circumstance in which corporate managers now face criminal, not just civil, liability for their business operations’ administrative offenses.

Take Austin and Peter DeCoster, who own and run an Iowa egg-producing company called Quality Egg. The DeCosters pleaded guilty to violating certain provisions of the Food, Drug, and Cosmetic Act because some of the eggs that left their facilities contained Salmonella enteritidis, a bacterium harmful to humans. They were sentenced to 90 days in jail and fined $100,000 for the actions of subordinates, who apparently failed, themselves unknowingly, in their quality-control duties.

In other words, the “crime” that the DeCosters were convicted of didn’t require them to have put eggs with salmonella into interstate commerce, or even to have known (or reasonably been able to foresee) that Quality Egg was putting such eggs into interstate commerce. It didn’t even require the quality-control operator(s) most directly involved in putting the contaminated eggs into interstate commerce to have known that they were contaminated.

Nearly a century of jurisprudence has held that imprisoning corporate officers for the actions of subordinates is constitutionally suspect, given that there’s neither mens rea (a guilty mind) nor even a guilty act—the traditional benchmarks of criminality since the days of Blackstone. Yet there are about 300,000 regulations that can trigger criminal sanctions. These rules are too often ambiguous or arcane, and many lack any requirement of direct participation or knowledge, imposing strict liability on supervisors for the actions (or inactions) of their subordinates.

In United States v. Quality Egg, the district court ruled that courts have previously held that “short jail sentence[s]” for strict-liability crimes are the sort of “relatively small” penalties that don’t violate constitutional due process.  Such a sentence has only been imposed once in the history of American jurisprudence, however, and for a much shorter time on defendants with much more direct management of the underlying bad acts. Additionally, prison is not the sort of “relatively small” penalty—like a fine or probation—that the Supreme Court has allowed for offenses that lack a guilty mind requirement.

Joining the National Association of Manufacturers, Cato points out in an amicus brief supporting the DeCosters’ appeal that this case presents an opportunity for the U.S. Court of Appeals for the Eighth Circuit to join its sister court, the Eleventh Circuit, in holding that prison sentences constitute a due-process violation when applied to corporate officers being charged under a strict-liability regulatory regime.

This week, the United States and Turkey agreed on a deal to expand cooperation in the fight against ISIS, in part through the creation of an ‘ISIS-free zone’ in Northern Syria. The scope of the agreement is unclear, not least because Turkish officials are hailing it as a ‘safe zone’ and a possible area for refugees, while U.S. officials deny most of these claims. U.S. officials are also explicit that the agreement will not include a no-fly zone, long a demand of U.S. allies in the region.

But what’s not in doubt is that the United States and Turkey plan to use airstrikes to clear ISIS fighters from a 68-mile zone near the Turkish border. The zone would then be run by moderate Syrian rebels, although exactly who this would include remains undefined.

Over at the Guardian today, I have a piece talking about the many problems with this plan, in particular the fact that it substantially increases the likelihood of escalation and mission creep in Syria:

“The ambiguity around the ‘Isis-free zone’ creates a clear risk of escalation. It’s unclear, for example, whether groups engaged in fighting the regime directly will be allowed to enter the zone and train there, or only those US-trained and equipped rebels focused on Isis. US officials have been keen to note that Assad’s forces have thus far yielded to American airstrikes elsewhere in Syria – choosing not to use their air defense system and avoiding areas the US is targeting - but that is no guarantee that they would refrain from attacking opposition groups sheltering inside a safe zone.”

The plan is just another step in the current U.S. approach to Syria, which has been haphazard and ill thought out. The United States is engaged in fighting ISIS while most fighters on the ground want to fight the Assad regime, a key reason for the abysmal recruitment record of the U.S. military’s new train-and-equip programs in Syria. Increased U.S. involvement in Syria risks entangling the United States in another costly, open-ended civil war.

Renewed diplomatic efforts to find a settlement are the only way to effectively address the Syrian crisis. A negotiated settlement that sees Assad removed from power, while allowing some of his followers to participate in a unified Syrian government, would allow fighters inside the country to focus on fighting ISIS while ensuring that Syria’s minorities are not entirely disenfranchised.

A successful diplomatic settlement will be difficult to achieve. Negotiations would by necessity involve other unpleasant states, including Assad’s Iranian and Russian patrons. But there have been recent indications that Moscow may be more willing to talk, and the ties forged during the U.S.-Iranian nuclear talks could prove valuable. The United Nations is once again trying to restart talks, an initiative the United States should support wholeheartedly. Nonetheless, diplomacy is infinitely better than the slippery slope to military intervention offered by this week’s agreement with Turkey.

You can find the whole article at the Guardian here. For more thoughts on how a U.S. diplomatic strategy for Syria might work, check out this podcast.

There is something fishy about the Cecil the lion story. Don’t get me wrong: I find trophy hunting nauseating. Still, why on earth would Walter Palmer pay $50,000 to kill a lion? Per capita GDP in Zimbabwe is $936 per year (2014 dollars). If Palmer wanted to do something illegal, he could have killed a lion for a fraction of the price. (I assume that any lion would do. Palmer happened to get “unlucky” and kill the most famous lion in Zimbabwe.)

Goodness knows that magnificent wild animals get slaughtered throughout Zimbabwe – for food, skin and horns – on a daily basis and for free. The culprits include hungry locals, corrupt parks officials, members of the military and government officials. It is very likely that Palmer believed (or wanted to believe) that he was buying a legal kill and outsourced the details (permits, etc.) to the locals. That does not make Palmer innocent. He should have known better than to go on a safari in a failed state with no property rights and no rule of law. That said, the story should be understood in its proper context: it is not individual hunters but poverty and anarchy that are destroying Zimbabwe’s wildlife.

For more on this, see my article in the Financial Times here [$].

Last year Narendra Modi won an unusually strong majority in India’s parliamentary election. Modi subsequently visited the U.S. and was warmly welcomed by both the Obama administration and Indian-Americans.

Although ethnic Indians circled the globe as entrepreneurs and traders, the Delhi government turned dirigiste economics into a state religion. Mind-numbing bureaucracies, rules, and inefficiencies were legion.

Eventually modest reform came, but even half-hearted half-steps generated overwhelming political opposition. Last May the Hindu nationalist Bharatiya Janata Party, led by Modi, handed the venerable Congress Party its greatest defeat ever. He seemed poised to transform his nation economically.

As the anniversary of that visit approaches, the Modi dream is fading. He simply may not believe in a liberal free market.

Moreover, few reforms of significance have been implemented. The failures overshadow the Modi government’s successes and highlight its lost opportunities. Critics cite continuing outsize budget deficits and state direction of bank lending.

Former privatization minister Arun Shourie observed last December: “when all is said and done, more is said than done.” Unfortunately, Modi has missed the “honeymoon” period during which his political capital was at its greatest. Time is slipping away.

Indeed, Indian politics quickly began shifting back to business as usual. Modi has been forced to fend off charges of corruption and other misbehavior.

None of this is unusual by Indian standards, but voters are getting fed up. Disappointed Delhi voters gave a landslide victory to a new anti-corruption party in February.

Religious violence also is on the rise, largely instigated by Hindu extremists. While serving as Gujarat state’s chief minister, Modi was implicated in the 2002 riots, which killed more than 1,200 people, mostly Muslims. Since his election, sectarian attacks are up, on Christians as well as Muslims.

Modi has not encouraged the rising violence, but his government has catered to Hindu nationalist sentiments. Only after an assault on a Christian school—the vast majority of whose students and teachers are Hindus—did he promise that his government would give “equal respect to all religions.”

Sectarian violence obviously harms innocent Indians. It also provides foreign investors another reason to go elsewhere.

Despite his disappointing economic record so far, Modi still has an opportunity to liberalize India’s economy. In upcoming years his party will take control of the appointive upper house, which has impeded some of his initiatives.

Argued Sadanand Dhume of the American Enterprise Institute, “in Gujarat, too, he started slowly, but ended up presiding over a long boom.” However, it is not enough for his government to tinker with nonessential reforms.

On Dhume’s to-do list are tax reform, privatization, subsidy cuts, and electricity restructuring. India also should limit government spending, liberalize its labor rules, simplify the visa process, modernize bankruptcy procedures, streamline legal processes, and strengthen private property rights.

As I point out on Forbes online: “India desperately needs strong growth for years, even decades, to move to the first rank of nations, as China has done. India has extraordinary potential. But for decades the Indian government has squandered its future.”

Despite the high hopes generated after the BJP’s dramatic victory, nothing has really changed. While growth has picked up in India, that improvement is not sustainable absent far more fundamental and comprehensive reform.

Without sustainable growth, India will not follow China’s example to build a competitive manufacturing sector, generate broad-based income growth, and create a new great power capable of influencing global affairs. Such reforms will not be easy, but making tough decisions presumably is why the Indian people elevated Modi.

Some people predict the 21st century will be the Chinese Century. It is more likely to be the Asian Century, at least if Narendra Modi takes advantage of his unique opportunity. Leading India into a better, more prosperous future obviously would benefit India and the Indian people. It also would benefit the rest of the world.

Today the Hamilton County, Ohio prosecutor’s office released body camera footage showing University of Cincinnati police officer Ray Tensing shoot and kill 43-year-old Samuel DuBose during a routine traffic stop on July 19th. Tensing will face murder and voluntary manslaughter charges. Speaking about the killing, Hamilton County prosecutor Joe Deters used strong and condemning language, calling the killing “senseless” and “asinine.” He also said that the body camera footage of the killing was “invaluable” and that without it, he would probably have believed Tensing’s false account of the incident.

DuBose’s death demonstrates once again that body cameras are not a police misconduct panacea. Tensing, who knew his body camera was on, shot an unarmed man in the head and then lied about being dragged down the street. Nonetheless, the tragic incident does provide an example of how useful body camera footage can be to officials investigating allegations of police misconduct.

Ahead of the video’s release, Cincinnati Police Chief Jeffrey Blackwell said that it “is not good.” If convicted, Tensing faces life in prison.

I’ve seen many police body camera videos while researching and writing about the technology, and the video of DuBose’s death is certainly among the most disturbing that I have seen.

[The body camera footage, which contains graphic violence, is embedded in the original post.]

Technology that highlights incidents of police misconduct ought to be welcomed by advocates of accountability and transparency in law enforcement. As Deters himself said in today’s press conference, the body camera led to Tensing’s murder indictment. 

But in order for police misconduct to be adequately addressed there need to be significant reforms of police practices and training, specifically related to the use of force. Indeed, Deters said in the press conference today that Tensing should never have been a police officer. A man who quickly resorts to shooting an unthreatening man in the head during a stop prompted by a missing license plate should not be given a gun and a badge. Yet, if it weren’t for body camera footage, Tensing would still be employed as a University of Cincinnati police officer rather than being behind bars.

The use of body cameras does raise a host of serious privacy concerns that should not be taken lightly. However, as DuBose’s killing has shown, the cameras can be instrumental in investigating police misconduct and getting dangerous police officers off the streets.

I hope I’m wrong in seeing this as racism returning to the mainstream. Indeed, I hope that the long, agonizingly slow erosion of racial fixations from our society will continue. But I found it interesting to see a Washington Post blog post explaining a recently minted epithet—“cuckservative”—chiefly with reference to the president of a “white nationalist” organization.

Apparently, we have such things in the United States, credible enough to get online ink from a major newspaper. I’m not against reporter Dave Weigel’s use of the source. I take it as confirmation that some of our ugliest politicians have even uglier supporters.

I don’t think it’s likely, but one can imagine a situation where these currents join a worsening economic situation to sow public distemper that gives actual political power to racists. Were some growing minority of political leaders to gain by advocating for ethnic or racial policies, do not count on the “good ones” standing against them. Public choice economics teaches that politicians will prioritize election over justice, morality, or any other high-minded concept.

It is poor civic hygiene to install technologies that could someday facilitate a police state. That includes a national ID system. I’ve had little success, frankly, driving public awareness that the U.S. national ID program, REAL ID, includes tracking of race and ethnicity that could be used to single out minorities. But that’s yet another reason to oppose it.

If the future sees no U.S. national ID materialize, and no political currents to exploit such a system for base injustice and tragedy, some may credit the favorable winds of history. Others may credit the Cato Institute and its fans. We’re working to prevent power from accumulating where it can be used for evil.

Speaking of myths about U.S. banking, another that tops my list is the myth that the Federal Reserve, or some sort of central-bank-type arrangement, was the best conceivable solution to the ills of the pre-1914 U.S. monetary system.

I encountered that myth most recently in reading America’s Bank, Roger Lowenstein’s forthcoming book on the Fed’s origins, which I’m reviewing for Barron’s. Lowenstein’s book is well-researched and entertainingly written. But it also suffers from an all-too-common drawback: Lowenstein takes for granted that those who favored having a U.S. central bank of some kind (whatever they called it and however they chose to disguise it) were well-informed and right-thinking, whereas those who didn’t were either ignorant hicks or pawns of special interests. He has, in other words, little patience with history’s losers, whether they be people or ideas. Like other “Whig” histories, his history of the Fed treats the past as an “inexorable march of progress towards enlightenment.”

Don’t get me wrong: I’m no Tory, and I certainly don’t think that the pre-Fed U.S. monetary system was fine and dandy. I know about the panics of 1884, 1893, and 1907. I know how specie tended to pile up in New York after every harvest season, and that by the time it got there not one but three banks were likely to reckon it, or make claims to it, as part of their reserves. I also know how, when the harvest season returned, all those banks were likely to try to get their hands on the same gold, and how this made for tight money, if it didn’t spark a full-scale panic. Finally, I know that one way to avoid such panics, on paper at least, was to establish a central bank, or “federal” equivalent, capable of supplying banks with emergency cash when they needed it.

Yet I still think that the Fed was a lousy idea. How come? My reason isn’t simply that the Fed turned out to be quite incapable of preventing financial crises, though that’s certainly true. It’s that there was a much better way of fixing the pre-Fed system. That alternative was perfectly obvious to many who struggled to reform the U.S. system in the years prior to the Fed’s establishment. It could hardly have been otherwise, since it was then almost literally staring them in the face. But it should be equally obvious even today to anyone who delves into the underlying causes of the infirmities of the pre-Fed National Currency system.

What were these causes? Essentially there were two. First, ever since the Civil War, state banks had been prohibited from issuing circulating notes, while National banks could issue notes only to the extent that they backed them with specified U.S. government bonds. Those bonds were getting harder to come by (by the 1890s National banks had already acquired almost all of them). What’s more, it didn’t pay for National banks to acquire the costly securities just to meet harvest-time currency needs, for that would mean incurring very high opportunity costs to keep stacks of notes sitting idle in their vaults for most of the year.

The other, notorious cause of trouble was the fact that most U.S. banks, whether state or National, didn’t have branch networks of any kind. Instead, ours was for the most part a system of “unit” banks. This was so mainly owing to laws that prohibited them from branching, even within their own states. But even had branching been legal, the restrictions on banks’ ability to issue notes would have made it less economical by substantially raising the cost of equipping bank branches with inventories of till money.[1]

That unit banking limited U.S. banks’ ability to diversify their assets and liabilities, and thereby made the U.S. banking system much more fragile than it might have been, is (or ought to be) well-appreciated. Unit banking also encouraged banks to deposit their idle reserves with “reserve city” correspondents, who in turn sent their own surplus cash to New York. The National Banking Acts actually encouraged this practice by letting correspondent balances satisfy a portion of banks’ legal reserve requirements. The set-up kept money gainfully employed when it wasn’t needed in the countryside; but it also made for a mad scramble when cash was needed back home.

Far less well appreciated is how unit banking also contributed to the notorious “inelasticity” of the pre-Fed U.S. currency stock. Before I explain why, I’d better first lay another myth to rest: the myth that complaints concerning the “inelasticity” of the pre-Fed currency stock were a hobbyhorse of persons who subscribed to the “real-bills” doctrine — that is, the view that the currency supply could and should wax and wane in concert with the total quantity of “real bills,” or short-term commercial paper, presented to banks for discounting.

It’s true that many persons who complained about the “inelastic” nature of the U.S. currency system, including many who were instrumental in designing (and later in managing) the Federal Reserve System, also subscribed to the real bills doctrine, and that that doctrine is mostly baloney. But that doesn’t mean that the alleged inelasticity of the U.S. currency stock was a mere bugbear. The real demand for currency really did vary considerably, especially by rising a lot — sometimes by as much as 50 percent — during the harvest season, when migrant workers had to be paid to “move” the crops. And U.S. banks really were unprepared to meet such increases in demand by issuing more notes, even if doing so was only a matter of swapping note liabilities for deposit liabilities, owing to the legal restrictions to which I’ve drawn attention. In short, you don’t have to have drunk the real-bills Kool-Aid to agree that the pre-Fed U.S. currency system wasn’t capable of meeting the “needs of trade.”
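To make that note-for-deposit swap concrete, here is a stylized sketch of a unit bank’s balance sheet (the bank and its numbers are hypothetical, invented purely for illustration): paying a withdrawal in the bank’s own notes leaves its gold reserves untouched, whereas paying in gold drains them.

    # Stylized (hypothetical) unit-bank balance sheet, illustrating the point
    # above: paying a harvest-season withdrawal in the bank's own notes merely
    # swaps one liability (deposits) for another (notes), leaving gold
    # reserves untouched; paying in gold drains reserves instead.

    bank = {
        "gold_reserves": 20, "loans": 80,          # assets
        "deposits": 90, "notes_outstanding": 10,   # liabilities
    }

    def withdraw(bank, amount, in_notes):
        """Pay out a deposit withdrawal in the bank's own notes or in gold."""
        bank["deposits"] -= amount
        if in_notes:
            bank["notes_outstanding"] += amount  # pure liability swap
        else:
            bank["gold_reserves"] -= amount      # reserve drain

    withdraw(bank, 15, in_notes=True)
    print(bank)  # gold_reserves still 20: no scramble for cash

Under the legal restrictions described above, however, the note-issuing option was closed off, so seasonal withdrawals had to be met with gold, with the consequences just described.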

How, then, did unit banking contribute to the problem of an inelastic currency stock? It did so by considerably raising the cost banks had to incur to redeem rival banks’ notes, and thereby limiting the extent to which unwanted banknotes made it back to their issuers. In a branch-banking system, note exchange and redemption are mostly a local, and therefore cheap, affair; add a few regional clearinghouses to handle items not settled locally, and you’ve got all that’s needed to see to it that unwanted currency is rapidly removed from circulation.

In the U.S., on the other hand, banks had to bear substantial costs of sorting and shipping notes to their sources, or to distant clearinghouses, which costs were made all the greater by the sheer number of National banks — tens of thousands, eventually — and resulting lack of economies of scale. These factors would normally have caused National banks to accept the notes of distant rivals at discounts sufficient to cover anticipated redemption costs, as antebellum state banks had been in the habit of doing. The authors of the 1863 and 1864 National Banking Acts were, however, determined to give the nation a “uniform” currency. Consequently they stipulated that every National bank had to accept the notes of all other national banks at par. That got rid of note discounts, sure enough. But it also meant that National banknotes would no longer be actively and systematically redeemed.[2] As I like to say, any fool can fix most any problem — so long as he ignores the others.

If my dog is limping, and I discover that she’s got a pebble wedged between her paw pads, I don’t think of calling for a team of stretcher bearers: I just pull the pebble out. In the same way, any reasonable person, knowing the underlying causes of the infirmities of the pre-Fed U.S. currency system, would first consider removing those causes. And that was precisely what many advocates of currency reform tried to do before any dared to suggest anything like a U.S. central bank. That is, they tried to get bills passed — there must have been at least a dozen of them — calling for some combination of (1) repealing the bond-backing requirement for National banknotes, (2) allowing National banks to branch, and (3) restoring state banks’ right to issue currency. The restrictions on note issue had, after all, been put into effect for the sake of helping the Union government fund the Civil War — a purpose long since obsolete. The restrictions on branching, on the other hand, were widely understood to be another deleterious consequence of the unfortunate decision to model the National Banking Acts after earlier state “free banking” laws.

Might deregulation alone, as was contemplated in such “asset currency” reform proposals (so-called because they would have allowed banks to issue notes backed by general assets, rather than by specific securities), really have given the U.S. a perfectly sound and stable currency and banking system? Yes. How can I be so confident? Because it would have given the U.S. a currency system like Canada’s. And Canada’s system was, in fact, famously sound and famously stable.[3]

“Don’t mention the war!” is what Basil Fawlty tells his staff, out of concern for the sensibilities of his German guests. (Basil himself nevertheless can’t help referring to it again and again.) “Don’t mention Canada!” is what a Whig historian of the Fed must tell himself, assuming he knows what went on there, lest he broach a topic that would muddle up his otherwise tidy epic. For to consider Canada is to realize that there was, in fact, no need at all for the elaborate proposals, hearings, secret meetings, and political wheeling-and-dealing that ultimately gave shape to the Federal Reserve Act, if all that was desired was to equip the United States with a currency system worthy of a nation already on its way to becoming an economic powerhouse. Like Dorothy’s ruby slippers, the solution to the United States’ currency ills had been at hand, or at foot, all along. Legislators had only to repeat to themselves, “There’s no place like Canada,” while taking steps that would tap obstructive legal restrictions out of the banking system.

Of course that didn’t happen, thanks mainly to a combination of banking-industry opposition to branch banking and populist opposition — spearheaded by William Jennings Bryan — to any sort of non-government currency. “Asset currency” was, if you like, “politically impossible.”

So reformers at length turned to the alternative of a central bank. And how was that supposed to work? Though buckets of ink have been spilled for the sake of offering all sorts of elaborate explanations of the “science” behind the Federal Reserve, the essence of that solution, once considered against the backdrop of the “asset currency” alternative, couldn’t have been simpler. It boils down to this: instead of allowing already existing U.S. banks to branch and to issue notes backed by assets other than government bonds, the government would leave the old restrictions in place, while setting up a dozen new banks that would be uniquely exempt from those restrictions. If National banks (or state banks, if they chose to join the new system) wanted currency, but lacked the necessary bonds, they still couldn’t issue more of their own notes no matter what other assets they possessed. But they might now take some of those other assets to the Fed, to exchange for Federal Reserve Notes. The Fed was, in short, a sort of stretcher corps for banks lamed by earlier laws.

To an extent, the more centralized reform resembled an asset currency reform one step removed. But there were two crucial differences. First, by setting the “discount rate” at which they would exchange notes for commercial paper and other assets, the Federal Reserve Banks could either encourage or discourage other banks from acquiring their notes. Second, because member banks could count not just gold and greenbacks but Fed liabilities as reserves, the Fed’s discount rates influenced the overall availability of bank reserves and, hence, of money and credit. These differences, far from having been innocuous, were, as we now realize, portentous.

Still, the Fed did have one incontestable advantage over previous reform proposals: it alone was politically possible. It alone was a winning solution.

But the fact that the Fed won in 1913 doesn’t mean that other, rejected options aren’t worth recalling. Still less does it warrant treating the Fed as sacrosanct. History isn’t finished. Just a few years before the Federal Reserve Act was passed, most people still believed that Andrew Jackson had put paid once and for all to the idea of a U.S. central bank. Today most people still consider the Federal Reserve Act the last word in scientific monetary control. As for what most people will think tomorrow, well, that’s partly up to us, isn’t it?

___________________________
[1] Although they typically appreciate the debilitating consequences of unit banking, many U.S. economists and economic historians appear unaware of the crucial role that freedom of note issue played historically in facilitating branch banking. That banking systems involving relatively few restrictions on banks’ ability to issue banknotes, like those of Scotland before 1845 and Canada until 1935, also had extremely well-developed branch networks, was no coincidence.

[2] On the limited redemption of National banknotes and attempts to address it see Selgin and White, “Monetary Reform and the Redemption of National Bank Notes, 1863-1913.” Business History Review 68 (2) (Summer 1994).

[3] For a very good review of the features and performance of the Canadian system in its heyday, see R.M. Breckenridge, “The Canadian Banking System, 1817-1890,” Publications of the American Economic Association, v. X (1895), pp. 1-476. Not long ago, when I spoke favorably of Canada’s system at a gathering of economic historians, one asked afterwards, rather superciliously, whether I realized how large Canada’s economy had been back around 1913. Apparently my interrogator thought that Canada’s small size made its success irrelevant. I can’t see why. Nor, evidently, could the many persons who proposed and lobbied for various asset currency proposals over the course of a decade or so.

[Cross-posted from Alt-M.org]

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Perhaps no other climatic variable receives more attention in the debate over CO2-induced global warming than temperature. Its forecast change over time in response to rising atmospheric CO2 concentrations is the typical measure by which climate models are compared. It is also the standard by which the climate model projections tend to be judged; right or wrong, the correctness of global warming theory is most often adjudicated by comparing model projections of temperature against real-world measurements. And in such comparisons, it is critical to have a proper baseline of good data; but that is easier acknowledged than accomplished, as multiple problems and potential inaccuracies have been identified in even the best of temperature datasets.

One particular issue in this regard is the urban heat island effect, a phenomenon by which urban structures artificially warm background air temperatures above what they would be in a non-urbanized environment. The urban influence on a given station’s temperature record can be quite profound. In large cities, for example, urban-induced heating can be as great as the 10°C observed in Tokyo, making it all the more difficult to detect and discern a CO2-induced global warming signal in the temperature record, especially since the putative warming of non-urbanized areas of the planet over the past century is believed to be less than 1°C. Yet nearly all long-term temperature records have been obtained from sensors initially located in towns and cities that have experienced significant growth over the past century. It is therefore extremely important that urbanization-induced warming – which can be a full order of magnitude greater than the background trend being sought – be removed from the original temperature records when attempting to accurately assess the true warming (or cooling!) of the natural non-urban environment. A new study by Founda et al. (2015) suggests this may not be so simple or straightforward a task.

Working with temperature records in and around the metropolitan area of Athens, Greece, Founda et al. set out to examine the interdecadal variability of the urban heat island (UHI) effect, since “few studies focus on the temporal variability of UHI intensity over long periods.” Yet, as they note, “knowledge of the temporal variability and trends of UHI intensity is very important in climate change studies, since [the] urban effect has an additive effect on long term air temperature trends.”

To accomplish their objective, the four Greek researchers compared long-term air temperature data from two urban, two suburban, and two rural stations over the period 1970-2004. The UHI was calculated as the difference between the urban and suburban (or rural) stations for monthly, seasonal, and annual means of air temperature (maximum, minimum, and mean).
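The metric itself is simple. Here is a minimal sketch of the annual-mean version of the calculation, using made-up temperature values rather than the paper’s data (only the station labels follow the study):

    # Minimal sketch of the UHI-intensity calculation described above:
    # the urban-minus-rural difference in mean annual temperature.
    # Values are hypothetical placeholders, not data from Founda et al.

    annual_mean_temp_c = {
        # year: (urban NOA, rural TAN) annual means, degrees C (illustrative)
        1970: (17.8, 16.9),
        1990: (18.4, 17.1),
        2004: (19.2, 17.4),
    }

    for year, (t_urban, t_rural) in sorted(annual_mean_temp_c.items()):
        uhi = t_urban - t_rural  # UHI intensity for that year
        print(f"{year}: UHI = {uhi:+.1f} degrees C")

A trend in this difference over the years, rather than its average level, is what signals a growing urban bias.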

Among their several findings, the authors report notable differences in the UHI’s intensity across the seasons and depending on whether it is calculated using maximum, minimum, or mean temperatures. Of significance to the discussion at hand, however, the authors note that “the warming rate of the air temperature in Athens is particularly large during [the] last decades,” such that the “difference of the annual mean air temperature between urban and rural stations exhibited a progressively statistically significant increase over the studied period.” Indeed, as shown in the figure below for (a) the National Observatory of Athens (NOA), in the center of Athens, versus Tanagra (TAN), a rural station approximately 50 km north of the city, and (b) the coastal urban station of Hellinikon (HEL), again versus Tanagra, the anthropogenic influence of urbanization on temperatures at these two urban stations is growing in magnitude with time, such that “the mean values of UHI magnitude [calculated across the entire record] are not quite representative of the more recent period.”


Interdecadal variation and annual trends of the Athens, Greece UHI, calculated between two urban stations and one rural station using mean annual temperatures over the period 1970-2004. The two urban stations were the National Observatory of Athens (NOA), in the center of Athens, and Hellinikon (HEL), located near the urbanized coast. The rural station, Tanagra (TAN), was located approximately 50 km north of the city. Adapted from Founda et al. (2015).

Such findings are of significant relevance in climate change studies, for they clearly indicate that the UHI influence on a temperature record is not static. It changes over time and is likely inducing an ever-increasing warming bias on the temperature record, a bias that will only grow as the world’s population continues to urbanize in the years and decades ahead. Consequently, unless researchers routinely identify and remove this growing UHI influence from the various temperature databases used in global change studies, there will likely be a progressive overestimation of the influence of the radiative effects of rising CO2 on the temperature record.


Reference

Founda, D., et al. 2015. Interdecadal variations and trends of the Urban Heat Island in Athens (Greece) and its response to heat waves. Atmospheric Research, 161-162, 1-13.

One of the themes in my new study, “Why the Federal Government Fails,” is that the federal government has grown too large to manage with any reasonable level of efficiency and competence. Even if politicians worked diligently to advance the general interest, and even if federal bureaucracies focused on delivering quality services, the vast size of the government would still generate failure after failure.

Here’s an astounding fact: the federal government’s 2014 budget of $3.5 trillion was almost 100 times the size of the average state government budget of $36 billion, as shown in the figure. The largest state budget was California’s, at $230 billion, but even that flood of spending was only one fifteenth the magnitude of the federal spending tsunami. Total state spending in 2014 was $1.8 trillion, including both general funds and nongeneral funds.
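Those ratios are easy to verify; here is a quick back-of-the-envelope check, using only the figures quoted above:

    # Back-of-the-envelope check of the comparisons above, using the
    # budget figures quoted in the text (2014, nominal dollars).
    federal = 3.5e12      # federal budget: $3.5 trillion
    avg_state = 36e9      # average state budget: $36 billion
    california = 230e9    # largest state budget (California): $230 billion

    print(f"federal / average state: {federal / avg_state:.0f}x")  # ~97x
    print(f"california / federal: {california / federal:.3f}")     # ~1/15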

The federal government is not just large in size, but also sprawling in scope. In addition to handling core functions such as national defense, the government runs more than 2,300 subsidy and benefit programs, which is double the number in the 1980s. The federal government has many more employees, programs, contractors, and subsidy recipients to keep track of than any state government.

So even if federal officials spent their time diligently scrutinizing programs to prune waste, the job would simply be too large for them. With much of their time spent fundraising, meeting with lobbyists, and giving speeches, members of Congress have little time left to study policy, and they routinely miss all or most of their committee hearings. Congress grabs for itself vast powers over nonfederal activities, but then its members do not have the time to see that their interventions actually work.

A really sad thing about American democracy is that we are squandering a huge built-in advantage that could greatly improve the nation’s governance. I’m talking about federalism, or allowing local and state governments to handle the great majority of governmental activities. Instead, politicians of both parties, and at all levels, have done their best over the past century to crush federalism and centralize power in Washington.

They have done so for no sound policy reason: centralization benefits politicians, not citizens. Consider that Congress has created hundreds of new federal programs since the 1960s, supposedly to help the public. Yet, ironically, polling shows that the public has not grown fonder of the federal government. Quite the opposite: Americans have become more alienated from it, and more disgusted by its corruption and dysfunction.

To learn more about the sad realities of our government, see “Why the Federal Government Fails.”
