
The American presidency has accumulated an unprecedented set of institutional advantages in the conduct of foreign policy. Unlike on the domestic side, where presidents face an activist and troublesome Congress, in foreign affairs the Constitution, the bureaucratic and legal legacies of previous wars, the overreaction to 9/11, and years of assiduous executive-branch privilege-claiming now afford the White House great latitude to act without interference from Congress.

But one of the most tragic reasons for this situation stems from the abject failure of the marketplace of ideas to check the growth in executive power. In theory, the marketplace of ideas consists of free-wheeling debate over the ends and means of foreign policy and critical analysis of the ongoing execution of foreign policy that help the public and its political leaders to distinguish good ideas from poor ones. Philosophers since Immanuel Kant and John Stuart Mill have championed this dynamic. The Founding Fathers enshrined its logic in the First Amendment. Recent scholarship argues that the marketplace of ideas is central to the democratic peace and the ability of democracies to conduct smarter foreign policies than other nations.

In practice, however, today’s marketplace of ideas falls terribly short of this ideal.

The most famous recent example is the run-up to the 2003 Iraq War. The Bush administration used an assortment of half-baked intelligence, exaggerations, and flat-out lies about Iraqi WMD programs to push the public into supporting the war. Shockingly, however, the national debate over the war was muted. Though false and based on flimsy evidence, the Bush administration’s claims received surprisingly little criticism. Reality reasserted itself, of course, as the failure to find any evidence of such programs made it clear that the administration had waged war under false pretenses. Where was the vaunted marketplace of ideas?

In an influential article in International Security written just after the 2003 Iraq War, political scientist Chaim Kaufmann argued that a good deal of the reason for the Bush administration’s ability to sell the war lay in the president’s institutional advantages. As president and Commander in Chief, Bush not only controlled the flow of critical intelligence information, he also enjoyed greater authority in the debate than his critics, allowing him to (falsely) frame the operation as part of the war on terrorism, thus taking advantage of the public’s outrage over the 9/11 attacks.

But Kaufmann (among many others) made another argument about why Bush succeeded: the news media simply failed to do its job. Indeed, after a review of its coverage of the run-up to the war, the New York Times editorial board took the unusual step of acknowledging it had failed in its core mission: “Looking back, we wish we had been more aggressive in re-examining the claims [about Iraqi WMD] as new evidence emerged – or failed to emerge.”

At this point, one might assume that more than a decade of intervention, chaos, and terrorism in the Middle East would have provided the news media with a powerful set of lessons. These lessons might include: scrutinize the basis for intervention; ask hard questions about the plans for what happens after the initial military operation ends; and work to appreciate how U.S. actions affect the attitudes and actions of other people, groups, and nations around the world.

Unfortunately, it does not appear that the news media has learned much, if anything. President Obama has spent eight years talking about withdrawing the United States from the Middle East but has in fact expanded the country’s military footprint. He has done so without much real debate in the mainstream news about the wisdom of his actions. Tellingly, what debate has occurred has focused on erroneous claims that Obama has appeased our enemies by withdrawing too much.

The figure below reveals some telling evidence of this sad state of affairs. Libya, Yemen, Afghanistan, and Syria all represent countries in which the United States is engaged in combat in various ways and at varying levels of intensity. Crucially, each represents a situation with the potential to involve the United States military in a bigger and messier conflict.

Figure 1. Stories per Newspaper per Month Mentioning Obama & U.S. Military & Country Name

 Source: New York Times, Washington Post, Wall Street Journal via Factiva

The figure indicates how often each month the three most important newspapers in the country (the New York Times, Washington Post, and Wall Street Journal) mention President Obama, the U.S. military, and the name of the country in the same story. In each case the newspapers are writing fewer stories per month in 2016 than they did in 2015, and the numbers overall are quite low. The average reader of one of these newspapers would have read just three stories about U.S. military involvement with Yemen, for example, assuming he or she was diligent enough to have caught all three stories over the past four and a half months.

This is no doubt an imperfect measure of the national debate about Obama’s foreign policy; it certainly fails to capture some of the other sources of information and debate. That said, it is difficult to imagine that the marketplace of ideas could be very robust without a good number of such stories in the mainstream news media. Moreover, this is something of a best-case metric for the marketplace since this figure only includes data from three newspapers that cover foreign affairs far more intensively than almost all other American news outlets.

For those who were hopeful that the American marketplace of ideas on foreign policy would improve as 9/11 receded into history, this comes as bad news. It suggests that the challenges to free-wheeling debate do not lie simply in emotional overreactions to terrorism, or in temporary congressional obsequiousness to the White House. Recent concerns about presidential foreign policy narratives aside, it also suggests that the problem isn’t simply political spin. The problem is deeper than that. At root is a failure of the marketplace of ideas in at least one, if not both, of its most fundamental elements. The first possibility is that the news media in its current form – dominated by big corporations and yet weakened economically by the Internet, audience fragmentation, and increasing partisanship – is incapable of doing the job the marketplace of ideas requires of it. The second, even darker, possibility is that the public, the ultimate arbiter of what the news must look like, is simply uninterested in having the debate required to force the White House to be honest and transparent about foreign policy.

The American Association of Motor Vehicle Administrators is the umbrella group for DMV bureaucrats across the nation. It’s a non-profit group, but it does more than earnestly educate government officials and the public about the nuances of driver licensing. Since the 1930s, it has advocated for increased government spending on licensing bureaucracy—and it has advocated against driver’s rights. (It’s all discussed in my book Identity Crisis.) That doesn’t mean AAMVA can’t have fun. Indeed, AAMVA’s social season gets underway next week.

You see, AAMVA is a growing business. A decade ago, when the Capitol Hill staffer with lead responsibility for the REAL ID Act came through AAMVA’s revolving door, I noted the dollar-per-driver fee it collects in the Commercial Driver Licensing system. That $13 million in revenue has surely grown since then.

AAMVA’s revenues will grow far more when it runs the back-end of the REAL ID system, potentially pulling in from three-and-a-quarter cents to five cents per driver in the United States. At 210 million licensed drivers, AAMVA could make upwards of ten million dollars per year.
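A rough back-of-the-envelope check of that range, using only the per-driver fee and driver count cited above (illustrative arithmetic, not AAMVA’s books):

```python
# Back-of-the-envelope check of the revenue range implied by the figures above.
# Both inputs come from the text; this is an illustration, not AAMVA's accounting.
licensed_drivers = 210_000_000        # roughly 210 million licensed U.S. drivers
fee_low, fee_high = 0.0325, 0.05      # three-and-a-quarter to five cents per driver

low, high = licensed_drivers * fee_low, licensed_drivers * fee_high
print(f"Annual revenue range: ${low:,.0f} to ${high:,.0f}")
# -> Annual revenue range: $6,825,000 to $10,500,000
```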

To help that business flow, every year AAMVA holds not one, but five lavish conferences, each of which has its own awards ceremonies aimed at saluting DMV officials and workers. There, AAMVA leadership, vendors, and officials from government agencies both state and federal gather to toast their successes in advancing their cause, including progress in implementing our national ID law, the REAL ID Act.

The first conference and ceremony, for “Region IV” (roughly, everything west of Texas), starts this coming Monday, May 16, 2016, in Portland, Oregon. Several awards will be distributed. Last year’s Region IV winners included Washington State’s Department of Licensing (“Excellence in Government Partnership” for a “Use Tax Valuation Project”) and California’s Department of Motor Vehicles (“USC Freshman Orientation Project”). Stay tuned to find out who will win prizes at this year’s taxpayer-subsidized extravaganza!

AAMVA is doing everything right to cultivate friendship with its membership and to advance the aims of the driver licensing industry. DMV officials, after all, are the ones elected legislators turn to first when they have questions about policy.

It helps AAMVA a lot if DMV officials sing from the industry songbook. Heaven forfend if a DMV official were to tell his or her legislature that implementing REAL ID is unnecessary because the costs are disproportionate to the benefits, that REAL ID allows tracking of race, and that the federal government will always back down if a state declines to implement.

AAMVA regional conferences occur monthly between now and August, when its international conference kicks off in Colonial Williamsburg, “a location that is ideal to bring the entire family”! We will be taking a close look at awardees and top DMV officials who are close to AAMVA. There is a distinct possibility that they represent the interests of AAMVA to the legislature when called upon, rather than giving dispassionate advice about what’s best for taxpayers and the people of their states. That inclination is helped along by AAMVA’s busy national ID social calendar.

The federal district court sitting in D.C. yesterday handed a victory to those who believe in following statutory text, potentially halting the payment of billions of dollars to insurers under the Affordable Care Act’s entitlement “cost-sharing” provisions.

Since January 14, 2014, the Treasury Department has been authorizing payments of reimbursements to insurers providing Obamacare coverage. The problem is that Congress never appropriated the funds for those expenditures, so the transfers constitute yet another executive overreach.

Article I of the Constitution provides quite clearly that “No Money shall be drawn from the Treasury but in Consequence of Appropriations made by Law.” The “power of the purse” resides in Congress, a principle that implements the overall constitutional structure of the separation of powers and that was noted as an important bulwark against tyranny by Alexander Hamilton in the Federalist 78.

It’s a basic rule that bears repeating: the executive branch cannot disburse funds that Congress has not appropriated.

Accordingly, in a win for constitutional governance, Judge Rosemary Collyer held in House of Representatives v. Burwell that the cost-sharing reimbursements authorized under the ACA’s section 1402 must be appropriated by Congress annually; they cannot simply be assumed to have been appropriated.

Judge Collyer gave a biting review of the federal government’s argument in the case: “It is a most curious and convoluted argument whose mother was undoubtedly necessity.” The Department of Health and Human Services claimed that another part of the ACA that is a permanent appropriation—section 1401, which provides tax credits—also somehow included a permanent appropriation for Section 1402. Hearkening to the late Justice Scalia’s lyrical prose, Collyer explained that the government was trying to “squeeze the elephant of Section 1402 reimbursements into the mousehole of Section 1401(d)(1).”

Indeed, this ruling is a bit of a feather in Cato’s cap as well. The legal argument that prevailed here—that the section 1402 funds cannot be disbursed without congressional appropriation—first was discussed publicly at a 2014 Cato policy forum. The lawyer who came up with the idea, David Rivkin of BakerHostetler, refined it in conjunction with his colleague Andrew Grossman, also a Cato adjunct scholar who spoke at the forum. After BakerHostetler had to withdraw from the case due to a conflict, George Washington University law professor Jonathan Turley (who also spoke at the forum) took over the case.

Judge Collyer stayed her injunction against the Treasury Department pending appeal before the U.S. Court of Appeals for the D.C. Circuit. Regardless of how that court decides – as in King v. Burwell, even if there’s a favorable panel, President Obama has stacked the overall deck – the case is likely to end up before the Supreme Court. If Chief Justice Roberts sees this as a technical case (like Hobby Lobby or Zubik/Little Sisters) rather than an existential one (like NFIB v. Sebelius or King), the challengers have a shot. But because Democrat-appointed justices simply will not interpret clear law in a way that hurts Obamacare, this case, like so much else, turns on the presidential election and the nominee who fills the current high-court vacancy.

Whatever happens down that line, Judge Collyer’s succinct ruling makes a powerful statement in favor of constitutional separation of powers as a bulwark for liberty and the rule of law.

“Obama said ‘so sue me.’ The House did, and Obama just lost.” That’s how the Wall Street Journal sub-heads its lead editorial this morning discussing the president’s latest court loss, nailing this most arrogant of presidents who believes he can rule “by pen and phone,” ignoring Congress in the process. With an unmatched record of losses before the Supreme Court, this onetime constitutional law instructor persists in ignoring the Constitution, even when the language is crystal clear.

Article I, section 9, clause 7 of the Constitution provides that “No Money shall be drawn from the Treasury but in Consequence of Appropriations made by Law.” Not much wiggle room there. So what did the president do? He committed billions of dollars from the Treasury without the approval of Congress. In her opinion yesterday Judge Rosemary Collyer noted, the Journal reports, “that Congress had expressly not appropriated money to reimburse health insurers under Section 1402 of the Affordable Care Act. The Administration spent money on those reimbursements anyway.”

George Washington Law’s Jonathan Turley, lead counsel for the House in this case, House v. Burwell, called yesterday’s decision “a resounding victory not just for Congress but for our constitutional system as a whole. We remain a system based on the principle of the separation of powers and the guarantee that no branch or person can govern alone.”

But don’t expect the president to be any more chastened by this decision than by his many previous losses in the courts. Indeed, as he was smarting from yesterday’s loss he was preparing, the Washington Post reports, to release a letter this morning “directing schools across the nation to provide transgender students with access to suitable facilities—including bathrooms and locker rooms—that match their chosen gender identity.” And where did he get his authority for that? Not from Congress. It’s based on his reading of Title IX of the Education Amendments of 1972 that for more than four decades no one else has seen, doubtless because Title IX prohibits discrimination on the basis of sex, not chosen sex. Reading Title IX as we want it to be is of a piece with reading the Constitution that way too. Thus do objectivity and the rule of law fade into the rule of man.

The day before yesterday, The Washington Post ran a piece with the alarming headline, “The middle class is shrinking just about everywhere in America.” Although you wouldn’t know it from the first few paragraphs, a shrinking middle class isn’t necessarily a bad thing. As HumanProgress.org Advisory Board member Mark Perry has pointed out, America’s middle class is disappearing primarily because people are moving into higher income groups, not falling into poverty. Data from the U.S. Census Bureau show that, after adjusting for inflation, the share of households with an annual income of $100,000 or more rose from a mere 8% in 1967 to a quarter in 2014.

According to the Pew Research Center, the share of Americans in the middle class was 11 percentage points smaller in 2015 than in 1971, with 7 points of that shift going to higher income groups and 4 points to lower income groups. The share of Americans in the upper middle and highest income tiers rose from 14% in 1971 to 21% in 2015.

One has to read fairly far into the Washington Post’s coverage before seeing any mention of the fact that a shrinking middle class can mean growing incomes: 

“[In many] places, the shrinking middle class is actually a sign of economic gains, as more people who were once middle class have joined the ranks at the top. [For example, in] the Washington, D.C. metropolitan area, the share of adults living in lower-income households has actually held steady [from 2000 to 2014]. The households disappearing from the middle-class, rather, are reflected in the growing numbers at the top.”

Other cities with a shrinking middle class, a growing upper class and very little change in the lower class include New York, San Francisco and New Orleans. So the next time you hear someone bemoan the “shrinking middle class,” take a closer look at the data and keep in mind that it may actually be a sign of growing prosperity. 

The 2016 Milton Friedman Prize for Advancing Liberty has been awarded to Flemming Rose and will be formally presented at a dinner in New York on May 25. (Tickets still available!)

Flemming Rose is a Danish journalist. In the 1980s and 1990s he was the Moscow correspondent for Danish newspapers. He saw the last years of Soviet communism, with all its poverty, dictatorship, and censorship, and the fall of communism, only to be disappointed again with the advance of Russian authoritarianism. After also spending time in the United States, he became an editor at the Danish newspaper Jyllands-Posten. In 2005 he noticed “a series of disturbing instances of self-censorship” in Europe. In particular, “a Danish children’s writer had trouble finding an illustrator for a book about the life of Muhammad. Three people turned down the job for fear of consequences. The person who finally accepted insisted on anonymity, which in my book is a form of self-censorship.”

Rose decided to take a stand for free speech and the open society. He asked 25 Danish cartoonists “to draw Muhammad as you see him.” Later, he explained that 

We [Danes] have a tradition of satire when dealing with the royal family and other public figures, and that was reflected in the cartoons. The cartoonists treated Islam the same way they treat Christianity, Buddhism, Hinduism and other religions. And by treating Muslims in Denmark as equals they made a point: We are integrating you into the Danish tradition of satire because you are part of our society, not strangers. The cartoons are including, rather than excluding, Muslims.

Rose promised to publish all the cartoons he received. He got 12. They were by turns funny, provocative, insightful, and offensive. One implied that the children’s book author was a publicity seeker.  One mocked the anti-immigration Danish People’s Party. One portrayed the editors of Jyllands-Posten as a bunch of reactionary provocateurs. The most notorious depicted the prophet with a bomb in his turban.

A firestorm erupted. Protests were made. Western embassies were attacked in some Muslim countries. As many as 200 people were killed in violent protests. Rose and the turban cartoonist were the subject of death threats. To this day Rose travels with security. 

Is Rose in fact a provocateur or anti-Muslim? No. When we discovered that his book The Tyranny of Silence had not been published in English, that was the first question we asked. From reading the manuscript, and from talking to contacts in Denmark and Europe, we became confident that Rose was a genuine liberal with a strong anti-authoritarian bent, sharpened during his years as a reporter in the Soviet Union. His book, recently reissued with a new afterword, confirms that. Chapter 10, “A Victimless Crime,” traces the history of religious freedom from the Protestant Reformation to the challenges faced today by Muslims of different religious and political views.

Through it all, and through later attacks such as those at the French magazine Charlie Hebdo, Rose has continued to speak out for free speech and liberal values. He has made clear that his concern has always been – in the Soviet Union, in Europe, in the United States, and in Muslim countries – for individual dignity, freedom of religion, and freedom of thought. But he has insisted that there is no “right not to be offended.” He has become a leading public intellectual in a time when free speech is threatened in many ways by many factions. Today, in Politico Europe, he deplores a proposed law that would deny admission to Denmark to Islamists and criminalize anti-democratic speech. He worries:

What’s at stake in this controversy, and visible in similar developments across Europe, is the success of the Continent’s struggle to manage cultural and religious diversity. Most politicians believe we need to promote a diversity of opinions and beliefs, but manage that diversity with more tightly-controlled speech. That is wrong. A more diverse society needs more free speech, not less. This will be the key challenge for Denmark and Europe in the years ahead. The prospects do not look bright.

The prospects are brighter as long as free speech has defenders such as Flemming Rose.

The first few recipients of the Milton Friedman Prize were economists. Later came a young man who stopped Hugo Chavez’s referendum to create a socialist dictatorship, and a writer who spent 6 years in Iranian jails, followed by economic reformers from China and Poland.

I think the diversity of the recipients reflects the many ways in which liberty must be defended and advanced. People can play a role in the struggle for freedom as scholars, writers, activists, organizers, elected officials, and in many other ways. Some may be surprised that a Prize named for a great scholar, a winner of the Nobel Prize in Economics, might go to a political official, a student activist, or a newspaper editor. But Milton Friedman was not just a world-class scholar. He was also a world-class communicator and someone who worked for liberty on issues ranging from monetary policy to conscription to drug prohibition to school choice. When he discussed the creation of the Prize with Cato president Ed Crane, he said that he didn’t want it to go just to great scholars. The Prize is awarded every other year “to an individual who has made a significant contribution to advance human freedom.” Friedman specifically cited the man who stood in front of the tank in Tiananmen Square as someone who would qualify for the Prize by striking a blow for liberty. Flemming Rose did not shy away from danger when he encountered it. He kept on advocating for a free and open society. Milton Friedman would be proud.

For more than a century, America has been the global leader of the aviation industry. But these days, the government-run parts of the industry are inefficient and falling behind, including airports, security screening, and air traffic control (ATC). International experience shows that these activities can be better run outside of government bureaucracies.

House Transportation Committee chairman Bill Shuster introduced legislation to shake up our moribund ATC system and move it out of the government. Shuster modeled his bill on highly successful Canadian reforms that established ATC as a self-funded nonprofit corporation, Nav Canada. For America, Canadian-style reforms could reduce airspace congestion, improve efficiency, benefit the environment, and save taxpayer money.

Such reforms should appeal to conservatives and Republicans, but there is resistance. Some Republicans are carrying water for the general aviation industry, which opposes the bill for apparently short-sighted financial reasons. And some conservative wonks oppose the bill because it does not reform the labor union structure of the ATC workforce. That objection is also short-sighted.

Economist Diana Furchtgott-Roth opposes Shuster’s legislation over labor issues. Diana is an expert on labor unions, but she is letting the perfect be the enemy of the good here. Our ATC system—run by the Federal Aviation Administration (FAA)—is being held back by government bureaucracy and congressional micromanagement, not so much by unionization.

The reason why the FAA has a long history of cost overruns, mismanaged technology projects, and other failures is the bad incentive structure that exists within all federal agencies. The more complex the task, the more that government bureaucracies fail, and ATC is becoming increasingly complex. Bob Poole has described the FAA’s bureaucracy problems in this study, and I have discussed federal bureaucratic failure more generally in this study.

Marc Scribner at CEI does a fantastic job of countering Diana’s arguments, and Bill Shuster responds to Diana’s labor-related complaints here. Personally, I would repeal “collective bargaining” (monopoly unionism) completely in the public and private sectors, for both economic and freedom reasons. But until that happens, I would take a private-sector unionized company any day over a unionized federal bureaucracy. Diana would apparently prefer the latter, which I find perplexing.

Private ATC managers would be more likely to push back against unreasonable union demands than government managers. And even as a nonprofit entity, a self-funded and unsubsidized ATC company would have a bottom line to meet. In Canada’s case, that structure has driven major improvements in productivity and created perhaps the best ATC system in the world. Nav Canada has more freedom to innovate than the FAA, and it has a strong incentive to do so because foreign sales of its technologies help the company meet its bottom line.

Nav Canada has won three International Air Transport Association “Eagle” Awards as the world’s best ATC provider. The system is handling 50 percent more traffic than before privatization, but with 30 percent fewer employees. And, as Marc Scribner notes, “Since the Canadian reforms 20 years ago, the fees charged to aircraft operators are now more than 30 percent lower than the taxes they replaced.”

And that progress in Canada was achieved with a unionized workforce and collective bargaining. In the long run, I favor freedom of association for ATC workers, but the top priority today is to overhaul the institutional structure of the system and bring in private management.

More on ATC reform here.

More on labor union reform here.

For the Wall Street Journal’s comparison of U.S. and Canadian ATC, see here.

Marc Scribner provides more excellent analysis here.

Bob Poole weighs in on these issues here.

Kudos to Marc and Bob, who both deserve “Eagle” awards for their top-class ATC analyses.

Copepods are small crustaceans that constitute a major group of secondary producers in the planktonic food web, often serving as a key food source for fish. And in the words of Isari et al. (2015), these organisms “have generally been found resilient to ocean acidification levels projected about a century ahead, so that they appear as potential ‘winners’ under the near-future CO2 emission scenarios.” However, many copepod species remain under-represented in ocean acidification studies. Thus, it was the goal of Isari et al. to expand the knowledge base of copepod responses to reduced levels of seawater pH that are predicted to occur over the coming century.

To accomplish this objective, the team of five researchers conducted a short (5-day) experiment in which they subjected adults of two copepod species (the calanoid Acartia grani and the cyclopoid Oithona davisae) to normal (8.18) and reduced (7.77) pH levels in order to assess the impacts of ocean acidification (OA) on copepod vital rates, including feeding, respiration, egg production and egg hatching success. At a pH value of 7.77, the simulated ocean acidification level is considered to be “at the more pessimistic end of the range of atmospheric CO2 projections.” And what did their experiment reveal?

In the words of the authors, they “did not find evidence of OA effects on the reproductive output (egg production, hatching success) of A. grani or O. davisae, consistent with the numerous studies demonstrating generally high resistance of copepod reproductive performance to the OA projected for the end of the century,” citing the works of Zhang et al. (2011), McConville et al. (2013), Vehmaa et al. (2013), Zervoudaki et al. (2014) and Pedersen et al. (2014). Additionally, they found no differences among pH treatments in copepod respiration or feeding activity for either species. As a result, Isari et al. say their study “shows neither energy constraints nor decrease in fitness components for two representative species, of major groups of marine planktonic copepods (i.e. Calanoida and Cyclopoida), incubated in the OA scenario projected for 2100.” Thus, this study adds to the growing body of evidence that copepods will not be harmed by, or may even benefit from, even the worst-case projections of future ocean acidification.

 

References

Isari, S., Zervoudaki, S., Saiz, E., Pelejero, C. and Peters, J. 2015. Copepod vital rates under CO2-induced acidification: a calanoid species and a cyclopoid species under short-term exposures. Journal of Plankton Research 37: 912-922.

McConville, K., Halsband, C., Fileman, E.S., Somerfield, P.J., Findlay, H.S. and Spicer, J.I. 2013. Effects of elevated CO2 on the reproduction of two calanoid copepods. Marine Pollution Bulletin 73: 428-434.

Pedersen, S.A., Håkedal, O.J., Salaberria, I., Tagliati, A., Gustavson, L.M., Jenssen, B.M., Olsen, A.J. and Altin, D. 2014. Multigenerational exposure to ocean acidification during food limitation reveals consequences for copepod scope for growth and vital rates. Environmental Science & Technology 48: 12,275-12,284.

Vehmaa, A., Hogfors, H., Gorokhova, E., Brutemark, A., Holmborn, T. and Engström-Öst, J. 2013. Projected marine climate change: Effects on copepod oxidative status and reproduction. Ecology and Evolution 3: 4548-4557.

Zervoudaki, S., Frangoulis, C., Giannoudi, L. and Krasakopoulou, E. 2014. Effects of low pH and raised temperature on egg production, hatching and metabolic rates of a Mediterranean copepod species (Acartia clausi) under oligotrophic conditions. Mediterranean Marine Science 15: 74-83.

Zhang, D., Li, S., Wang, G. and Guo, D. 2011. Impacts of CO2-driven seawater acidification on survival, egg production rate and hatching success of four marine copepods. Acta Oceanologica Sinica 30: 86-94.

Last year, I mentioned a Canadian court case that could help promote free trade within Canada. Well, a lower court has now ruled for free trade, finding that the Canadian constitution does, in fact, guarantee free trade among the provinces. Here are the basic facts, from the Toronto Globe and Mail:

In 2013, Gérard Comeau was caught in what is likely the lamest sting operation in Canadian police history. Mr. Comeau drove into Quebec, bought 14 cases of beer and three bottles of liquor, and headed home. The Mounties were waiting in ambush. They pulled him over, along with 17 other drivers, and fined him $292.50 under a clause in the New Brunswick Liquor Control Act that obliges New Brunswick residents to buy all their booze, with minor exceptions set out in regulations, from the provincial Liquor Corporation.

And here’s how the court ruled:

Mr. Comeau went to court and challenged the law on the basis of Section 121 of the Constitution: “All articles of the growth, produce or manufacture of any of the provinces shall, from and after the Union, be admitted free into each of the other provinces.”

The judge said Friday that the wording of Section 121 is clear, and that the provincial law violates its intention. The Fathers of Confederation wanted Canada to be one economic union, a mari usque ad mare. That’s why they wrote the clause.

If you are into this sort of thing, I highly recommend reading the judge’s decision, which looks deeply into the historical background of the Canadian constitutional provision at issue.

This is all very good for beer lovers, but also has much broader implications for internal Canadian trade in general. This is from an op-ed by Marni Soupcoff of the Canadian Constitution Foundation, which assisted with the case: 

But most Canadians aren’t interested in precisely how many more six packs will now be flowing through New Brunswick’s borders. They want to know what the Comeau decision means for them. They want to know whether there will be a lasting and far-reaching impact from one brave Maritimer’s constitutional challenge. And so, as the executive director of the Canadian Constitution Foundation (CCF), the organization that supported Comeau’s case, I’d like to answer that.

Canada is rife with protectionist laws and regulations that prevent the free flow of goods from one province to another. These laws affect Canadians’ ability to buy and sell milk, chickens, eggs, cheese and many other things, including some that neither you nor I have ever even thought about. And that is the beauty of this decision. It will open up a national market in everything. Yes, the CCF, Comeau and Comeau’s pro bono defence lawyers Mikael Bernard, Arnold Schwisberg and Ian Blue can all be proud that we have “freed the beer.” But we’ve done more than that — we’ve revived the idea that Canada should have free trade within its borders, which is what the framers of our Constitution intended. That means that the Supreme Court will likely have to revisit the constitutionality of this country’s marketing boards and other internal trade restrictions. In other words, this is a big deal.

As always with lower court decisions, there is the possibility of appeal. I haven’t heard anything definitive yet as to whether that will happen here. But whatever happens down the road, this lower court decision is something to be celebrated.

In a previous blog posting, I suggested that there is no case for capital adequacy regulation in an unregulated banking system.  In this ‘first-best’ environment, a bank’s capital policy would be just another aspect of its business model, comparable to its lending or reserving policies, say.  Banks’ capital adequacy standards would then be determined by competition and banks with inadequate capital would be driven out of business.

Nonetheless, it does not follow that there is no case for capital adequacy regulation in a ‘second-best’ world in which pre-existing state interventions — such as deposit insurance, the lender of last resort and Too-Big-to-Fail — create incentives for banks to take excessive risks.  By excessive risks, I refer to the risks that banks take but would not take if they had to bear the downsides of those risks themselves.

My point is that in this ‘second-best’ world there is a ‘second-best’ case for capital adequacy regulation to offset the incentives toward excessive risk-taking created by deposit insurance and so forth.  This posting examines what form such capital adequacy regulation might take.

At the heart of any system of capital adequacy regulation is a set of minimum required capital ratios, which were traditionally taken to be the ratios of core capital[1] to some measure of bank assets.

Under the international Basel capital regime, the centerpiece capital ratios involve a denominator measure known as Risk-Weighted Assets (RWAs).  The RWA approach gives each asset an arbitrary fixed weight between 0 percent and 100 percent, with OECD government debt given a weight of zero.  The RWA measure itself is then the sum of the individual risk-weighted assets on a bank’s balance sheet.

The incentives created by the RWA approach turned Basel into a game in which the banks loaded up on low risk-weighted assets and most of the risks they took became invisible to the Basel risk measurement system.
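A minimal sketch of how that gaming works, using made-up balance sheets and just two illustrative risk weights (zero for OECD sovereign debt, 100 percent for ordinary loans):

```python
# Illustrative sketch of the RWA calculation: two banks with identical capital and
# identical total assets, but different mixes of zero-weighted and fully weighted assets.
# Balance sheets and weights are made up; they are not any bank's actual figures.

def risk_weighted_assets(book):
    """Sum of asset value times its fixed risk weight."""
    return sum(value * weight for value, weight in book)

def rwa_capital_ratio(capital, book):
    return capital / risk_weighted_assets(book)

def capital_to_assets_ratio(capital, book):
    """The old-fashioned ratio, with no risk-weighting of the denominator."""
    return capital / sum(value for value, _ in book)

capital = 5.0
bank_a = [(80.0, 1.0), (20.0, 0.0)]   # mostly ordinary loans (100% weight)
bank_b = [(20.0, 1.0), (80.0, 0.0)]   # loaded up on zero-weighted sovereign debt

for name, book in (("Bank A", bank_a), ("Bank B", bank_b)):
    print(f"{name}: RWA-based ratio {rwa_capital_ratio(capital, book):.1%}, "
          f"capital-to-assets ratio {capital_to_assets_ratio(capital, book):.1%}")
# Bank A: RWA-based ratio 6.2%, capital-to-assets ratio 5.0%
# Bank B: RWA-based ratio 25.0%, capital-to-assets ratio 5.0%
# Same capital, same total assets: Bank B merely looks four times safer on the RWA measure.
```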

The unreliability of the RWA measure is apparent from the following chart due to Andy Haldane:

Figure 1: Average Risk Weights and Leverage

This chart shows average Basel risk weights and leverage for a sample of international banks over the period 1994–2011.  Over this period, average risk weights show a clear downward trend, falling from just over 70 percent to about 40 percent.  Over the same period, bank leverage or assets divided by capital — a simple measure of bank riskiness — moved in the opposite direction, rising from about 20 to well over 30 at the start of the crisis.  The only difference is that while the latter then reversed itself, the average risk weight continued to fall during the crisis, continuing its earlier trend.  “While the risk traffic lights were flashing bright red for leverage [as the crisis approached], for risk weights they were signaling ever-deeper green,” as Haldane put it: the risk weights were a contrarian indicator for risk, indicating that risk was falling when it was, in fact, increasing sharply.[2]  The implication is that the RWA is a highly unreliable risk measure.[3]

Long before Basel, the preferred capital ratio was core capital to total assets, with no adjustment in the denominator for any risk-weights.  The inverse of this ratio, the bank leverage measure mentioned earlier, was regarded as the best available indicator of bank riskiness: the higher the leverage, the riskier the bank.

These older metrics then went out of fashion.  Over 30 years ago, it became fashionable to base regulatory capital ratios on RWAs because of their supposedly greater ‘risk sensitivity.’  Later the risk models came along, which were believed to provide even greater risk sensitivity.  The old capital/assets ratio was now passé, dismissed as primitive because of its risk insensitivity.  However, as RWAs and risk models have themselves become discredited, this risk insensitivity is no longer the disadvantage it once seemed to be.

On the contrary.

The old capital to assets ratio is making a comeback under a new name, the leverage ratio:[4] what is old is new again.  The introduction of a minimum leverage ratio is one of the key principles of the Basel III international capital regime.  Under this regime, there is to be a minimum required leverage ratio of 3 percent to supplement the various RWA-based capital requirements that are, unfortunately, its centerpieces.

The banking lobby hate the leverage ratio because it is less easy to game than RWA-based or model-based capital rules.  They and their Basel allies then argue that we all know that the RWA measure is flawed, but we shouldn’t throw out the baby with the bathwater.  (What baby? I ask. RWA is a pretend number and it’s as simple as that.)  They then assert that the leverage ratio is also flawed and conclude that we need the RWA to offset the flaws in the leverage ratio.

The flaw they now emphasize is the following: a minimum required leverage ratio would encourage banks to load up on the riskiest assets because the leverage ratio ignores the riskiness of individual assets.  This argument is commonly made and one could give many examples.  To give just one, a Financial Times editorial — ironically entitled “In praise of bank leverage ratios” — published on July 10, 2013 stated flatly:

Leverage ratios …  encourage lenders to load up on the riskiest assets available, which offer higher returns for the same capital.

Hold on right there!  Those who make such claims should think them through: if the banks were to load up on the riskiest assets, we first need to consider who would bear those higher risks.

The FT statement is not true as a general proposition and it is false in the circumstances that matter, i.e., where what is being proposed is a high minimum leverage ratio that would internalize the consequences of bank risk-taking.  And it is false in those circumstances precisely because it would internalize such risk-taking.

Consider the following cases:

In the first, imagine a bank with an infinitesimal capital ratio.  This bank benefits from the upside of its risk-taking but does not bear the downside.  If the risks pay off, it gets the profit; but if it makes a loss, it goes bankrupt and the loss is passed to its creditors.  Because the bank does not bear the downside, it has an incentive to load up on the riskiest assets available in order to maximize its expected profit.  In this case, the FT statement is correct.

In the second case, imagine a bank with a high capital-to-assets ratio.  This bank benefits from the upside of its risk-taking but also bears the downside if it makes a loss.  Because the bank bears the downside, it no longer has an incentive to load up on the riskiest assets.  Instead, it would select a mix of low-risk and high-risk assets that reflected its own risk appetite, i.e., its preferred trade-off between risk and expected return.  In this case, the FT statement is false.
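A stylized way to see the difference between the two cases is to compute shareholders’ expected profit under limited liability. The balance sheets and payoffs below are invented for illustration; the point is the incentive structure, not the magnitudes:

```python
# Stylized expected-payoff comparison of the two cases above. All numbers are
# invented; the point is the incentive structure created by limited liability.

def expected_equity_profit(capital, outcomes):
    """Bank holds 100 of assets funded by `capital` of equity and (100 - capital)
    of debt. `outcomes` is a list of (probability, end-of-period asset value).
    Shareholders have limited liability, so equity value cannot fall below zero."""
    debt = 100.0 - capital
    expected_equity_value = sum(p * max(assets - debt, 0.0) for p, assets in outcomes)
    return expected_equity_value - capital

safe  = [(1.0, 102.0)]                  # certain 2 of profit on the asset book
risky = [(0.5, 112.0), (0.5, 90.0)]     # lower expected return, far higher risk

for capital in (1.0, 20.0):             # thinly vs. well capitalized bank
    print(f"capital {capital:.0f}%: "
          f"safe book {expected_equity_profit(capital, safe):+.1f}, "
          f"risky book {expected_equity_profit(capital, risky):+.1f}")
# capital 1%:  safe book +2.0, risky book +5.5  -> the thin bank prefers the riskier book
# capital 20%: safe book +2.0, risky book +1.0  -> the well-capitalized bank does not
```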

My point is that the impact of a minimum required leverage ratio on bank risk-taking depends on the leverage ratio itself, and that it is only in the case of a very low leverage ratio that banks will load up on the riskiest assets.  However, if a bank is very thinly capitalized then it shouldn’t operate at all.  In a free-banking system, such a bank would lose creditors’ confidence and be run out of business.  Even in the contemporary United States, such a bank would fall foul of the Prompt Corrective Action statutes and the relevant authorities would be required to close it down.

In short, far from encouraging excessive risk-taking as is widely believed, a high minimum leverage ratio would internalize risk-taking incentives and lead to healthy rather than excessive risk-taking.

Then there is the question of how high ‘high’ should be.  There is of course no single magic number, but there is a remarkable degree of expert consensus on the broad order of magnitude involved.  For example, in an important 2010 letter to the Financial Times drafted by Anat Admati, she and 19 other renowned experts suggested a minimum required leverage ratio of at least 15 percent — at least five times greater than under Basel III — and some advocate much higher minima.  Independently, John Allison, Martin Hutchinson, Allan Meltzer and yours truly have also advocated minimum leverage ratios of at least 15 percent.  By a curious coincidence, 15 percent is about the average leverage ratio of U.S. banks at the time the Fed was founded.

There is one further and much under-appreciated benefit from a leverage ratio.  Suppose we had a leverage ratio whose denominator was not total assets or some similar measure.  Suppose instead that its denominator was the total amount at risk: one would take each position, establish the potential maximum loss on that position, and take the denominator to be the sum of these potential losses.  A leverage-ratio capital requirement based on a total-amount-at-risk denominator would give each position a capital requirement that was proportional to its riskiness, where its riskiness would be measured by its potential maximum loss.

Now consider any two positions with the same fair value.  With a total asset denominator, they would attract the same capital requirement, independently of their riskiness.  But now suppose that one position is a conventional bank asset such as a commercial loan, where the most that could be lost is the value of the loan itself.  The other position is a long position in a Credit Default Swap (i.e., a position in which the bank sells credit insurance).  If the reference credit in the CDS should sharply deteriorate, the long position could lose much more than its current value.  Remember AIG! Therefore, the CDS position is much riskier and would attract a much greater capital requirement under a total-amount-at-risk denominator.
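A minimal sketch of the comparison, with an illustrative potential maximum loss for the CDS position and the 15 percent minimum ratio discussed above (the numbers are assumptions, not figures from the post):

```python
# Minimal sketch of a leverage-ratio capital charge under two denominators:
# accounting fair value versus potential maximum loss ("total amount at risk").
# The CDS's potential loss and the 15 percent minimum are illustrative assumptions.

MIN_LEVERAGE_RATIO = 0.15

positions = [
    # (name, accounting fair value, potential maximum loss)
    ("commercial loan",              100.0,  100.0),   # at most the loan itself can be lost
    ("CDS, credit protection sold",  100.0, 1000.0),   # losses can far exceed carrying value
]

for name, fair_value, max_loss in positions:
    charge_by_fair_value     = MIN_LEVERAGE_RATIO * fair_value
    charge_by_amount_at_risk = MIN_LEVERAGE_RATIO * max_loss
    print(f"{name}: {charge_by_fair_value:.0f} of capital by fair value, "
          f"{charge_by_amount_at_risk:.0f} by amount at risk")
# commercial loan: 15 of capital by fair value, 15 by amount at risk
# CDS, credit protection sold: 15 of capital by fair value, 150 by amount at risk
```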

The really toxic positions would be revealed to be the capital-hungry monsters that they are.  Their higher capital requirements would make many of them unattractive once the banks themselves were made to bear the risks involved.  Much of the toxicity in banks’ positions would soon disappear.

The trick here is to get the denominator right.  Instead of measuring positions by their accounting fair values as under, e.g., U.S. Generally Accepted Accounting Principles, one should measure those positions by how much they might lose.

Nonetheless, even the best-designed leverage ratio regime can only ever be a second-best reform: it is not a panacea for all the ills that afflict the banking system.  Nor is it even clear that it would be the best ‘second-best’ reform: re-establishing some form of unlimited liability might be a better choice.

However, short of free banking, under which no capital regulation would be required in the first place, a high minimum leverage ratio would be a step in the right direction.

_____________

[1] By core capital, I refer to the ‘fire-resistant’ capital available to support the bank in the heat of a crisis.  Core capital would include, e.g., tangible common equity and some retained earnings and disclosed reserves.  Core capital would exclude certain ‘softer’ capital items that cannot be relied upon in a crisis.  An example of the latter would be Deferred Tax Assets (DTAs).  DTAs allow a bank to claim back tax on previously incurred losses in the event it subsequently returns to profitability, but are useless to a bank in a solvency crisis.

[2] A. G. Haldane, “Constraining discretion in bank regulation.” Paper given at the Federal Reserve Bank of Atlanta Conference on ‘Maintaining Financial Stability: Holding a Tiger by the Tail(s)’, Federal Reserve Bank of Atlanta 9 April 2013, p. 10.

[3] The unreliability of the RWA measure is confirmed by a number of other studies.  These include, e.g.: A. Demirgüç-Kunt, E. Detragiache, and O. Merrouche, “Bank Capital: Lessons from the Financial Crisis,” World Bank Policy Research Working Paper Series No. 5473 (2010); A. N. Berger and C. H. S. Bouwman, “How Does Capital Affect Bank Performance during Financial Crises?” Journal of Financial Economics 109 (2013): 146–76; A. Blundell-Wignall and C. Roulet, “Business Models of Banks, Leverage and the Distance-to-Default,” OECD Journal: Financial Market Trends 2012, no. 2 (2014); T. L. Hogan, N. Meredith and X. Pan, “Evaluating Risk-Based Capital Regulation,” Mercatus Center Working Paper Series No. 13-02 (2013); and V. V. Acharya and S. Steffen, “Falling short of expectation — stress testing the Eurozone banking system,” CEPS Policy Brief No. 315, January 2014.

[4] Strictly speaking, Basel III does not give the old capital-to-assets ratio a new name.  Instead, it creates a new leverage ratio measure in which the old denominator, total assets, is replaced by a new denominator measure called the leverage exposure.  The leverage exposure is meant to take account of the off-balance-sheet positions that the total assets measure fails to include.  However, in practice, the leverage exposure is not much different from the total assets measure, and for present purposes one can ignore the difference between the two denominators.  See Basel Committee on Banking Supervision, “Basel III: A global regulatory framework for more resilient banks and banking systems.”  Basel: Bank for International Settlements, June 2011, pp. 62-63.

[Cross-posted from Alt-M.org]

There are a great many reasons to support educational choice: maximizing freedom, respecting pluralism, reducing social conflict, empowering the poor, and so on. One reason is simply this: it works.

This week, researchers Patrick J. Wolf, M. Danish Shakeel, and Kaitlin P. Anderson of the University of Arkansas released the results of their painstaking meta-analysis of the international, gold-standard research on school choice programs, which concluded that, on average, such programs have a statistically significant positive impact on student performance on reading and math tests. Moreover, the magnitude of the positive impact increased the longer students participated in the program.

As Wolf observed in a blog post explaining the findings, the “clarity of the results… contrasts with the fog of dispute that often surrounds discussions of the effectiveness of private school choice.”

That’s So Meta

One of the main advantages of a meta-analysis is that it can overcome the limitations of individual studies (e.g., small sample sizes) by pooling the results of numerous studies. This meta-analysis is especially important because it includes all random-assignment studies on school choice programs (the gold standard for social science research), while excluding studies that employed less rigorous methods. The analysis included 19 studies on 11 school choice programs (including government-funded voucher programs as well as privately funded scholarship programs) in Colombia, India, and the United States. Each study compared the performance of students who had applied for and randomly won a voucher to a “control group” of students who had applied for a voucher but randomly did not receive one. As Wolf explained, previous meta-analyses and research reviews omitted some gold-standard studies and/or included less rigorous research:

The most commonly cited school choice review, by economists Cecilia Rouse and Lisa Barrow, declares that it will focus on the evidence from existing experimental studies but then leaves out four such studies (three of which reported positive choice effects) and includes one study that was non-experimental (and found no significant effect of choice).  A more recent summary, by Epple, Romano, and Urquiola, selectively included only 48% of the empirical private school choice studies available in the research literature.  Greg Forster’s Win-Win report from 2013 is a welcome exception and gets the award for the school choice review closest to covering all of the studies that fit his inclusion criteria – 93.3%.

Survey Says: School Choice Improves Student Performance

The meta-analysis found that, on average, participating in a school choice program improves student test scores by about 0.27 standard deviations in reading and 0.15 standard deviations in math. In laymen’s terms, these are “highly statistically significant, educationally meaningful achievement gains of several months of additional learning from school choice.”
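For readers who want to see what “on average” means here, below is a minimal sketch of the fixed-effect, inverse-variance pooling that meta-analyses of this kind typically use. The effect sizes and standard errors are invented for illustration; they are not the Wolf, Shakeel, and Anderson data, and the authors’ exact model may differ:

```python
# Illustrative fixed-effect (inverse-variance) pooling of study-level effect sizes.
# The numbers below are invented; they are not the data from the meta-analysis
# discussed above, and the authors' actual model may differ.

studies = [
    # (effect size in standard deviations, standard error)
    (0.35, 0.15),   # small study, imprecise estimate
    (0.10, 0.08),   # larger study, tighter estimate
    (0.28, 0.10),
]

weights   = [1 / se**2 for _, se in studies]     # weight = precision = 1 / variance
pooled    = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} SD (SE {pooled_se:.2f})")
# Precise studies get more weight, and the pooled standard error is smaller than
# any single study's, which is why meta-analytic averages can detect effects that
# individual small studies cannot.
```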

Interestingly, the positive results appeared to be larger for the programs in developing countries rather than the United States, especially in reading. That might stem from a larger gap in quality between government-run and private schools in the developing world. In addition, American students who lost the voucher lotteries “often found other ways to access school choices.” For example, in Washington, D.C., 12% of students who lost the voucher lottery still managed to enroll in a private school, and 35% enrolled in a charter school, meaning barely more than half of the “control group” attended their assigned district school.

The meta-analysis also found larger positive results from publicly funded rather than privately funded programs. The authors note that public funding “could be a proxy for the voucher amount” because the publicly funded vouchers were worth significantly more, on average, than the privately funded scholarships. The authors suggest that parents who are “relieved of an additional financial burden… might therefore be more likely to keep their child enrolled in a private school long enough to realize the larger academic benefits that emerge after three or more years of private schooling.” Moreover, the higher-value vouchers are more likely to “motivate a higher-quality population of private schools to participate in the voucher program.” The authors also note that differences in accountability regulations may play a role.

The Benefits of Choice and Competition

The benefits of school choice are not limited to participating students. Last month, Wolf and Anna J. Egalite of North Carolina State University released a review of the research on the impact of competition on district schools. Although it is impossible to conduct a random-assignment study on the effects of competition (as much as some researchers would love to force different states to randomly adopt different policies in order to measure the difference in effects, neither the voters nor their elected representatives are so keen on the idea), there have been dozens of high-quality studies addressing this question, and a significant majority find that increased competition has a positive impact on district school performance: 

Thirty of the 42 evaluations of the effects of school-choice competition on the performance of affected public schools report that the test scores of all or some public school students increase when schools are faced with competition. Improvement in the performance of district schools appear to be especially large when competition spikes but otherwise, is quite modest in scale.

In other words, the evidence suggests that when district schools know that their students have other options, they take steps to improve. This is exactly what economic theory would predict. Monopolists are slow to change while organizations operating in a competitive environment must learn to adapt or they will perish.

On Designing School Choice Policies

Of course, not all school choice programs are created equal. Wolf and Egalite offer several wise suggestions to policymakers based on their research. Policymakers should “encourage innovative and thematically diverse schools” by crafting legislation that is “flexible and thoughtful enough to facilitate new models of schooling that have not been widely implemented yet.” We don’t know what education will look like in the future, so our laws should be platforms for innovations rather than constraints molded to the current system.

That means policymakers should resist the urge to over-regulate. The authors argue that private schools “should be allowed to maintain a reasonable degree of autonomy over instructional practices, pedagogy, and general day-to-day operations” and that, beyond a background check, “school leaders should be the ones determining teacher qualifications in line with their mission.” We don’t know the “one best way” to teach students, and it’s likely that no “one best way” even exists. For that matter, we have not yet figured out a way to determine in advance whether a would-be teacher will be effective or not. Indeed, as this Brookings Institution chart shows (see page 8), there is practically no difference in effectiveness between traditionally certified teachers and their alternatively certified or even uncertified peers.

In other words, if in the name of “quality control,” the government mandated that voucher-accepting schools only hire traditionally certified teachers, not only would such a regulation fail to prevent the hiring of less-effective teachers, it would also prevent private schools from hiring lots of effective teachers. Sadly, too many policymakers never tire of crafting new ways to “ensure quality” that fall flat or even have the opposite impact.

School choice policies benefit both participating and nonparticipating students. Students who use vouchers or tax-credit scholarships to attend the school of their choice benefit by gaining access to schools that better fit their needs. Students who do not avail themselves of those options still benefit because the very access to alternatives spurs district schools to improve. These are great reasons to expand educational choice, but policymakers should be careful not to undermine the market mechanisms that foster competition and innovation.

For more on the impact of regulations on school choice policies, watch our recent Cato Institute event: “School Choice Regulations: Friend or Foe?”

 

Fresh off his resounding victory in the West Virginia primary, Senator Bernie Sanders has intimated that he has no intention of dropping out of the race any time soon, even though he trails his rival Hillary Clinton significantly in pledged delegates. One of the cornerstones of the Sanders campaign has been his health care plan, which would replace the entirety of the current health care system with a more generous version of Medicare. His campaign has claimed the plan would cost a little more than $13.8 trillion over the next decade, and he has proposed to fund these new expenditures with a clutch of tax increases, many of them levied on higher-income households. At the time, analysts at Cato and elsewhere expressed skepticism that the cost estimates touted by the campaign accurately accounted for all the increases in federal health expenditures the plan would require, and argued that those estimates relied on cost-savings assumptions that were overly optimistic. Now, a new study from the left-leaning Urban Institute corroborates many of these concerns, finding that Berniecare would cost twice as much as the $13.8 trillion price tag touted by the Sanders campaign.

The authors from the Urban Institute estimate that Berniecare would increase federal expenditures by $32 trillion, a 233 percent increase, over the next decade. The $15 trillion in additional taxes proposed by Sanders would not even cover half of the proposal’s price tag, leaving a funding gap of $16.6 trillion. In the first year alone, federal spending would increase by $2.34 trillion. For context, total national health expenditures in the United States were $3 trillion in 2014.
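For readers checking the arithmetic, the figures cited here fit together as follows. The roughly $15.4 trillion revenue figure below is simply back-calculated from the $32 trillion and $16.6 trillion estimates above; it corresponds to the rounded “$15 trillion” in the text and is an inference, not a number reported in this post.

\[
\underbrace{\$32.0\ \text{trillion}}_{\text{new federal spending}}
\;-\;
\underbrace{\$15.4\ \text{trillion}}_{\text{implied revenue from the proposed taxes}}
\;\approx\;
\underbrace{\$16.6\ \text{trillion}}_{\text{funding gap}},
\qquad
\$15.4\ \text{trillion} < \tfrac{1}{2}\times\$32\ \text{trillion}.
\]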

Sanders was initially able to confine most of the needed tax increases to higher-income households through income-based premiums, significantly higher taxes on capital gains and dividends, and higher marginal tax rates on high earners. But he cannot squeeze blood from the same stone twice: there is not much more he could plausibly extract from these households, which means that if he actually had to finance Berniecare, he would have to turn to large tax increases on the middle class.

There are several reasons Berniecare would increase federal health spending so significantly. The most straightforward is that it would replace all other forms of health coverage, from employer-sponsored insurance to state and local programs, with one federal program. The second is that the program would be significantly more generous than Medicare (and the European health systems Sanders so often praises), while also removing even cursory cost-sharing requirements. In addition, the proposal would add new benefits, like a comprehensive long-term services and support (LTSS) component that the Urban Institute estimates would cost $308 billion in its first year and $4.14 trillion over the next decade. These estimates focus on annual cash flows over a relatively short time period, so the study does not delve into the longer-term sustainability problems this new component might create, although the authors do note that “after this 10-year window, we would anticipate that costs would grow faster than in previous years as baby boomers reach age 80 and older, when rates of severe disability and LTSS use are much higher. Revenues would correspondingly need to grow rapidly over the ensuing 20 years.”

Even at twice the initial price tag claimed by the Sanders campaign, the Urban Institute’s estimates might still understate the total costs. As the authors point out, they do not incorporate estimates for the higher utilization of health care services that would almost certainly occur when people move from the current system to the first-dollar coverage they would enjoy under this more generous version of Medicare. They also chose not to incorporate the higher provider payment rates for acute care services that might be necessary, and they include “assumptions about reductions in drug prices [that] are particularly aggressive and may fall well short of political feasibility.”

Berniecare would increase federal government spending by $32 trillion over the next decade, more than twice as much as the revenue from the trillions in taxes Sanders has proposed. Even that figure might undersell the actual price tag, and it considers only the short-term cash-flow problems; there could be even greater sustainability problems over a longer time horizon. One thing is certain: the plan would require trillions more in additional tax hikes.

Economists certainly don’t speak with one voice, but there’s a general consensus on two principles of public finance that lead to a more competitive and prosperous economy: keep marginal tax rates low and avoid double taxation of income that is saved and invested.

To be sure, some economists will say that high tax rates and more double taxation are nonetheless okay because they believe there is an “equity vs. efficiency” tradeoff and they are willing to sacrifice some prosperity in hopes of achieving more equality.

I disagree, mostly because there’s compelling evidence that this approach ultimately leads to less income for the poor, but this is a fair and honest debate. Both sides agree that lower rates and less double taxation will produce more growth (though they’ll disagree on how much growth) and both sides agree that a low-tax/faster-growth economy will produce more inequality (though they’ll disagree on whether the goal is to reduce inequality or reduce poverty).

Since I’m on the low-tax/faster-growth side of the debate, this is one of the reasons why I’m a big fan of tax competition and tax havens.

Simply stated, when politicians have to worry that jobs and investment can cross borders, they are less likely to impose higher tax rates and punitive levels of double taxation. Interestingly, even the statist bureaucrats at the Organization for Economic Cooperation and Development agree with me, writing that tax havens “may hamper the application of progressive tax rates.” They think that’s a bad thing, of course, but we both agree that tax competition means lower rates.

And look at what has happened to tax rates in the past few years. Now that politicians have undermined tax competition and weakened tax havens, tax rates are climbing.

So I was very surprised to see that some economists signed a letter claiming that so-called tax havens “serve no useful economic purpose.” Here are some excerpts.

The existence of tax havens does not add to overall global wealth or well-being; they serve no useful economic purpose. …these jurisdictions…increase inequality…and undermine…countries’ ability to collect their fair share of taxes. …There is no economic justification for allowing the continuation of tax havens.

You probably won’t be surprised by some of the economists who signed the letter. Thomas Piketty was on the list, which is hardly a surprise. Along with Jeffrey Sachs, who also has a track record of favoring more statism. Another predictable signatory is Olivier Blanchard, the former top economist at the pro-tax International Monetary Fund.

But if that’s an effective “appeal to authority,” there’s a big list of Nobel Prize winners who recognize the economic consensus outlined at the beginning of this post and who understand a one-size-fits-all approach would undermine progress.

In other words, there is a very strong “economic purpose” and “economic justification” for tax havens and tax competition.

Simply stated, they curtail the greed of the political class.

Philip Booth of the Institute of Economic Affairs in London opined on this issue. Here’s some of what he wrote for City A.M.

…the statement that tax havens “have no useful purpose” is demonstrably wrong and most of the other claims in the letter are incredible. Offshore centres allow companies and investment funds to operate internationally without having to abide by several different sets of rules and, often, pay more tax than ought to be due. …Investors who use tax havens can avoid being taxed twice on their investments and can avoid being taxed at a higher rate than that which prevails in the country in which they live, but they do not avoid all tax. …tax havens also allow the honest to shelter their money from corrupt and oppressive politicians. …one of the advantages of tax havens is that they help hold governments to account. They make it possible for businesses to avoid the worst excesses of government largesse and crazy tax systems – including the 39 per cent US corporation tax rate. They have other functions too: it is simply wrong to say that they have no useful purpose. It is also wrong to argue that, if only corrupt governments had more tax revenue, their people would be better served.

Amen. I especially like his final point in that excerpt, which is similar to Marian Tupy’s explanation that tax planning and tax havens are good for Africa’s growth.

Last but not least, Philip makes a key point about whether tax havens are bad because they are sometimes utilized by bad people.

…burglars operate where there is property. However, we would not abolish property because of burglars. We should not abolish tax havens either.

When talking to reporters, politicians, and others, I make a similar point, arguing that we shouldn’t ban cars simply because they are sometimes used as getaway vehicles from bank robberies.

The bottom line, as Professor Booth notes, is that we need tax havens and tax competition if we want reasonable fiscal systems.

But this isn’t simply an issue of wanting better tax policy in order to achieve more prosperity. In part because of demographic changes, tax havens and tax competition are necessary if we want to discourage politicians from creating “goldfish government” by taxing and spending nations into economic ruin.

P.S. Here’s my video on the economic case for tax havens.

P.P.S. Let’s not forget that the Paris-based Organization for Economic Cooperation and Development is the international bureaucracy most active in the fight to destroy tax competition. This is especially outrageous because American tax dollars subsidize the OECD.

In several states around the country, legislators are working to pass legislation that would move their states toward compliance with the REAL ID Act, the U.S. national ID law. Oklahoma state senator David Holt (R), for example, has touted his plan as giving Oklahomans the “liberty” to choose which of two ID types they’ll get. Either one feeds their data into a nationwide system of databases.

If you want a sense of what these legislators are getting their states into, take a look at the eight-page notice the Department of Homeland Security published in the Federal Register today. It’s an entirely ordinary bureaucratic document, which walks through the processes states have to go through to certify themselves as compliant. Its few pages represent hundreds of hours of paperwork that state employees will have to put in complying with federal mandates.

Among them is the requirement that the top official of the DMV and the state Attorney General confirm that their state jumps through all the hoops in federal law. Maybe Oklahoma’s Attorney General, Scott Pruitt (R), thinks his office’s time is well spent on pushing paper for the federal government, but it’s more likely that he wants to be enforcing Oklahoma laws that protect Oklahomans.

REAL ID-compliant states have to recertify to the DHS every three years that they meet DHS’s standards. DHS can and will change these standards, of course. DHS officials get to inspect state facilities and interview state employees and contractors. DHS can issue corrective demands and require the states to follow them before recertification.

It’s all unremarkable—if you’re sanguine about taxpayer dollars burned on bureaucracy, and if you think that states are just administrative arms of the federal government. But if you think of states as constitutionally independent sovereigns, you recognize that this document is out of whack. States do not exist to play second fiddle in bureaucrat-on-bureaucrat bureaucracy.

Whether or not we have a national ID matters. The constitutional design of government matters, including, one hopes, to people in Oklahoma and other states across the land. State officials who are conscious of these things should reject this paperwork and these mandates. If the federal government wants a national ID, the federal government should implement it itself.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Although it’s a favorite headline as people shiver during the coldest parts of the winter, global warming is almost assuredly not behind your suffering (the “warming” part of global warming should have clued you in on this).

But, some folks steadfastly prefer the point of view that all bad weather is caused by climate change.

Consider White House Office of Science and Technology Policy (OSTP) head John Holdren. During the depth of the January 2014 cold outbreak (and the height of the misery) that made “polar vortex” a household name, OSTP released a video featuring Holdren telling us that “the kind of extreme cold being experienced by much of the United States as we speak, is a pattern that we can expect to see with increasing frequency as global warming continues.” 

At the time we said “not so fast,” pointing out that there were as many (if not more) findings in the scientific literature that suggested that either a) no relationship exists between global warming and the weather patterns giving rise to mid-latitude cold outbreaks, or b) the opposite is the case (global warming should lead to fewer and milder cold air outbreaks).

The Competitive Enterprise Institute even went as far as to request a formal correction from the White House. The White House responded by saying that the video represented only Holdren’s “personal opinion” and thus no correction was necessary. CEI filed a FOIA request, and after some hemming and hawing, the White House OSTP finally, after a half-hearted search, produced some documents. Unhappy with this outcome, CEI challenged the effort and just this past Monday, a federal court, questioning whether the OSTP acted in “good faith,” granted CEI’s request for discovery.

In the meantime, the scientific literature on this issue continues to accumulate. When a study finds a link between human-caused global warming and winter misery, it makes headlines somewhere. When it doesn’t, that somewhere is usually reduced to here.

Case in point: last week, the Washington Post’s Capital Weather Gang published a piece by Jason Samenow highlighting a pair of new findings suggesting that global warming is leading to more blizzards along the East Coast. The mechanism favored by the global-warming-is-making-cold/blizzards-worse crowd is that Arctic warming, enhanced by melting sea ice there, is causing the curves (i.e., ridges and troughs) in the jet stream to become bigger, and thus slower. This “locks in” a particular weather pattern and can allow cold air to drop further southward as well as set up the conditions necessary for big snowstorms. To us, this seemed more a case of natural variability than global warming, but we suppose beauty is in the eye of the beholder.

But what you haven’t read in the Washington Post (or anywhere else, for that matter) is that an even newer paper has just been published by scientists (including Martin Hoerling) at NOAA’s Earth System Research Laboratory that basically demonstrates that global warming and Arctic sea ice loss should, according to climate models, lead to warmer winter temperatures, less temperature variability, and milder cold air outbreaks. This is essentially the opposite of the conclusion preferred and disseminated by Holdren et al.

From the paper’s abstract:

The emergence of rapid Arctic warming in recent decades has coincided with unusually cold winters over Northern Hemisphere continents. It has been speculated that this “Warm Arctic, Cold Continents” trend pattern is due to sea ice loss. Here we use multiple models to examine whether such a pattern is indeed forced by sea ice loss specifically, and by anthropogenic forcing in general. While we show much of Arctic amplification in surface warming to result from sea ice loss, we find that neither sea ice loss nor anthropogenic forcing overall to yield trends toward colder continental temperatures. An alternate explanation of the cooling is that it represents a strong articulation of internal atmospheric variability, evidence for which is derived from model data, and physical considerations. Sea ice loss impact on weather variability over the high latitude continents is found, however, characterized by reduced daily temperature variability and fewer cold extremes.

They were even more direct in the paper’s conclusion:

We…showed that sea ice loss impact on daily weather variability over the high latitude continents consists of reduced daily temperature variability and fewer cold extremes indicating that the enhanced occurrences of cold spells during recent winters (e.g., Cohen et al. 2014) are not caused by sea ice loss.

This is pretty emphatic. Global warming results in warmer, less variable winters in North America (Figure 1).

 

Figure 1. Modeled change in winter mean temperature (left), daily temperature variability (middle), and temperature on the coldest 10 percent of the days (right) as a result of decline in Arctic sea ice. (source: Sun et al., 2016).

Now, if only our government’s “top scientist” were paying attention.

Reference:

Sun, L., J. Perlwitz, and M. Hoerling, 2016. What Caused the Recent “Warm Arctic, Cold Continents” Trend Pattern in Winter Temperatures? Geophysical Research Letters, doi: 10.1002/2016GL069024.

Yesterday the Center for Immigration Studies (CIS) published a report authored by Jason Richwine on the welfare cost of immigration. The CIS headline result, that immigrant-headed households consume more welfare than natives, lacks any kind of reasonable statistical controls.  To CIS’s credit, they do include tables with proper controls buried in their report and its appendix, and those tables undermine many of the headline findings.  In the first section below, I discuss how CIS’s buried results undermine their own headline findings.  In the next section, I explain some of the other problems with their results and headline findings. 

CIS’s Other Results

The extended tables in the CIS report paint a far more nuanced picture of immigrant welfare use than they advertised.  To sum up the more detailed findings:

“In the no-control scenario, immigrant households cost $1,803 more than native households, which is consistent with Table 2 above. The second row shows that the immigrant-native difference becomes larger — up to $2,323 — when we control for the presence of a worker in the household. The difference then becomes gradually smaller as controls are added for education and number of children. The fourth row shows that immigrant households with the same worker status, education, and number of children as native households cost just $309 more, which is a statistically insignificant difference. The fifth row shows that immigrants use fewer welfare dollars when they are compared to natives of the same race as well as worker status, education, and number of children.” [emphasis added]

All of the tables I reference below are located in CIS’s report.   

Table 5 shows that households headed by an immigrant with less than a high school education consume less welfare than native households with the same education level.  For every other level of education, immigrant-headed households consume more than natives in the same education bracket. 

Table 6 controls for the number of children in native and immigrant households.  Immigrant households with one child, two children, or three or more children all consume fewer welfare benefits than same-sized native households. The only exception is that immigrant households without any children consume more. 

Table 7 has more mixed results. It shows that Hispanic and black immigrant-headed households consume less welfare than Hispanic and black native-headed households, while white and Asian immigrant-headed households consume more welfare than native households headed by whites and Asians.  Table 8 breaks down the results with numerous different controls.  When controlling for the presence of a worker in the household, the number of children, the education of the head of household, and race, immigrant households consume less welfare.        

Table A3 shows that immigrant households with the youngest heads, 29 years old and under, impose a much lower cost than households headed by natives of the same age.  Table A4 shows that immigrants impose the greatest welfare costs in their first five years of residency, but that cost decreases afterward and never again rises to its initial level.  Table A5 shows that working immigrant-headed households with less than a high school degree consume less welfare than their native counterparts.  For all other educational groups, the immigrant-headed households consume more than the comparable native-headed households. 

Table A6 shows immigrant-headed households with children, broken down by race. Households headed by Hispanic, black, and Asian immigrants all consume less welfare than their native counterparts.  Households headed by white immigrants consume more welfare than those headed by white natives. 

Table A7 controls for poverty and race.  Overall, immigrant households in poverty consume less welfare than native households in poverty.  Hispanic and black immigrant households consume far less than their native Hispanic and black counterparts.  White and Asian immigrant-headed households, on the other hand, consume more welfare than native households headed by members of the same race.

Many of the report’s detailed tables that use proper controls undermine its main conclusion. Excluding the bullet points at the beginning, this is a much more careful report than CIS has issued in the past. As a result, the report comes to a far more nuanced conclusion than the headlines about it indicate.
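To make concrete what it means for the immigrant–native gap to shrink “when controls are added,” here is a minimal, purely illustrative regression sketch. The data are synthetic and the variable names (welfare_cost, immigrant_head, and so on) are invented for the example; this is not CIS’s actual SIPP extract or specification, only a sketch of the general technique of comparing households that are alike on worker status, education, and number of children.

```python
# Illustrative only: synthetic data, invented variable names and coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000

immigrant_head = rng.integers(0, 2, n)            # 1 = immigrant-headed household
# Built-in assumption for the illustration: immigrant-headed households have
# more children on average, but welfare costs depend only on composition,
# not on nativity itself.
num_children = rng.poisson(1.0 + 0.6 * immigrant_head)
has_worker = rng.integers(0, 2, n)
education = rng.choice(["less_hs", "hs", "some_college", "college"], n)
welfare_cost = (
    1500 * num_children - 2000 * has_worker + rng.normal(4000, 1500, n)
).clip(min=0)

df = pd.DataFrame({
    "welfare_cost": welfare_cost,
    "immigrant_head": immigrant_head,
    "num_children": num_children,
    "has_worker": has_worker,
    "education": education,
})

# Raw comparison: average immigrant/native difference with no controls at all.
raw = smf.ols("welfare_cost ~ immigrant_head", data=df).fit()

# Controlled comparison: the immigrant_head coefficient now measures the
# difference between immigrant- and native-headed households that are alike
# in education, number of children, and presence of a worker.
controlled = smf.ols(
    "welfare_cost ~ immigrant_head + C(education) + num_children + has_worker",
    data=df,
).fit()

print("raw gap:       ", round(raw.params["immigrant_head"], 1))
print("controlled gap:", round(controlled.params["immigrant_head"], 1))
```

In this fabricated example the raw gap is sizable while the controlled gap is close to zero, which is roughly the pattern CIS’s own controlled tables display once household composition is held constant.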

Broader Issues

Below I will describe in detail some methodological and other issues with the CIS analysis – some of which expand on CIS’s controlled results that were not headlined. 

Individual Welfare Use or Head of Household

The CIS report compared all immigrant households and all of their inhabitants, including millions of native-born citizen children and U.S.-born spouses, with all households headed by native-born Americans. Richwine admits that the larger family size of immigrant households accounts for much (though not all) of their greater welfare use, because those born in the United States are eligible for all means-tested welfare benefits – even though Table 6 shows that immigrant households consume a lower level of benefits once the number of children is controlled for.  A household-level analysis does not reveal who receives the benefits, leaving the impression that the immigrants are the intended legal beneficiaries when they are often legally excluded from these programs. 

The CIS report should have compared immigrant individuals to native-born individuals for three reasons.  First, an individual is always one person, while the number of people in a household can vary tremendously.  The greater number of children in immigrant households, rather than any difference in individual welfare use, is what largely drove the report’s results. 

Second, Medicaid and SSI benefit levels and eligibility are determined on an individual basis, not a household one.  Many immigrants are legally ineligible for those programs but their U.S.-born spouses and children do have access.  Thus, CIS counts the benefits received by the U.S.-born children even though the immigrants themselves are often ineligible.  This gives an inflated picture of immigrant welfare use. 

Third, it is a lot easier and more accurate to compute immigrant and native welfare costs when the units of analysis are individuals than to work backward from the Survey of Income and Program Participation (SIPP), budgetary data, and the imputations of program costs that a household-level analysis requires.   

Cato published an analysis of poor immigrant welfare use that compares individuals.  As a result, we can see the immigration or citizenship status, within limits, of the actual welfare users and the amount they consume.  The immigrants themselves are almost always less likely to use welfare and consume a lower dollar value of benefits than similar natives – as CIS corroborates in Table A7 of their report. 

The immigrant-headed household unit of analysis used in the CIS report presents other problems.  As a unit, it is just not as meaningful as it once was.  Professor Leighton Ku, director of the Center for Health Policy Research at George Washington University and a nationally recognized expert on these issues, wrote:

“Another problem is the ambiguous nature of what it means to be an ‘immigrant-headed household.’ In the CPS, a head of household is often assigned by the parent who is completing the survey: it could be the husband or wife. Consider an example of a five-person household, consisting of an immigrant male, a native-born wife, two native-born children, and a native-born unrelated person (such as someone renting a room). If the male has been deemed the head of household, this is an immigrant-headed household despite the fact that only one of five members is an immigrant and one (the renter) is not financially dependent on the immigrant. But if the wife was deemed the head of household, this would be a native-headed household, even though one member is an immigrant. Given that many families today have dual incomes and that the wife’s income often exceeds the husband’s, it is not clear if being assigned the ‘head of household’ in the Census form has much social meaning.”

The CIS report included the welfare cost of all the people living in the immigrant-headed household.  They make the defensible case that those U.S.-born children should be included because they would not exist in the United States and, therefore, would not consume welfare without the immigrant being here.  That’s a fair point, but it also leads to the defensible claim that the welfare consumed by the grandchildren, great-grandchildren, and every subsequent generation of an immigrant should also be included in the welfare calculation.  After all, without the initial immigrant, those subsequent welfare consuming native-born Americans wouldn’t be here either. 

Researchers thus face a choice: count just the immigrants and their welfare usage, or count the welfare consumed by the immigrants and all of their subsequent descendants.  Influenced by the Texas Office of the Comptroller, Cato decided to measure the welfare consumption of the immigrants themselves and excluded all of the subsequent generations.  CIS counted the immigrants and their U.S.-born children but excluded their subsequent descendants (there are many grandchildren and great-grandchildren of immigrants alive today consuming welfare).     

Medicaid and Obamacare 

Differing Medicaid use rates and consumption levels account for over two-thirds of the entire gap between native and immigrant households in the headline results (Table 2 of the CIS report).  That result is an artifact of the welfare system prior to the implementation of Obamacare’s Medicaid expansion. The difference will shrink or reverse as native enrollment and use rates rise in response to Obamacare’s mandated 2014 Medicaid expansion and the rollout of exchange subsidies.

Reform Welfare or Restrict Legal Immigration – Which is Easier?

CIS seeks to use immigrant welfare use as an argument for cutting legal immigration.  Cato, on the other hand, has sought to build a wall around the welfare state and restrict non-citizen access to it rather than to more strictly regulate the international labor market.  When I suggested that CIS concentrate on reforming welfare rather than further restricting immigration, Richwine said, “[welfare reform is] not a policy change likely to occur in the near future.”  That may be true, but restricting legal immigration to the United States is even less likely to occur. 

Richwine’s explanation for focusing on immigration cuts rather than welfare reform doesn’t stand up to scrutiny.  Congress has continually increased legal immigration levels since 1965. Congress considered a more restrictive immigration reform in 1996 – and it was defeated handily.  In the mid-1990s, a high of 65 percent of Americans wanted to decrease immigration, and Congress still couldn’t pass such a reform.  By mid-2015, only 34 percent of Americans wanted to decrease immigration.  The last time anti-immigration opinion was this unpopular was in 1965 – on the eve of a transformative liberalization.   

Welfare, on the other hand, was reformed and restricted in 1996.  Furthermore, the public wants more reforms that limit welfare access and place more restrictions on welfare users. Historical trends and public opinion indicate that welfare reform is more likely and more popular than a severe reduction in legal immigration.  Regardless, CIS should join Cato and focus its efforts on restricting immigrant access to welfare rather than spin its wheels in a quixotic quest for a more restrictive immigration policy. 

Excluding the Big Welfare Programs

The CIS report includes only some means-tested welfare programs and excludes the rest of the welfare state.  Because the report covers all immigrant and native households, it should include the entire welfare state – including the largest programs, Social Security and Medicare.  Immigrants pay large surpluses into Medicare and Social Security.  Richwine might object to including these programs because age and work history determine eligibility, so he might want to control for those factors.  That is a defensible argument, but CIS apparently did not think such controls were appropriate for the report’s headline results, since those results do not control for program eligibility.  Tables with proper controls are buried later in the paper and in the appendix. CIS should at least add Medicare and Social Security to the tables in its appendix.  

One of the explanations Richwine gave for this report was that “[w]ith the nation facing a long-term budgetary deficit, this study helps illuminate immigration’s impact on that problem.”  As the OECD makes clear in its fiscal analysis, it makes little sense to exclude immigrants’ consumption of, and contributions to, the old-age entitlement programs that are actually driving the long-term debt.  If CIS wishes to grapple with the fiscal issues surrounding immigration, there is a vast empirical literature on the topic that it should consult.   

As a final point, CIS’s headline result should have compared poor immigrant welfare use to poor native welfare use instead of comparing all immigrant households to all native households.  The welfare programs analyzed here are all intended for the poor.  It adds little to include Americans and immigrants who are too wealthy to receive much welfare. 

Net Fiscal Effects

Richwine includes a section in this CIS report where he attempts to defend his 2013 Heritage Foundation fiscal cost estimate, which was roundly criticized by economists on the left and the right.  He makes a lot of confused statements about how to measure the fiscal impact of immigration.  Instead of rehashing those arguments, here is one small criticism of his 2013 Heritage paper: it was a 50-year fiscal cost analysis without a discount rate, meaning it treated a dollar of cost incurred decades from now as equivalent to a dollar of cost incurred today.             

Conclusion

When appropriate controls are used in the later parts of the paper and its appendix, CIS reaches much less negative, and sometimes even positive, results than its messaging indicates. Many of the issues raised in this post may be too wonky for general consumption, but they are important for producing excellent research. Cato has published two working papers, a bulletin, a policy analysis, and a book chapter on immigrant welfare use and the broader fiscal effects in which we explain our methods and defend them against criticisms.  We even include a literature survey on the topic that discusses the different results from the National Research Council.  I invite anybody more interested in these issues to read them.      

Special thanks to Charles Hughes for his excellent comments and suggestions on an earlier draft of this blog post.

In the Food and Drug Administration’s crackdown on what is now a thriving market for vaping products (nicotine and flavorings delivered without tobacco through a vaporizing device), Trevor Burrus has identified one group that is likely to emerge as winners from the regulations, namely large tobacco companies, which have lost many smoking customers to the vaping market without being notably successful at playing in it themselves. Another set of winners? Governments whose treasuries are enriched by conventional cigarette sales. Under the 1998 tobacco settlement, which I and others at Cato have criticized at length, a large chunk of revenue from these conventional sales goes to state governments. But this revenue source has been eroded badly as smokers switch to vaping, a trend the new rules are well calculated to slow or reverse.

As often happens with bad regulations, the winners will not be nearly so numerous as the losers, some of which I identify in a new piece at Ricochet. Not to be forgotten are thousands of small businesses: independent vaping shops and small vaping suppliers have become common in recent years, and both now face ruin. But the real targets of restrictions on consumer choice are consumers, and in this case what is at risk is not flavors and gimmicks but lives and health. While the FDA (along with its sister agency, the Centers for Disease Control, under Obama appointee Thomas Frieden) is dismissive of the benefits of vaping as harm reduction for established smokers, many others in the public health field take it seriously. An evidence review published last year by Public Health England found that “the current best estimate is that e-cigarettes are around 95% less harmful than smoking,” and that while there was plenty of evidence that the vaping option had led to a lot of successful switching away from cigarettes, there was “no evidence so far” to bolster counter-fears that vaping posed a countervailing menace as a gateway for novices destined to graduate into cigarette smoking. I conclude my piece:

If Congress chooses, it can do something about this. An amendment approved by the House Appropriations Committee last month would grandfather in products now available, applying the prohibitive rules only to products introduced in the future. Whether Washington acts on this sensible idea will depend in part on whether it is listening to the voices of ex-smokers and young consumers around the country who feel competent to run their own lives and make their own choices.

As I note at Overlawyered, the FDA at the same time announced stringent regulations restricting the sale of cigars, which former Catoite Jacob Grier writes about here (“The market for cigars is about to become a lot less diverse and a lot more boring.”).

A few years ago President Barack Obama urged members of the European Union to admit Turkey. Now he wants the United Kingdom to stay in the EU. Even when the U.S. isn’t a member of the club, the president has an opinion on who should be included.

Should the British people vote for or against the EU? That is for them to decide, but Britons might learn from America’s experience.

What began as the Common Market was a clear positive for European peoples. It created what the name implied, a large free trade zone, promoting commerce among its members. Unfortunately, however, in recent years the EU has become more concerned about regulating than expanding commerce.

We see much the same process in America. The surge in the regulatory Leviathan has been particularly marked under the Obama administration. Moreover, the EU exacerbated the problem by creating the Euro, which unified monetary systems without a common continental budget. The UK stayed out, but most EU members joined the currency union.

At the same time, European policymakers have been pressing for greater EU political control over national budgets. Britain’s Westminster, the fount of parliamentary democracy worldwide for centuries, would end up subservient to a largely unaccountable continental bureaucracy across the English Channel. In fact, what thoughtful observers have called the “democratic deficit”—the European Parliament is even more disconnected from voters than the U.S. Congress—has helped spawn populist parties across the continent, including the United Kingdom Independence Party.

Britons today face a similar dilemma to that which divided Federalists and Anti-Federalists debating the U.S. Constitution. As I argued in American Conservative: “Unity enlarges an economic market and creates a stronger state to resist foreign dangers. But unity also creates domestic threats against liberty and community. At its worst an engorged state absorbs all beneath it.”

In America the Federalists were better organized and made the more effective public case. In retrospect the Anti-Federalists appear to have been more correct in their predictions of the ultimate impact on Americans’ lives and liberties. This lesson, not President Obama’s preferences, is what the British should take from the U.S. when considering how to vote on the EU.

The decision is up to the British people alone. They should peer across the Atlantic and ponder whether they like what has developed.

Few people would, I think, take exception to the claim that, in a well-functioning monetary system, the quantity of money supplied should seldom differ, and should never differ very much, from the quantity demanded.  What’s controversial isn’t that claim itself, but the suggestion that it supplies a reason for preferring some path of money supply adjustments over others, or some monetary arrangements over others.

Why the controversy?  As we saw in the last installment, the demand for money ultimately consists, not of a demand for any particular number of money units, but of a demand for a particular amount of monetary purchasing power.  Whatever amount of purchasing X units of money might accomplish when the general level of prices is given by P, ½X units might accomplish equally well were the level of prices ½P.  It follows that changes in the general level of prices might, in theory at least, serve just as well as changes in the available quantity of money units as a means for keeping the quantity of money supplied in line with the quantity demanded.
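To put the same point in symbols (the notation here is mine, not the original post’s): let L stand for the amount of monetary purchasing power the public wishes to hold. Then the demand for nominal money units is

\[
M^{d} = L \cdot P ,
\]

and monetary equilibrium requires \(M^{s} = L \cdot P\). Holding L fixed, that condition can be satisfied either by adjusting the nominal quantity \(M^{s}\) or by letting the price level \(P\) adjust, since X units of money at price level P and ½X units at price level ½P represent the same purchasing power:

\[
\frac{X}{P} = \frac{\tfrac{1}{2}X}{\tfrac{1}{2}P}.
\]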

But then it follows as well that, if our world is one in which prices are “perfectly flexible,” meaning that they always adjust instantly to a level that eliminates any monetary shortage or surplus, any pattern of money supply changes will avoid money supply-demand discrepancies, or “monetary disequilibrium,” as well as any other.  The goal of avoiding bouts of monetary disequilibrium would in that case supply no grounds for preferring one monetary system or policy over another, or for preferring a stable level of spending over an unstable level.  Any such preference would instead have to be justified on other grounds.

So, a decision: we can either adopt the view that prices are indeed perfectly flexible, and proceed to ponder why, despite that view, we might prefer some monetary arrangements to others; or we can subscribe to the view that prices are generally not perfectly flexible, and then proceed to assess alternative monetary arrangements according to their capacity to avoid a non-trivial risk of monetary disequilibrium.

Your guide does not hesitate for a moment to recommend the latter course.  For while some prices do indeed appear to be quite flexible, even adjusting almost continually, at least during business hours (prices of goods and financial assets traded on organized exchanges come immediately to mind), in order for the general level of prices to instantly accommodate changes to either the quantity of money supplied or the quantity demanded, it must be the case, not merely that some or many prices are quite flexible, but that all of them are.  If, for example, the nominal stock of money were to double arbitrarily and independently of any change in demand, prices would generally have to double in order for equilibrium to be restored.  (Recall: twice as many units of money will command the same purchasing power as the original amount only when each unit commands half as much purchasing power as before.)  It follows that, so long as any prices are slow to adjust, the price level must be slow to adjust as well.  Put another way, an economy’s price level is only as flexible as its least flexible prices.

And only a purblind observer can fail to notice that some prices are far from fully flexible. The reason for this isn’t hard to grasp: changing prices is sometimes costly; and when it is, sellers have reason to avoid doing it often.  Economists use the expression “menu costs” to refer generally to the costs of changing prices, conjuring up thereby the image of a restaurateur paying a printer for a batch of new menus, for the sake of accommodating the rising costs of beef, fish, vegetables, wait staff, cooks, and so forth, or the restaurant’s growing popularity, or both.  In fact both the restaurant’s operating costs and the demand for its output change constantly.  Nevertheless it usually wouldn’t make sense to have new menus printed every day, let alone several times a day, to reflect all these fluctuations!  Electronic menus would help, of course, and now it is easy to conceive of them (though it wasn’t not long ago).  But those are costly as well, which is why (or one reason why) most restaurants don’t use them.

The cost of printing menus is, however, trivial compared to that of changing many other prices. The prices paid to workers, whether wages or salaries, are notoriously difficult to change, except perhaps according to a prearranged schedule, which can’t itself accommodate unexpected change.  Renegotiating wages or salaries can be an extremely costly business, as well as a time-consuming one.

“Menu costs” can account for prices being sticky even when the nature of underlying changes in supply or demand conditions is well understood.  Suppose, for example, that a restaurant’s popularity is growing at a steady and known rate.  That fact still wouldn’t justify having new menus printed every day, or every hour, or perhaps even every week.  But add the possibility that a perceived increase in demand may not last, and the restaurateur has that much more reason to delay ordering new menus: after all, if demand subsides again, the new menus may cost more than turning a few customers away would have.  (The menus might also annoy customers who would dislike not being able to anticipate what their meal will cost.)  Now imagine an employer asking his workers to take a wage rate cut because business was slack last quarter.  Get the idea?  If not, there’s a vast body of writings you can refer to for more examples and evidence.
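For readers who want to see the menu-cost logic mechanically, here is a deliberately stylized simulation sketch. Every number in it is an arbitrary assumption made for illustration (the size of the daily cost-and-demand drift, the $0.50 repricing trigger, and so on); the only point is that when changing a posted price is costly, the seller lets it drift away from the frictionless “ideal” price and adjusts only occasionally.

```python
# Stylized illustration of sticky pricing under a menu cost: the seller's
# "ideal" (frictionless) price drifts every period, but the posted price is
# changed only when the gap between the two exceeds a trigger large enough
# to justify paying the cost of reprinting the menu. All numbers are
# arbitrary assumptions for the sketch, not estimates of anything.
import numpy as np

rng = np.random.default_rng(42)
periods = 365          # one "day" per period
ideal = 10.0           # frictionless profit-maximizing price
posted = 10.0          # price actually printed on the menu
trigger = 0.50         # gap (in dollars) that makes repricing worthwhile

adjustments = 0
for _ in range(periods):
    ideal += rng.normal(0.0, 0.05)       # costs and demand drift a little each day
    if abs(ideal - posted) > trigger:    # only then is a new menu worth printing
        posted = ideal
        adjustments += 1

print(f"Posted price changed {adjustments} times in {periods} periods.")
print(f"Final gap between posted and ideal price: {abs(ideal - posted):.2f}")
```

In a typical run the posted price changes only a handful of times a year even though the “ideal” price moves every day, which is precisely the sense in which individual prices, and therefore the price level, are sticky.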

These days it is common for economists who insist on the “stickiness” of the price level to be referred to, or to refer to themselves, as “New Keynesians.”  But the label is misleading.  Although John Maynard Keynes had plenty of innovative ideas, the idea that prices aren’t perfectly flexible wasn’t one of them.  Instead, by 1936, when Keynes published his General Theory, the idea that prices aren’t fully flexible was old hat: no economist worth his or her salt thought otherwise.[1]  The assumption that prices are fully flexible, or “continuously market clearing,” is in contrast a relatively recent innovation, having first become prominent in the 1980s with the rise of the “New Classical” school of economists, who subscribe to it, not on empirical grounds, but because they confuse the economists’ construct of an all-knowing central auctioneer, who adjusts prices costlessly and continually to their market-clearing levels, with the means by which prices are determined and changed in real economies.

Let New Classical economists ruminate on the challenge of justifying any particular monetary regime in a world of perfectly flexible prices.  The rest of us needn’t bother.  Instead, we can accept the reality of “sticky” prices, and let that reality inform our conclusions concerning which sorts of monetary regimes are more likely, and which ones less likely, to avoid temporary surpluses and shortages of money and their harmful consequences.

What consequences are those?  The question is best answered by first recognizing the crucial economic insight that a shortage of money must have as its counterpart a surplus of goods and services and vice versa.  When money, the means of exchange, is in short supply, exchange itself, meaning spending of all sorts, suffers, leaving sellers disappointed.  In contrast, when money is superabundant, spending grows excessively, depleting inventories and creating shortages.  Yet these are only the most obvious consequences of monetary disequilibrium.  Other consequences follow from the fact that, owing to different prices’ varying degrees of stickiness, the process of moving from a defunct level of equilibrium prices to a new one necessarily involves some temporary distortion of relative price signals, and associated economic waste.  A price system has work enough to do in coming to grips with ongoing changes in consumer tastes and technology, among many other non-monetary factors that influence supply and demand for particular goods and services, without also having to reckon with monetary disturbances that call for scaling all prices up or down.  The more it must cope with the need to re-scale prices, the less capable it becomes at fine-tuning them to reflect changing conditions within particular markets.

Hyperinflations offer an extreme case in point, for during them sellers often resort to “indexing” local-currency prices to the local currency’s exchange rate with respect to some relatively stable foreign currency.  That is, they cease referring altogether to the specific conditions in the markets for particular goods, and settle instead for keeping their prices roughly consistent with rapidly changing monetary conditions.  In light of this tendency it’s hardly surprising that hyperinflations lead to all sorts of waste, if not to the utter collapse of the economies they afflict.  If relative prices can become so distorted during hyperinflations as to cease entirely to be meaningful indicators of goods’ and services’ relative scarcity, it’s also true that the usefulness of price signals in promoting the efficient use of scarce resources declines to a more modest extent during less severe bouts of  monetary disequilibrium.

What sort of monetary policy or regime best avoids the costs of having too much or too little money?  In an earlier post, I suggested that keeping the supply of money in line with the demand for it, without depending on help in the shape of adjustments to the price level, is mainly a matter of achieving a steady and predictable overall flow of spending.  But why spending?  Why not maintain a stable price level, or a stable and predictable rate of inflation?  If, as I’ve claimed, changes in the general level of prices are an economy’s way of coping, however imperfectly, with monetary shortages and surpluses, then surely an economy in which the price level remains constant, or roughly so, must be one in which such surpluses and shortages aren’t occurring.  Right?

No, actually.  Despite everything I’ve said here, monetary order, instead of going hand-in-hand with a stable level of prices or rate of inflation, is sometimes best achieved by tolerating price level or inflation rate changes.  A paradox?  Not really.  But as this post is already too long, I must put off explaining why until next time.
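A parenthetical note for readers who like symbols: the earlier claim, that keeping the money supply in line with money demand is mainly a matter of keeping overall spending on a steady path, can be put compactly as follows (the notation is mine, and this is only a sketch of the reasoning, not anything from the original post). Suppose the public’s demand for money is proportional to its nominal spending,

\[
M^{d} = k \, P y ,
\]

where \(Py\) is total nominal spending and \(k\) is the fraction of that spending the public wishes to hold as money balances. Then

\[
M^{s} = M^{d} \quad\Longleftrightarrow\quad M^{s}\cdot\frac{1}{k} = P y ,
\]

so a regime that keeps nominal spending \(Py\) on a steady, predictable path is one in which the money stock \(M^{s}\) must expand and contract with \(k\), that is, with the demand to hold money, leaving the price level with less of the adjusting to do.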
______________________
[1] For details see Leland Yeager’s essay, “New Keynesians and Old Monetarists,” reprinted in The Fluttering Veil.

[Cross-posted from Alt-M.org]

The ridesharing companies Uber and Lyft have withdrawn from Austin, Texas after voters there failed to pass Proposition 1, which would have repealed regulations requiring Uber and Lyft to include fingerprints as part of their driver background checks. This is a disappointing result, especially given that fingerprinting is, despite its sexy portrayal in forensic TV shows, not a perfect background check process and needlessly burdens rideshare companies.

Austin’s ordinances require rideshare companies to implement fingerprints as part of their background check system by February 2017. Under the rules the fingerprints would be submitted to the Texas Department of Public Safety, which would then send the records to the Federal Bureau of Investigation (FBI). As I pointed out in my Cato paper on ridesharing safety, the FBI fingerprint data are hardly comprehensive: 

Some have faulted Uber and Lyft for not including fingerprint scans as part of their background checks. However, fingerprint databases do not contain a full case history of the individual being investigated, and in some instances an FBI fingerprint check may unfairly prevent a qualified taxicab driver applicant from being approved. The FBI fingerprint database relies on reporting from police departments, and other local sources, as well as other federal departments and is not a complete collection of fingerprints in the United States.

Critics of the FBI fingerprint database point to its incomplete or inaccurate information. In July 2013 the National Employment Law Project (NELP) released a study on the FBI’s employment background checks and found that “FBI records are routinely flawed.” Also, while law enforcement agencies are diligent when it comes to adding fingerprint data of arrested or detained persons to the federal data, they are “far less vigilant about submitting the follow-up information on the disposition or final outcome of the arrest.”

This lack of vigilance is significant because, as the NELP study goes on to point out, “About one-third of felony arrests never lead to a conviction. Furthermore, of those initially charged with a felony offense and later convicted, nearly 30 percent were convicted of a different offense than the one for which they were originally charged, often a lesser misdemeanor conviction. In addition to cases where individuals are initially overcharged and later convicted of lesser offenses, other cases are overturned on appeal, expunged, or otherwise resolved in favor of the worker without ever being reflected on the FBI rap sheet.”

A 2014 Wall Street Journal article reached similar conclusions:

Many people who have never faced charges, or have had charges dropped, find that a lingering arrest record can ruin their chance to secure employment, loans and housing. Even in cases of a mistaken arrest, the damaging documents aren’t automatically removed. In other instances, arrest information is forwarded to the FBI but not necessarily updated there when a case is thrown out locally. Only half of the records with the FBI have fully up-to-date information.

“There is a myth that if you are arrested and cleared that it has no impact,” says Paul Butler, professor of law at Georgetown Law. “It’s not like the arrest never happened.”

Relying on fingerprints to paint an accurate picture of a driver applicant’s criminal history is misguided. Uber and Lyft do carry out background checks via third parties that look at court records and sex offender registries in order to determine whether a driver applicant meets their criminal background requirements, which are often stricter than those that govern taxi driver applicants. In fact, Austin is one of the cities where Uber’s and Lyft’s safety requirements are more stringent than those imposed on taxi drivers.

As R Street Institute’s Josiah Neeley has explained, Austin doesn’t prohibit applicants who have been convicted of “a criminal homicide offense; fraud or theft; unauthorized use of a motor vehicle; prostitution or promotion of prostitution; sexual assault; sexual abuse or indecency; state or federal law regulating firearms; violence to a person; use, sale or possession of drugs; or driving while intoxicated” from working as taxi drivers, provided that they have “maintained a record of good conduct and steady employment since release.” 

In contrast, Uber and Lyft disqualify driver applicants if they have been convicted of a felony in the last seven years. Uber and Lyft also include features that make drivers and passengers safer than they would be in traditional taxis.

Rideshare transactions are cashless, which removes an incentive for thieves to target rideshare drivers. Taxi drivers, on the other hand, who make a living picking up strangers, can far more reliably be assumed to be carrying cash. 

In addition, both the rideshare driver and the passenger have profiles and ratings. The rating system gives drivers and passengers an incentive to be on their best behavior, and the profiles make it comparatively easy for investigators to determine who was at the scene of an alleged crime in a rideshare vehicle. It would be very foolish for an Uber passenger to try to get away with robbing an Uber driver, just as it would be unwise for an Uber driver to assault an Uber passenger. This, of course, doesn’t mean that rideshare background checks and safety features will deter all criminals, but they compare very favorably to the safety procedures in place for taxis.

Perhaps too many Austin residents have watched CSI: Crime Scene Investigation and exhibited something similar to the “CSI effect” in the voting booth last weekend. The FBI fingerprint database may sound like a sensible resource for background checks, but it is not up to date and could result in otherwise qualified applicants being denied the opportunity to drive for Uber or Lyft.
