
Tom Clougherty’s recent post on competition (or the lack thereof) in UK banking nicely highlights the problem posed by barriers to entry into the British banking industry. That there is indeed an entry problem should be obvious from the fact that Metro Bank, which opened in 2010, was the first new financial institution in the UK to get its own banking license in over 150 years!

What may still not be sufficiently appreciated is the extent to which entry into the British banking industry has been limited, not by the unavoidable challenges would-be entrants must face in attempting to compete head-on with established British banks, but by hurdles erected by British bank regulators.

Nothing better illustrates this fact than the story of Dave Fishwick and his struggle to establish a bank in Burnley, a run-down town in Lancashire, in the northwest of England.

Now that the Bank of Dave is up and running, Fishwick has become something of a celebrity here in the UK. A charismatic self-made businessman, Fishwick grew up in a small two-up-two-down terraced house in one of the poorer parts of the former mill town of Nelson, just outside of Burnley. He did badly at school, which he left at 16. He then began dealing in second-hand cars, eventually moving up from cars to vans and from vans to minibuses. The minibus business then grew to be the largest in the country, making Fishwick a rich man in the process.

Come the financial crisis, bank lending in Burnley dried up almost overnight. Local firms could no longer finance purchases of Fishwick’s vehicles. Soon his business was in trouble. To save it, he himself started lending to his customers. When, after six months of doing so, and despite hard times, not a single customer defaulted, it struck him that running a bank wouldn’t be too difficult.

So Fishwick rented and renovated the lower floor of a vacant £100-a-week shop, installed a cash machine and a safe, and (it’s said) hid the key to the safe behind a bottle of cherryade. He then put a sign above the window saying “Bank on Dave!” and, in September 2011, opened for business.

A little more than half a year later, Fishwick had formed some rather caustic opinions about established British banks, which he shares in this Guardian article. “The whole [banking] system,” he observed, “is rotten and it’s ruining the lives of good, hardworking people.”

Fishwick’s “bank” (to call it that, despite British regulators’ insistence that he not do so) resembles a brick-and-mortar peer-to-peer crowdfunding scheme. Purists might argue over whether Dave is really doing banking, as opposed to operating a building society or a credit union. To such arguments I would respond that Dave is doing what banks traditionally do: acting as a financial intermediary that takes in deposits and then lends them out. But what clinches it in my mind as banking, and differentiates it from a mutual, is that Dave guarantees lenders’ returns out of his own personal wealth, i.e., Dave acts as the residual claimant or shareholder. As it happens, Dave donates his profits to charity, but that is entirely his choice. Were he interested in making a personal profit from his bank, like a typical bank shareholder, he would be entitled to do so. In short, for all practical purposes — although not legal or regulatory ones, and I will return to this subject presently — Dave is running a bank.

Investors in Dave’s bank were offered interest of up to 5% AER (Annual Equivalent Rate) on deposits requiring a year’s notice of withdrawal. Borrowers — and Dave targeted small businesses for the most part — paid between 8.9% flat (17.4% APR) and 14.9% flat (29% APR), depending on their credit assessment. Potential borrowers were assessed primarily using Dave’s own judgment of their businesses’ viability and their personal character, and he then followed through with advice to help them run their businesses, all very old-fashioned. Dave’s lending policy was highly successful, too: after a couple of years, 99.5% of borrowers had repaid in full.
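
(A quick note on those rate quotes, since “flat” rates routinely confuse people: a flat rate charges interest on the original principal for the entire term, while an APR reflects interest on the declining balance as the loan is repaid, which is why the quoted APR is roughly double the flat rate. Here is a minimal Python sketch that recovers effective annual rates close to the quoted figures; the 12-month term and equal monthly repayments are my assumptions, not details from the post.)

```python
def flat_to_effective_annual(flat_rate, months=12):
    """Convert a 'flat' rate (interest charged on the original principal
    for the whole term) to an effective annual rate on the declining
    balance, by solving for the monthly internal rate of return."""
    # Equal monthly repayment on a 1-unit loan: principal plus flat
    # interest, spread evenly over the term.
    payment = (1 + flat_rate * months / 12) / months

    def pv(r):
        # Present value of the repayment stream at monthly rate r.
        return sum(payment / (1 + r) ** k for k in range(1, months + 1))

    lo, hi = 0.0, 1.0          # bracket the monthly rate
    for _ in range(60):        # bisection: pv() falls as r rises
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if pv(mid) > 1 else (lo, mid)
    monthly = (lo + hi) / 2
    return (1 + monthly) ** 12 - 1

print(f"{flat_to_effective_annual(0.089):.1%}")  # ~17.3%, vs. the quoted 17.4% APR
print(f"{flat_to_effective_annual(0.149):.1%}")  # ~29.9%, vs. the quoted 29% APR
```

The small gaps from the quoted APRs presumably reflect rounding or slightly different term assumptions in Dave’s actual loan contracts.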

However, Dave’s bank also had a £25,000-a-week lending target, and additional customers were put onto a waiting list. So unlike Northern Rock in September 2007, people were queuing to put their funds in rather than queuing to take them out — despite the fact that investors in the Bank of Dave had no protection under the Financial Services Compensation Scheme, the UK version of deposit insurance. Their only protection was their confidence in Dave and his business model, and their confidence that he would come through on his personal guarantee if it came to that. Moreover, since his bank is legally set up as a limited liability company, Dave’s guarantee is not legally binding: his word is his bond. But that was good enough for those queuing up to invest in his bank.

But Dave’s task of setting up a “bank” was far from easy, if not downright Herculean, between the 8,000 pages of forms he had to fill in, the lawyers’ bills, and the £10 million minimum reserve he was required to maintain to get his license.

According to The Guardian, as he went around the City, one expert after another poured cold water on his plans: “They told me that if I use the word deposit or say I’m a bank then I will go to prison.” He was told that he had ideas “above his station” and didn’t “have a chance.” Someone else told him that, in the past, “if you went to the right school and had the right parents you might be considered a fit and proper person to go into the banking industry … [but] there is no evidence you are.” He didn’t have the right accent either.

As an aside, the “fit and proper” test is a real test. As the latest FCA handbook on the subject explains, the key criteria are “honesty, integrity and reputation,” “competence and capability,” and “financial soundness.” Curiously, as applied by UK regulators in 2012, both James Crosby, whose aggressive risk-taking led to the collapse of HBOS, and Fred Goodwin, whose aggressive expansion led to the collapse of RBS, easily met the test — perhaps because they had the right backgrounds and moved in the right circles — and were even awarded knighthoods (though since cancelled) for their services to the banking industry. Yet Dave, who clearly had these “fit and proper” qualities, was deemed not to have them because of his unconventional background. The “fit and proper” test is a joke.

Not to be deterred, Dave mounted a publicity campaign that got a lot of media coverage and elicited a huge amount of public sympathy. His campaign culminated in a Parliamentary hearing in the early summer of 2012, ably chaired by my friend Steve Baker MP (Con., Wycombe). The hearing room was packed and well attended by MPs. “Curious how TV cameras draw in the MPs like moths to a lamp,” Steve said to me afterwards. Dave’s message resonated with the audience: I am only trying to help my community but the regulators won’t let me.

Dave’s story also appeared in July 2012 on Channel 4’s “Bank of Dave” documentary series, which chronicled the challenges he met at every turn. Reckoning he can’t do any worse than the banks that lost fifty billion quid, he sets off to see expert after expert in the City, who tell him that he hasn’t got a cat in hell’s chance. The system is heavily regulated to protect the public, he is told — this at a time when the LIBOR scandal was in full swing and feelings were still raw from the bank bailouts. Dave isn’t put off, however. “Sometimes it’s far easier just to go and do something than to get permission,” he says. He tries to get Richard Branson’s phone number to put him right on the banking system. He tries to get the Bank of England’s number by calling directory inquiries. He then gets through to the Bank switchboard. “Head of t’Bank of England,” he asks. “Thanks … Threadneedle Street? And that’s London?” And so on he goes, from one hilarious encounter to another. By the end of the program, Dave has got nowhere.

The documentary got rave reviews, and so did a book, Bank of Dave: How I took on the banks, and another documentary, Dave: Loan Ranger, shown in January 2014, in which he successfully took on the payday lenders, which is another story in itself.

The publicity campaign put a lot of pressure on the regulators, who buckled eventually: they agreed to talk to him and, as Dave acknowledged, they couldn’t have been more helpful guiding him through the regulations.

After all that effort, however, Dave never did get his banking license. Obtaining a consumer credit license to lend is not too difficult, but obtaining a deposit-taking license is an altogether different matter. You see, deposit-taking is highly regulated in order to protect the public and is also subject to the “fit and proper” test that Dave did not meet: wrong side of the tracks, old boy.

The upshot is that he can run a bank but he may not call it one, and he can take in deposits but he is not permitted to call them deposits. His bank is formally known as Burnley Savings and Loans Ltd and is regulated as a peer-to-peer lender. The best he can then do is put “Bank on Dave!” over his shop window and invite regulators to take the V sign as read.

So has Dave’s campaign prompted major deregulation to help other would-be Daves set up their own banks? Nope. As far as I can tell, the regulatory barriers to entry are just as high as they were before, so no joy there.

Can we conclude then that Dave’s campaign was a failure? On the contrary. Dave has achieved not just one but three major successes.

First, his lampooning of UK banks and their regulators provides a far more effective critique of the system than any academic study could ever achieve.

Second, Dave provides those who would follow him with the perfect how-to guide. Basically, don’t bother applying for a banking license, just set up your bank but watch your regulatory p’s and q’s so you don’t land in jail: don’t call your bank a bank and make sure that you call your deposits something else.

Last, the success of Dave’s bank suggests a natural reform that would open up entry: allow anyone to enter the market provided that they accept personal liability toward investors and make it clear to their depositors that their deposits are not covered by the deposit insurance scheme. Such a reform would do away with all the pointless pretense — and the success of the Bank of Dave proves that the business model is viable. And if it would work in Burnley, believe me, it would work anywhere.

A review of one of Dave’s documentaries says it all: “Up on a cloud somewhere, George Bailey is weeping tears of joy.” (Andrew Billen, The Times)

Not bad, Dave, not bad at all.


[Cross-posted from Alt-M.org]

Under the default constitutional rule, all federal officials are nominated by the president with the “advice and consent of the Senate.” But sometimes, when an unexpected vacancy arises, appointing and confirming a replacement can take a while. Congress knows this, and that’s why it has enacted—and frequently updated—the Vacancies Act. The latest version, called the Federal Vacancies Reform Act (FVRA), authorizes the president to bypass advice and consent by appointing temporary “acting officers” to fill certain vacancies.

But Congress is keenly aware that such a unilateral appointment power can be easily abused. That’s why acting officers serve under a strict 210-day time limit. It’s also why “a person may not serve as an acting officer” if that person is nominated to be the permanent officer (with an exception only for longtime first assistants).

Nonetheless, in January 2011, President Obama nominated Lafe Solomon to be the permanent general counsel of the National Labor Relations Board (NLRB) while he was serving—and continued to serve—as the acting general counsel. When Solomon later brought enforcement proceedings against an ambulance company, SW General, that company objected on the grounds that Solomon was no longer validly serving as acting general counsel once he was nominated for the permanent job. The U.S. Court of Appeals for the D.C. Circuit agreed based on a straightforward reading of the text of the FVRA, but the NLRB appealed to the Supreme Court.

Cato has filed an amicus brief supporting SW General and urging the Court to adopt a “clear statement” rule when interpreting statutes that let the president bypass advice and consent. The NLRB’s only textual argument is that a phrase in the preamble to the FVRA’s disqualification clause, “notwithstanding subsection (a)(1),” means that the disqualification for permanent nominees only applies to a subset of acting officers.

But as the D.C. Circuit has previously explained, “notwithstanding” means “in spite of,” not “for purpose of” or “with respect to.” Courts shouldn’t strain to read statutes contrary to their natural reading—especially ones that aren’t even ambiguous in the first place. Just the opposite: The Framers recognized that “advice and consent” would be a core check-and-balance mechanism. That’s why it is only through the express act of Congress that the appointment of particular officials can be vested in “the President alone.”

It’s clear that the Framers intended such waivers of advice and consent to be the exception to the rule, and that is indeed how the system has developed. When the Constitution sets such a default equilibrium between two branches of government, the Supreme Court has recognized that the burden must always be on those who would alter that equilibrium. Absent a clear statement of Congress, the constitutional presumption is that both the president and the Senate must assent to the appointment of every high-ranking official, whether serving permanently or for a limited tenure. Giving the benefit of the doubt to an unauthorized appointment like that of Lafe Solomon would turn this presumption on its head.

The Supreme Court will hear argument in NLRB v. SW General, Inc. on November 6. Just as it unanimously did with President Obama’s illegal “recess” appointments to the NLRB, the Court should reject his overreach here.

The police are supposed to protect and serve the public.  Most police procedural dramas on television–perennially among the most popular shows for decades–paint a picture of officers working diligently and honestly to catch the bad guys. Many children are taught that police officers are among the most trusted members of the community and that there is no need to fear them. But is that how police work in real life?

Not exactly.

Police officers are trained to extract information from people whether or not they are criminal suspects. Indeed, one of the more common tricks officers use is getting people to give up the right to refuse a search of their person or property. With consent, police officers can rummage through your pockets and cars–or even your homes–looking for a reason to arrest you. 

For this reason, talking to police when you don’t have to is often a bad idea. So many of the wrongfully convicted people in this country didn’t exercise their right to be silent and were put away because they didn’t think they had anything to hide. How wrong they were.

On Thursday, Cato is hosting an event with Prof. James Duane, the law professor whose lecture on why you should NEVER talk to the police went viral. He will discuss his book on self-incrimination and the criminal justice system, You Have the Right to Remain Innocent. The book is engaging, informative, and easy to read. Cato adjunct Randy Barnett of Georgetown University Law Center will comment on the book, and the event will be moderated by our own Tim Lynch.

Copies of the book will be sold at the event. You can register for the free event and lunch here. You can join the discussion online using the Twitter hashtag #6ARights. 

The Guardian has a story out today outlining–to the extent that the Clinton campaign would do so–what the ex-Secretary of State would do vis-à-vis national security policy if she becomes the next occupant of the Oval Office. For those concerned with our out-of-control, post-9/11 Surveillance State, these three paragraphs should give you pause:

Domestically, the “principles” of Clinton’s intelligence surge, according to senior campaign advisers, indicate a preference for targeted spying over bulk data collection, expanding local law enforcement’s access to intelligence and enlisting tech companies to aid in thwarting extremism. 

The campaign speaks of “balancing acts” between civil liberties and security, a departure from both liberal and conservative arguments that tend to diminish conflict between the two priorities. Asked to illustrate what Clinton means by “appropriate safeguards” that need to apply to intelligence collection in the US, the campaign holds out a 2015 reform that split the civil liberties community as a model for any new constraints on intelligence authorities. 

The USA Freedom Act, a compromise that constrained but did not entirely end bulk phone records collection, “strikes the right balance”, Rosenberger said. “So those kinds of principles and protections offer something of a guideline for where any new proposals she put forth would be likely to fall.”

In fact, as Senator Ted Cruz (R-TX) noted during the GOP primaries, the USA Freedom Act increased the amount of information on Americans the NSA and FBI are vacuuming up electronically. Apparently, Clinton is just fine with that completely ineffective, taxpayer money-wasting, and constitutionally dubious mass surveillance program. 

And if you are a member of the Arab- or Muslim-American community, this paragraph from the Guardian story should send chills down your spine:

Now, Clinton and her advisers are studying whether and how law enforcement agencies ought to balance the privacy and security questions which arise: should agencies share information with each other on those preliminarily under terrorism suspicion, while attempting to avoid keeping such people under permanent investigation or alienating Muslim and other communities.

In fact, this kind of activity has been underway for months via the FBI’s notorious “Shared Responsibility Committees”–and the included non-disclosure agreement language in the SRC “participant agreement” letter is as odious as one could imagine. It follows the launch earlier this year of the FBI’s de facto anti-Muslim “Don’t Be A Puppet” website.

Clinton has spent much of the post-convention campaign season excoriating Trump for his anti-Muslim language and proposals. He richly deserves the criticism. But Trump at least appears to be honest about the kind of unconstitutional surveillance and political repression he would likely try to perpetrate against Arabs and Muslims, whether it’s targeting those who already live here or those who would like to come here to escape a war-torn Middle East. Clinton is telling Arab- and Muslim-Americans how our government should not be persecuting members of their community while endorsing federal surveillance and related programs that do precisely that.

George Will’s op-ed the other day argued that Congress should hurry up and fund an expansion of the Charleston, South Carolina, seaport. But his piece revealed why the federal government should reduce its intervention in the nation’s infrastructure, not increase it, as Clinton and Trump are proposing.

The Charleston seaport has become crucial to South Carolina’s economy. Will notes that “1 of every 11 South Carolina jobs — and $53 billion in economic output are directly or indirectly related to Charleston’s port.”

There is a problem, however. The Charleston seaport:

needs further dredging in order to handle more of the biggest ships, which is where Congress enters the picture: Unless it authorizes the project and appropriates the federal portion of the $509 million cost to augment South Carolina’s already committed $300 million, the project will be delayed a year. The deepening project is only 14 percent of the $2.2 billion South Carolina is investing in its port facilities and related access.

The biggest ships pay more than $1 million to transit the [Panama] canal; if they miss their transit time, their fee is doubled. Until the port is deepened, too few can be handled here simultaneously, and they can enter and leave the port only at high tide.

Right. It is crucial to South Carolina’s economy to expand the seaport right now, without delay. So one would think that state politicians and port-dependent businesses would be springing into action and funding the full port expansion themselves. But they aren’t, because they are waiting for federal subsidies. Federal intervention in the seaport industry is apparently slowing progress, not speeding it up.

Will says:

There is no controversy in Congress about this project. But unless Congress acts on it before the end of the year, the deepening will not be in the president’s 2018 budget and will be delayed, with radiating costs — inefficiencies and lost opportunities. This is a mundane matter of Congress managing its legislative traffic, moving consensus measures through deliberation to action. It will illustrate whether Congress can still efficiently provide public works to enhance private-sector efficiency.

I’m surprised that the astute and pro-market Will missed the obvious solution to the problem he laid out. The federal government is in deep gridlock, and probably will be for years to come. It cannot “efficiently provide public works,” and it rarely has in the past. There never was a golden era of federal efficiency. Army Corps of Engineers infrastructure investment, for example, has been pork barrel for more than a century. And today, we see similar investment-delay problems with numerous areas of federal infrastructure involvement, such as air traffic control.

The solution to the inefficiency that Will rightly criticizes is devolution of infrastructure spending and control out of Washington, optimally to the private sector. Margaret Thatcher privatized most British seaports, and Tony Blair privatized British air traffic control. Privatization is a good way to meet America’s infrastructure challenges as well. Charleston’s seaport is “booming” according to Will, and thus it should have no problem attracting private financing for expansion.

For more on privatization, see here.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

We came across a pair of interesting, but somewhat involved, reads this week on the interface of science and science policy when it comes to climate change. We’ll give you a little something to chew on from each one, but suggest that you ought to have a look at them at length to appreciate them in full.

First up is a piece, “The Limits of Knowledge and the Climate Change Debate” appearing in the Fall 2016 issue of the Cato Journal by Brian J. L. Berry, Jayshree Bihari, and Euel Elliott in which the authors examine the “increasingly contentious confrontation over the conduct of science, the question of what constitutes scientific certainty, and the connection between science and policymaking.”

Here’s an extended abstract:

As awareness of the uncertainties of global warming has trickled out, polling data suggests that the issue has fallen down the American public’s list of concerns. This has led some commentators to predict “the end of doom,” as Bailey (2015) puts it. In light of this, it seems odd to keep hearing that “the science is settled” and that there is little, if anything, more to be decided. The global warming community still asks us to believe that all of the complex causal mechanisms that drive climate change are fully known, or at least are known well enough that we, as a society, should be willing to commit ourselves to a particular, definitive and irreversible, course of action.

The problem is that we are confronted by ideologically polarized positions that prevent an honest debate in which each side acknowledges the good faith positions of the other. Too many researchers committed to the dominant climate science position are acting precisely in the manner that Kuhnian “normal science” dictates. The argument that humanity is rushing headlong toward a despoiled, resource-depleted world dominates the popular media and the scientific establishment, and reflects a commitment to the idea that climate change represents an existential or near-existential threat. But as Ellis (2013) says, “These claims demonstrate a profound misunderstanding of the ecology of human systems. The conditions that sustain humanity are not natural and never have been. Since prehistory, human populations have used technologies and engineered ecosystems to sustain populations well beyond the capabilities of unaltered natural ecosystems.”

The fundamental mistake that alarmists make is to assume that the natural ecosystem is at some level a closed system, and that there are therefore only fixed, finite resources to be exploited. Yet the last several millennia, and especially the last two hundred years, have been shaped by our ability—through an increased understanding of the world around us—to exploit at deeper and deeper levels the natural environment. Earth is a closed system only in a very narrow, physical sense; it is humanity’s ability to exploit that ecology to an almost infinite extent that is important and relevant. In other words, the critical variables of creativity and innovation are absent from alarmists’ consideration.

In that sense, there is a fundamental philosophical pessimism at work here—perhaps an expression of the much broader division between cultural pessimists and optimists in society as a whole. Both Deutsch (2011) and Ridley (2015b) view much of the history of civilization as being the struggle between those who view change through the optimistic lens of the ability of humanity to advance, to solve the problem that confronts it and to create a better world, and those who believe that we are at the mercy of forces beyond our control and that efforts to shape our destiny through science and technology are doomed to failure. Much of human history was under the control of the pessimists; it has only been in the last three hundred years that civilization has had an opportunity to reap the benefits of a rationally optimistic world view (see Ridley 2010).

Yet the current “debate” over climate change—which is really, in Ridley’s (2015a) terms, a “war” absent any real debate—has potentially done grave harm to this scientific enterprise. As Ridley documents, one researcher after another who has in any way challenged the climate orthodoxy has met with withering criticism of the sort that can end careers. We must now somehow return to actual scientific debate, rooted in Popperian epistemology, and in so doing try to reestablish a reasonably nonpolitical ideal for scientific investigation and discovery. Otherwise, the poisoned debate over climate change runs the risk of contaminating the entire scientific endeavor. 

The idea that the way climate change science is being conducted is proving detrimental to science itself seems to be becoming a common theme these days (see a new examination of the general topic by Paul Smaldino and Richard McElreath here, as well as our reflections from last week).


Our second piece this week is an opinion paper by Oliver Geden in the publication Wiley Interdisciplinary Reviews: Climate Change, titled “The Paris Agreement and the inherent inconsistency of climate policymaking.” In it, Geden outlines how international climate negotiations are broken and argues that the role of climate scientists (especially those who want to act as climate policy advisors) is largely contradictory to what these (self-ordained) well-intentioned folks seem to think. While most policymakers assume consistency from talk to decision to action, in reality, Geden points out, inconsistency is the way of the world when addressing complex issues involving a “deliberately transformative agenda such as energy and climate policy.” This fundamental misunderstanding, or improper assumption, only furthers the ineptitude (foolhardiness?) of international climate negotiations.

Here’s an excerpt:

Until now, there has been no serious questioning of the intention to limit the temperature increase to 2 or even 1.5 °C. Not that many in the climate research community seem to grasp the political rationalities behind the setting of long-term policy targets. Even the mainstream policy discourse assumes consistency between talk, decisions, and actions. Accordingly, a decision on a certain climate target is presented and perceived as an act of deliberate choice, that will be followed up with the deployment of appropriate measures. In real-world policymaking, however, many decisions are viewed as independent organizational products, not necessarily requiring appropriate action. Despite the cultural norm of consistency, inconsistency is an inherent and inevitable feature of policymaking.

…Against this backdrop, the most challenging task ahead for policy-driven researchers and scientific advisors is that of critical self-reflection. In a world of inherently inconsistent climate policymaking, simply delivering the best available knowledge to policymakers might have counterintuitive effects. This means that those providing expertise cannot rely solely on their good intentions but also have to consider results. They must critically assess how their work is actually being interpreted and used in policymaking processes. This is not to say that researchers and scientific advisors should try to actively influence policymaking, as occasionally suggested, since that would almost inevitably lead to more inconsistency in experts’ knowledge production as a result of an increased politicization of climate research.

Climate researchers and scientific advisors should resist the temptation to act like political entrepreneurs peddling their advice, for example, by exaggerating how easy it is to transform the world economy. It is by no means their task to spread optimism about the future achievements of climate policy. Instead, to provide high-quality expertise, it is sufficient to critically analyze the risks and benefits of political efforts and contribute empirically sound—and sometimes unwelcome—perspectives to the global climate policy discourse.

This latter advice seems to have been lost on the 375 members of the National Academy of Sciences who this week were signatories (aka “Responsible Scientists”) of an open letter expressing their “concern” that pulling out of the Paris Accord (as advocated by the “Republican nominee for President”) “would make it far more difficult to develop effective global strategies for mitigating and adapting to climate change. The consequences of opting out of the global community would be severe and long-lasting – for our planet’s climate and for the international credibility of the United States.”

Sure, whatever you say.

Members of the public should be able to access the body camera footage related to Tuesday’s police-involved shooting that left Keith Scott dead and prompted violent protests in Charlotte, North Carolina. But we shouldn’t be under any illusion that everyone who watches the footage will arrive at the same opinion about the police officer’s behavior. Two people can watch the same video and come to different moral conclusions. A study on video footage that proved instrumental in a Supreme Court case helps illustrate this fact.

In Scott v. Harris (2007) the Supreme Court considered whether a police officer (Scott) had violated the Fourth Amendment when he deliberately ran Harris’ car off the road during a high-speed chase, which resulted in Harris being left a quadriplegic. An 8-1 majority found that, “a police officer’s attempt to terminate a dangerous high-speed car chase that threatens the lives of innocent bystanders does not violate the Fourth Amendment, even when it places the fleeing motorist at risk of serious injury or death.”

Dash camera footage played an important role in the Court’s deliberations. In fact, Scott v. Harris has been called the Court’s first “multimedia cyber-opinion,” with Justice Scalia citing the URL to the dash camera video in the opinion, noting, “We are happy to allow the videotape to speak for itself.”

Scalia described what the dash camera video shows as follows:

There we see respondent’s vehicle racing down narrow, two-lane roads in the dead of night at speeds that are shockingly fast. We see it swerve around more than a dozen other cars, cross the double-yellow line, and force cars traveling in both directions to their respective shoulders to avoid being hit. We see it run multiple red lights and travel for considerable periods of time in the occasional center left-turn-only lane, chased by numerous police cars forced to engage in the same hazardous maneuvers just to keep up. Far from being the cautious and controlled driver the lower court depicts, what we see on the video more closely resembles a Hollywood-style car chase of the most frightening sort, placing police officers and innocent bystanders alike at great risk of serious injury.

During oral argument Justice Alito also noted the footage, prompting this exchange:

Justice Alito: […] I looked at the videotape on this. It seemed to me that he created a tremendous risk of drivers on that road.  Is that an unreasonable way of looking at the – at this tape?

Justice Scalia: He created the scariest chase I ever saw since “The French Connection.”            


Justice Scalia: It is frightening.

Yet Justice Stevens, the sole dissenter in the case, rejected the “Hollywood-style car chase” description and came to a different view:

At no point during the chase did respondent pull into the opposite lane other than to pass a car in front of him; he did the latter no more than five times and, on most of those occasions, used his turn signal. On none of these occasions was there a car traveling in the opposite direction. In fact, at one point, when respondent found himself behind a car in his own lane and there were cars traveling in the other direction, he slowed and waited for the cars traveling in the other direction to pass before overtaking the car in front of him while using his turn signal to do so. This is hardly the stuff of Hollywood. To the contrary, the video does not reveal any incidents that could even be remotely characterized as “close calls.”       

Law professors at Yale, Temple, and George Washington University showed the Scott v. Harris video to a diverse sample of 1,350 Americans, asking them a range of questions about the chase. They found that while a majority did agree with the Court’s ruling, some groups sided with the driver more strongly than others:

Our subjects didn’t see things eye to eye. A fairly substantial majority did interpret the facts the way the Court did. But members of various subcommunities did not. African Americans, low-income workers, and residents of the Northeast, for example, tended to form more pro-plaintiff views of the facts than did the Court. So did individuals who characterized themselves as liberals and Democrats.

A table from the study showing the variety of opinions on the chase by demographic is below:

Clearly, it’s possible for two people to see the same video footage and come to different conclusions. While it’s important for body camera footage of deadly police encounters to be made public we shouldn’t be under the impression that everyone will interpret the footage the same way. Nonetheless, body camera footage will make it easier to show where people think the line between reasonable and unreasonable use of force should be drawn.

If, as expected, Congress passes a continuing resolution in the coming weeks to fund the government into December, take note of how neatly our elected officials are side-stepping responsibility for government spending. The votes that should have come in the summer, ahead of the election, giving them some electoral salience, will happen in December, after you’ve made your “choice.”

But let’s home in on another way that the failed appropriations process undercuts fiscal rectitude and freedom. A “CR” will almost certainly continue funding for implementation of the REAL ID Act, the federal national ID program.

From 2008 to 2011, direct funding for REAL ID was included in the DHS appropriations bills, typically at the level of $50 million per fiscal year. That process was evidently too transparent, so from 2011 on appropriators have folded REAL ID funding into the “State Homeland Security Grant Program” (SHSGP). That’s a $400 million discretionary fund. Combining the SHSGP with other funds, there’s a nearly $700 million pool of money for DHS to tap into in order to build a national ID.

REAL ID is a national ID system, despite its advocates’ consistent denials. Passed in 2005, the REAL ID Act is designed to coerce states into adopting uniform federal standards for driver’s licenses and non-driver IDs. (Oklahoma is a current battleground. To pressure the state legislature, Department of Homeland Security bureaucrats are threatening to refuse to accept the state’s driver’s licenses at military bases.) Compliance also requires states to share drivers’ personal data and copies of their digitally scanned documents with departments of motor vehicles across the country through a nationwide data-sharing system.

If fully implemented, REAL ID would be a de facto national ID card administered by states for DHS. The back-end database system the law requires would expose data about drivers and copies of basic documents, such as birth certificates and Social Security cards, to hacking risks and access by corrupt DMV employees anywhere in the country. Based on recent hacking scandals in Louisiana and elsewhere, the risk is real—and Congress will soon vote to continue funding it.

An important investigation by Charles Seife in Scientific American looks at how scientific newsmakers – in this case the U.S. Food and Drug Administration (FDA) – use “close-hold embargoes” to manipulate news coverage on breaking stories. Embargoes in themselves are a common enough practice in journalism; the special feature of a “close-hold” embargo is that it conditions a reporter’s access to a forthcoming story on not seeking comment from outside, that is to say independent or adversary, sources. 

The result of this kind of embargo, critics say, is to turn reporters into stenographers by ensuring that no expert outside perspective contrary to the newsmaker’s makes it into the crucial first round of coverage. And the FDA uses the technique to go further, according to Seife: it “cultivates a coterie of journalists whom it keeps in line with threats.” In fact, it even “deceives” disfavored major news organizations like Fox News “with half-truths to handicap them in their pursuit of a story.” 

The FDA has used this means of forestalling informed critical reaction on major, controversial regulations such as the recent “deeming” rule governing e-cigarettes and vaping. It also used the same technique in unveiling a major public health ad campaign – taking measures, as you might put it, to shape opinion about its shaping of opinion. An FDA official even upbraided a New York Times reporter who, unlike her colleagues, noted the close-hold embargo in her report. The agency resented its news-shaping methods becoming public. 

The whole article is a case study in how government-as-newsmaker–and by no means just the Food and Drug Administration–can get the coverage it wants.

The Cato Institute is a 501(c)(3)—a nonprofit organization. Of course, as an employee I get paid more than my job costs me—I make what you might call “profit”—but because of the tax designation of my employer, I could be getting big forgiveness on any federal student loans I might have. Indeed, a new, quick-read report from the Brookings Institution shows that someone could potentially get all of their graduate schooling covered for free through the federal Public Service Loan Forgiveness (PSLF) program, which, by the way, is expected to cost the American taxpayer a lot more than originally anticipated.

The general way PSLF operates is this: if you work for government, a 501(c)(3) organization, or some other qualifying entity like a public interest law firm, you can get the remainder of your federal student loans forgiven after 10 years of regular payments. Sound great? Well, don’t order yet! Those payments are also controlled, capped at 10 percent of income above 150 percent of the poverty line. So a single person would pay nothing on income below $17,820, and 10 percent on income above that. And it doesn’t matter if you get paid more than your job-description doppelganger in a for-profit venture—as long as you work for a “nonprofit,” you qualify for PSLF.
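
To make the arithmetic concrete, here is a minimal Python sketch of that payment rule. The $11,880 single-person poverty guideline is the 2016 figure implied by the post’s $17,820 threshold (1.5 × $11,880); the real program has more moving parts, so treat this as an illustration only.

```python
POVERTY_GUIDELINE_SINGLE = 11_880   # 2016 figure implied by the $17,820 threshold

def annual_pslf_payment(agi, poverty_line=POVERTY_GUIDELINE_SINGLE):
    """Income-driven payment as described above: 10 percent of income
    above 150 percent of the poverty line, never less than zero."""
    discretionary_income = max(0.0, agi - 1.5 * poverty_line)
    return 0.10 * discretionary_income

for agi in (15_000, 17_820, 30_000, 60_000):
    print(f"AGI ${agi:>6,}: ${annual_pslf_payment(agi):>6,.0f} per year")
# AGI $15,000: $     0 per year
# AGI $17,820: $     0 per year
# AGI $30,000: $ 1,218 per year
# AGI $60,000: $ 4,218 per year
```

Whatever balance remains after 10 years of such payments is forgiven, which is how a borrower with a modest salary and a large graduate-school balance can come out far ahead.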

The Brookings report describes how someone could essentially get a graduate degree for free through PSLF as long as he had substantial—but not huge—undergraduate debt and worked in a relatively low-paid field. Of course, many people will want to earn more than low pay, but PSLF furnishes strong incentives to stick with a low-paying job for a while or, more likely, take on much bigger debt and all the nice-to-have college stuff that goes with big college revenue.

Go ahead, future Jack McCoy, take that dip in the lazy river!

Of course, this is not free to taxpayers, many of whom have not gone to college, or may work in struggling for-profit businesses, or may even have thought the right thing to do was to get an inexpensive—and frill free—education. But according to the report, their PSLF bill is rising as enrollment in the program is much higher than anticipated, and nearly one-third of enrollees have debt exceeding $100,000. The report doesn’t give estimated total costs because those are very hard to predict, but estimates of what would be saved with controls such as capping forgivable amounts have risen by more than 2000 percent just from 2014 to 2016! The figures are in the billions of dollars.

There is a strong argument, of course, that there is nothing more noble about working for government, or a nonprofit hospital, or even a think tank, than owning a neighborhood shoe store, or being an accountant at Apple, or risking all you have on a new, entrepreneurial venture, all of which seek to offer things of value to other people. Heck, it is the production of goods and services for profit that gives us the “excess” wealth that enables us to pay for government and all its programs. But few employees, regardless for whom they work, are losing money on their jobs, and many—see, for instance, federal workers—make big profits from their nonprofit jobs not just financially, but also with lots of vacation time, or job security, or simply doing something fun every day.

We’re all working for profit. Why should we be treated—especially given big costs and unintended consequences—differently just because of our employers’ tax designation?

That’s the provocative title of my new essay in National Affairs, out this week. I’m mostly addressing conservatives who believe that judges ought to be “restrained,” in contradistinction to the “liberal judicial activism” of the Supreme Court in the 1960s and ’70s. It’s puzzling that the response would be a bias toward inaction, toward judges sitting on their hands, when it’s precisely this deference to the political branches that allowed progressives to rewrite the Constitution during the New Deal. As I explain:

Under the founders’ Constitution, under which the country lived for its first 150 years, the Supreme Court hardly ever had to strike down a law. The Congressional Record of the 18th and 19th centuries shows a Congress discussing whether legislation was constitutional much more than whether it was a good idea. Debates focused on whether something was genuinely for the general welfare or whether it served only a particular state or locality. “Do we have the power to do this?” was the central issue with any aspect of public policy… .

But deferentialist judges played their part in changing all that. The idea that the general welfare clause says that the government can essentially regulate any issue as long as the legislation fits someone’s conception of what’s good — meaning, as it’s understood by the majority party in Congress — emerged in the Progressive Era and was soon judicially codified. After 1937’s so-called “switch in time that saved nine,” when the Supreme Court began approving grandiose legislation of the sort it had previously rejected, no federal legislation would be struck down for exceeding Congress’s Article I powers until 1995. The New Deal Court is the one that politicized the Constitution, and therefore too the confirmation process, by laying the foundation for judicial mischief of every stripe — be it letting laws sail through that should be struck down or striking down laws that should be upheld.

And it’s this unholy alliance of liberal activism and conservative passivism – both the progeny of the Progressive Era – that leads to rulings like NFIB v. Sebelius, where Chief Justice John Roberts rewrote the Affordable Care Act in order to avoid having to strike it down as unconstitutional. I use NFIB as a salient recent case study of the ills of judicial restraint, including a provocative vignette on how Roberts begat Donald Trump:

Roberts essentially told future Donald Trump supporters not to bother the courts with important issues, that if you want to beat Obama you have to get your own strongman — complete with pen, phone, and contempt for the Constitution. So they did, bypassing several flavors of constitutional conservative in favor of a populism that knows nothing but “winning.” …

Constitutional conservatism simply couldn’t survive this brand of judicial conservatism. The genteel Roberts and the vulgar Trump thus seem to have one thing in common: a belief that judges should stop striking down laws and let political majorities rule, individual liberty be damned.

It’s fortuitous that my piece came out just as Cato Unbound is featuring a symposium on “judicial engagement.” The point is that judges should judge – we pay them for making those hard balls-and-strikes calls, as Roberts described at his confirmation hearings – and then we can debate their interpretive theories rather than whether they’re activist or restrained. Read the whole thing.

The National Academies of Sciences, Engineering and Medicine released a major new report on the fiscal and economic impacts of immigration on the United States yesterday. The report is being heralded by all sides of the immigration debate as the most important collection of research on this issue. This reception could be due to the Academies’ meticulously avoiding any policy implications from their research, allowing policy wonks to draw their own conclusions. Here are my top four policy implications of the new research:

1) Dramatically expanded high skilled immigration would improve federal and state budgets, while spurring economic growth. The fiscal and economic benefits of high skilled immigration are tremendous. The net value to the federal budget is between $210,000 and $503,000 for each immigrant with a bachelor’s degree over their lifetime (the full chart below highlights the overall impact). The sections on immigrant entrepreneurship and innovation are also universally positive. “High-skilled immigrants raise patenting per capita, which is likely to boost productivity and per capita economic growth,” they conclude (p. 205).

Exempting spouses and children of legal immigrants, as Congress intended, would double the flow of high skilled immigrants, allowing the United States to capture these benefits.

2) Legalization could hasten assimilation. One conclusion of the report is that wage and language assimilation is lower among the 1995-1999 cohort of immigrants than among the 1975-1979 cohort. The rise of illegal immigration likely explains much of this difference. More than one in four immigrants today is illegally present in the United States. As Douglas Massey has shown, documented and undocumented immigrants had roughly the same wages until the 1986 law banning employment of undocumented immigrants, which depressed the wages of undocumented immigrants. Legalization would reverse this.

Moreover, other studies have shown that immigrants who are legalized rapidly increase their earnings and invest in skills, including language acquisition. A legalization program that specifically required language classes, education, and workforce participation while restricting welfare, as the 2013 Senate-passed bill did, would further enhance the gains from legalization.

3) A large guest worker program can mitigate the negative fiscal impacts of low-skilled immigration. The most negative finding in the report is that the lowest skilled immigrants have negative fiscal impacts, but those impacts are entirely driven by costs in childhood and retirement, as the figure below from the report shows (p. 331). A large guest worker program that allowed low-skilled immigrants with less than a high school degree to enter during their prime years and retire in their home country would be a strong fiscal gain for the United States.

4) Governments should strengthen the wall around the welfare state. The positive fiscal gains from immigration could be improved by limiting immigrants’ access to benefits. As I have shown before, immigrants overall did very well after benefits were partially restricted in 1996, and my colleagues have detailed a number of ways that these barriers could be reinforced. One particular insight of the report is that most of the welfare usage comes after retirement, so that should be a focus of reform.

There are many other implications of this report, but these four are enough for Congress to get started on.

On Tuesday, President Obama delivered a short address to the Leaders Summit on Refugees at the United Nations. He went out of his way to praise the Mexican government, stating: “Mexico … is absorbing a great number of refugees from Central America.”

In reality, the Mexican government has done very little to absorb refugees. From 2013 to 2015, Mexico recognized only 720 refugees from Honduras, 721 from El Salvador, and 62 from Guatemala. During the same period, Mexico granted asylum to 129 Hondurans, 82 Salvadorans, and 17 Guatemalans. That’s a total of 1,731 refugees and asylum seekers from those countries. Only 83 of them were children.

In 2015 alone, Mexico deported 175,136 people to Honduras, Guatemala, and El Salvador – more than 100 times as many as were accepted by the humanitarian visa programs from 2013 to 2015.

Instead, President Obama should have thanked the Mexican government for enforcing American immigration laws in a way that shields his administration from criticism.  Mexico has improved its immigration laws in recent years but refugee and asylum laws are one area still in desperate need of reform.  Let’s not let flowery speeches obscure the reality.

Thanks to Bryan Johnson for bringing this to my attention and Guillermina Sutter Schneider for her translation of Mexican government documents. 

George Will writes in his column today about the importance of the Port of Charleston – and by extension, trade – to the economy of South Carolina. The recent completion of the 10-year project to widen the Panama Canal (accommodating more traffic and the passage of a new class of container ships with nearly triple the capacity of their immediate predecessors) has exposed a logistics snafu that could cost South Carolina’s economy billions of dollars: Charleston Harbor is too shallow to accommodate these much larger, “Post-Panamax” ships efficiently (only limited sections of the harbor are deep enough, and only during high tide).

According to the American Society of Civil Engineers, these vessels can lower shipping costs by 15-20 percent, but harbors need to be at least 47 feet deep to accommodate them. The U.S. Army Corps of Engineers reports that only seven of the 44 major U.S. Gulf Coast and Atlantic ports are “Post-Panamax ready.” American ports must be modernized if the United States is going to continue to succeed at attracting investment in manufacturing and if U.S. companies are going to compete successfully in the global economy.

As I wrote in the Wall Street Journal last year:

The absence of suitable harbors, especially in the fast-growing Southeast, means fewer infrastructure- and business-development projects to undergird regional growth. It also means that Post-Panamax ships will have to continue calling on West Coast ports, where their containers will be put on trucks and railcars to get products from Asia to the U.S. East and Midwest—a slower and more expensive process.

The problem can be traced to one major issue: funding. And that issue is made more complicated by another problem: protectionism. Most funding of infrastructure inevitably comes from federal and state budgets – taxpayers, who should have a voice in the debate about whether these infrastructure projects constitute wise public investments. But a couple of long-standing, though obscure, protectionist laws have conspired to reduce capacity in dredging services, ensuring that projects take twice as long and cost twice as much as they should.

As I wrote in the WSJ:

This capacity shortage is the result of the Foreign Dredge Act of 1906 and the Merchant Marine Act of 1920 (aka the Jones Act). These laws prohibit foreign-built, -chartered, or -operated dredgers from competing in the U.S. The result is a domestic dredging industry that is immune to competition, has little incentive to invest in new equipment, and cannot meet the growing demand for dredging projects at U.S. ports.

For the next few years, federal, state and local government spending on dredging is expected to be about $2 billion annually. That spending will be supplemented by investments from U.S. ports and their private terminal partners to the tune of $9 billion a year to build and upgrade harbors, docks, terminals, connecting roads and rail, and storage facilities, as well as to purchase cranes and other equipment. There would be a lot more of these job-creating investments if European dredging companies were allowed to offer their services.

The Transatlantic trade talks offer a great opportunity to fix this problem. The best dredging companies in the world are European, mainly from the low-lying countries of Belgium and the Netherlands, where mastery of marine engineering projects has been developed over the centuries.

Industry analysts at Samuels International Associates estimate that European dredgers could save U.S. taxpayers $1 billion a year on current projects, and enable more projects to be completed more quickly. The European Dredging Association boasts that its member companies win 90% of the world’s projects that are open to foreign competition.

In a global economy where capital is mobile, workforce skills, the cost of regulation, taxes, energy costs, proximity to suppliers and customers and dozens of other criteria factor into where a company will invest. And for companies with transnational supply chains, transportation costs are crucial considerations.

Today the U.S. is falling behind…

Over at Café Hayek today, Don Boudreaux assesses Will’s piece and offers an excellent analogy between administrative protectionism (tariffs and the like) and physical protectionism (harbor disrepair), which reminded me of this masterful passage from Frédéric Bastiat equating tariffs and physical impediments to trade with sweeping brilliance and simplicity:

Between Paris and Brussels obstacles of many kinds exist. First of all, there is distance, which entails loss of time, and we must either submit to this ourselves, or pay another to submit to it. Then come rivers, marshes, accidents, bad roads, which are so many difficulties to be surmounted. We succeed in building bridges, in forming roads, and making them smoother by pavements, iron rails, etc. But all this is costly, and the commodity must be made to bear the cost. Then there are robbers who infest the roads, and a body of police must be kept up, etc. Now, among these obstacles there is one which we have ourselves set up, and at no little cost, too, between Brussels and Paris. There are men who lie in ambuscade along the frontier, armed to the teeth, and whose business it is to throw difficulties in the way of transporting merchandise from the one country to the other. They are called Customhouse officers, and they act in precisely the same way as ruts and bad roads.

A recent police-involved shooting in Charlotte, North Carolina, helps illustrate the importance of body cameras and why North Carolina’s body camera law is misguided and unhelpful. Last night, Governor Pat McCrory declared a state of emergency following protests over the shooting of Keith Scott, who was shot and killed by an officer on Tuesday. The protests have left one citizen on life support and numerous police officers injured. The National Guard has been deployed. Making footage of the shooting publicly available would show that the Charlotte-Mecklenburg Police Department is dedicated to accountability and transparency while providing Charlotte residents with valuable information about the police who are tasked with protecting their rights.

Although the officer who shot Scott was not wearing a body camera, three officers at the scene were. There are concerns associated with making this body camera footage available to the public. But in my Cato Policy Analysis “Watching the Watchmen” I outline policies that I think balance citizens’ privacy with the need to increase accountability and transparency.

Many routine police interactions with citizens present significant privacy concerns. What about when police talk to informants, children, or victims of sexual assault? What if a citizen is naked, or is the victim of a traffic accident? What about footage that shows someone’s bedroom or living room? Police regularly interact with people experiencing one of the worst days of their lives, and it would be irresponsible to think that a desire to increase police accountability outweighs these privacy concerns.

Nonetheless, the Scott shooting is an example of the kind of police encounter that presents few privacy concerns. Scott was outside, in a parking lot. He and the responding officers did not have a reasonable expectation of privacy. The shooting, like the Walter Scott shooting, could have been filmed by passersby. Nothing in the reporting I have seen suggests that Scott was naked, intoxicated, or blurting out confidential information. According to Charlotte-Mecklenburg Police Chief Kerr Putney, Scott refused repeated demands to put down a gun. Scott’s family claims he was reading a book in his vehicle.

A North Carolina law that takes effect next week heavily restricts access to body camera footage, requiring members of the public to obtain a court order before accessing video. Although the law is not yet in force, Chief Putney has cited it in discussing his decision not to release the footage. However, he has stated that Scott’s family will be able to view the relevant footage, which he says does not show Scott definitively pointing a gun at anyone.

Footage of the Keith Scott shooting is not the kind of footage sometimes mentioned in discussions about which body camera footage should be exempt from public release requests. Scott was outside, and his shooting is clearly of interest to the public. As such, footage of the shooting should be released.

Yesterday, Hillary announced her latest policy prescription to increase low-cost housing. Don’t hold your breath: it’s anything but original. The basic prescription is simply to double down on tax subsidies for housing developers.

To that end, Hillary proposes enlarging the Low Income Housing Tax Credit (LIHTC) program and shifting the tax burden from housing developers and financial institutions back to taxpayers.

Here are a few reasons she should reconsider:

  1. The Low Income Housing Tax Credit program (hereafter “the subsidy”) crowds out market-provided low-cost housing. That means taxpayers are paying for low-cost housing that the market would otherwise provide at no cost to them.
  2. The IRS has proven entirely inept in its role as administrator of the subsidy. This is not a controversial point (the Government Accountability Office agrees).
  3. The subsidy has a highly fragmented, complex system of delivery, which means it is inefficient, and by extension, expensive.
  4. As a consequence, the subsidy doesn’t even stack up well against comparable housing subsidies: research finds it to be 19-44% more expensive than the alternatives.
  5. To make matters worse, the subsidy is often not viable on its own. Forty percent or more of housing units receiving it end up drawing on other subsidies as well.
  6. The subsidy is a tax expenditure and as such does not appear as an outlay on the federal budget. This means that Congress never has to confront any of the problems noted to this point.

Still unconvinced? Here are a few more reasons why expansions of the Low Income Housing Tax Credit program should be opposed.

In yesterday’s Washington Post, a headline proclaimed: “Saudi Arabia is Facing Unprecedented Scrutiny from Congress.” The article focused on a recently defeated Senate bill which sought to express disapproval of a pending $1.15 billion arms sale to Saudi Arabia. Unfortunately, though the presence of a genuine debate on U.S. support for Saudi Arabia – and the ongoing war in Yemen – is a good sign, Congress has so far been unable to turn this debate into any meaningful action.  

Yesterday’s resolution, proposed by Kentucky Senator Rand Paul and Connecticut Senator Chris Murphy, would have been primarily symbolic. Indeed, support for the bill wasn’t really about impacting Saudi Arabia’s military capacity. As co-sponsor Sen. Al Franken noted, “the very fact that we are voting on it today sends a very important message to the kingdom of Saudi Arabia that we are watching your actions closely and that the United States is not going to turn a blind eye to the indiscriminate killing of men, women and children.” This message was intended as much for the White House as for the Saudi government, with supporters arguing that the Obama administration should rethink its logistical support for the war in Yemen.

Unfortunately, opponents of the measure carried the day, and the resolution was defeated 71-26. These senators mostly argued that the importance of supporting regional allies outweighed any problems. Yet in doing so, they sought to avoid debate on the many problems in today’s U.S.-Saudi relationship. In addition to the war in Yemen – which is in many ways directly detrimental to U.S. national security interests, destabilizing that country and allowing for the growth of extremist groups there – Saudi Arabia’s actions across the Middle East and its funding of fundamentalism around the world are often at odds with U.S. interests, even as the kingdom works closely with the United States on counterterror issues. As a recent New York Times article noted, in the world of violent jihadist extremism, the Saudis are too often “both the arsonists and the firefighters.”

Despite these problems, the growing debate over the U.S.-Saudi relationship in Congress has yielded few results. A previous arms bill proposed by Senators Murphy and Paul also failed to gain traction. That measure would have barred the sale of air-to-ground munitions to the Saudis until the President could certify that they were actively working against terrorist groups and seeking to avoid civilian casualties inside Yemen. Though it has the potential to actually slow the Saudi war effort and protect civilians, the bill has languished in committee since April.

Worse, the only concrete measure passed by Congress on this issue is counterproductive. The Justice Against Sponsors of Terrorism Act (JASTA) would lift various sovereign immunity protections, allowing the families of 9/11 victims to directly sue the Saudi government. Yet even with the release of the 9/11 report’s missing 28 pages, there is no evidence that the Saudi government - as opposed to individuals within the country - financed al Qaeda. The JASTA bill thus has few positive impacts, but creates a worrying precedent for the United States: allowing citizens to sue a foreign government implies that other states may let their citizens sue the United States over issues like drone strikes. A presidential veto is expected as a result, and many in Congress are having second thoughts about a veto override.

So despite the headlines, Congress has had a fairly limited impact on the U.S.-Saudi relationship. But the simple fact that debate is occurring on Capitol Hill is a positive sign. Perhaps it can convince the Saudi government to reconsider some of its destabilizing actions in the Middle East, particularly the horrible humanitarian toll of the conflict in Yemen. At the very least, an active debate in Congress can help to remind the White House that our interests and Saudi Arabia’s don’t always align.

In the latest issue of Cato Journal, I review Casey Mulligan’s book, Side Effects and Complications: The Economic Consequences of Health-Care Reform.

Some ACA supporters claim that, aside from a reduction in the number of uninsured, there is no evidence the ACA is having the effects Mulligan predicts. The responsible ones note that it is difficult to isolate the ACA’s effects, given that it was enacted at the nadir of the Great Recession, that anticipation and implementation of its provisions coincided with the recovery, and that administrative and congressional action have delayed implementation of many of its taxes on labor (the employer mandate, the Cadillac tax). There is ample evidence that, at least beneath the aggregate figures, employers and workers are responding to the ACA’s implicit taxes on labor…

Side Effects and Complications brings transparency to a law whose authors designed it to be opaque.

Have a look (pp. 734-739).

I was just about to treat myself to a little R&R last Friday when — wouldn’t you know it? — I received an email message from the Brookings Institution’s Hutchins Center. The message alerted me to a new Brookings Paper by former Minneapolis Fed President Narayana Kocherlakota. The paper’s thesis, according to Hutchins Center Director David Wessel’s summary, is that the Fed “was — and still is — trapped by adherence to rules.”

Having recently presided over a joint Mercatus-Cato conference on “Monetary Rules for a Post-Crisis World” in which every participant, whether favoring rules or not, took for granted that the Fed is a discretionary monetary authority if there ever was one, I naturally wondered how Professor Kocherlakota could claim otherwise. I also wondered whether the sponsors and supporters of the Fed Oversight Reform and Modernization (FORM) Act realize that they’ve been tilting at windmills, since the measure they’ve proposed would only require the FOMC to do what Kocherlakota says it’s been doing all along.

So, instead of making haste to my favorite watering hole, I spent my late Friday afternoon reading “Rules versus Discretion: A Reconsideration.” And a remarkable read it is, for it consists of nothing less than an attempt to champion the Fed’s command of unlimited discretionary powers by referring to its past misuse of what everyone has long assumed to be those very powers!

To pull off this seemingly impossible feat, Kocherlakota must show that, despite what others may think, the FOMC’s past mistakes, including those committed during and since the recent crisis, have been due, not to the mistaken actions of a discretionary FOMC, but to that body’s ironclad commitment to monetary rules, and to the Taylor Rule especially.

Those who have paid any attention to John Taylor’s own writings on the crisis and recovery will not be surprised to discover that his own response to Kocherlakota’s article is less than enthusiastic, to put it gently. As Taylor himself exposes many of the more egregious shortcomings of Kocherlakota’s paper, I’ll concentrate on others that Taylor doesn’t address.

A Fanciful Consensus

These start with Kocherlakota’s opening sentence, declaring that “Over the past forty years, a broad consensus has developed among academic macroeconomists that policymakers’ choices should closely track predetermined rules.” That sentence is followed by others referring to “the consensus that favors the use of rules over discretion in the making of monetary policy” and to the “conventional wisdom” favoring the same.

That such a broad consensus favoring rules exists is news to me; I suspect, moreover, that it will come as a surprise to many other monetary economists. For while it’s true that John Taylor himself claimed, in a passage cited by Kocherlakota, that a “substantial consensus” exists regarding the fact “that policy rules have major advantages over discretion,” Taylor wrote this in 1992, when both the Great Moderation and Taylor’s own research, not to mention the work of earlier monetarists, appeared to supply a strong prima-facie case for rules over discretion. To say that this strong case had as its counterpart a “broad consensus” favoring strict monetary rules in practice seems to me to be stretching things even with regard to that period. In any case it can hardly be supposed that the consensus that may have been gathering then has remained intact since!

Instead, as everyone knows, the crisis, whether for good reasons or bad ones, led to a great revival of “Keynesian” thinking, with its preference for discretionary tinkering. To suggest, as Kocherlakota does, that monetarist ideas — and a preference for monetary rules over discretion is essentially monetarist — have remained as firmly in the saddle throughout the last decade as they may have been in 1992 is to indulge in anachronism.

How does Kocherlakota manage to overlook all of this? He does so, in part, by confusing the analytical devices employed by most contemporary macroeconomists, including new-Keynesians, with the policy preferences of those same macroeconomists. Thus he observes that “Most academic papers in monetary economics treat policymakers as mere error terms on a pre-specified feedback rule” and that “Most modern central bank staffs model their policymaker bosses in exactly the same way.” These claims are valid enough in themselves. But they point, not to the policy preferences of the economists in question, but merely to the fact that in formal economic models every aspect of economic reality that’s represented at all is represented by one or more equations.

In the Kydland-Prescott model, for example, a discretionary monetary policy is represented by a desired future rate of inflation, where that rate depends in turn on current levels of various “state” variables; the rate is, to employ the phrase Kocherlakota himself employs in describing rule-based policy, “a fixed function of some publicly observable information.” Discretion consists, not of the absence of a policy function, but in the fact that an optimal policy is chosen in each period. (The rule for which Kydland and Prescott argue consists, in contrast, of having policymakers pick a low inflation rate and commit to stick to it come what may.) This example alone should suffice to make it perfectly clear, if it isn’t so already, that representing monetary policy with a formula, and hence with what might be regarded as a monetary rule of sorts, is hardly the same thing as favoring either the particular rule the formula represents, or monetary rules generally.
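
To make that distinction concrete, here is a minimal numerical sketch in the spirit of the Kydland-Prescott argument (a Barro-Gordon-style simplification of my own devising, not their exact model), in which the per-period discretionary optimum is itself a fixed function of publicly observable information:

```python
# Stylized illustration (my simplification, not Kydland and Prescott's
# exact model). Each period the policymaker minimizes
#     (pi - pi_star)**2 + lam * (y - y_target)**2
# where output is y = alpha * (pi - pi_expected).
alpha, lam, pi_star, y_target = 1.0, 0.5, 2.0, 1.0

def best_response(pi_e):
    # The discretionary optimum given expectations pi_e: note that it is
    # itself a fixed function of the observable "state".
    return (pi_star + lam * alpha * (alpha * pi_e + y_target)) / (1 + lam * alpha**2)

# In equilibrium, expectations catch up with the policy actually chosen.
pi = pi_star
for _ in range(100):
    pi = best_response(pi)

# Converges to pi_star + alpha * lam * y_target = 2.5: an inflation bias
# with no output gain, versus pi = pi_star = 2.0 under a binding commitment.
print(pi)
```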

Inputs aren’t Injunctions

Suppose that we nevertheless allow that most monetary economists and policy makers favor rules. Doing so certainly makes Kocherlakota’s claim that the Fed has been rule-bound all along appear more plausible. But it hardly suffices to establish that claim’s truth. How, then, does Kocherlakota do that? He does it, or attempts to do it, by misrepresenting the part that the Taylor Rule plays in the Fed’s deliberations, and by artful equivocation.

The misrepresentation consists of Kocherlakota’s confounding a mere input into the Fed’s post-1993 policymaking with a rule that the Fed was bound to obey. Starting not long after 1993, when Taylor published his now famous paper showing that, over the course of the preceding decade or so, the Fed behaved as if it had been following a simple feedback rule, the Fed began to actually employ versions of what had by then come to be known as the Taylor Rule to inform its policy decisions. In particular, board staff began supplying the FOMC with baseline inflation and unemployment rate forecasts based on the assumption that the Fed adhered to a Taylor Rule.
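
For readers who want the formula itself, here is a minimal sketch of the rule from Taylor’s 1993 paper (the staff’s working specifications varied over time, so treat the 2 percent equilibrium real rate, 2 percent inflation target, and 0.5 coefficients as the canonical textbook version, not the Fed’s exact inputs):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal funds rate, in percent, under Taylor's 1993 rule."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Example: 3% inflation and a +1% output gap prescribe a 6% funds rate.
print(taylor_rule(3.0, 1.0))  # 6.0
```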

It is to these forecasts or “projections” that Kocherlakota refers in claiming both that the Fed was “unwilling to deviate greatly from the recommendations of the Taylor Rule” and that its poor handling of the crisis and recovery were instances of the failure of that rule. As Taylor explains (for I can’t do any better), Kocherlakota’s proof consists of nothing save

an informal and judgmental comparison of the Fed staff’s model simulations and a survey of future interest rate predictions of FOMC members at two points in time (2009 and 2010). He observes that the Fed staff’s model simulations for future years were based on a Taylor rule, and FOMC participants were asked, “Does your view of the appropriate path for monetary policy [or interest rates in 2009] differ materially from that [or the interest rate in 2009] assumed by the staff.” However, a majority (20 out of 35) of the answers were “yes,” which hardly sounds like the Fed was following the Taylor rule. Moreover, these are future estimates of decisions not actual decisions, and the actual decisions turned out much different from forecast.

As for equivocation, Kocherlakota begins his paper by referring to Kydland and Prescott’s finding that (in Kocherlakota’s words) “to require monetary policymakers to follow a pre-determined rule” would enhance welfare (my emphasis). He thus understands a “monetary rule” to be, not merely a convenient rule-of-thumb, but a formula that must be followed, which is only proper, since that is the understanding that has been shared by all proponents of rules both before and since Kydland and Prescott’s famous contribution. But when it comes to establishing that the FOMC has been committed to the Taylor Rule all along, he speaks, not of the FOMC’s having had no choice but to adhere to that rule, but of its “unwillingness to deviate from” it, of its understanding that the rule is “a useful” or “key” “guide to policy,” and of its “reliance” upon it.

The plain truth is that the FOMC’s members have long been entirely free to make any decisions they like, including decisions that deviate substantially from the Taylor Rule owing to their consideration of “non-rulable information” — Kocherlakota’s term for the sort of information that formal rules can’t take into account. To the extent that they so deviated (and John Taylor himself insists that they deviated a great deal), they faced no sanctions of any kind — not even such desultory sanctions as the FORM Act would impose, were it to become law. What’s more, Kocherlakota himself understands that they were free to deviate as much as they liked, for he goes on to answer in the affirmative the question, “Could the FOMC Have Done Anything Differently?” What Kocherlakota apparently fails to appreciate is that an FOMC that could have done things differently is ipso facto one that was not genuinely “rule-bound.”

Theory and Practice

In light of all this, what merit is there to Kocherlakota’s formal demonstration, near the end of his paper, of the superiority of discretion over rules? Not much. For once one recognizes that, if the FOMC allowed itself to be guided by the Taylor Rule, it did so voluntarily, then one must conclude that its conduct was that of an essentially discretionary policy regime. It follows that, if Kocherlakota’s formal model of discretionary policy were reliable, it would predict that a discretionary Fed confronted by the same “environment” faced by the actual Fed would do just what the actual Fed did, including (perhaps) following a faulty monetary rule, rather than something wiser.

Suppose, on the other hand, that Kocherlakota’s model of discretion did predict that a legally discretionary FOMC might slavishly follow a severely flawed rule. What policy lesson could one draw from such a model, other than the lesson that unlimited monetary discretion is a bad thing, and that the only way out is to impose upon the FOMC a different and better monetary rule than the one the FOMC would voluntarily adopt were it left to its own devices?

To state the point differently, there are not two but three conceivable FOMCs to be considered in properly assessing the merits of discretion. There is, first of all, the actual FOMC which, according to Kocherlakota, followed a (bad) rule, though it did so of its own volition. Then there’s Kocherlakota’s ultra-discretionary FOMC, which uses discretion, not the way the FOMC actually used it, but to do just the right thing, or at least something a lot better than what the actual FOMC did. Finally, there is a genuinely rule-bound FOMC, where the rule may differ from one that the FOMC might voluntarily follow if it could. The third possibility is one that Kocherlakota altogether ignores. The omission matters: even if Kocherlakota’s ultra-discretionary Fed is the best of the three, that would count for something only if he told us how to make an already legally discretionary FOMC do what his ultra-discretionary FOMC does. Since he does nothing of the sort, his ultra-discretionary FOMC is a mere chimera.

If, on the other hand, we can identify a rule that does better than the FOMC’s favorite rule, supposing that it really has one, then we could really improve things by forcing the FOMC to follow that rule. Imaginary discretion beats both a bad monetary rule and actual discretion that depends on such a rule; but a better rule beats imaginary discretion, because a better rule is not merely something one can imagine, but a real policy alternative.

Kydland, Prescott, and Those Dead Guys

Finally, a word or two concerning Kocherlakota’s scholarship. Of the many general arguments favoring monetary rules over monetary discretion, he refers only to that of Kydland and Prescott, in which the authorities are modeled as being both equipped with all the pertinent information needed to wield discretion responsibly, and free from any inclination to abuse their powers. What’s remarkable about Kydland and Prescott is, not that by making these assumptions they were more faithful to reality than past advocates of monetary rules, who based their arguments on appeals to limited information (and monetary authorities’ limited forecasting powers especially) and the potential for abuse, but that despite assuming away the problems past rule advocates had emphasized, they were still able to make a powerful case for monetary rules!

A compelling case for discretion must, on the other hand, answer not only Kydland and Prescott’s argument, but also the less subtle but no less important arguments of Henry Simons, Milton Friedman, and Jacob Viner, among others. Despite his conclusion that “there are good reasons to believe that societies will achieve better outcomes if central banks are given complete discretion to pursue well-specified goals,” Kocherlakota never really does this. Instead, his demonstrations make only very limited allowances for those central-banker infirmities that caused early exponents of rules to plead for them in the first place. In particular, he allows that central bankers may suffer from an “inflation bias.” But he does not allow for the many other political as well as cognitive biases to which central bankers may be subject. More importantly, he does not allow for the very real possibilities that central bankers might respond to “non-rulable” information inappropriately, or that such information might be inaccurate or otherwise misleading.*

More egregious still is Kocherlakota’s failure to refer to any work by John Taylor save his 1993 paper. Since Kocherlakota comes within an ace of blaming Taylor for the fact that the U.S. economy has gone to seed, you’d think that he would at least acknowledge Taylor’s own rather different opinion on the matter. Instead he leaves his readers with the impression that Taylor himself believes that his rule remains the centerpiece of a “broad consensus” in which the Fed itself takes part. As Taylor points to some evidence to the contrary in his own reply to Kocherlakota, I’ll simply observe that, if Taylor believed that the Fed stuck to his rule in the years surrounding the subprime debacle, he wouldn’t have called his book on the Fed’s role in that debacle Getting Off Track.

In short, Kocherlakota’s attempt to treat the Fed’s failures as proof of the desirability of monetary discretion is as unsuccessful as it is bold. He might, after all, have spared himself the effort, had he only kept in mind an advantage of discretion that even its most determined opponents aren’t likely to deny, to wit: that it’s the better part of valor.


*Even so, Kocherlakota’s formal demonstration still favors a rule over discretion in the event that “the bias of the central bank exceeds the standard deviation of the central bank’s non-rulable information.”

[Cross-posted from Alt-M.org]

When writing a few days ago about the newly updated numbers from Economic Freedom of the World, I mentioned in passing that New Zealand deserves praise “for big reforms in the right direction.”

And when I say big reforms, this isn’t exaggeration or puffery.

Back in 1975, New Zealand’s score from EFW was only 5.60. To put that in perspective, Greece’s score today is 6.93 and France is at 7.30. In other words, New Zealand was a statist basket case 40 years ago, with a degree of economic liberty akin to where Ethiopia is today and below the scores we now see in economically unfree nations such as Ukraine and Pakistan.

But then policy began to move in the right direction; between 1985 and 1995 especially, the country became a Mecca for market-oriented reforms. The net result is that New Zealand’s score dramatically improved, and it is now comfortably ensconced in the top five for economic freedom, usually trailing only Hong Kong and Singapore.

To appreciate what’s happened in New Zealand, let’s look at excerpts from a 2004 speech by Maurice McTigue, who served in the New Zealand parliament and held several ministerial positions.

He starts with a description of the dire situation that existed prior to the big wave of reform.

New Zealand’s per capita income in the period prior to the late 1950s was right around number three in the world, behind the United States and Canada. But by 1984, its per capita income had sunk to 27th in the world, alongside Portugal and Turkey. Not only that, but our unemployment rate was 11.6 percent, we’d had 23 successive years of deficits (sometimes ranging as high as 40 percent of GDP), our debt had grown to 65 percent of GDP, and our credit ratings were continually being downgraded. Government spending was a full 44 percent of GDP, investment capital was exiting in huge quantities, and government controls and micromanagement were pervasive at every level of the economy. We had foreign exchange controls that meant I couldn’t buy a subscription to The Economist magazine without the permission of the Minister of Finance. I couldn’t buy shares in a foreign company without surrendering my citizenship. There were price controls on all goods and services, on all shops and on all service industries. There were wage controls and wage freezes. I couldn’t pay my employees more—or pay them bonuses—if I wanted to. There were import controls on the goods that I could bring into the country. There were massive levels of subsidies on industries in order to keep them viable. Young people were leaving in droves.

Maurice then discusses the various market-oriented reforms that took place, including spending restraint.

What’s especially impressive is that New Zealand dramatically shrank government bureaucracies.

When we started this process with the Department of Transportation, it had 5,600 employees. When we finished, it had 53. When we started with the Forest Service, it had 17,000 employees. When we finished, it had 17. When we applied it to the Ministry of Works, it had 28,000 employees. I used to be Minister of Works, and ended up being the only employee… if you say to me, “But you killed all those jobs!”—well, that’s just not true. The government stopped employing people in those jobs, but the need for the jobs didn’t disappear. I visited some of the forestry workers some months after they’d lost their government jobs, and they were quite happy. They told me that they were now earning about three times what they used to earn—on top of which, they were surprised to learn that they could do about 60 percent more than they used to!

And there was lots of privatization.

[W]e sold off telecommunications, airlines, irrigation schemes, computing services, government printing offices, insurance companies, banks, securities, mortgages, railways, bus services, hotels, shipping lines, agricultural advisory services, etc. In the main, when we sold those things off, their productivity went up and the cost of their services went down, translating into major gains for the economy. Furthermore, we decided that other agencies should be run as profit-making and tax-paying enterprises by government. For instance, the air traffic control system was made into a stand-alone company, given instructions that it had to make an acceptable rate of return and pay taxes, and told that it couldn’t get any investment capital from its owner (the government). We did that with about 35 agencies. Together, these used to cost us about one billion dollars per year; now they produced about one billion dollars per year in revenues and taxes.

Equally impressive, New Zealand got rid of all farm subsidies… and got excellent results.

[A]s we took government support away from industry, it was widely predicted that there would be a massive exodus of people. But that didn’t happen. To give you one example, we lost only about three-quarters of one percent of the farming enterprises—and these were people who shouldn’t have been farming in the first place. In addition, some predicted a major move towards corporate as opposed to family farming. But we’ve seen exactly the reverse. Corporate farming moved out and family farming expanded.

Maurice also has a great segment on education reform, which included school choice.

But since I’m a fiscal policy wonk, I want to highlight this excerpt on the tax reforms.

We lowered the high income tax rate from 66 to 33 percent, and set that flat rate for high-income earners. In addition, we brought the low end down from 38 to 19 percent, which became the flat rate for low-income earners. We then set a consumption tax rate of 10 percent and eliminated all other taxes—capital gains taxes, property taxes, etc. We carefully designed this system to produce exactly the same revenue as we were getting before and presented it to the public as a zero sum game. But what actually happened was that we received 20 percent more revenue than before. Why? We hadn’t allowed for the increase in voluntary compliance.

And I assume revenue also climbed because of Laffer Curve-type economic feedback. When more people hold jobs and earn higher incomes, the government gets a slice of that additional income.
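
To see the arithmetic behind that guess, consider a deliberately crude sketch with invented numbers (none of these figures are New Zealand data); everything hinges on how much the reported tax base expands when the rate falls:

```python
# Hypothetical illustration of a rate cut raising revenue via a larger
# reported tax base (compliance plus extra work effort). All numbers
# are invented for the example.
old_rate, new_rate = 0.66, 0.33
base = 100.0                      # income reported at the old rate
old_revenue = old_rate * base     # 66.0

# The assumption doing all the work: the lower rate induces 2.4 times
# as much reported income.
new_base = 2.4 * base
new_revenue = new_rate * new_base # 79.2, about 20% above 66.0

print(old_revenue, new_revenue)
```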

Let’s wrap this up with a look at what New Zealand has done to constrain the burden of government spending. If you review my table of Golden Rule success stories, you’ll see that the nation got great results with a five-year spending freeze in the early 1990s. Government shrank substantially as a share of GDP.

Then, for many years, the spending burden was relatively stable as a share of economic output, before then climbing when the recession hit at the end of the last decade.

But look at what’s happened since then. The New Zealand government has imposed genuine spending restraint, with outlays climbing by an average of 1.88 percent annually according to IMF data. And because that complies with my Golden Rule (meaning that government spending is growing slower than the private sector), the net result according to OECD data is that the burden of government spending is shrinking relative to the size of the economy’s productive sector.
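
The mechanics here are just compound growth. A minimal sketch, pairing the 1.88 percent outlay figure cited above with a nominal GDP growth rate and starting ratio that are purely illustrative assumptions:

```python
# If outlays grow more slowly than the economy, spending must shrink as
# a share of GDP. The 1.88% outlay growth is the IMF figure cited above;
# the 4% nominal GDP growth and 40% starting ratio are assumptions made
# for illustration, not New Zealand data.
spending, gdp = 40.0, 100.0
for year in range(10):
    spending *= 1.0188
    gdp *= 1.04

print(round(100 * spending / gdp, 1))  # ~32.6 percent of GDP after a decade
```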

P.S. For what it’s worth, the OECD and IMF use different methodologies when calculating the size of government in New Zealand (the IMF says the overall burden of spending is much smaller, closer to 30 percent of GDP). But regardless of which set of numbers is used, the trend line is still positive.

P.P.S. Speaking of statistical quirks, some readers have noticed that there are two sets of data in Economic Freedom of the World, so there are slightly different country scores when looking at chain-weighted data. There’s a boring methodological reason for this, but it doesn’t have any measurable impact when looking at trends for individual nations such as New Zealand.

P.P.P.S. Since the Kiwis in New Zealand are big rugby rivals with their cousins in Australia, one hopes New Zealand’s high score for economic freedom (3rd place) will motivate the Aussies (10th place) to engage in another wave of reform. Australia has some good policies, such as a private Social Security system, but it would become much more competitive if it lowered its punitive top income tax rate (nearly 50 percent!).