Cato Op-Eds

Individual Liberty, Free Markets, and Peace

As crude oil prices recently approached $68 a barrel, a Wall Street Journal writer concluded that “inflation fears got an added jolt this week as oil prices rose to a three-year high.”

Two other Wall Street Journal writers added that “If crude continues to move higher, it could begin to stifle economic growth.”  They suggested that “higher consumer prices for gasoline and other energy products act like a tax, while pushing inflation higher and increasing pressure on the Federal Reserve to raise interest rates more aggressively.”

Such anxieties about $70 oil are obviously overwrought. Crude prices were usually above $100 from March 2011 to September 2014, yet nobody then fretted that inflation fears would force the Fed to raise the fed funds rate.

But this does raise two very important issues: first, the role of soaring oil prices in the recession of 2008-2009; and second, the way the Federal Reserve has overreacted to surging oil prices by pushing up interest rates before and during oil-shock recessions and (in 2008) leaning against their fall after the recession was well under way.

In May 2009, economist James Hamilton of U.C. San Diego testified before the Joint Economic Committee.  He noted that, “Big increases in the price of oil that were associated with events such as the 1973-74 embargo by the Organization of Arab Petroleum Exporting Countries, the Iranian Revolution in 1978, the Iran-Iraq War in 1980, and the First Persian Gulf War in 1990 were each followed by global economic recessions. The price of oil doubled between June 2007 and June 2008, a bigger price increase than in any of those four earlier episodes.”  

Like every postwar recession except 1960, the “Great Recession” of 2008-09 was preceded by a spike in the price of crude oil.  West Texas crude soared from $54 at the start of 2007 to $145 by mid-July 2008.  Yet U.S. reporters and economists still write as though the Great Recession had nothing to do with a global energy shock but was instead a “financial crisis” that began with the collapse of an investment bank (Lehman Brothers) on September 15, 2008.  This is a stubborn myth.

In reality, the inability of unemployed homeowners to pay their mortgage bills, and the failure of investments tied to those mortgages, were secondary complications of a global energy shock which cut industrial production in Canada and Europe in 2008 before it did so in the U.S.  By the end of 2008, the contraction of real GDP “was twice as deep in Germany and Britain [as it was in the U.S.] and much worse in Japan and Sweden.”

Because energy is a key part of the cost of doing business, higher energy costs made production and distribution less profitable and thereby shrank the global economy’s supply.  Yet even as late as June 2008, as crude prices soared above $140, The New York Times and Washington Post were hysterical about illusory inflation – not recession.

Did the Fed also mistake a temporary oil price spike for a sustained rise in the overall trend of inflation?  I believe it did that in 2008 and even more obviously in prior incidents of a sudden surge in oil prices.

The big oil price spikes (and recessions) between 1973 and 1980 that Hamilton mentioned were clearly matched by huge spikes in the Fed-controlled interest rate on federal funds.  Oil prices and the fed funds rate were also rising before the 1991 and 2001 recessions. When crude rose from $34 to $74 from May 2004 to June 2006, the fed funds rate rose from 1 percent to 5.25 percent.  Once recessions were well underway the Fed always began to bring interest rates back down, but always (including 2008) too slowly.

On January 2, 2008, The Financial Times published my article, “Why I am Not Using the R-Word This Time.”  Citing James Hamilton, I wrote that “if the emphasis on oil prices in Prof Hamilton’s 1983 study is correct, the US economy is likely to slip into recession because of higher energy costs alone, regardless of what the Fed does. If Mr. Bernanke’s 1997 study is right, timely reductions in the Fed funds rate should avert such a recession.”  Once he became Fed chairman, unfortunately, Bernanke did not aggressively cut the funds rate in a timely manner – but instead tried hard to prop rates up.  As Cato’s George Selgin documented, “Between December 2007 and September 2008, the Fed sold over $300 billion in Treasury securities, withdrawing a like amount of reserves from the banking system, or just enough to make up for reserves it created through its emergency lending.”  One result was to keep the fed funds rate above 2 percent until September, when oil prices finally fell.  In October the Fed also began paying interest on bank reserves (above 1 percent until mid-December) to discourage bank lending.

Although an oil price of around $70 is only half as high as the 2008 peak, and lower than it was just a few years ago, we do have a lot of experience with sudden increases in oil prices that always ended in recession.  And we have a lot of experience with the Fed acting as though it were not focused on “core” inflation at all (i.e., excluding energy) but was unduly influenced by the misleading and ephemeral impact of oil price gyrations on headline inflation numbers.

So, the Wall Street Journal’s recent warning that “If crude continues to move higher, it could begin to stifle economic growth” would be likely only if crude moved a lot higher.  And the warning that a higher oil price must put “pressure on the Federal Reserve to raise interest rates more aggressively” would be likely only if the Fed has still not learned anything from one of its biggest and most frequently repeated mistakes.

When it comes to increasing police accountability and transparency it’s policy, not technology, that does the heavy lifting. Police body cameras, tools that are overwhelmingly popular among the public, are sometimes cited as a valuable resource for addressing police misconduct and secrecy. They can be, but only if the right policies are in place. Absent policies that balance privacy interests with the need to increase police accountability, body cameras are surveillance tools. The risk of body camera surveillance is especially pronounced at a time when a major body camera manufacturer is doing more work on artificial intelligence, a development that may result in the widespread use of police body cameras with real-time facial recognition capability.

Axon, the company that makes one of the most popular police body cameras, released a Law Enforcement Technology Report last year. That report outlined some of the technology that’s on the horizon: “Soon, you’ll be able to tell almost immediately if someone has an outstanding warrant against them, thanks to facial recognition technology.”

According to reporting by The Wall Street Journal, the merger of body camera and facial recognition technology is months rather than years away.

I’ve written on this blog before about why body cameras with facial recognition capability are a threat to civil liberties. I’m hardly alone in highlighting this threat. Axon’s leadership is clearly aware of the concerns raised by civil libertarians and has convened an AI Ethics Board. Yet it seems as if this board will have little if any impact on Axon’s development of technology that poses a significant risk to civil liberties.

An “Ethics Board” sounds like the kind of body a company that builds surveillance equipment and weapons should have. However, Axon’s AI Ethics Board lacks any kind of authority to ensure that the company’s products aren’t used unethically.

Yesterday, a coalition of civil rights groups wrote a letter to the Axon AI Ethics Board outlining their well-founded concerns. The letter calls for board members to assert themselves and oppose real-time facial recognition on body cameras, consult with community members with direct experience with the criminal justice system, limit sales to law enforcement agencies with appropriate body camera policies, and ensure that they have an oversight remit that covers all of Axon’s digital products.

Members of the AI Ethics Board, which comprises eight volunteer experts in civil liberties, AI, and criminal justice, do not currently have the authority to veto Axon products. A functional ethics board should be free to halt products or, at the very least, publish reviews of all Axon devices.

If Axon’s ethics board guaranteed that only departments with policies that increase accountability and transparency while also protecting civil liberties could buy Axon products, the company would sell fewer body cameras. Dozens of America’s largest and most prominent police departments fail to implement praiseworthy body camera policies. For example, an Upturn examination of 75 police department body camera policies found that the Baltimore Police Department is the only department with strict limits on body camera footage being analyzed with facial recognition software, and that not a single department requires officers to write a report before reviewing body camera footage related to any incident. That the coalition’s recommendations would give the AI Ethics Board the power to dramatically affect sales is one of the reasons Axon is unlikely to adhere to them.

In all likelihood, Axon will continue to sell products that can, if governed by poor policies, erode civil liberties. Although Axon is signaling that it’s concerned about the ethical implications of its products, it doesn’t look as if its ethics board will prevent the proliferation of body cameras that will become known as tools of surveillance, not police accountability. In order for body cameras to achieve their potential as tools that improve policing, it’s policymakers rather than private companies who will have to implement the necessary changes.

Paul Krugman’s column yesterday lamented Republican policy towards the poor. He has particular gripes with Ben Carson’s changes to housing subsidies, increased work requirements for those seeking food stamps, and waivers granted to states to enable new work requirements for Medicaid.

I’m not going to get into these specific policy changes here. But let’s take Krugman’s analysis of the changes and the motivation for them at face value, and pose a question: how robust is an anti-poverty agenda that depends so much on political and societal attitudes to the poor?

As I outlined in a recent blog, it’s a mistake to think of policy towards the poor as being merely about government transfers, services and benefits-in-kind. In fact, this focus on income and services has blinded the poverty debate to the truth that there are lots and lots of state, local and federal policies that increase the price of goods and services the poor spend a disproportionate amount on.

Zoning laws and urban growth boundaries raise house prices. Regulations make childcare more expensive. Sugar and milk programs and the ethanol mandate increase food costs. Tariffs on clothes and footwear have particularly regressive effects. Energy regulations which seek to subsidize renewables rather than being “technology neutral” can raise prices. CAFE standards, constraints on ride sharing, and gas taxes raise some transport prices too. Not to mention the broader effects of protectionism and occupational licensing in both raising prices and reducing efficiency across the economy.

Some of these policies affect families on a scale orders of magnitude greater than the changes Krugman is concerned about. Combined, they have a huge impact on many households. What’s more, most of these status quo interventions also make the economy less efficient, reducing market wages and, in the case of housing and childcare, deterring labor mobility along several dimensions. One cannot talk about a “war on the poor” without acknowledging these fronts and the armies which battle on them, not least because these bad policies in part drive significant demands for redistributive transfers in the first place.

In my view, it would be far more fruitful for liberals concerned with the well-being of the poor to focus on all these issues as part of a “first do no harm” poverty agenda. Why?

1. There’s evidence that fiscal transfers may have hit diminishing returns in terms of their role in poverty alleviation.

2. The fiscal environment is not conducive to huge new expenditures on programs, and evidence from other countries (not least Britain) suggests working age welfare is the first port of call for cuts when a fiscal crisis hits.

3. There are clear economic trade-offs where transfers are concerned. As this accompanying Twitter thread by Paul Krugman acknowledges, even extending the availability and generosity of transfers to more people weakens their incentive to earn more income.

4. And crucially for Krugman’s column, attitudes to redistribution are volatile, and support can be replaced by narratives about “moochers” or “welfare queens” relatively quickly.

In contrast, a pro-market agenda seeking to undo existing damaging regulations at the local, state and federal levels could reduce poverty and the demand for redistributive activity, would not undermine work incentives, and would be harder to undo given its dispersed nature. Those in favor of extensive redistribution should see this too: you do not have to believe existing anti-poverty programs have failed to acknowledge that they can have negative unintended consequences, that they hit diminishing returns, or that their effectiveness is undermined by bad policies which drive up living costs.

A pro-market cost of living agenda would not “solve” poverty, of course. And there are major vested interests in each of these areas who would resist reform. But there are clearly lots of different wars on the poor being waged, even if inadvertently. As long as the poverty debate focuses only on income transfers and government services, the more fruitful battles against the vested interests who drive up the poor’s living costs go unfought.

The Grace Cathedral church near Akron, Ohio, found itself in big legal trouble for running a (money-losing) cafeteria open to the public in which much of the labor was provided free by volunteer members of the congregation. Beginning in 2014, the U.S. Department of Labor investigated and then sued it on the grounds that for an enterprise, church or otherwise, to use volunteer unpaid labor in a commercial setting violated the minimum wage provisions of the Fair Labor Standards Act (FLSA) of 1938. A trial court agreed with the Department and found liability, but now, in Acosta v. Cathedral Buffet et al., the Sixth Circuit has reversed the ruling and sent the case back for further proceedings, noting that “to be considered an employee within the meaning of the FLSA, a worker must first expect to receive compensation.”

Judge Raymond Kethledge, writing in concurrence, takes issue with what may be the most remarkable argument advanced by the Department of Labor: that the congregation volunteers should count as employees because “their pastor spiritually ‘coerced’ them to work there. That argument’s premise — namely, that the Labor Act authorizes the Department to regulate the spiritual dialogue between pastor and congregation — assumes a power whose use would violate the Free Exercise Clause of the First Amendment.”

Judge Kethledge goes on to note that as “the record makes clear, the Buffet’s purpose was to allow the church’s members to proselytize among local residents who dined there,” and that along with its congregation volunteers the establishment “had 35 full-time paid employees — all of whom, incidentally, have lost their jobs as a result of this lawsuit.” 

A footnote: Given that the Obama Labor Department’s stance flies in the face both of sound labor policy and principles of church-state separation, why didn’t the Trump administration reverse position on it? One clue to a possible answer (via Ted Frank and commenters on Twitter) is that the nomination of a new solicitor for the department did not clear the Senate until December 21, 2017, two weeks after the case had been argued before the Sixth Circuit panel. (cross-posted and adapted from Overlawyered). 


Last week I put up a post with charts showing total, per-pupil, public school spending between the 1999-00 and 2014-15 school years, as well as breaking out spending for a handful of states facing notable education unrest. Due to popular demand—if that’s what you call very mild comments from a few people on Twitter and Facebook—this post is going to break that spending into numerous subcategories used by the federal government in the tables that formed the bases for most of the charts. This post will only look at aggregate national data, but next week I’ll break down spending for those embattled states.


Looking at this inflation-adjusted chart, you can get a sense for how big numerous components of spending are relative to each other, and how they have moved over the 15-year period. I won’t define all the categories—indeed, the federal definitions themselves are not entirely clear—but the two biggest ones that people are most likely to be interested in are “instruction,” which includes really important things like teacher and principal pay, and “capital outlay,” which covers costs for things such as acquiring property and new buildings. Also important are “student support services” and “other support services,” which include compensation for people like guidance counselors and speech pathologists, and costs for business support services.

Overall we see the same trend as previously: spending up between 99-00 and 07-08, down between 07-08 and 12-13, then trending back up. Just eyeballing the chart, it appears that the one area that saw a very meaningful dip over the period was capital outlay.

Was it? Crunching the numbers between 99-00 and 14-15, it seems so. Only two categories of spending saw drops for the entire period: the very tiny “enterprise operations”—basically, funding from selling things—and capital outlays. Enterprise operations dropped 2 bucks per student, or about 9 percent, while capital outlay fell by $314, or almost 24 percent. In contrast, instructional spending rose by $876, or approaching 15 percent.
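
For readers who want to check the arithmetic, here is a minimal sketch in Python. The dollar changes are the ones quoted above, but the implied 1999-00 base figures are my own rough back-calculations, not official data:

```python
# Minimal sketch of the percent-change arithmetic above. Dollar changes
# are quoted in the post; the base-year figures are rough estimates.

def pct_change(start, end):
    """Percent change from start to end."""
    return (end - start) / start * 100

# Capital outlay: fell $314 per student, "almost 24 percent,"
# implying a 1999-00 base of roughly $314 / 0.24, or about $1,310.
print(f"{pct_change(1310, 1310 - 314):.1f}%")  # about -24.0%

# Instruction: rose $876, "approaching 15 percent,"
# implying a base of roughly $876 / 0.147, or about $5,960.
print(f"{pct_change(5960, 5960 + 876):.1f}%")  # about +14.7%
```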

Between the pre-Great Recession spike in 07-08 and 14-15, numerous categories saw drops, with the biggest dip in both dollar and percentage terms coming in capital outlays: $540 and 35 percent. Instructional spending fell only a modest 4 percent, or by $286. Meanwhile, “student support services,” “other support services,” and “food services” actually experienced increases. For the entire 15-year period, student and other support services saw the biggest increases in percentage terms, both growing by about a third.

What does this mean? At least in the aggregate, public schools did not cut spending for the overall period, but did during the recession and its aftermath. But it was in buildings and other property where the most serious cutting occurred both overall and post-Recession, with instructional outlays growing overall and various supports receiving increases even in the worst of times.

Of course, the aggregate does not apply to any given state, where the locus of education authority is held. See you next week for a look at some of those guys.

For the last few years, a number of financial managers have been spending increasing resources to prod the companies they invest in to adopt more environmentally conscious and socially aware policies in their businesses. Many publicly held corporations have had to deal with multiple shareholder resolutions annually intended to force them to adopt more socially or environmentally aware perspectives. Other investment fund managers have begun voting their proxies in favor of such shareholder resolutions, while taking other actions to encourage what they deem to be more socially salutary behavior.

However, there is an inherent conflict in such resolutions coming from investors: after all, the inevitable result of a corporation bowing to a proxy vote and prioritizing such activities is that it would reduce its profits—a bad thing for those who hold stock in the company.

Not so fast, say the activists driving this movement: while they may be pushing companies to pursue environmentally and socially responsible activities for their own political goals, they aver that such actions actually make good business sense as well, and that these will ultimately increase the profits of these companies. One oft-used rationale for resolutions pertinent to climate change is that these actions improve the long run sustainability of the company, which rational shareholders will value and thus will be reflected in the stock price.

While the firm-as-pure-profit-maximizer perspective may fail to capture the complexity of today’s multinational corporation, the notion that investor activists are somehow doing shareholders a favor and helping corporations correct a blind spot in their vision is absurd, and earlier this week the Department of Labor finally declared that to be the case.

The DOL declared that financial managers can no longer justify pushing environmentally or socially beneficial investments on the grounds that they are inherently beneficial for shareholders. Specifically, it wrote that:

“fiduciaries may not sacrifice returns or assume greater risks to promote collateral environmental, social, or corporate governance (ESG) policy goals when making investment decisions.”

The Labor Department’s actions effectively make it more difficult for activists to pursue social policy via proxy battles, which is precisely how it should be. Such efforts represent nothing less than an attempt to effectively tax the retirement wealth of savers to advance a political agenda.

The world has lost one of its greatest monetary economists ever: Leland Yeager (b. 1924) has passed away, and boy will I miss him.

Leland’s writings on monetary economics taught me a big chunk of all that I know about the subject. Among other things, they gave me a better understanding of just what “money” means; of what banks do and why it matters; of how changes in the price level clear the market for money balances (and why those changes can take time, giving rise to bouts of “monetary disequilibrium”); and of how various international monetary arrangements can either aggravate or limit the extent of such disequilibrium.

Beyond matters of strict economics, Leland taught me — by example mainly — the importance of clear writing, and of steering clear of such enervating impediments to good scholarship as excessive preoccupation with one’s “methodology” or exclusive devotion to any one school of economic thought.

But mostly I will miss Leland simply because, besides being a kindred spirit, he was one of my all-time favorite human beings. Although it has now been many years since I last spent time with him, we corresponded regularly until recently, and I remember our past encounters with great affection. Leland was simply a blast to be with, and all the more so if the occasion involved plenty of good food and wine, for which he had a fine appreciation (and, in the case of wine, an apparently unlimited tolerance).

But all that food and wine was just so much fuel and lubricant. The motive force behind those rollicking rendezvous was Leland’s gift for conversation. His erudition and curiosity were such that one could always count on learning something, despite never knowing just where the conversation was going. And with Leland’s broad interests it could end up anywhere. Well, almost anywhere: so far as I can recall we never talked about professional sports, TV shows, or consumer electronics. That alone was a reason, so far as I was concerned, for cherishing Leland’s company.

Leland also spoke more languages than anyone I ever knew — and more than anyone I’ve ever heard of save Richard Burton (the explorer, not the actor), who commanded 29 of them! So depending on the food we were eating or where the conversation wandered we might switch to Italian, or French. But whereas I speak the one fluently, the other haltingly, and nothing apart from them besides English, Leland was conversant with at least half a dozen natural languages, as well as with the planned languages Esperanto and Interlingua. For all I know, he was perfectly capable of waxing lyrical, in Swahili of course, over a well-seasoned bowl of ugali.

Though we eventually became close, and despite my having admired Leland before we ever met, for a time he wanted nothing to do with me. Believe it or not, instead of being repulsed by any of my many, genuinely off-putting qualities, he disliked me because of my name. It happens, you see, that Leland once had a colleague whom he intensely disliked, whose last name also began with “Sel.” That alone made me persona non grata. It was, Leland later explained to me, a matter of simple prudence. “ ‘Selgin,’ ‘Sel___’ … Too close for comfort!”

After I’d known him for some years, it occurred to me that, although I’d learned plenty from Leland’s writings on monetary economics, most of those writings were scattered throughout various professional journals, some of which weren’t easy to come by. (This was, needless to say, in the days before the world-wide-web and PDFs and all that.) So I had the bright idea of gathering them together under one cover, along with several of Leland’s unpublished papers. I approached Liberty Fund with the proposal, knowing that, if they went for it, I’d end up not only with a beautiful book, but with one that, in its paperback version especially, even grad students might reckon a bargain. Such was the genesis of The Fluttering Veil.

That title, by the way, alludes to a remark by John Gurley, another monetary economist who once summed up Milton Friedman’s monetarism, facetiously but not inaccurately, as holding that “Money is a veil, but when the veil flutters, the economy sputters.” Although Leland was nothing if not eclectic when it came to schools of economic thought, the label he was most happy with was “old-fashioned monetarist,” by which he mainly meant to distinguish himself from New Classical economists like Bob Lucas, Robert Barro, and Tom Sargent. The New Classicals imagined that, by modeling expectations as model-consistent or “rational,” while assuming that prices always adjusted to their Walrasian market-clearing values, they were proffering a more “rigorous” version of monetarism.

Yeager, who was equipped with an especially sensitive version of the device Ernest Hemingway referred to as a “built in, shock-proof s**t detector,” saw right through the New Classical pretense. Instead of perfecting monetarism by invoking the deus ex machina of instantaneous Walrasian pricing, the New Classical economists managed instead to strip it of its capacity to account for the real consequences of monetary disturbances.[1]

But while he rejected the “equilibrium always” theorizing of New Classical economists, and understood instead that an economy has to “grope” its way to a new set of equilibrium prices following any major monetary disturbance, Yeager was far from embracing either “Keynesian” or “New Keynesian” economics. While he denied that prices adjust instantly to clear the market for money balances, he was if anything still more adamantly opposed to the “old” Keynesian treatment of the rate of interest as the “price” of money — with its implicit assumption of a parametrically “given” price level. “New” Keynesian models, with their staggered pricing parameters and other devices for simulating “sticky” prices, are no doubt better. But those models’ basic tenets, Yeager pointed out, are in fact neither “New” nor particularly “Keynesian,” having roots that trace back to Classical economics, and having been a feature of old-fashioned Chicago-school thinking since the days of Herbert J. Davenport.

Now that Leland is gone, it cheers me up somewhat to think that future students can still learn from him, and that they might even do it using the collection of his articles I assembled for the purpose. But no amount of exposure to Leland’s writings can suffice to give someone who hasn’t met him a true measure of the man. He had too many facets; and he was, in his peculiar way, infinitely delightful. I thank my lucky stars for having had the chance to know him.

_______________________________

[1]For all his awareness of the drawbacks of “equilibrium always” reasoning in monetary economics, Yeager was far from holding general equilibrium theory in contempt. On the contrary: he insisted, against its (mainly Austrian School) detractors, that it supplied “a necessary background” for monetary analysis as well as an important check against many kinds of fallacious reasoning. Yet Yeager would also have been the first to defend valuable Austrian-school insights against THEIR detractors among more mainstream economists. Yeager was, in short, a scholar remarkably free of partisanship and bias, save one favoring reasonable arguments over unreasonable ones.

[Cross-posted from Alt-M.org]

If President Trump wants to have a successful summit with Kim Jong-un then it’s important to understand the domestic political incentives that will shape Kim’s approach to negotiations. On April 20th, Kim gave a major speech at a plenum meeting of the Workers’ Party of Korea. Most U.S. media outlets focused on the announcement that the North would dismantle its nuclear testing facility and stop ballistic missile tests, but the speech also revealed important information about Kim’s political incentives that received less attention.

During a plenum meeting in March 2013, Kim announced the “byungjin line,” which stated that the North would develop its economy and nuclear arsenal simultaneously. North Korea’s nuclear weapons program has made significant progress in the five years since the byungjin line was first announced. Kim acknowledged this progress in his April 20th speech when he declared that the byungjin line was successfully concluded. He also announced a “new strategic line” that focuses on economic and scientific development.

The end of the byungjin line and announcement of a new overarching strategy for North Korea shortly before the Trump-Kim summit has major implications for the Trump administration’s negotiating strategy.

The April 20th speech indicates that Kim’s primary objective in negotiations will be getting sanctions relief, because lifting sanctions is essential for achieving the economic development objective of the new strategic line. The new line may partially explain why the North has not demanded U.S. troop reductions in the lead-up to the summit. Pyongyang would rather not have a U.S. military presence on the Korean peninsula, but the troop presence does not greatly affect North Korea’s economic development, so removing those troops is not necessary to achieve the new strategic line.

The Trump administration could use sanctions relief in a couple of different ways depending on its overall negotiating strategy. For example, the United States could offer small concessions on sanctions relief in exchange for incremental progress on denuclearization in a tit-for-tat process. Such an approach would incentivize Kim to stay at the negotiating table over a longer period of time, but it would probably not produce any big, short-term wins for the Trump administration. Another approach entails standing firm on sanctions and not loosening them until the North takes major steps toward denuclearization. This would give the United States more leverage than the tit-for-tat approach, but Kim may be less willing to do what the United States demands without some other kind of concession.

Coordination with U.S. allies and China will take center stage if sanctions relief is a more important issue to Kim than security guarantees. There are two main types of sanctions against North Korea. The United States, Japan, and South Korea have implemented several rounds of unilateral sanctions, while the UN Security Council has its own set of sanctions. The Trump administration was able to get China’s support for strong UN sanctions in 2017 as part of its maximum pressure strategy.

If the Trump administration wants to withhold sanctions relief to pressure Kim to take big steps towards denuclearization, then it will have to coordinate with other sanctioning parties. Japan will likely stay in lockstep with the United States, but keeping South Korea and China on board could be more challenging. Seoul and Washington appear to be on the same page right now, but maintaining close coordination may prove difficult if the Moon Jae-in administration faces pressure to make some concessions on its unilateral sanctions during the inter-Korean summit. Maintaining China’s support for UN sanctions could also prove difficult because of the recent downturn in the U.S.-China economic relationship.

Kim’s April 20th speech warrants very close consideration by the Trump administration. The end of the byungjin line marks the start of a new period for North Korea. Kim’s nuclear weapons are still important to him, but the speech indicates shifting domestic political incentives that will play an important role in negotiations with the United States. As the Trump administration crafts its negotiating strategy for the Trump-Kim summit, it should keep Kim’s domestic incentives in mind and do its best to use them to its advantage.

The specific language of the Fourth Amendment was largely a product of the colonists’ experience with the noxious institution of the general warrant. Historically, general warrants—and specifically, writs of assistance—gave law enforcement broad discretion to search wherever and whatever they deemed necessary, without the need to establish specific probable cause before a judicial officer. Such broad discretion enabled abusive, selective enforcement, and the colonists’ contempt for those arbitrary practices was a major cause of the Revolutionary War itself.

But 227 years after ratification of the Fourth Amendment, we are tragically approaching a stealth resurrection of the general warrant, in the form of pretextual stops. In Whren v. United States, 517 U.S. 806 (1996), the Supreme Court held that the actual intent of law enforcement officers in making a stop—even unlawful intent, like racial discrimination—is irrelevant to the legality of a traffic stop under the Fourth Amendment, so long as there is probable cause to believe that some traffic violation occurred. The practical effect of this decision has been to give police officers nearly unfettered discretion to stop any person they choose at any time. After all, no one can actually operate a motor vehicle for an extended period of time without running afoul of some traffic law. Especially when combined with other areas of Fourth Amendment law that create expansive exceptions to the warrant requirement, Whren itself has already been described as the “twentieth-century version of the general warrant.”

But if that were not bad enough, the Seventh Circuit, in an en banc decision, recently extended the Whren doctrine even to parking violations—and effectively, to any and all fine-only offenses, no matter how trivial. Especially in light of the rampant state of overcriminalization in our country today, that move represents an endorsement of the general warrant in all but name. We have so many criminal laws today that we cannot even count them all, and states routinely regulate a huge swath of generally harmless conduct that regular citizens engage in every day. If violation of any regulation whatsoever is sufficient to justify a police stop (regardless of whether that regulation was actually the motive behind the stop), then police can, in effect, stop anyone they want to. The Cato Institute has therefore filed an amicus brief urging the Supreme Court to grant cert in this case, reverse the Seventh Circuit, and reconsider the Whren doctrine entirely.

Nicaragua is in flames as the 11-year-old kleptocracy of Daniel Ortega is rocked by massive protests that threaten its continuity. The unrest began after the government announced some adjustments to its bankrupt social security system. Ironically, for a self-proclaimed socialist who constantly rails against U.S. imperialism, Ortega was implementing the recommendations of the International Monetary Fund (IMF).

Ortega’s second spell in power has been quite the puzzle. He was a fervent supporter of the late Hugo Chávez and continues to be one of Venezuela’s most vocal allies in Latin America. Huge billboards that portray Nicaragua as “Christian, Socialist, and Solidary” greet visitors to Managua. Yet the economic policies of the Ortega regime are among the most orthodox in Latin America: inflation is relatively low (5.7% in 2017), the projected fiscal deficit for 2018 is just 1.1% of GDP, and economic growth has averaged 4.2% per year in the last decade. Nicaragua boasts free trade agreements with the United States and the European Union. In January, Standard & Poor’s highlighted Nicaragua’s “track record of steady GDP growth and pragmatic economic policies, its low fiscal deficits, and moderate government debt burden.” Moreover, in 2016 the IMF closed its office in Managua because of “Nicaragua’s success in maintaining macroeconomic stability and growth.” Not bad for a left-wing populist.

Since coming back to power in 2007, Ortega kept his revolutionary rhetoric but dropped his socialist economic policies of yore. He reached an understanding with the business sector: Ortega would guarantee macroeconomic stability and a good environment for private investment in exchange for a free hand to dismantle Nicaragua’s democratic institutions and impose a corrupt dynastic dictatorship. The business community acquiesced.

That is the irony about what triggered the protests: Faced with the imminent insolvency of the social security system, Ortega had several options. The easiest for a populist would have been to make up the shortfall by printing money. That would have fueled inflation, but Ortega could have blamed it on external factors—just as Nicolás Maduro does in Venezuela. Instead, Ortega decided to follow the IMF recommendations of increasing payroll taxes and cutting pension benefits. That is certainly less irresponsible than debasing the currency.

In fairness, the adjustments to the social security system were the straw that broke the camel’s back. Nicaraguans reacted to years of widespread corruption and nepotism. The heavy-handed way in which the regime handled the first bouts of unrest—by suspending independent TV channels and violently cracking down on demonstrators—just fueled the protests. Nicaragua’s turmoil is no longer about the controversial adjustments to social security—which Ortega called off anyway. It is about the massive corruption of the Ortega regime and the legitimate aspiration of many Nicaraguans to live in a democratic country with more or less decent institutions.

It is quite ironic that Ortega was elected as a left-wing populist but rules as an economic centrist who closely follows the advice of international financial institutions. The protests in Nicaragua show that macroeconomic stability without democracy, transparency and political freedoms is neither desirable nor sustainable.

Hot off the press, in yesterday’s Journal of Climate, Nic Lewis and Judith Curry have re-calculated the equilibrium climate sensitivity (ECS) based upon the historical uptake of heat into the ocean and human emissions of greenhouse gases and aerosols. ECS is the net warming one expects for doubled atmospheric carbon dioxide. Their ECS ranges from 1.50 to 1.56 degrees Celsius.

Nic has kindly made the manuscript available here, so you don’t have to shell out $35 to the American Meteorological Society for a one-day view.

The paper is a follow-on to their 2015 publication that had a median ECS of 1.65⁰C. It was criticized for not using the latest-greatest “infilled” temperature history (in which less-than-global coverage becomes global using the same data) in order to derive the sensitivity. According to Lewis, writing yesterday on Curry’s blog, the new paper “addresses a range of concerns that have been raised about climate sensitivity estimates” like those in their 2015 paper.

The average ECS from the UN’s Intergovernmental Panel on Climate Change (IPCC) is 3.4⁰C, roughly twice the Lewis and Curry values. It somehow doesn’t seem surprising that the observed rate of warming is now running at about half of the rate in the UN’s models, does it?

Lewis and Curry’s paper appeared seven days after Andrew Dessler and colleagues showed that the mid-atmospheric temperature in the tropics is the best indicator of the earth’s energy balance. This means that any difference between observed and forecast mid-atmospheric temperatures there can be used to adjust the ECS.

Late last year, University of Alabama’s John Christy and Richard McNider showed that the observed rate of warming in the tropical mid-atmosphere is around 0.13⁰C/decade since 1979, while the model average forecast is 0.30⁰C/decade. This adjusts the IPCC’s average ECS down to the range of 1.5⁰C (actually 1.46⁰C).
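
The implied adjustment is simple proportional scaling. Using the rounded trend figures just quoted (the 1.46⁰C value presumably reflects unrounded trends):

$$\mathrm{ECS}_{\text{adjusted}} \approx \mathrm{ECS}_{\text{model}} \times \frac{\text{observed trend}}{\text{modeled trend}} = 3.4\,^{\circ}\mathrm{C} \times \frac{0.13}{0.30} \approx 1.47\,^{\circ}\mathrm{C}.$$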

That’s three estimates of ECS all in the same range, and all approximately half of the UN’s average. 

It seems the long-range temperature forecast most consistent with these findings would be about half of what the IPCC is forecasting. That would put total human warming to 2100 right around the top goal of the Paris Accord, or 2.0⁰C.

Stay tuned on this one, because that might be in the net benefit zone.

This month, Attorney General Jeff Sessions announced that the Department of Justice would institute a “zero-tolerance policy” along the Southwest border, stating that he wants to criminally prosecute 100 percent of all illegal entries. Sessions claimed that “a crisis has erupted at our Southwest Border that necessitates an escalated effort to prosecute those who choose to illegally cross our border.” Yet the “crisis” amounts to a flow of illegal immigration 96 percent lower than the level in the 1980s and lower than just two years ago.

Because we cannot know how many border crossers actually evade capture, the best measure of illegal entries is the number of crossers that Border Patrol apprehends. Of course, more agents result in more apprehensions for the entire Border Patrol, which is why it is important to control for the effect of enforcement by focusing on the number each agent arrests. More attempted crossings generally translate into more apprehensions for the average agent. Figure 1 presents the average number of monthly apprehensions along the Southwest border per Border Patrol agent. The average apprehensions per agent in a month in Fiscal Year 2018 was less than 2—which is 95.5 percent lower than the rate in the peak year of 1986.

Figure 1: Average Monthly Southwest Apprehensions Per Border Patrol Agent, FY 1980 to FY 2018


Source: Agents—Border Patrol & TRAC; Apprehensions—Border Patrol (1980-2017 & 2018)
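
As a quick sanity check on this metric, here is a minimal sketch in Python using only the figures quoted in this post; the FY 1986 peak is backed out from those figures rather than read from the source tables:

```python
# The per-agent metric: monthly Southwest apprehensions divided by agents.
def per_agent_rate(monthly_apprehensions, agents):
    return monthly_apprehensions / agents

fy2018_rate = 1.9  # quoted average monthly rate per agent for FY 2018

# FY 2018 is said to be 95.5 percent below the FY 1986 peak,
# which implies a peak rate of roughly:
implied_1986_peak = fy2018_rate / (1 - 0.955)
print(f"Implied FY 1986 peak: {implied_1986_peak:.0f} per agent per month")
# -> roughly 42
```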

The 1.9 average monthly apprehension rate for each agent so far for 2018 is exactly the average rate for the last decade. How does the Department of Justice find a crisis in these figures? It states, “the Department of Homeland Security reported a 203 percent increase in illegal border crossings from March 2017 to March 2018.”

To show why comparing one month in 2018 to the same month in 2017 is misleading, Figure 2 compares the first six months of each fiscal year since 2010. I bolded FY 2018 in black just so that it is visible among the thicket of near-parallel lines. It is obvious that FY 2017 (orange), not 2018, is the abnormal year. Every other year followed the same pattern: lower apprehensions in the fall and winter, higher apprehensions in the spring and summer. 2017 broke this pattern. People came in higher numbers in the fall and winter of FY 2017—i.e., October 2016 to February 2017—while fewer came in the spring of 2017. Now the pattern has simply returned to normal.

Figure 2: Average Southwest Apprehensions Per Border Patrol Agent by Month, FY 2010 to FY 2018

Source: Border Patrol (agents); Apprehensions—Border Patrol (2010-2017 & 2018)

This fits with the hypothesis that I proposed last August: that Trump’s campaign rhetoric had a major effect on border crossings. People moved up their travel plans to hedge against the possibility that President Trump would institute major reforms to border security. In other words, Trump caused an increase in illegal immigration starting before the election and a decrease after his inauguration, but no net change in total arrivals. I predicted that the prior trend would return once migrants and asylum seekers realized that the hype was overblown. This is exactly what has happened.

Sessions should not use the anomalous months of 2017 to argue that the border crossings in 2018 are at “crisis” levels. There is simply no evidence to support this view.  

One of the jobs of a think tanker is to synthesize information from other sources and put it in the context of his or her particular field. Hard data are particularly important to our work because data are measurable outcomes from policy and practice in the real world. No one cares what anyone at Cato “feels.” Feelings have their place, of course. Measuring the feelings of a particular group or groups of people can be useful in the aggregate because people will act in accordance with those feelings, but those feelings make up just another metric on which we collect data to explain the world. Reliable data and provable outcomes are fundamental to shaping and forming effective public policy.

As irritating as strict libertarians may find it, several bodies within the federal government are very good at collecting and analyzing data. One of these bodies is the U.S. Sentencing Commission. From the USSC website:

The U.S. Sentencing Commission, a bipartisan, independent agency located in the judicial branch of government, was created by Congress in 1984 to reduce sentencing disparities and promote transparency and proportionality in sentencing.

The Commission collects, analyzes, and distributes a broad array of information on federal sentencing practices.  The Commission also continuously establishes and amends sentencing guidelines for the judicial branch and assists the other branches in developing effective and efficient crime policy.

The Commission publishes data on the impacts of sentencing, the levels of recidivism among different populations of the formerly incarcerated, and other inputs and outputs related to our federal carceral system. For criminal justice researchers of all levels, the Commission provides detailed and easily accessible information about how federal policy and law translate into practice and outcomes.

Another component of the Commission’s work is passing along recommendations about sentencing law to Congress. Reasonable people may disagree with these recommendations, but the Commission clearly bases its recommendations on the best available data they have collected and analyzed. Individuals of all ideological persuasions should want any nominee to the Commission to share this dedication to data collection and evidence-based practices. Instead, President Trump nominated William “Bill” Otis.

Otis is an adjunct professor at Georgetown University Law Center and spent many years in the Justice Department. He has continuously lambasted bipartisan efforts to reduce sentences and remains a stalwart proponent of the “tough on crime” rhetoric of the 1980s, warning of great crime waves that will follow widespread sentencing reduction.

Otis marshals no empirical evidence for his claims—because there isn’t any. And that’s the problem.

To be clear, I’m not worried about Otis’s nomination because he’s conservative. Plenty of conservatives work on criminal justice issues, and some have led the way on reforms. Republican governors and GOP-controlled legislatures in Georgia, Texas, and other “red” states have passed significant criminal justice reforms that reduced prison and jail populations while also reducing crime rates. When presented with evidence-based opportunities to help individuals and save public money, many realized criminal justice reform could be a conservative cause.

The problem is, as Julie Stewart, the founder of Families Against Mandatory Minimums wrote in 2015, “Otis is impervious to facts and evidence.” Put another way, Bill Otis is interested in the politics, not the policy, of criminal justice.

The mountains of data that support less carceral policies and alternatives to incarceration have not swayed Otis’s rhetoric at all. When a man who had benefitted from a reduced sentence for a crack cocaine conviction brutally murdered a woman and two children, Otis was quick to blame the shortened sentence:

Three people, including two children, are dead today because of early release from a duly imposed, lawful and fully deserved federal drug trafficking sentence.

How many times were we lectured that those released under lowered sentencing rules would be only “low level, non-violent offenders?” I don’t know, exactly. Hundreds if not thousands.

Question:  How many more lives are the congressmen and senators who support the [Sentencing Reform and Corrections Act] willing to see sacrificed for their “we’ve-been-too-tough” agenda?

An exact number, please, gentlemen.  We want to remember who you are on election day.  And we will.

It’s Willie Horton all over again.

Yes, a few people who are incarcerated for drugs may do horrible things when they get out. Most of them, even those who commit new offenses, will not. According to several Commission reports, the most common reason for re-arrest among federal drug offenders is a violation of supervision policies—that is, getting drunk or committing some other minor infraction—and certainly not murder.

The latest data from the Sentencing Commission show no statistical difference in recidivism between those released early under new drug sentencing guidelines and those who served the longer sentences:

The recidivism rates were virtually identical for offenders who were released early through retroactive application of the [Fair Sentencing Act] Guideline Amendment and offenders who had served their full sentences before the FSA guideline reduction retroactively took effect. Over a three-year period following their release, the “FSA Retroactivity Group” and the “Comparison Group” each had a recidivism rate of 37.9 percent.

These data coincide with previous data from the Commission measuring the effects of crack cocaine sentencing retroactivity, which found no statistically significant difference in recidivism between those who got out early and those who served the full sentence.

All this is not to say that policymakers should not find ways to reduce recidivism, no matter how serious the offenses are. But Otis’s belief that serving long drug sentences will make convicted individuals more lawful citizens is at odds with what liberals, conservatives, progressives, and libertarians in criminal justice have found in years of research and by measuring the effects of new policies put into practice. The push for reducing incarceration is not a conspiracy of groups taken in by George Soros and Al Sharpton, as Otis suggested in 2016, but a broad coalition of individuals, organizations, and lawmakers who look at the evidence and formulate policy accordingly.

The Sentencing Commission provides, among many other things, a trove of information that can teach us more about how our policies do and do not work. Otis’s nomination signals a return to reactionary politics based on what some people think and feel, rather than what they can show and prove.

I was in the courtroom for this morning’s argument in Trump v. Hawaii, otherwise known as the “travel ban” case. Recall that this is Travel Ban 3.0, which is the most detailed executive action regarding entry restrictions yet. Indeed, Solicitor General Noel Francisco called it the most detailed immigration proclamation ever (in contrast to earlier ones by President Carter regarding Iranians and President Reagan regarding Cubans).

It’s an odd case: as Neal Katyal, lawyer for Hawaii and the other state and private challengers, put it, if Donald Trump hadn’t made all his various campaign statements and tweets about Muslim bans, “we wouldn’t be here.” In other words, “no president has ever said anything like this.”

In a normal case involving an executive action over national security, no court would ever second-guess the president. But this isn’t a normal case or a typical president, so the Supreme Court struggled mightily over a travel ban that, all sides seem to agree, wouldn’t be a legal controversy if any other president had implemented it. Indeed, the whole course of the litigation would’ve been different if Travel Ban 1.0—the one President Trump signed his first week in office without interagency process or guidance to the line agents who were supposed to implement it, causing chaos at airports—had been skipped and we’d gone straight to the more fully lawyered 2.0. I doubt there would’ve been quite as much judicial resistance and treatment of this president differently from the presidency.

But that’s a historical counterfactual, so you go to court with the facts you have.

Of course, it’s not that unusual for a court to apply a law to factual circumstances that were never contemplated. Here, the relevant immigration provision gives the executive wide discretion to deny entry to any type of foreigner upon citing a significant national interest—and it’s not hard to square that with other provisions regarding nondiscrimination in granting visas. Courts don’t get to review that kind of determination.

That really should be the end of it, even if one thinks, as I do, that the travel ban doesn’t do much for national security and has a greater symbolic than practical effect. And it should be the end of it regardless of whether one thinks that in his heart of hearts Donald Trump has anti-Muslim animus.

Chief Justice John Roberts will try mightily to cobble together a coalition to make this case go away on jurisdictional or other narrow grounds. Justice Neil Gorsuch seems ready to join him (presumably Justice Clarence Thomas too), while Justice Samuel Alito was clearly with the government on the merits. Justice Elena Kagan was the only one on the left who raised pointed questions of Katyal; given her views on administrative law and the breadth of the immigration statute here, she’s “gettable” for some sort of technical compromise. To do so, the Court would likely have to finesse Sale v. Haitian Centers Council (1993), in which it found claims against immigration-related executive actions to be justiciable (before recognizing the executive’s broad discretion in this area).

Given that weird cases make for bad law, we can only hope that, however the Court rules, no strong precedent is set.

I wrote last month that new regulations and taxes in California’s legalized marijuana regime are likely to result in a situation in which

a few people are going to get rich in the California marijuana industry, and fewer small growers are going to earn a modest but comfortable income. Just one of the many ways that regulation contributes to inequality.

Now the East Bay Express in Oakland offers a further look at the problem:

Ask the people who grow, manufacture, and sell cannabis about the end of prohibition and you’ll hear two stories. One is that legalization is ushering a multibillion-dollar industry into the light. Opportunities are boundless and green-friendly cities like Oakland are going to benefit enormously. There will be thousands of new jobs, millions in new tax revenue, and a drop in crime and incarceration.

But increasingly you’ll hear another story. The state of California and the city of Oakland blew it. The new state and city cannabis regulations are too complicated, permits are too difficult and time consuming to obtain, taxes are too high, and commercial real estate is scarce and expensive. As a result, many longtime cannabis entrepreneurs are either giving up or they’re burrowing back into the underground economy, out of the taxman’s reach, and unfortunately, further away from the social benefits legal pot was supposed to deliver….

Some longtime farmers, daunted by the regulated market’s heavy expenses, taxes, and low-profit predictions, have shrugged and gone back to the black market where they can continue to grow as they always have: illegally but free of hassle from the state’s new pot bureaucrats armed with pocket protectors and clipboards.

Not all the complaints in the two-part investigation are about taxes and overregulation. Some, especially in part 1, are about “loopholes” in the regulations that allow large corporations to get into the marijuana business and about “dramatic changes to Humboldt County’s cannabis culture, which had an almost pagan worship of a plant that created an alternative lifestyle in the misty hills north of the ‘Redwood Curtain.’”

But there’s plenty of evidence that regulations are more burdensome on newer and smaller companies than on large, established ones. Indeed, regulatory processes are often “captured” by the affected interest groups. The Wall Street Journal confirmed this just yesterday, reporting that “some of the restrictions [in Europe’s GDPR online privacy regulations] are having an unintended consequence: reinforcing the duopoly of Facebook Inc. and Alphabet Inc.’s Google.”

Several weeks ago, the United States and Korea reached an “agreement in principle” on an amended Korea-US Free Trade Agreement (KORUS FTA). This amendment process was minor enough that the Trump administration believed it could undertake it without having Congress vote on the changes (there will be a consultation with Congress on some tariff changes, as described here). Congress could object, as it does have the ultimate constitutional power over trade, but so far there are no signs that it plans to do so.

In an op-ed on the new KORUS, we described the result as follows: “the KORUS renegotiation looks like a minor tweak to U.S. trade relationships, rather than the wholesale ‘populist’ revolution that is sometimes indicated by Trump’s tweets.” In this blog post, we offer a more detailed assessment of the KORUS changes that have been reported  so far.

However, keep in mind that there is no final text of the amended agreement yet, so our analysis is necessarily a bit tentative. Specific wording can be important to understanding the implications of a provision, and there may be additional items that have not been reported yet. (In addition, statements by President Trump suggest the deal may be held up by other issues).

The outcomes of KORUS 2.0 can be grouped into two categories: (1) new issues that were not covered by the existing KORUS, and were negotiated as something akin to side deals to the talks, and (2) amendments or modifications to the current text. We examine each in turn.

With regard to the side deals, the biggest (and most negative) economic impact will arise from the export restrictions on steel that Korea agreed to. Pursuant to these restrictions, Korea would cap steel exports to the U.S. at 70 percent of the average volume over the past three years, on a product-by-product basis. This was in exchange for a permanent exemption from the Trump administration’s Section 232 “national security” tariffs on steel. The impact of these quotas/tariffs will be some degree of price increase for U.S. consumers, with the amount of the increase depending on exactly how the measures are implemented. As for Korea, its producers may actually benefit now that they have avoided the tariffs: their sales to the U.S. will be at higher prices, and they may find other markets for their steel to replace the lost volume in the U.S.
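To make the reported quota formula concrete, here is a minimal sketch of how such a product-by-product cap would be computed. The product names and volumes below are invented for illustration only; they are not actual trade data.

```python
# Hypothetical illustration of a product-by-product export cap set at 70%
# of the trailing three-year average volume. All figures are invented.
three_year_volumes = {
    "hot-rolled sheet": [1_100_000, 1_000_000, 900_000],  # tons per year
    "steel pipe":       [450_000, 400_000, 350_000],
}

quotas = {
    product: 0.70 * sum(vols) / len(vols)
    for product, vols in three_year_volumes.items()
}

for product, cap in quotas.items():
    print(f"{product}: capped at {cap:,.0f} tons per year")
```

On these made-up numbers, the cap for hot-rolled sheet would be 700,000 tons per year and for steel pipe 280,000 tons, regardless of how demand shifts between products.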

There are also provisions on currency manipulation. Media reporting suggests the currency provisions are non-binding, and that they are similar to those agreed to in a side letter to the Trans-Pacific Partnership (TPP). Adding these currency provisions is not particularly significant, as the Trump administration is mostly just carrying over an Obama-era policy. However, the Trump administration may be pushing for binding currency provisions as part of the renegotiated NAFTA. That would be a bigger deal: there have never been such detailed provisions on this issue in trade agreements, and U.S. attempts to promote them in additional agreements would have significant implications. The specific terms will be important for determining the impact.

Turning to the amendments and modifications to the existing KORUS, the outcomes on automobile exports and truck imports stand out.

Under the existing KORUS, U.S.-based auto manufacturers can export up to 25,000 vehicles (per manufacturer) to Korea per year that are deemed compliant with Korean safety standards simply by meeting U.S. standards. Through the renegotiation, this quota has been doubled to 50,000 vehicles per manufacturer. On its face, this is a good market-opening provision and a positive development for access to the Korean market. However, the real economic value is not clear. In 2017, U.S. passenger vehicle and light truck exports to Korea totaled only 52,607 units, with Ford and General Motors shipping fewer than 10,000 vehicles each. Given the low volume of U.S. exports of these products, increasing the quota may not have much impact. (To put these figures in perspective, Canada leads the way as a destination for U.S. exports with 912,277 units, and China is second at 267,473 units.)

With regard to light trucks, it appears that the administration took a more protectionist tack, extending until 2041 a 25% U.S. tariff that was supposed to be phased out by 2021. While there will be no immediate impact, because Korea does not currently export trucks to the U.S., this change could delay any future export plans. It has been suggested that the reason Korea has not yet sold light trucks in the U.S. market is that the existing tariff has effectively blocked the possibility of exports. In an interview with CNBC, USTR Robert Lighthizer said: “The Koreans don’t ship trucks to the United States right now and the reason they don’t is because of this tariff,” and “They were going to start next year – we would have seen massive truck shipments. So, that’s put off for two decades.” This modification can therefore be seen as an attempt by the Trump administration to prevent trucks produced in Korea from being sold in the United States. However, even if the tariff had been removed as scheduled, any trucks produced for the U.S. market after 2021 may very well have been built in the Korean companies’ existing North American factories. As a result, the claim that “massive truck shipments” have been blocked is a bit misleading.

Other reported KORUS renegotiation results sound minor, although, again, a full assessment will have to wait for the release of the text. For instance, there appears to be a new agreement on environmental testing standards for autos. This could refer to Korea’s Fuel Economy and Greenhouse Gas Standards, which are updated every five years by the Korean Ministry of Environment. Through the negotiations, Korea has agreed to base the update of these standards for the 2021-2025 period on “global trends, including U.S. standards” and increase the number of eco-innovation credits available for auto imports to meet the fuel economy and greenhouse gas requirements. In addition, there was an agreement on harmonizing the testing requirements on gasoline engine vehicle exports so that these products will not have to be tested twice. As a result, U.S. emissions testing will be seen as equivalent to Korean testing requirements.

And Korea agreed to include American companies in a “national drug reimbursement program,” which offers premium pricing for certain new drugs. This change has been pushed by the Pharmaceutical Research and Manufacturers of America (PhRMA), which has argued that U.S. companies have been negatively affected by Korea’s low drug prices. In addition to these changes, vague announcements were made with regard to introducing more transparency to certain dispute procedures, and changes to Korean customs inspection procedures.

Overall, from what we know so far, the KORUS renegotiation looks like a minor tweak to U.S. trade relationships, rather than the wholesale revolution that is sometimes indicated by Trump’s tweets. That is probably for the best. However, KORUS has been a somewhat minor item on the Trump administration’s trade agenda, so we should not take too much comfort from this. It may be that the administration simply wanted to focus its more aggressive trade actions on other countries; the U.S. trade relationship with each country is different. The two big items coming next on the agenda are the NAFTA renegotiation and the U.S.-China trade relationship. Their resolution will tell us more about whether the administration can put together a coherent trade strategy that does not unravel decades of trade liberalization.

Last week the White House announced that Richard Clarida will be nominated to become Vice Chair of the Federal Reserve Board. More than a month ago, Clarida became the front-runner for the role. He is widely seen as a centrist and a pragmatist holding mostly conventional views on monetary policy. Mostly.

As Vice Chair, Clarida will be the third pillar of the Fed’s new leadership, joining Chair Jerome Powell and recently announced incoming NY Fed President John Williams. Having been an economics professor at Columbia University since 1988 and a Global Strategic Advisor at Pacific Investment Management Company (PIMCO) since 2006, Clarida provides a complement to both Powell’s largely business background and Williams’ career inside the Fed.

With a couple of mutual research interests, Clarida and Williams will likely work well together. They’ve both explored the natural rate of interest (r*) — Williams is the coauthor of the widely cited r* estimates and Clarida has examined natural rates from an international perspective. Another area of mutual interest is price level targeting. As I have noted previously, Williams is an advocate of the Fed adopting such a target while Clarida has also explored its merits for monetary policy.

At first blush this may be concerning, given the shortcomings of price level targeting. However, the evolution of Clarida’s post-crisis thinking on monetary policy, including towards price level targeting, shows that he may be persuaded by the superior merits of nominal GDP level targeting.

In 2010, Clarida presented a paper at the Boston Fed conference, Revisiting Monetary Policy in a Low Inflation Environment. The paper discussed what economists had learned throughout the 2000s, with a particular focus on what they ought to learn after years of low inflation (a subject with renewed saliency in recent years).

He also discussed the large-scale asset purchases of the Fed’s quantitative easing program, casting doubt on much of the literature of the day, which tended to find positive but limited effects of such purchases on reducing bond yields. Clarida, by contrast, thought large-scale asset purchases could be far more potent. He had two main points, one flawed and one overlooked.

The first was that a determined central bank, prepared to buy the requisite amount of securities up to the outstanding stock, could always put a ceiling on the yield (or, put another way, a floor underneath the price) of the securities it targeted. Now, this proposal puts the central bank squarely into the credit allocation business, which is a role it ought to avoid.

However, the second, subtle point in his framework that should not be ignored is that Clarida recommends the central bank fully commit to an outcome rather than announce various mechanical steps. This goal-oriented strategy suggests that Clarida may indeed become receptive to the benefits of nominal GDP level targeting — a point to which I will return.

But why did Clarida suggest focusing on securities’ yields at the time, rather than consider changing the central bank’s nominal target?

He explained that adopting a price level target, a possible alternative to the Fed’s then “stable prices” mandate and now its 2% inflation growth rate target, was not a time-consistent policy. That is to say, a central bank might commit to a level target while the price level was below the trend line, but later fail to run the expansionary policy needed to reacquire that trend line. Clarida believed that, while attractive in theory, such a commitment to future actions could not be made credibly. In modern Fed parlance this is forward guidance, and, to Clarida, it was not a sufficiently robust strategy because it lacked the “proper commitment technology” to satisfy markets and the public that the central bank would indeed execute its promises in the future.

But by 2016, as the recovery from the Great Recession proved to be weaker than expected, Clarida’s thinking about forward guidance and the viability of a level target had changed.

At a Brookings conference early that year, which focused on whether the US was ready for the next recession, Clarida said that although textbooks and economic theory suggest forward guidance should not work, in practice it does. He also suggested, or perhaps wondered, whether this meant a price level targeting strategy could work (his slides are here).

He rightly pointed out that a price level target has an advantage that inflation growth rate targeting lacks: it makes up for past monetary policy misses. Level targeting corrects for the bygones problem in growth rate targeting, making up for past mistakes rather than embedding those errors in current policy.
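To see the bygones problem concretely, here is a minimal numerical sketch (my own illustration, not drawn from Clarida’s work): after a one-year inflation shortfall, a growth-rate targeter never recovers the lost ground, while a level targeter temporarily runs inflation above target to return to the original path.

```python
# Year 1 delivers 0% inflation instead of the 2% target. A growth-rate
# (inflation) targeter simply resumes 2% growth, so the shortfall is
# permanent; a price level targeter runs ~4% in year 2 to get back on trend.
TARGET = 0.02
p0 = 100.0

trend = [p0 * (1 + TARGET) ** t for t in range(4)]   # target path, years 0-3

p_shock = p0 * 1.00                                  # year 1: 0% inflation

# Growth-rate targeter: bygones are bygones, grow 2% from wherever you are
p_growth = p_shock * (1 + TARGET) ** 2               # years 2 and 3

# Level targeter: year 2 inflation must close the gap back to the trend line
catch_up = trend[2] / p_shock - 1                    # ~4.04%
p_level = p_shock * (1 + catch_up) * (1 + TARGET)    # year 2 catch-up, year 3 normal

print(f"trend path, end of year 3:  {trend[3]:.2f}")
print(f"growth-rate targeter:       {p_growth:.2f}  (gap never closed)")
print(f"level targeter catch-up:    {catch_up:.2%}")
print(f"level targeter:             {p_level:.2f}  (back on trend)")
```

The growth-rate targeter ends the third year at 104.04 against a trend path of 106.12, a gap that persists forever; the level targeter ends exactly on trend.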

Incidentally, this was not the first time he had suggested a price level target for the Fed.

In a Global Perspectives note at PIMCO published in 2014, Clarida endorsed a price level target. He believed such a target would be an improvement for the new Yellen Fed over the Evans Rule, which had been in effect for more than two years. Promising to leave rates at the zero lower bound until either inflation was above 2.5% or the unemployment rate was below 6.5% was not enough to guide policy going forward. These thresholds were not goals and therefore were insufficient anchors for monetary policy (indeed the Fed abandoned the Evans Rule the following month).

Clarida saw the weakness in the Fed’s communication strategy of putting thresholds on inflation and unemployment and proposed a price level target as an alternative.

As mentioned, a price level target is not the proper alternative for the Fed because it can make a central bank procyclical and thus amplify, rather than dampen, the business cycle. A price-level-targeting central bank runs the danger of tightening policy in response to an adverse supply shock and over-easing in response to a productivity boom. Nevertheless, Clarida was right to criticize the kind of open-ended policy that characterized the Evans Rule, and this kind of thinking will be a welcome addition to the Board.

Clarida now seems predisposed to three views about monetary policy that could significantly influence the Fed’s actions going forward:

  1. That a central bank fully committed to reaching a nominal target is superior to one focused on mechanical operations.
  2. That employing forward guidance is indeed an effective tool for conducting monetary policy.
  3. That level targeting can make up for past errors in monetary policy in a way that growth rate targeting cannot.

Combined, I think these views point to Clarida being more amenable to a nominal GDP target than even he may presently admit. After all, nominal GDP level targeting requires two things of a central bank to work in practice: first, it must credibly pledge to keep nominal GDP growing along a stable trend line; second, it must be prepared to do whatever is necessary to achieve that level of nominal growth.

Clarida has already expressed the importance of both of these elements. In addition, he has repeatedly shown a willingness to let his thinking evolve when presented with new information. Therefore, he may yet be persuaded on the shortcomings of price level targeting in favor of a superior option.

Clarida may have said little about nominal GDP targeting to date — but with his nomination, the Fed may be getting a nominal GDP target advocate for the future.

[Cross-posted from Alt-M.org]

Democrats are plugging new energy into an old idea: a federal “Jobs Guarantee” program. Senator Cory Booker previously introduced legislation for a pilot in high-unemployment communities. Now Senator Bernie Sanders will announce a plan guaranteeing a job or training paying $15 an hour and health-care benefits to every American worker “who wants or needs one,” in a host of public infrastructure, caregiving, and environmental upkeep projects.

The scheme, seemingly based on a recommendation from the Levy Economics Institute, comes with grandiose purported benefits. It would, we are told, eliminate involuntary unemployment, deliver a living wage, boost GDP, reduce the cost of recessions, raise labor market standards, reduce environmental degradation, reduce racial inequality, and much else besides. If it sounds too good to be true, that’s because it is. There are severe problems with this idea, which can be loosely grouped under three “c”s: costs, crowd-out, and corruption.

Costs

The Levy Economics Institute calculates that up to 16 million people could take part in such a program today (including the unemployed, those working part time while seeking full-time work, and individuals currently inactive who might move into the labor market). Given that the federal government would have to pay $15 an hour for full-time jobs, plus benefits equal to 20 percent of wages, total labor costs per worker would be $37,440 per year. That’s before the cost of materials and of administering the program itself. Even assuming some opt for part-time positions, and ignoring the non-labor costs, we are talking about a gross cost of up to around 2.4 percent of GDP, significantly higher than the existing Medicaid program (2 percent of GDP).
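For readers who want to check the arithmetic, a quick back-of-envelope sketch (my own, with a rough $20 trillion GDP figure assumed purely for scale; the Levy estimates themselves are more detailed):

```python
# Back-of-envelope check of the cost figures above. The ~$20T GDP figure
# is an assumption for scale, not a figure from the Levy study.
WAGE = 15.00              # dollars per hour
HOURS = 40 * 52           # full-time hours per year = 2,080
BENEFITS = 0.20           # benefits as a share of wages

cost_per_worker = WAGE * HOURS * (1 + BENEFITS)
print(f"labor cost per full-time worker: ${cost_per_worker:,.0f}")   # $37,440

participants = 16_000_000     # Levy's upper-bound participation estimate
gdp = 20e12                   # assumed U.S. GDP, ~$20 trillion

gross = participants * cost_per_worker
print(f"gross labor cost: ${gross/1e9:,.0f}B = {gross/gdp:.1%} of GDP")
# ~$599B, ~3.0% of GDP if everyone were full time; part-time take-up
# pulls this down toward the ~2.4% figure cited above.
```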

The net cost on these assumptions would of course be lower. People who take the jobs will require less in welfare payments and pay some of their income back in taxes. Some might wisely consider it risky to tie their employment fortunes to the whims of politicians and their willingness to fund the program, and so remain in the private sector. But even taking this into account, and assuming the policy generates the macroeconomic bounty that the Levy researchers expect, they still think the annual net cost will be between 0.8 and 2 percent of GDP, with the program employing up to 10 percent of the workforce. That would in itself be a huge new commitment to finance at a time when the long-term fiscal outlook is already dire and the short-term deficit is already expected to balloon to over 5 percent of GDP in the coming years.

Crowd-out

In reality, the fiscal costs are likely to be much, much higher, and the economic welfare losses even more significant, because a public jobs guarantee program would significantly crowd out productive private-sector activity in the labor market and the broader economy. This type of policy would radically alter the behavior of both workers and businesses, and thus the supply of and demand for labor.

The Census shows that, among those who worked in 2016, 70+ million Americans earned under $32,500 (the full-time job guarantee salary would be $31,200). Yes, not all of these would seek out positions on the jobs guarantee program. But a large proportion would, especially those employed in uncertain roles with low levels of job security.

In fact, some earning even more than $31,200 might consider leaving their jobs to pursue guaranteed roles if they perceive better working conditions or an easier work life (asked under what conditions someone would be fired from such a role, the Levy Institute paper suggests that you would be sacked for failing to show up, but that your performance would not be judged by “private sector ‘efficiency criteria’”). It’s not inconceivable, then, that over 25 percent of the labor force could find itself part of the scheme.

This crowd-out is likely to be particularly acute in low productivity regions, and (ironically) after economic downturns. A nationwide jobs guarantee program paying $15 an hour will be particularly attractive to workers in low wage regions, and by setting a de facto wage floor the program will prevent private investment in regions on the basis of cheap labor.

Though no doubt there would be some demand spillovers from well-paid jobs, the net consequence is highly likely to be weaker private sector job creation in poor regions, which has been the experience of countries such as Britain with a nationwide minimum wage and national public-sector pay bargaining. Proponents of the scheme see “higher labor standards” as a good thing, but absent productivity improvements, policies that raise labor costs significantly will reduce the quantity of labor demanded.

There’s good reason to expect the policy would reduce the efficiency and productive potential of the economy too. Taxes will eventually need to be raised to cover the net cost of the program. In infrastructure and caregiving, costs will rise, because nobody would now work in these directly substitutable sectors for less than the wage and conditions offered by the job guarantee program. This will waste resources, and there is highly likely to be overinvestment in relatively low-value ventures and programs simply to keep workers employed, especially given that the explicit aim is to provide employment rather than to deliver projects at low cost.

Throwing resources at regions with higher levels of unemployment, and after recessions, will work directly against market signals and deter the mobility of labor (in geographic and industrial terms) and capital to their most productive uses given prevailing market conditions. This is important: yes, employment is highly likely to have some positive externalities; but the real drivers of better living standards over time are productivity improvements, discovered through market-based activity.

Proponents of this policy seem to put enormous weight on the idea that time out of the labor market has huge scarring consequences that could be ameliorated by any type of temporary employment. But the literature shows that temporary jobs do not provide workers with the skills needed to improve longer-term labor market outcomes.

Corruption and incentives

As if all these consequences were not bad enough, such a program would be ripe for corruption and political interference at the government, provider, and individual levels. Senator Sanders’ plan would be administered by the Department of Labor, with local and state governments submitting projects to regional offices for consideration. There’s a huge question mark over whether projects would be considered on economic grounds when there is an incentive to create make-work schemes that aid particular politicians, or to put resources toward “public good” causes or NGOs more in line with the ethos of the governing party. For Democrats this might be environmental projects; for Republicans it might be, say, a wall on the southern border.

NGOs and local public bodies themselves will have incentives to apply for federal funds for projects that would otherwise have occurred anyway, and to maximize the number of applications. Pork barrel projects would proliferate. What is more, at the individual level, the guarantee coupled with the purported unwillingness to judge worker performance on a commercial basis will incentivize low levels of work effort on the margin.

Conclusion

The Jobs Guarantee, then, is an extremely large and costly endeavor that would have major economic consequences and risk an extensive federal politicization of the labor market and public project delivery.

The US does have serious labor market issues to contend with – not least depressed labor force participation and a weak productivity outlook – but are things really so bad that they require such a risky and extensive policy response?

Well-paid jobs and low levels of real unemployment are outcomes desired by all. But attempting to achieve that through this program amounts to cracking a nut with a sledgehammer, undermining what matters far more for living standards: efficiency and productivity. 

This morning in Jesner v. Arab Bank, the Supreme Court split 5-4 along conventional ideological lines to confirm that it is up to Congress, not the judiciary, to decide whether and when American courts should entertain international human rights cases against foreign defendants. It thus continues the course of its 2013 Kiobel v. Royal Dutch Petroleum decision, about which I wrote here at the time:

Today the U.S. Supreme Court unanimously and decisively buried the misguided, decades-long hope of some lawyers and academics that they could turn the Alien Tort Statute (ATS) into a wide-ranging method of hauling overseas damage claims into American courts. All nine Justices agreed with the Second Circuit that the statute does not grant jurisdiction for our courts to hear a controversy over alleged assistance in human rights violations outside the U.S. against non-U.S. plaintiffs by a non-U.S. business. A majority of five justices reiterated and relied on our law’s strong traditional presumption against extraterritoriality, that is to say, presumption against applying the law to actions that take place in other countries. While parting from this reasoning, four concurring justices nonetheless endorsed a view of ATS as applicable extraterritorially only to very extreme misconduct comparable to piracy, and also as sharply limited by considerations of comity with foreign sovereigns.

It is a good day for a realistic and modest sense of what United States courts of justice can successfully do, namely: do justice within the United States.

But in Kiobel, as Kenneth Anderson noted in the Cato Supreme Court Review that year, the Court ducked the question it had originally agreed to decide: may foreign corporations be sued in U.S. courts under the ATS, or only individuals? The correct answer is that Congress, not the courts, should decide. Issues of foreign affairs are peculiarly the province of the political branches, which can weigh (and take responsibility for) the dangers of engendering friction with foreign sovereigns by extending liability (Jordan, an important U.S. ally, has for years been riled by the attempt to go after Arab Bank over handling transactions, including some in New York, that allegedly facilitated terrorist acts abroad.) 

The only time Congress chose affirmatively to create such a cause of action, in a 1991 statute providing torture victims a right to sue over abuse abroad, it placed significant limits on the right, among which was providing that only individuals could be sued. Parallel restrictions should be read into other, unenumerated causes of action under the ATS, said Justice Anthony Kennedy in his opinion for the majority today; that means that unless Congress says so, the statute would enable holding individual wrongdoers liable but not imputing their liability to an organization. Writing separately in partial or full concurrence, Justices Gorsuch, Alito, and Thomas would have gone further to make clear that courts should simply not get into the business of inventing causes of action in this area, especially given the ATS’s history as an early American enactment meant to reduce rather than exacerbate diplomatic tensions. 

Not too many years ago, whole sectors of American legal academia were besotted with notions of “universal jurisdiction” in which misbehavior taking place in Africa, Latin America, or Southeast Asia could be sued over in American courts – in practice, often, in certain West Coast federal courts that welcomed such suits. The Court’s retreat from that proposition has been steady and prudent. Despite the dissent by Justice Sonia Sotomayor, no one has immunized business miscreants against anything. The Court has simply made it clear that if the United States courts are to become a sort of human rights policeman to the world, it is Congress that will need to decide to fit them out for that task. 


Toronto Police Chief Mark Saunders said that there is no evidence that yesterday’s “van incident,” in which Alek Minassian used a van to murder 10 people and injure 15 others on a busy sidewalk, was a terrorist attack.  To count as a terrorist attack, Minassian’s motivations must have been political, religious, or social in nature, beyond a simple desire to terrorize or murder others.  Minassian’s motives are so far unclear; there has been much speculation about his social awkwardness and possible anti-women views, but so far little about his political or religious opinions.  This could change as police and investigators uncover new facts.

Many in the media and in government, prompted by Minassian’s mass murder, are commenting on terrorism in Canada, but with little context.  Using the methods employed in my recent terrorism risk analysis for the United States, I’ve found that terrorism is rare in Canada.  Assuming that investigators eventually confirm that Minassian’s mass murder was not terrorism, as they currently claim, the annual chance of being murdered in a terrorist attack on Canadian soil over the last 25 years was about one in 60.4 million per year.  The annual chance of being injured in a terrorist attack on Canadian soil during that time was about one in 7.4 million per year.

Data and Methodology

This post examines 25 years of terrorism on Canadian soil, from 1993 through April 23, 2018.  Fatalities and injuries in terrorist attacks are the most important measures of the cost of terrorism.  The information sources are the Global Terrorism Database (GTD) at the University of Maryland, the RAND Corporation, and others.  I excluded three fatalities counted by the GTD because they were the terrorists themselves.  I further grouped the ideology of the deadly attackers into four broad categories: Islamist, anti-Muslim, anti-government, and unknown/other.  GTD descriptions of the attackers, news stories, and Wikipedia were my guides in grouping the attacks by ideology.  The grouping was easy, as there were so few terrorist attacks in Canada from 1993 to the present.  The number of Canadian residents and non-terrorist murders in each year comes from Statistics Canada.
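The risk figures throughout this post come from the same simple calculation: victims per year divided into the population. A minimal sketch follows; the population figure below is a rough period average used only for illustration, while the post’s actual numbers use Statistics Canada’s annual series.

```python
# Minimal sketch of the annual-risk calculation. The population figure is
# a rough period average for illustration; the actual calculations use
# Statistics Canada's year-by-year data.
YEARS = 25.3                # 1993 through April 23, 2018
POPULATION = 33_400_000     # approximate average Canadian population

def annual_odds(victims: int) -> float:
    """Return N such that the annual chance of being a victim is 1 in N."""
    victims_per_year = victims / YEARS
    return POPULATION / victims_per_year

print(f"murdered in a terrorist attack: 1 in {annual_odds(14):,.0f} per year")
print(f"injured in a terrorist attack:  1 in {annual_odds(114):,.0f} per year")
# roughly 1 in 60 million and 1 in 7.4 million, matching the figures above
```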

Terrorism Risk in Canada

Terrorists have murdered 14 people on Canadian soil from 1993 through April 23, 2018.  Islamists murdered 3 of the victims, an anti-government terrorist murdered 3, suspected terrorists of an unknown ideology murdered 2, and 6 were murdered by an anti-Muslim terrorist named Alexandre Bissonnette in a shooting at a Quebec mosque last year (Figure 1).  Of the 63 terrorist attacks in Canada during that time, according to a wide definition of the term “terrorist” in the GTD, only 7 resulted in a fatality.  In other words, 89 percent of terrorist attacks in Canada during the last 25 years killed nobody.

Figure 1

Murders in Canadian Terrorist Attacks by the Ideology of the Attacker, 1993-2018

 

Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, and author’s calculations.

Although most of the recorded terrorist attacks targeted small groups in Canada, such as Muslims or the police, it is useful to get a sense of the relative danger by looking at the annual chance of being murdered by a terrorist inspired by each ideology.  The annual chance of being murdered by an Islamist terrorist was the same as that of being murdered by an anti-government terrorist: about one in 281.7 million per year.  The annual chance of being murdered by a terrorist with an unknown ideology was about one in 422.5 million per year.  The greatest risk, though still tiny, was of being murdered by Alexandre Bissonnette in his mosque attack last year: one in 140.8 million per year over the 25 years.

There were 114 injuries in terrorist attacks on Canadian soil from 1993 through April 23, 2018 (Table 1).  Terrorists with unknown or other ideologies caused almost 68 percent of those injuries.  Alexandre Bissonnette, the anti-Muslim terrorist, was personally responsible for 17 percent of all injuries in terrorist attacks during this time in Canada.  Islamist terrorists were responsible for about 11 percent of injuries while anti-abortion and anti-government terrorists were responsible for 4 and 2 percent of all injuries, respectively. 

Table 1

Injuries in Canadian Terrorist Attacks by the Ideology of the Attacker, 1993-2018

Ideology          Injuries   Annual Chance of Being Injured   Percent of All Injuries
Unknown/Other           77   1 in 10,973,614                  67.5%
Anti-Muslim             19   1 in 44,472,016                  16.7%
Islamist                12   1 in 70,414,026                  10.5%
Anti-abortion            4   1 in 211,242,077                  3.5%
Anti-government          2   1 in 422,484,154                  1.8%
Total                  114   1 in 7,412,003                    100%

Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, and author’s calculations.

Comparison to Murder

Fatalities and injuries in terrorist attacks are rare, so a comparison with non-terrorist murder puts the danger in perspective.  There were about 14,807 murders in Canada from 1993 through April 23, 2018.  Because the number of murders has not been reported for 2016-2018, I assumed that the count for each of those years was the same as in 2015.  The annual chance of being murdered outside of a terrorist attack was about one in 57,000 per year from 1993 through 2018 – about 1,058 times greater than the chance of being murdered in a terrorist attack.

Conclusion

The chance of being murdered in a terrorist attack in Canada over the last 25 years was small.  By comparison, the annual chance of being murdered in a terrorist attack in the United States over that time was about 25 times greater than in Canada.  Similarly, the annual chance of being murdered in a terrorist attack in Canada also appears to be lower than in Europe.  And the chance of dying in a non-terrorist homicide in Canada was over 1,000 times greater.  Alek Minassian’s horrific mass murder does not appear to be a terrorist attack based on the information available at this time, but if it does turn out to be terrorism, it would be the deadliest attack on Canadian soil since December 6, 1989, when Marc Lepine murdered 14 people and injured 14 others in an attack inspired by his anti-feminism.  The murder of innocent people is tragic no matter the circumstances, and the perpetrator should be punished to the fullest extent of the law.  Regardless, Canadians can take some comfort in the fact that the chance of being murdered in a terrorist attack in Canada is small in absolute terms, relative to the residents of other developed nations, and compared to the chance of being murdered in a non-terrorist homicide.

