Feed aggregator

Just over a year into a presidency already full of unusual precedents, President Trump has agreed to a North Korean offer, communicated through South Korean national security adviser Chung Eui-yong, to meet face to face with Kim Jong-un. Though such meetings have been bandied about in the past, no sitting U.S. president has ever met with a sitting North Korean Supreme Leader. It is a prospect fraught with risk and opportunity.

Kim reportedly made this offer along with a statement that North Korea is “committed to denuclearization.” He left ambiguous what he would want in return, though, according to Chung, it involves a commitment that South Korea and the United States “not repeat the mistakes of the past.” Given what Pyongyang has previously demanded, this likely refers to upholding our side of any bargain, and possibly an end to what they call America’s “hostile policy” (i.e., our alliance with South Korea, proximate U.S. military assets, joint military drills, and economic sanctions).

It is a bewildering and unexpected development. Just a few weeks ago, Kim and Trump were trading barbs about how stupid the other is and making explicit threats of nuclear aggression. Answers to a few preliminary questions are in order.

Was Trump right to say yes?

Yes, but we should proceed with caution. The dangerous cycle of taunts, threats, and ever-heightening tensions over the past several months risked inadvertent escalation. The Trump administration even publicly floated the idea of a so-called “bloody nose” attack, involving a surgical strike against North Korean targets in the hopes that they would back down in response. Essentially every informed assessment of the consequences of even this kind of minor use of force predicts catastrophic escalation and possibly nuclear war, with higher-end casualty estimates in the many millions of people and with no clear political win at the other end of the conflagration.

Agreeing to meet face to face with an adversary is, by its nature, the opposite of the bluster and threats of war that have been the rule in Trump’s first year, and it is therefore a welcome development. The consensus among analysts is that any war would be calamitous, so it is hard to see how we had a choice here. Declining the offer would have meant a return to confrontation and antagonism.

That said, we should not have high confidence that the Trump administration is prepared to actually handle serious face to face negotiations. This is not really how smart diplomacy is done. Typically, lower-level officials, including seasoned diplomats and technical experts, engage in private discussions for years, determining each side’s red lines, finding areas of compromise, and establishing arrangements for neutral verification and mediation protocols. Only after progress is made at this level would a meeting between heads of state be appropriate, constructive, and, crucially, safe for both sides.

Furthermore, Trump has consistently denigrated the value of diplomacy and has hollowed out the State Department, stripping it of the kind of diplomatic professionals needed now to meet this challenge. Trump has not even appointed an ambassador to South Korea yet. In fact, he unexpectedly withdrew the impending nomination of Victor Cha for that post after Cha told him preventive war against the North was a bad idea. This has left us terribly unprepared for such an unprecedented and unpredictable meeting.

Things could very easily unravel. And the consequences of failure could be extreme. As Sen. Lindsey O. Graham (R-S.C.) said, “The worst possible thing you can do is meet with President Trump in person and try to play him. If you do that, it will be the end of you — and your regime.” Victor Cha writes today in the New York Times that “failure could also push the two countries to the brink of war.”

Trump may very well have a similar outlook.

Why did North Korea make this offer?

It is very hard to say. The strategic and tactical calculations of states are inherently opaque, particularly in extraordinary and rapidly developing situations like this. 

Many have argued that Kim has offered to meet because of the “maximum pressure” policy of the Trump administration – specifically the additional economic sanctions imposed on North Korea over the past year. But this is a wild oversimplification at best. The sanctions are surely weakening North Korea’s already ailing economy and tightening the screws on the regime, but the real key on sanctions was greater Chinese enforcement. Trump would be eager to take credit for Beijing’s slightly harder line against North Korea of late, but in reality, it has been a gradual process resulting from changing Chinese perceptions of their regional role and their increased frustration due to Pyongyang’s progress on nuclear and missile development over the past couple of years.

It is not inconceivable that Trump’s threats have scared Kim into offering direct talks. The North Korean regime may view Trump as unstable. The Washington Post reports that top North Korean officials have even read Michael Wolff’s Fire and Fury, a book about Trump’s chaotic first year in office that depicts the president as erratic, ignorant, and impulsive. Maybe Kim thought Trump was actually mad enough to unleash a war that is widely acknowledged to be too costly to contemplate. I am skeptical. It is just as likely that Kim understands Trump is a political novice who departed from the accepted parameters of debate in Washington, DC by suggesting during the campaign that South Korea assume responsibility for its own security. Maybe Kim thinks he can outsmart Trump or make a fool of him at the negotiating table.

Another possible explanation is that Pyongyang has leverage like it has never had before, and so now is as propitious a time as any to negotiate at the highest levels. The regime feels emboldened by the successful completion of their nuclear development, as they refer to it. Now they feel their nuclear deterrent is strong enough to meet with their greatest enemy on more equal footing.

It is even possible that Pyongyang sincerely believes direct talks are the best way to dial down tensions. Perhaps they really are willing to make concessions in return for reciprocal concessions from the United States. That is perhaps the most rational explanation for the regime’s motivations here.

However, the notion that North Korea is really ready to denuclearize is far-fetched, to say the least. They have devoted enormous resources, at great risk, to obtain their current capabilities. They won’t forfeit them without truly significant concessions from the United States.

Is it likely to succeed?

No. Despite his claims to be a world-class dealmaker, Trump is manifestly unprepared to engage in such difficult negotiations. His mishandling of diplomatic engagements with other world leaders does not leave me with much confidence that he can conduct himself prudently in such high-stakes talks with an avowed enemy like Kim Jong-un.

Successful negotiations require a solid understanding of the interests of all the players, some measure of regional expertise, and technical knowledge of how to establish limitations on, and verification regimes for, the nuclear program. They also require experienced diplomats and strategic clarity about the political goals driving all sides. What do we expect to get out of this meeting? What are realistic expectations? What are we willing to concede? What do we expect Pyongyang is willing to give up? Finally, negotiating partners must have confidence that the other side will uphold its commitments under any agreement. That crucial element is absent here. Neither side trusts the other, and each has a long list of accusations that the other has cheated and reneged on past arrangements.

The initial announcement suggested this face to face meeting would take place by May. With so little time to prepare, and with such daunting obstacles, we do not have the ingredients for probable success. It is not clear what choice we had, however.

On Thursday, President Trump held a meeting to discuss whether and how violent video games affect gun violence, particularly school shootings. Before getting into the details, perhaps we should take a step back and read a classic fairy tale from 1812, printed in the Brothers Grimm’s Nursery and Household Tales and titled “How the Children Played Butcher with Each Other”:

A man once slaughtered a pig while his children were looking on. When they started playing in the afternoon, one child said to the other: “You be the little pig, and I’ll be the butcher,” whereupon he took an open blade and thrust it into his brother’s neck. Their mother, who was upstairs in a room bathing the youngest child in the tub, heard the cries of her other child, quickly ran downstairs, and when she saw what had happened, drew the knife out of the child’s neck and, in a rage, thrust it into the heart of the child who had been the butcher. She then rushed back to the house to see what her other child was doing in the tub, but in the meantime it had drowned in the bath. The woman was so horrified that she fell into a state of utter despair, refused to be consoled by the servants, and hanged herself. When her husband returned home from the fields and saw this, he was so distraught that he died shortly thereafter.

The end.

Violent entertainment is nothing new, nor are complaints about it from the older generation. In usual fashion, Trump claimed to be “hearing more and more people say the level of violence on video games is really shaping young people’s thoughts.” But it’s not true. People all over the world play video games, especially young boys, and there’s no corresponding correlation with acts of violence. In fact, some studies have shown that violent video games might reduce crime by keeping young men off the street and glued to their TVs.

In 2011, the Supreme Court decided the case of Brown v. Entertainment Merchants Association, holding that California’s 2005 law banning the sale of “violent” video games to minors violated the First Amendment. Cato filed a brief in that case that documented the history of complaints about supposedly uniquely violent entertainment and the effectiveness of industry self-regulation, such as the MPAA movie ratings, the ESRB ratings for video games, and the Comics Code, over ham-handed government oversight. The Court cited Cato’s brief in its opinion.

Due to Brown, any federal law regulating violent video games is likely to be struck down by the courts. That doesn’t mean, however, that Trump and other government agents can’t make things uncomfortable for the industry. Most likely, we’ll just hear a bunch of complaining about “these kids today” from older generations. Everything old is new again, particularly when new forms of entertainment come around that are foreign to older generations.

As many people know, Brothers Grimm fairy tales can be shockingly violent and disturbing. In the Grimms’ “Cinderella,” the stepsisters slice off part of their feet to fit the glass slipper. When the prince notices that “blood was spurting” out of the shoes, he disqualifies them. Some critics were shocked at the tales and urged parents to protect their children from the gruesome content. Later editions of the Brothers Grimm toned down some parts, but in other parts, particularly violence suffered by evil doers in order to teach a moral lesson, the gore actually increased.

In the late 19th century, “dime novels” and “penny dreadfuls” were blamed for youth violence. An 1896 edition of the New York Times told of the “Thirteen Year Old Desperado” who robbed a gold watch from a jeweler and fired a gun while being pursued. “The boy’s friends say that he is the victim of dime novel literature,” the story concludes. An 1890 Times story described Daniel McLaughlin, “who sought to emulate the example of the heroes of the dime novels and ‘held up’ Harry B. Weir in front of 3 James Street last night.”

Next there were movies, which apparently made dime novels look tame, as the Times wrote in 1909:

The days when the police looked upon dime novels as the most dangerous of textbooks in the school for crime are drawing to a close. They have found a new subject for attack. They say that the moving picture machine, when operated by the unscrupulous, or possibly unthinking, tends even more than did the dime novel to turn the thoughts of the easily influenced to paths which sometimes lead to prison.

In fact, the Supreme Court didn’t grant movies First Amendment protection until 1952, ruling in a 1915 case that movies could “be used for evil” and thus could have their content regulated.

Movies might be bad, but violent radio dramas actually make listeners play out the violence in their heads, a fact which concerned some in the ‘30s and ‘40s. In 1941, Dr. Mary Preston released a study in the Journal of Pediatrics which claimed that a majority of children had a “severe addiction” to radio crime dramas. One 10-year-old told her that “Murders are best. Shooting and gangsters next. I liked the Vampire sucking out blood very much.”

In the 1950s, America had a prolonged scare about violent comic books prompted by the psychiatrist Dr. Frederic Wertham. Wertham exhorted parents to understand that comics were “an entirely new phenomenon” due to their depictions of “violence, cruelty, sadism, crime, beating, promiscuity,” and much more. Writing in the Saturday Review in 1948, Wertham chastised those who downplayed the risk: “A thirteen-year-old boy in Chicago has just murdered a young playmate. He told his lawyer, Samuel J. Andalman, that he reads all the crime comic books he can get hold of. He has evidently not kept up with the theories that comic-book readers never imitate what they read.” Wertham’s activism led to congressional hearings and eventually to the comic book industry’s creation of the Comics Code Authority.

Since the 1950s, we’ve seen periodic scares about violent television, movies, and now video games. And although the idea that violent entertainment might cause crime can’t be dismissed out of hand, empirical studies consistently fail to show a connection, just as with video games. The most consistent correlation is that of older generations misunderstanding the pastimes of the youth, coupled with a hearty sense of nostalgia for the good ol’ days.  

Protecting citizens from threats domestic and foreign is the most important function of government.  Among those very threats is a government willing to concoct and aggrandize dangers in order to rationalize abuses of power, which Americans have seen in spades since 9/11. Justifying garden variety protectionism as an imperative of national security is the latest manifestation of this kind of abuse, and it will lead inexorably to a weakening of U.S. security.

The tariffs on imported steel and aluminum that President Trump formalized this afternoon derive, technically, from an investigation conducted by the U.S. Department of Commerce under Section 232 of the Trade Expansion Act of 1962.  The statute authorizes the president to respond to perceived national security threats with trade restrictions. While the theoretical argument to equip government with tools to mitigate or eliminate national security threats by way of trade policy may be reasonable, this specific statute does little to ensure the president conducts a rigorous threat analysis or applies remedies that are proportionate to any identified threat.  There are no benchmarks for what constitutes a national security threat and no limits to how the president can respond. 

In delegating this authority to the president, Congress in 1962 (and subsequently) simply assumed the president would act apolitically and in the best interest of the United States.  The consequences of this defiance of the wisdom of the Founders—this failure to imagine the likes of a President Trump—could be grave.

Immediately, costs will rise for the U.S. industries that rely on steel and aluminum as raw materials. By how much depends on the relative importance of steel and aluminum in manufacturing the respective downstream product.  For example, steel accounts for about 50 percent of the material costs (and about 25 percent of the total cost) of producing an automobile, but closer to 100 percent of the material costs of producing the pipes and tubes used in oil and gas extraction and transmission infrastructure.

According to the Bureau of Economic Analysis, the industries that consume steel as an input to their production account for 5.8 percent of GDP, while steel producers account for just 0.2 percent of GDP.  Steel users contribute $29 for every $1 contributed by steel producers.  Bureau of Labor Statistics data show that for every worker in steel production there are 46 workers in steel-consuming industries. It doesn’t require complicated analysis to see that the costs to the broader U.S. manufacturing sector and the economy at large will dwarf any small benefits that accrue to the steel lobby.
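Those ratios are simple arithmetic on the cited shares. A quick back-of-the-envelope check, using the GDP figures quoted above and the 25 percent steel tariff rate (the automobile pass-through at the end is an illustrative upper bound, not a figure from this post):

```python
# GDP shares as quoted above (Bureau of Economic Analysis figures cited in the text).
steel_consuming_gdp_share = 5.8   # percent of GDP, steel-consuming industries
steel_producing_gdp_share = 0.2   # percent of GDP, steel producers

# Dollars contributed by steel users for every dollar contributed by producers.
contribution_ratio = steel_consuming_gdp_share / steel_producing_gdp_share
print(round(contribution_ratio))  # -> 29

# Illustrative pass-through: if steel is about 25 percent of a car's total cost,
# a 25 percent steel tariff raises the car's total cost by at most ~6.25 percent
# (assuming full pass-through and no substitution, which overstates the effect).
tariff_rate = 0.25
steel_share_of_total_cost = 0.25
max_cost_increase = tariff_rate * steel_share_of_total_cost
print(f"{max_cost_increase:.2%}")  # -> 6.25%
```

The second calculation is an upper bound; in practice manufacturers substitute inputs and absorb some of the cost in margins.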

Meanwhile, the costs to the economy will be compounded, as foreign governments target U.S. exporters for retaliation.  Lost market share abroad will mean smaller revenues for U.S. companies that need to hit profit targets in order to invest, expand, and hire.

By imposing these tariffs, President Trump has substantially lowered the bar for discretionary protectionism, inviting governments around the world to erect trade barriers on behalf of favored industries.  Ongoing efforts to dissuade China from continuing to force U.S. technology companies to share source code and trade secrets as the cost of entering the Chinese market will likely end in failure, as Beijing will be unabashed about defending its Cybersecurity Law and National Security Law as measures necessary to protect national security.  That would be especially incendiary, given that the Trump administration is pursuing resolution of these issues through another statute—Section 301 of the Trade Act of 1974—which could also lead the president to impose tariffs on China unilaterally.

At the moment, members of Congress are mobilizing in an effort to neutralize or somehow contain the damage from Trump’s action. Sen. Jeff Flake (R-AZ) announced an hour or so ago that he “will immediately draft and introduce legislation to nullify these tariffs.”  But the chances that this or any other congressional effort will succeed seem remote.  A veto-proof majority would be needed, and according to a Quinnipiac University poll conducted this week, 67 percent of Americans who identify as Republicans agree with the president’s claim that “a trade war would be good for the United States and could be easily won.” By contrast, only 7 percent of Democrats and 19 percent of Independents agreed.

Unfortunately, Congress seems to have awoken too late to the dangerous situation created by its delegation of authorities without the necessary constraints.  Perhaps we will see renewed interest in legislation Sen. Mike Lee (R-UT) introduced last year that would reestablish more robust congressional oversight of trade policy decision making.  In the meantime, let’s hope for the best.

A favorite statistic cited by paid family leave activists is thoroughly misleading. Activists regularly argue that only 15 percent of workers have access to paid family leave, relying on a Bureau of Labor Statistics (BLS) number. Just this week, the figure was cited in a Harvard Business Review article, a WSJ letter, and a Bloomberg Businessweek report on Leaning In, among other places.

But the BLS figure doesn’t agree with federal data sets or national survey results, including the Census Bureau’s Survey of Income and Program Participation (SIPP), FMLA Worksite and Employee Surveys, Census Bureau’s Current Population Survey (CPS), or the National Survey of Working Mothers. Estimates of access to paid leave by source are detailed in the table below.

Table: Estimates of Access to Paid Parental Leave

 

FMLA Worksite and Employee Surveys: 57% of women and 55% of men received pay for parental leave from any source (2012 data)
National Survey of Working Mothers: 63% of employed mothers said their employer provided paid maternity leave benefits (2013 survey)
Census Bureau’s Survey of Income and Program Participation (SIPP): 50.8% of working mothers report using paid leave of some kind before or after childbirth (2006–2008 data)
Census Bureau’s Current Population Survey (CPS): dating back to 1994, on average 45% of working women who took parental leave received some pay (1994–2014 data)

The difference between the BLS figure and other federal and national figures is considerable. For example, the BLS figure is more than 40 percentage points lower than the FMLA figure, and there is a 50 percentage point spread between the BLS number and the National Survey of Working Mothers number.

That is partly because BLS uses a peculiar definition of paid family leave that excludes most types of paid leave that can be used for family reasons. The particulars are described in greater detail here. As a result, the BLS figure is an extreme outlier even compared to other federal data sources.

As an extreme outlier, the BLS figure is misleading in the extreme. To engage in an accurate conversation about the experience of working parents, activists and policy makers should abandon it. 

In his new book Enlightenment Now and in his McLaughlin Lecture at the Cato Institute this week, Steven Pinker made the point that we may fail to appreciate how much progress the world has made because the news is usually about bad and unusual things. For instance, he said, quoting Max Roser, if the media truly reported the important changes in the world, “they could have run the headline NUMBER OF PEOPLE IN EXTREME POVERTY FELL BY 137,000 SINCE YESTERDAY every day for the last twenty-five years.”

This is understandable. As Pinker writes, 

News is about things that happen, not things that don’t happen. We never see a journalist saying to the camera, “I’m reporting live from a country where  a war has not broken out”—or a city that has not been bombed, or a school that has not been shot up. As long as bad things have not vanished from the face of the earth, there will always be enough incidents to fill the news, especially when billions of smartphones turn most of the world’s population into crime reporters and war correspondents.

And among the things that do happen, the positive and negative ones unfold on different time lines. The news, far from being a “first draft of history,” is closer to play-by-play sports commentary. It focuses on discrete events, generally those that took place since the last edition (in earlier times, the day before; now, seconds before). Bad things can happen quickly, but good things aren’t built in a day,  and as they unfold, they  will be out of sync with the news cycle. The peace researcher John Galtung pointed out that if a newspaper came out once every fifty years, it would not report half a century of celebrity gossip and political scandals. It would report momentous global changes such as the increase in life expectancy.

I’ve noted this myself. I think the mainstream media such as NPR, which I listen to morning and evening, fail to adequately examine the most important fact in modern history—what Deirdre McCloskey calls the Great Fact, the enormous and continuing increase in human longevity and living standards since the industrial revolution. If you listen to NPR or read the New York Times, you’ll be well informed about the news in general and about problems such as racism, sexism, and environmental disaster. But you won’t often be reminded that we are the richest, most comfortable, best-fed, longest-lived people in history. Or as Indur Goklany put it in a book title, you won’t hear about The Improving State of the World: Why We’re Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet.

Pinker does point out, “Information about human progress, though absent from major news outlets and intellectual forums, is easy enough to find. The data are not entombed in dry reports but are displayed in gorgeous Web sites, particularly Max Roser’s Our World in Data, Marian Tupy’s HumanProgress, and Hans Rosling’s Gapminder.” But of course those aren’t the major media. Which is why, he says, “And here is a shocker: The world has made spectacular progress in every single measure of human well-being. Here is a second shocker: Almost no one knows about it.”

So what if the media did report the most important news, the Great Fact? I asked Cato intern Thasos Athens to help me envision that:

Comparing the risk of dying in a terrorist attack to a common household accident like slipping in the bathtub is inappropriate.  After all, inanimate objects like bathtubs do not intend to kill, so people rightly distinguish them from murderers and terrorists.  My research on the hazard posed by foreign-born terrorists on U.S. soil focuses on comparing that threat to homicide, since both are intentional actions meant to kill or otherwise harm people.  Homicide is common in the United States, so it is not necessarily the best comparison to deaths in infrequent terror attacks.  Yesterday, economist Tyler Cowen wrote about another comparable hazard that people are aware of, that is infrequent, where there is a debatable element of intentionality, but that does not elicit nearly the same degree of fear: deadly animal attacks.

Cowen’s blog post linked to an academic paper by medical doctors Jared A. Forrester, Thomas G. Weiser, and Joseph H. Forrester who parsed Centers for Disease Control and Prevention (CDC) mortality data to identify those whose deaths were caused by animals in the United States. According to their paper, animals killed 1,610 people in the United States from 2008 through 2015. Hornets, wasps, and bees were the deadliest and were responsible for 29.7 percent of all deaths, while dogs were the second deadliest and responsible for 16.9 percent of all deaths. 

The annual chance of being killed by an animal was 1 in 1.6 million per year from 2008 through 2015.  The chance of being murdered in a terrorist attack on U.S. soil was 1 in 30.1 million per year during that time.  The chance of being murdered by a native-born terrorist was 1 in 43.8 million per year; native-born terrorists were thus more than twice as deadly as foreign-born terrorists, whose attacks carried a chance of 1 in 104.2 million per year.  The small chance of being murdered in an attack committed by foreign-born terrorists has prompted expensive overreactions that do more harm than good, such as the so-called Trump travel ban, yet these responses address smaller risks than those posed by animals.

In addition to the data analyzed in the Forrester et al. paper, the CDC has mortality data for animals back to 1968.  This period includes the 9/11 attacks, the deadliest terrorist attacks in world history, which helps to take account of the fat-tailed distribution of actual terrorist attacks.  From 1975 through the end of 2016, animals killed 7,548 people while terrorists of all kinds killed 3,438.  Even over this period, the annual chance of being killed by an animal was far higher than that of being killed in a terrorist attack (Table 1).

Table 1

Annual Chance of Being Killed by Different Means, 1975-2016

Means of Death: Annual Chance of Dying

Homicide: 1 in 14,296
Animal Attack: 1 in 1,489,177
All Terrorists: 1 in 3,269,432
Native-born Terrorists: 1 in 27,482,415
Foreign-born Terrorists: 1 in 3,710,897

 Sources: John Mueller, ed., Terrorism Since 9/11: The American Cases; RAND Database of Worldwide Terrorism Incidents; National Consortium for the Study of Terrorism and Responses to Terrorism Global Terrorism Database; U.S. Census Bureau, “American Community Survey”; Disaster Center, “United States Crime Rates 1960-2014”; Centers for Disease Control and Prevention (CDC); and author’s calculations.
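The “1 in N” figures in Table 1 can be roughly reconstructed from the death counts above, given an average population. A sketch, assuming an average U.S. population of about 268 million over 1975–2016 (my assumption; the article does not state the population series used):

```python
def annual_odds(deaths, years=42, avg_population=268_000_000):
    """Return N such that the annual chance of dying by this cause is ~1 in N.

    N = total person-years in the period divided by total deaths.
    """
    person_years = avg_population * years
    return person_years / deaths

# Death counts cited above for 1975 through the end of 2016 (42 years).
print(f"Animal attacks: 1 in {annual_odds(7_548):,.0f}")
print(f"All terrorists: 1 in {annual_odds(3_438):,.0f}")
```

The results land within about 1 percent of the table’s figures; the small residual gap reflects the assumed population average rather than the year-by-year series the author presumably used.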

One reason people fear terrorism so much is that it appears random and there is little one can do to avoid it.  While terrorism certainly appears random, not living in New York City or Washington, DC would have substantially reduced one’s chance of dying in a terrorist attack since 1975.  But just because terrorist attacks strike randomly and infrequently does not mean that the fear that those attacks create needs to be addressed through new public policies that spend trillions of dollars and kill many people in addition to making daily life just a little more inconvenient for little to no benefit. 

As far as I can tell, nobody suggests banning bees, dogs, or other animals just because they have killed 7,548 people since 1975.  But it is common for people to argue for banning immigrants due to the manageable hazard posed by infrequent terrorist attacks by foreign-born individuals.  Animals can be scary and they are infinitely more in control of their actions than inanimate objects like bathtubs, although probably not as much in control of themselves as human beings.  Adjusting for an American’s number and frequency of contacts with animals relative to people is essential to understanding the relative risks of dying from animals or other people.  Many of us have zero daily interaction with animals but talk to many different people. 

The chance of dying in any of these types of incidents, whether terrorism or homicide or animal attack, is small and manageable.  Certain precautions do make sense but only if they pass a cost-benefit test that counter-terrorism spending is guaranteed to fail.  Evaluating small and manageable threats such as that from terrorism relative to other small and manageable threats from homicide or animal attacks is a useful way to understand the world and where we should focus our energies and worries. 

As the nation remains fixated on the opioid epidemic, methamphetamine is making a resurgence. Meth is less expensive than heroin, and it is gaining users who fear opioid overdoses.

Meth is not new; it burst onto the scene in the early 1990s, as the crack epidemic waned.  Synthesized from readily available chemicals, meth provided a cheaper, homemade alternative to other drugs. As use increased, legislators and law enforcement officials took note.

The first major legislation targeting meth was the 1996 Comprehensive Methamphetamine Control Act. Passed unanimously by the Senate and by 386-34 in the House, the legislation required that individuals buying and selling chemicals used in meth production register with the federal government, which sought to track such chemicals and reduce their supply to manufacturers.

Despite this legislation, meth use – and fatal overdoses – increased. In response, Congress passed the Combat Methamphetamine Epidemic Act of 2005 (officially enacted in March 2006), which limited over-the-counter sales of ephedrine and pseudoephedrine, and required retailers to log customer purchase of such drugs. Simultaneously, federal and state authorities were instituting restrictions on pharmaceutical amphetamines including Ritalin and Adderall. And many states instituted prescription drug monitoring programs to reduce the availability of prescription amphetamines acquired legally and resold on the black market.

While well-intentioned, these policies may have induced users to substitute cheap, readily available meth for expensive prescription drugs. And this switch had the usual consequences of restricting access.

Overdose deaths related to methamphetamine initially declined after the crackdown on prescription access, but by 2016, the meth overdose rate had reached four times its level of a decade earlier. The likely explanation is that the restrictions pushed users from prescription versions to black-market meth, where uncertainty about purity generated increasing numbers of overdoses.

As the opioid crisis worsens and calls for supply restrictions increase, policymakers should consider how the same approach failed to halt – indeed exacerbated – the meth epidemic.

Research assistant Erin Partin contributed to this blogpost.

In the book I advertised in my last post, I argue that the Fed’s decision to switch to a “floor”-type operating system “deepened and prolonged the Great Recession.” Yet the Fed is only one of several central banks that have adopted floor systems for monetary control during the last dozen years. That fact raises some obvious questions: Did those other floor systems have similarly dire consequences? If not, why not?

In this post I answer these questions for one of those other cases: New Zealand’s. By doing so I also hope to shed some further light upon the U.S. floor system experience.

New Zealand’s Corridor System

From 1999 until July 2006, the Reserve Bank of New Zealand (RBNZ), New Zealand’s Central Bank, relied upon a symmetric corridor system in which the benchmark policy rate, known as the Official Cash Rate, or OCR for short, was kept 25 basis points above the rate paid on banks’ reserve (“settlement”) balances, and 25 basis points below the rate that the RBNZ charged for overnight loans.
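To make the mechanics concrete, here is a minimal sketch of that symmetric corridor in Python (the function and its structure are my own illustration, not RBNZ code; the rates come from the figures above):

```python
def corridor_rates(ocr: float, half_width_bp: int = 25) -> dict:
    """Deposit and lending rates bracketing the Official Cash Rate (OCR)
    in a symmetric corridor of the kind the RBNZ ran from 1999 to 2006."""
    half_width = half_width_bp / 100.0  # convert basis points to percentage points
    return {
        "deposit_rate": ocr - half_width,  # paid on banks' settlement balances
        "ocr": ocr,
        "lending_rate": ocr + half_width,  # charged for overnight loans
    }

# With the OCR at 7.25 percent, settlement balances earned 7.00 percent
# while overnight borrowing from the RBNZ cost 7.50 percent, so holding
# idle balances carried a 25-basis-point opportunity cost.
rates = corridor_rates(7.25)
```

Because the deposit rate sits below what banks can earn by lending in the market, a corridor of this kind gives banks an incentive to keep settlement balances to a minimum — which is exactly the behavior described below.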

Not long before it established its corridor system, the RBNZ implemented a Real Time Gross Settlement (RTGS) system for wholesale payments, in which interbank payments are settled bilaterally and immediately, thereby becoming final and irrevocable as transactions are processed, rather than at the end of the business day only, following the determination and settlement of net balances. Because banks’ overnight settlement balances bore an opportunity cost under New Zealand’s corridor regime, as they do in any corridor-type system, banks held very few such balances — an aggregate value of just NZ $20 million was typical — relying instead on intraday credits from the RBNZ to meet their ongoing settlement needs.

The advantage of RTGS is that it allows payments to be made “final” as soon as they’re processed, so that payees don’t have to wait until the end of the day to find out whether their money came through. In a net-settlement system, in contrast, a transfer made earlier in the day remains tentative until banks pay their net settlement dues. Should a bank fail to settle, its intraday payments have to be “unwound,” reversing any transfers made with money the bank wasn’t good for.

“Cashing Up” the Banking System

The disadvantage of RTGS, or rather of any RTGS system that relies on intraday central bank credits by permitting overdrafts on participants’ settlement accounts, is that it exposes the central bank itself to credit risk: should a bank fail while in overdraft, the central bank would incur a loss. To avoid that risk the Reserve Bank, instead of supplying unsecured intraday credit by allowing banks to overdraw their accounts, chose to supply it in the form of free but nonetheless fully-secured intraday repurchase agreements. In principle at least, if a bank with an outstanding repo failed, the RBNZ could sell the purchased security to recover any cash it had advanced.

In practice, however, the RBNZ’s decision to accept municipal and corporate paper as repo collateral meant that it still faced some risk of loss; moreover, it soon discovered that the volume of its outstanding repos with particular banks was such as made that risk uncomfortably large. Recognizing the danger, and desiring as well to reduce the frequency of delayed or failed settlements, the RBNZ determined to encourage banks to rely on overnight settlement balances, instead of intraday repos, to meet their settlement needs.

With that aim in mind, in July 2006 the RBNZ began its program of “cashing up” the New Zealand banking system. Because the Reserve Bank’s intent was to enhance banks’ liquidity without altering its monetary policy stance, this program involved several components. The first consisted of the RBNZ’s creation, between July and October, of an additional NZ $7 billion of settlement balances, while the second consisted of a concurrent 25 basis-point increase, made in five five-point increments, in the interest rate paid on those balances, aimed at encouraging banks to hold them. Finally, these other steps having been taken, the RBNZ stopped providing intraday repos. As the figure below shows, although total settlement balances hovered around NZ $8 billion during the crisis, and occasionally were raised beyond NZ $10 billion, they eventually settled down near the RBNZ’s originally-chosen target of NZ $7 billion, where they’ve remained ever since.


New Zealand Settlement Balances, 1999-2014

A Floor System, but Not for Long

Since the Reserve Bank did not find it necessary to alter its policy rate until March 2007, when it raised the OCR from 7.25 to 7.5 percent, it seems to have achieved its goal of cashing-up the banks without altering its policy stance. However, the steps it took to cash-up the New Zealand banking system did involve a fundamental change in the central bank’s operating system: from a symmetrical corridor system to what most observers have regarded as a “floor” system, in which the interest rate on settlement balances was identical to the Reserve Bank’s policy rate, and banks were well supplied with, if not satiated by, liquidity.

However, at least two crucial facts distinguish New Zealand’s floor system from floor systems employed by the Fed, the ECB, and the Bank of England. One is that, while it involved a quantity of settlement balances that was adequate to meet banks’ settlement needs, the RBNZ never took advantage of it to engage in Quantitative Easing. Having supplied banks with a level of settlement balances it judged adequate for their ordinary liquidity needs, it never attempted to enlarge those balances substantially and for an extended period by means of further, large-scale asset purchases. Instead, as the world crisis deepened during the last half of 2008 and first half of 2009, it mainly responded by cutting the OCR, and hence the interest rate it paid on banks’ settlement balances, aggressively, from 8.25 percent in June 2008 to just 2.5 percent at the end of April 2009 — with the biggest cuts coming between September 2008 and January 2009. It was, it bears noting, while these cuts were in progress that the Fed introduced its own floor system, raising  the interest rate it paid on banks’ excess reserves from zero to a final level of 25 basis points, where it was to stay until December 2015.

Second, while the Fed, the ECB, and the Bank of England retained full-fledged floor systems throughout the crisis and since, the Reserve Bank of New Zealand had already taken important steps away from such a system in August 2007, or well before the crisis reached its most critical stage with Lehman Brothers’ failure.

The RBNZ’s decision to modify its floor system was informed by a recommendation made in the same March 2006 consultation document that caused it to install that system in the first place, to wit: that “Incentives should be in place to foster an environment where the commercial banks get liquidity from each other and deal with the Reserve Bank only when liquidity is not otherwise available in the market.”

The RBNZ had hoped that its “cashed up” floor system would satisfy this requirement. “The increased base level of settlement account balances in the system,” the consultation document claimed,

should better foster the development of an inter-bank cash market. In the presence of significant market liquidity, market participants should transact cash with each other at the end of day in preference to using the Bank’s standing facilities. Development of the inter-bank market is desirable to improve the distribution of cash between ESAS [Exchange Settlement Account System] participants, leaving the Bank to concentrate on the liquidity to the system as a whole. This market, if developed, would also provide another source of information for the Bank on any inefficiencies in the market.

However, once the floor system was up and running it became clear that, instead of encouraging banks to lend and borrow settlement balances in the private, overnight market, it was encouraging at least some banks to hoard any surplus balances that came their way. Like floor systems elsewhere, New Zealand’s involved a deposit rate equal to the central bank’s overnight policy rate, which tended to be higher than the corresponding, secured interbank overnight repo rate. New Zealand’s floor system therefore allowed banks to accumulate reserves without incurring any substantial opportunity cost by doing so.

New Zealand Adopts a Tiering System

To correct the problem of reserve hoarding, the Reserve Bank needed to modify the terms of its interest payments on banks’ settlement balances so as to keep banks from holding settlement balances beyond what they actually needed for settlement purposes. The solution it settled on was a “tiering system,” with settlement balances up to a bank’s assigned tier limit earning the OCR, and balances beyond that level earning 100 basis points less. The tier levels were themselves based on banks’ apparent settlement needs, but collectively still amounted to the aggregate target of NZ $7 billion.
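The arithmetic of that penalty can be sketched as follows (a hypothetical illustration: the balance and tier-limit figures are invented, and actual tier limits were assigned bank by bank; only the OCR-minus-100-basis-points rule comes from the text):

```python
def tiered_interest(balance_m: float, tier_limit_m: float,
                    ocr_pct: float, penalty_bp: int = 100) -> float:
    """Annual interest (in NZ$ millions) on a settlement balance under
    tiering: the OCR on balances up to the tier limit, and the OCR less
    a 100-basis-point penalty on any excess."""
    within = min(balance_m, tier_limit_m)
    excess = max(balance_m - tier_limit_m, 0.0)
    excess_rate = ocr_pct - penalty_bp / 100.0
    return (within * ocr_pct + excess * excess_rate) / 100.0

# A bank holding NZ$500m against a NZ$400m tier limit at an 8.25% OCR
# earns the full OCR on the first 400m but only 7.25% on the extra 100m.
interest_m = tiered_interest(500.0, 400.0, 8.25)
```

By restoring an opportunity cost to above-tier balances, the rule removes the incentive to hoard, pushing banks back toward the private overnight market.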

Although it was originally supposed to go into effect in September 2007, the tiering system was put in place a month ahead of schedule to deal with stresses from the emerging global crisis — which “threatened to materially tighten monetary and credit conditions in New Zealand, jeopardising banks’ confidence in continuing access to credit.” In other words, the RBNZ found it more desirable than ever to move away from an orthodox floor system as credit markets, and markets for overnight bank funding especially, tightened, so as to keep its own payments arrangements from contributing unnecessarily to that tightening.

As Ian Nield (2008, p. 14) explains, the establishment of the tiering system, together with the Reserve Bank’s decision to accept domestic bank bills in its overnight standing facility, “had an immediate effect which, broadly speaking, re-normalised the domestic bank bill market,” especially by reducing short-term money market spreads. As the next figure shows, from Enzo Cassino and Aidan Yao (2011, p. 40), it managed to limit such spreads far more successfully than either the Fed or the ECB, and to do so without adding large quantities of fresh reserves to its banking system, though the difference also reflected the fact that New Zealand’s banks were not so encumbered with toxic assets as some U.S. and European banks.


Three-Month LIBOR-OIS Spread

Some Lessons

New Zealand’s floor experience makes for an interesting comparison with that of the United States. Of parallels, perhaps the most interesting is that in both cases a mere 25 basis point increase in the rate paid on banks’ central-bank balances proved sufficient to sustain a switch from a corridor or corridor-like operating system to a floor system. That such a small absolute change was all it took is particularly impressive in New Zealand’s case, for whereas in the U.S. between 2009 and 2015 25 basis points was a relatively significant amount in comparison to then-prevailing short-term rates, in New Zealand in July 2006 the OCR was 7.25 percent — making the 25 basis-point increase in the rate paid on bank deposits proportionately much smaller. The New Zealand case therefore seems to supply strong evidence in support of Donald Dutkowsky and David VanHoose’s claim that even very small changes in reserve-compensation schemes can suffice to trigger major central bank operating-system regime changes.
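The proportional point can be checked with a line of arithmetic (an illustrative back-of-the-envelope calculation; the U.S. figure assumes short-term rates near the 25-basis-point interest-on-reserves floor that prevailed between 2009 and 2015):

```python
# 25 basis points measured against each country's prevailing short-term rate
nz_share = 0.25 / 7.25  # against a 7.25% OCR: roughly 3.4% of the policy rate
us_share = 0.25 / 0.25  # against ~0.25% U.S. short rates: comparable in magnitude
```

Despite being proportionately tiny in the New Zealand case, the change was still enough to flip the operating regime — which is what lends the Dutkowsky-VanHoose claim its force.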

But it’s the differences between the two experiences that are, after all, most striking. Chief among these is the fact that New Zealand installed its floor system well before the financial crisis began, and did so for reasons unrelated to monetary control. Its sole goal was that of boosting banks’ liquidity to limit its own exposure to intraday credit risk — not to combat a crisis. For this purpose, raising the reward paid on bank settlement balances made perfect sense, for the point was to encourage banks to hold more such balances, and therefore become more liquid, at a time when there was no credit crunch. The switch was undertaken, moreover, in a neutral manner, so as to leave the Reserve Bank’s monetary policy stance unchanged.

The U.S. in October 2008 was, in contrast, in the throes of a credit crisis, when, in retrospect at least, the last thing it needed was a further tightening of credit. Yet the Fed’s decision to start paying interest on bank reserves that month was motivated by its desire, not to enhance banks’ liquidity, but to get them to hoard reserves that the Fed was creating through its emergency lending programs, so that those reserves would not translate into a loosening of the Fed’s policy stance. Interest on excess reserves was therefore resorted to as an instrument of monetary control, and specifically as a means of monetary tightening aimed at offsetting the loosening that would otherwise follow fresh reserve injections. Later, the same new instrument would see to it that still larger reserve additions — the by-product of the Fed’s Large-Scale Asset Purchases — would also be hoarded as so many trillions of dollars of excess bank reserves.

In New Zealand, in contrast, even before the crisis struck authorities became anxious to prevent banks from accumulating more reserves than were deemed necessary for their settlement needs. In consequence a tiering system was planned, which would prevent such hoarding by imposing an interest penalty on above-tier settlement balances. The outbreak of the crisis merely caused the RBNZ to hasten its implementation of the new plan.

Thanks to New Zealand’s switch to a tiered system, its overnight interbank lending market remained active throughout the crisis. New Zealand’s banks therefore continued to rely upon one another as lenders of first resort, turning to the RBNZ for overnight funds only as a last resort — an outcome fully in accord with orthodox doctrine. In the U.S., in contrast, the establishment of a floor system caused the once active federal funds market to altogether cease to function as a conduit for interbank loans.

Finally, although the RBNZ occasionally found it desirable to inject some extra cash into the New Zealand banking system during the crisis, as it did in August 2007 and again in the fall of 2009, those cash additions were — as could be seen in our first figure and as the next figure (Cassino and Yao, 2011, p. 42) makes especially clear — both relatively modest and temporary. Even recently, New Zealand’s settlement balances amount to little more than 5 percent of that nation’s GDP, whereas banks’ reserve balances held at the Fed amount to about 13.5 percent of U.S. GDP.


RBNZ Settlement Balances, 2007-2009, Millions of NZ$

Those modest cash additions proved capable, together with several other Reserve Bank programs, of “maintaining the functioning of the New Zealand money market and the flow of domestic credit during the global financial crisis,” as a later RBNZ study concluded. That they did so was due in part to the fact that, by discouraging banks from hoarding reserves in excess of their fixed tier limits, New Zealand’s tiering system preserved the banking system money multiplier, instead of causing it to collapse, as happened in the U.S. The RBNZ’s success in keeping credit flowing may have in turn contributed, if only to a modest extent, to New Zealand’s Great Recession being  both one of the first to end and one of the shallowest.

Conclusion

Although the Fed was hardly alone in establishing a floor system of monetary control — and not all floor systems had consequences like those I document in my book about the U.S. case — the relative success of these other floor systems does not necessarily serve as a vindication of the general concept. The New Zealand floor system, in particular, only functioned in a relatively orthodox manner for less than a year, predating the financial crisis. As that crisis dawned, the RBNZ retreated from an orthodox floor system by placing definite limits on the balances on which banks would incur no substantial interest opportunity cost. Having thus curtailed New Zealand banks’ appetite for settlement balances, the RBNZ could expect its additions to New Zealand’s monetary base to influence economic activity by way of the same orthodox transmission mechanism, leading to the same marginal stimulus effect, as might have been the case if settlement balances bore no interest at all. Federal Reserve authorities cannot, for all of these reasons, point to New Zealand as supplying a precedent favoring their own decision to adopt and retain a floor system of monetary control.

[Cross-posted from Alt-M.org]

To paraphrase John Lennon, imagine there are no public schools, or private ones, too. That is what writer Julie Halpert ostensibly does in a new Atlantic article in which she purports to conduct a “thought experiment,” first imagining a world of all private schools, then one of all public. But rather than coming off as a true, objective experiment, the piece reads more like a dystopian novel depicting the horrors of an imagined all-private system, while comparatively glancing past the many real, actually experienced stains and injustices of public schooling.

It’s not auspicious that the article, before the “experiment” is even proposed, begins with a description of the posh Detroit Country Day School, which likely reinforces the impression that many people seem to have that private schools are snooty preserves of the uber-rich. Halpert notes that the price of Detroit Country Day for high school is about $30,000 per year, but doesn’t mention that the average tuition at a private high school, according to the most recent federal data, is only about $13,000. That average price is high when you’re comparing it to “free” public schools for which you’ve already paid taxes, but it is far below what Detroit Country Day charges.

With commencement of the experiment we are given a little history…very little. Halpert completely bypasses American educational history prior to Horace Mann’s crusade for common schools starting in the 1830s, noting only that some of our oldest high schools, specifically tony West Nottingham Academy and Phillips Academy, date back to the 18th Century. Halpert also writes that Mann was largely responsible for “the perception of education as a public good.” She ignores the evidence that education was delivered in myriad ways and was very widespread prior to the common schooling crusade—about 90 percent of white adults were literate by 1840—or that it often had a heavily moral character aimed at both the private and public good. This is a huge omission, leaving out evidence that largely private provision of education, though sometimes with a modicum of government funding, worked, at least for those who weren’t subjugated by law. Law which was, of course, promulgated by government, the entity that would supply public schools.

Halpert does somewhat acknowledge a flaw in public schooling, saying that “Mann’s good intentions didn’t always translate into the kind of diversity he envisioned.” Now, Mann’s target may have been diversity in classrooms, but it was greater uniformity coming out, and Halpert at least cites Holy Cross historian Jack Schneider pointing out that the common schools were geared to inculcate basic Protestant beliefs, and were often openly hostile to Catholics. Alas, this is about as deep as the experiment dives into public schooling’s most painful flaw: its repeatedly demonstrated, poisonous inability to handle pluralism and treat diverse people equally even when it wants to, and its easy employment as a tool for soft and sometimes overt, uniform indoctrination. At times the indoctrination has been letting everyone know they should be Protestant, other times it’s been letting them know they must be Nazis. The use of public schools for brain-washing indoctrination in places like Nazi Germany and the Soviet Union is on the extreme end, of course, but Mann himself was clear that he wanted to create greater uniformity in thought and behavior through public schooling—to create a “more far-seeing intelligence, and a purer morality, than has ever existed among communities of men”—as have many public schooling advocates since. Acknowledging that public schooling has repeatedly been used as a tool for social and political control must be a major part of any thought experiment that would objectively contemplate all-public education. But it is not here.

Continuing on, Halpert quickly notes that “not all private schools fall in the same category as Detroit Country Day,” but rather than using that to explicitly state that most private schools are much less expensive, she deploys it in an attack on an all-private system, saying that because private schools can differentiate, “reliable information on school quality would likely be nonexistent.” She continues, explaining that because private schools operate independently, “they’re generally not subject to rules holding them accountable for a certain level of student performance. No rules mean no agreed-upon measures, which mean no standardized assessments whose results parents and policymakers can consult.”

No agreed-upon measures?

Often totally on their own, many private schools have for decades given nationally norm-referenced tests such as the Terra Nova, Iowa Test of Basic Skills, and California Achievement Test, to help schools and parents assess how children are doing. They also readily participate in the Advanced Placement and International Baccalaureate programs. And, of course, lots of private school kids take the SAT and ACT, and schools pursue accreditation. Private schools have a powerful incentive to share nationally comparable test results if parents value them, because parents will demand to see them when deciding where to send their kids. Research has shown parents with choice indeed do this, though they often, very reasonably, put other things, like safety, and whether the schools seem to care for children, higher on their priority lists.

In contrast to the metric-free chaos we’d see in an all-private system, Halpert writes that public schools provide “critical information about a particular school [that] is generally accessible to anyone. This accountability reduces ‘the possibility that parents could be duped,’ said the College of the Holy Cross’s Schneider.”

Really? Let’s remember what common public school metrics often look like: “proficiency” that is often a very low bar and varies wildly from state to state; empty graduation rates; and inscrutable “report cards.” And remember that all children and families are different, and there is huge disagreement over what education is all about, which means no single metric—or two, or three—can capture what makes each individual school special, or what each child needs. There’s public schooling’s inability to handle diversity again! Halpert does cite me noting that all kids are different, but I’m sandwiched between lengthy quotes saying that accountability and good info in an all-private system are impossible, concrete evidence to the contrary notwithstanding…or mentioned. I appear to be but a foil.

Next we get to the inequality-based condemnation of private schooling, an attack predicated on the premise that rich people can get better private schooling than poor, therefore private schooling is bad. When it comes to evidence, this is primarily grounded in conjecture and Chile, which has significant school choice but is also accused of significant inequality in school access.

As a logical proposition, the rich-will-get-better-stuff argument makes little sense: the rich will be able to access better schools than the poor with or without vouchers. What vouchers do is just even things up a bit. And even if private schools were totally outlawed, wealthier people could buy houses in better districts, which is exactly what happens now. Halpert addresses that, but not until the end of the experiment, and not until citing numerous academics declaring that choice would clearly stratify and segregate, and Halpert offering this whopper: “experts tend to agree an all-public-school world would make the United States a higher-functioning, and more harmonious, place by exposing students to peers from different backgrounds.”

Maybe most of the experts Halpert talked to concluded that, but that appears to have been a heavily slanted lot. From what I can tell the only choice supporters she talked to were folks from Detroit Country Day, me, AEI’s Andy Smarick, and Barbara Gee from the group Private Schools with a Public Purpose. Worse, she only cites Smarick pointing out that how much power parents should have over school selection is still a contentious topic; seems to throw me in as a foil; and cites Gee saying kids with dyslexia are actually better served at public than private schools. (The latter after only parenthetically and with big cost modifiers noting that there are private schools that actually specialize in working with kids with disabilities.) Oh, and at the very end she quotes Donna Orem from the National Association of Independent Schools asking, “Would America be as creative if all the schools in the country were the same?” It’s an important question, but far too little, far too late, appearing at the end of a very long assault on private schooling.

Of course, more important than what experts say is what the evidence says, and it is against the assertion that public schools are better harmonizers than private.

Not only are public schools hugely segregated, which again Halpert only gets to after a long, sharp take-down of private schooling, but she also ignores the lengthy empirical evidence that U.S. school choice programs typically provide as good or better education than public schools, usually at a fraction of the cost, and that they actually tend to reduce racial segregation. Far worse, her experiment totally ignores public schooling’s shameful past when it comes to integration, including sometimes painful efforts to “Americanize” immigrants, and decades of forced racial segregation. Well, I shouldn’t say “totally”: the piece does quickly mention “desegregation efforts,” but only to criticize private schools. Without discussing mandated racial segregation in public school at all, Halpert writes that the private school enrollment share is higher in Nashville, Tennessee, than nationally as a “result of desegregation efforts that prompted white families to seek educational settings where their kids wouldn’t be forced to learn alongside black children.”

If the point of the experiment is to objectively assess public and private schooling, this egregious omission should lead to the whole lab being shut down. To ignore what public schooling did for over a century—and continues to do through housing patterns—but condemn private schools because some people, who had gotten their way for so long through public schools, tried to use private schooling to keep getting their way, is utterly illogical, unfair, but also all too common.

No attack on choice would be complete without a mention of Finland, but Halpert also focuses heavily on Cuba to show how great a no-choice system would be. And Chile, which has widespread school choice, has to be held up as a bad guy. Now, the Finland miracle has been debunked many times, in part by the country’s own falling scores on the exam on which it excelled—the Programme for International Student Assessment (PISA)—as well as its lesser results on other exams, but Cuba has gotten very little attention.

Halpert holds Cuba up as an educational powerhouse, and quotes Stanford professor Linda Darling-Hammond saying that even Chile’s best students “couldn’t come close” to replicating Cuba’s achievement levels. So why haven’t we heard more about this? Probably in part because Cuba has never participated in the big international assessments such as PISA or the Trends in International Math and Science Study. Also, with an authoritarian regime like Cuba’s, there is always a tinge of doubt that the results being reported are real. And then there is the inconvenient reality that Cuba is a dictatorship—not exactly the ideal people want to openly advocate for.

Those things said, Cuba appears to have done very well relative to other participating Latin American countries on two exams: the First Regional Comparative and Explanatory Study (PERCE) and the Second Regional Comparative Explanatory Study (SERCE). And that did include outpacing Chile. Which shouldn’t be surprising: authoritarian regimes often have high achieving education systems into which they pour great amounts of attention and resources. Why? Because, as noted already, education is a huge tool for control!

But there’s an important irony here. While Cuba’s system may produce high scores, it does not appear to produce equity. Cuba’s overall performance well outpaced other Latin American countries, but it also typically produced by far the biggest gaps between its top and bottom performers. In other words, it suffered from the most achievement inequality. Apparently, some Cuban kids are more equal than others. Meanwhile, Chile was consistently in the upper ranks in achievement when Cuba participated in the tests, but had roughly middling gaps between top and bottom performers. For what it’s worth, Chile consistently finished first in the Third Regional Comparative and Explanatory Study (TERCE), which Cuba sat out.

If Cuba is your shining example of what an all-public-schooling system could look like, you have a huge problem. You have an even bigger problem if you don’t seem to realize that.

In a way, uncritically using repressive, dictatorial Cuba in this “thought experiment” exemplifies exactly what is wrong with it: it eschews or soft-pedals almost all of the unpleasant—and sometimes downright awful—realities of public schooling, while heaping worst-case-scenario prognostications on private schooling. It seems, even if not intended, like an experiment designed to get one result: illustrate that an all-private education system would be awful. And that’s not scientific at all.

California governor Jerry Brown has been taking a victory lap of sorts after putting forth a budget for fiscal year 2019 that would include a $6 billion surplus, a remarkable turnaround for a state that hemorrhaged red ink in the wake of the great recession.

Of course, much of that surplus arrived via a hefty tax increase, as well as a surfeit of revenue resulting from the stock market boom via capital gains taxes, so attributing this turnaround to fiscal probity might be taking things a bit far.

However, Governor Brown does get credit for at least temporarily righting what seemed to be a sinking ship. What’s more, he seems to realize that this surplus can easily disappear, and he has warned his potential successors to resist spending that surplus. What Brown is fully aware of is that even the most spectacular stock market increase is not enough to erase the state’s most pressing financial problem—namely, its underfunded government pension.

Currently, California’s pension system has enough money set aside to cover just 68% of its future obligations—certainly far from making it the most indebted state (that would be my own state of Illinois), but still low enough to dismiss any notion that future stock market growth can remedy the problem.

Despite this, the California Public Employees Retirement System, or CalPERS, has put politics ahead of achieving a high rate of return by insisting that the boards of the companies it invests in adhere to various social and environmental practices.

It’s nonsense, of course, and it amounts to little more than an extension of politics into a realm that doesn’t have room for it.

The problem is that these environmental and social constraints inevitably bring with them a lower rate of return—regardless of what CalPERS and other advocates say to the contrary. And these lower returns will only hasten the day when the state’s taxpayers—or, failing that, federal taxpayers—will be on the hook to cover California’s pension deficit.

A few of the state’s politicians seem to be aware of the conundrum this creates for California citizens: A Democratic state senator recently offered a bill that would allow new state employees to opt out of the state pension plan and simply participate in a defined contribution plan. The state university system already allows newly hired professors to opt out—a recognition that a defined benefit plan does not work well for a peripatetic workforce like academics.

Ultimately, moving to a defined contribution plan might make sense from a long-term sustainability perspective, but transitioning to such a system for everyone would require someone (namely, current taxpayers and state workers) to cover promises already made to current and future state retirees while new employees build up their own retirement balances. In short, someone’s going to be left holding the bag in the Ponzi scheme that is a pay-as-you-go public pension plan.

That’s a tricky path to navigate: Utah did such a thing for its new employees with a much smaller per-capita shortfall, accomplishing it by making those new employees fork over a portion of their income to cover promised benefits. It is not clear that even that will be sufficient for the state.

California will need every dime it can get its hands on to cover its pension shortfall, and with the country’s highest income tax rate it probably can’t raise personal income taxes much higher. Governor Brown has commented that the state’s retirees should expect a benefit reduction the next time there’s a recession, but most people think any reductions in promised benefits are precluded by the state’s constitution.

At some point a future governor of California will need to figure out how the state’s going to cope with having billions in promised benefits and insufficient money set aside to keep those promises. That calculus will be much easier if CalPERS doesn’t accept a lower rate of return in exchange for dubious political chits.

In response to threats of retaliation by the EU over his announcement of steel and aluminum tariffs, President Trump has been complaining about high EU trade barriers. Here’s a recent tweet of his:

If the E.U. wants to further increase their already massive tariffs and barriers on U.S. companies doing business there, we will simply apply a Tax on their Cars which freely pour into the U.S. They make it impossible for our cars (and more) to sell there. Big trade imbalance!

And here’s something he said yesterday:

“The European Union has been particularly tough on the United States,” Mr Trump said at Tuesday’s joint press conference with the Swedish prime minister.

“They make it almost impossible for us to do business with them,” Mr Trump complained.

President Trump is right: EU trade barriers are too high. In addition, U.S. trade barriers are also too high. Here’s something I wrote a few years ago about tariffs:

In the context of the recently launched US-EU free trade talks (formally, the “Transatlantic Trade and Investment Partnership,” or TTIP), commentators have noted that tariffs between the US and EU are low, and thus the key part of the talks will deal with so-called regulatory barriers to trade. An article in Inside U.S. Trade observes: “Overall, the U.S. average tariff rate is 3.5 percent, although the average tariff rate on goods that the EU actually shipped to the U.S. last year was even lower, at 1.2 percent, … .”

But these average figures mask some significant “tariff peaks.” There are lots of individual tariff rates, so if many are low or zero, that makes the average figure fairly low; nonetheless, there are plenty of high tariffs still out there. The same article points out some US and EU tariff rates that may come up during the negotiations. Here is the US:

— U.S. light trucks tariff of 25 percent; a tariff on wool sweaters of 16 percent; a tariff on sardines of 20 percent; a tariff on tuna of 35 percent; and a tariff on leather at 20 percent

Here is the EU:

— applied tariffs on honey of 17.3 percent; carrots at 13.6 percent; potatoes at 14.4 percent; strawberries at 20.8 percent; lemons at 12.8 percent, beef at 12 percent; and lamb at 12 percent

And all of those tariffs add up:

— the U.S. collected about $4.5 billion in tariffs from EU products in 2012. … [Of this amount,] $900 million comes from imported German cars; about $260 million comes from Italian clothes and shoes; and about $72 million comes from cheese imports.
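The way a low average can coexist with high individual tariffs is easy to show numerically. The trade volumes below are hypothetical, but the tariff peaks echo the rates quoted above:

```python
# Illustrative sketch with hypothetical trade volumes: a trade-weighted
# average tariff can look very low even when individual "tariff peaks"
# run at 25% or more.

tariff_lines = [
    # (import value in $billions -- hypothetical, applied tariff rate)
    (90.0, 0.00),    # the bulk of trade enters duty-free or near-free
    (8.0,  0.035),   # around the 3.5% average line rate
    (1.5,  0.25),    # a peak, e.g. the 25% U.S. light-truck tariff
    (0.5,  0.35),    # a peak, e.g. the 35% tuna tariff
]

total_value = sum(value for value, _ in tariff_lines)
duties_collected = sum(value * rate for value, rate in tariff_lines)
weighted_avg = duties_collected / total_value

print(f"trade-weighted average tariff: {weighted_avg:.1%}")
```

With these made-up volumes the weighted average comes out under 1%, even though two of the line items carry 25% and 35% rates—which is exactly how the headline averages mask the peaks.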

And regulatory trade barriers are even higher. So perhaps there’s a way out of the back-and-forth threats of tariff retaliation going on right now: the two sides could restart the TTIP talks and bring down barriers on both sides.

Alternative monetary policy targets continue to gain advocates. While Chair Powell was waiting to take over leadership at the Fed, internal support for rethinking the current inflation rate target was building. And while there are various possibilities to consider — such as raising the inflation rate target, turning the inflation target into an inflation rate range, or adopting price level targeting — there are reasons to believe that the Fed may end up choosing a nominal GDP target.

Powell testified before Congress for the first time last week and affirmed the Fed’s commitment to the current framework, including the symmetric 2% inflation target. However, he also went out of his way to state that the FOMC “routinely consults” monetary policy rules in their analysis and that he finds these rules “helpful.” According to David Beckworth of the Mercatus Center, this is the strongest endorsement of rules yet from a Fed Chair. If Powell is open to an expanded role for monetary policy rules, it is reasonable to think he may be open to a superior monetary policy target as well.

A New Target

Shortly after her successor testified on Capitol Hill, Janet Yellen made her first public appearance as former Fed Chair in a Brookings Institution interview — with her predecessor Ben Bernanke — about her time leading the Fed and career as an economist. When asked about potential alternatives to the Fed’s current framework, Yellen essentially dismissed raising the inflation rate target, something Bernanke has also done, on both political and economic grounds. On the other hand, Yellen volunteered nominal GDP targeting as an alternative, claiming it has “interesting advantages.”

Some of Yellen’s former colleagues within the Federal Reserve System are also open to changing the Fed’s target. In her brief remarks delivered at the University of Chicago’s Annual Monetary Policy Forum two weeks ago, Loretta Mester, President of the Cleveland Fed, supported reexamining the Fed’s inflation rate target — but added that any change would need to clear a very high bar.

However, Mester also suggested that nominal GDP level targeting and price level targeting were quite similar frameworks. She worried that under either regime a central bank might tighten policy after a negative supply shock raised prices, even if the economy was suffering from weak demand. Such tightening would be undesirable, of course. But in fact it is a risk only under price level targeting. With a nominal GDP target, the central bank would stabilize overall spending. That means the central bank would allow a decline in supply to raise prices without overtightening monetary policy.

Mester’s treatment of a price level target and a nominal GDP level target as almost one and the same was curious, because this was not the first time she had discussed alternative frameworks this year. Mester also gave a similar talk at the AEA annual meeting in January. In that discussion, Mester stressed that nominal GDP targeting was superior to other targets when the economy was hit with supply shocks. Mester expressed concern with measurement issues and the lack of central bank experience with nominal GDP level targeting, but spoke positively about it overall.

Mester’s was not the only prominent voice discussing nominal GDP targeting at the AEA annual meeting. The most notable discussion occurred during a panel titled “Monetary Policy in 2018 and Beyond,” when former Council of Economic Advisers Chair Christy Romer endorsed the idea, citing research done with her economist husband, David Romer.

During her presentation, Romer first asked: “Why should the Fed be thinking about a new target now?” She offered two answers to this question. First, she discussed the poor performance of monetary policy over the last ten years, including its failure to allow for a more robust recovery. Second, she rightly identified growing congressional pressure for more accountability at the Fed via a rules-based monetary policy — something that nominal GDP level targeting moves toward.

The Right Target

Why is nominal GDP, particularly the level, the right target? Romer began her answer with what may be called the “bygones” argument. Under inflation rate targeting, whenever the central bank misses its target that miss is a permanent mistake. That’s because the central bank will seek to restore the rate of inflation, rather than the price level. Policymakers would essentially let bygones be bygones, hence the name.

But policy misses distort the ability of economic actors to make their decisions, by either eroding the value of money with easy policy or acting as a contractionary force on economic activity with overly tight policy. And when a central bank is targeting a rate, policy misses are essentially written off rather than corrected — unlike under level targeting.

Under a nominal GDP level target, on the other hand, if the Fed was undershooting (or overshooting) the target, it would automatically make up for errant past performance by returning to the longer trend. If the Fed was generating too much inflation, which would be identified as excess spending, the Fed would know to tighten policy until nominal GDP was back on its trend path. Such a feature essentially eliminates the bygones problem of permanent errors and improves macroeconomic stability.
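The “bygones” distinction is easy to see in a stylized numerical example. The figures below are hypothetical—a 2% target path for the nominal level and a single year in which the central bank delivers 0% growth instead:

```python
# Stylized, hypothetical example of rate targeting vs. level targeting:
# under rate targeting, a one-year miss permanently shifts the nominal
# level; under level targeting, the miss is made up and the economy
# returns to the original trend path.

target_growth = 0.02                 # 2% per year
years = 5
miss_year, growth_in_miss = 2, 0.00  # one year of 0% instead of 2%

# The pre-announced trend path the level targeter promises to restore.
trend = [(1 + target_growth) ** t for t in range(years + 1)]

# Rate targeting: after the miss, simply resume 2% growth from wherever
# the level happens to be -- bygones are bygones.
rate_path = [1.0]
for t in range(1, years + 1):
    g = growth_in_miss if t == miss_year else target_growth
    rate_path.append(rate_path[-1] * (1 + g))

# Level targeting: the year after the miss, grow fast enough to rejoin
# the original trend path.
level_path = [1.0]
for t in range(1, years + 1):
    if t == miss_year:
        level_path.append(level_path[-1] * (1 + growth_in_miss))
    else:
        level_path.append(trend[t])  # back on the target path

gap_under_rate_targeting = trend[-1] - rate_path[-1]
gap_under_level_targeting = trend[-1] - level_path[-1]
print(f"permanent gap under rate targeting:  {gap_under_rate_targeting:.4f}")
print(f"permanent gap under level targeting: {gap_under_level_targeting:.4f}")
```

In this toy run the rate targeter ends the fifth year permanently about 2% below trend, while the level targeter ends exactly on it—the mechanical content of the “bygones” argument.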

A nominal GDP level target is also superior to an inflation rate target because of desirable expectations effects, according to Romer. Keeping nominal GDP on a level path would firmly anchor expectations. Inflation expectations have been well anchored for some time, but due to the Fed’s persistent undershooting of the inflation rate target there is at least some chance those expectations could become unanchored.

Strengthening these expectations in turn leads to yet another benefit: increased accountability for Fed policy. With a nominal GDP level target there would be no ambiguity as to whether or not the Fed was successfully implementing policy. A successful policy would return nominal GDP to its trend line. Whereas now the Fed can continually claim it’s approaching its long-run inflation target, neither the public nor policymakers would tolerate that wait-and-see approach under a nominal GDP level targeting regime because it would be so clear when policy was failing. Romer provided a table showing that nominal GDP level targeting would have improved the Fed’s performance over the last five policy cycles.

Romer concluded her remarks with optimism for a new target: now is the right time for the Fed to be rethinking its operating framework and a nominal GDP level target can work in practice. She pledged to continue this line of research.

A Robust Target

The day after the AEA meeting, the Brookings Institution hosted an event asking if now is the right time to rethink the 2% inflation target, which was adopted by the Fed in January 2012.

During the panel discussing different options for the Fed’s target, Jeff Frankel argued that targeting nominal GDP is superior to targeting the inflation rate. In addition to emphasizing some of the arguments that Romer made, Frankel added two key points.

First, a nominal GDP target is the target most consistent with how the Fed ought to be making policy decisions. It is, of course, important for a central bank not to overly focus on short-term data. However, it’s equally important that the Fed not run an open-ended policy that is always working towards, but never actually achieving, its goal. The Fed ought to be setting policy over the medium term, typically 1-2 years. Under inflation rate targeting, the Fed has been missing its target, in one direction, for years. A nominal GDP target would give the Fed a nominal variable that it could keep on a stable trend.

Second, and most importantly, a nominal GDP target is the most robust target available to policymakers with respect to shocks, particularly aggregate supply shocks. Under an inflation rate target a central bank needs to differentiate between supply and demand shocks. It is supposed to “look through” the supply shocks and offset the demand shocks. This means the Fed shouldn’t raise rates because an oil shortage raises the price of gas, and it shouldn’t cut rates if a productivity boom makes consumer goods available at lower prices. With a nominal GDP target the central bank does not need to differentiate between supply and demand shocks in real time. Rather, the Fed would monitor the overall level of spending in the economy and adjust the stance of monetary policy so that it was forecasting hitting its nominal GDP target over the medium term.

Frankel’s presentation would have been stronger had he specified that targeting the level of nominal GDP was the most desirable strategy for the Fed to adopt. His remarks could be read as suggesting he is ambivalent between targeting the growth rate or the level of nominal GDP. But, of course, targeting the rate has the same “bygones” problem that Romer discussed when criticizing the inflation rate target.

Uncommon Consensus

While Frankel was scheduled to be the nominal GDP advocate at the event, an unexpected boost for nominal GDP level targeting came from Larry Summers. During the Q&A following his speech, which explained why the 2% inflation target is no longer appropriate, he was pressed on which alternative framework he would choose. He opted for a nominal GDP level target of 5-6%. While it’s easy to quibble with how Summers arrived at this decision — he sees it as the surest way to get higher nominal interest rates — to have an economist of his stature advocating for a nominal GDP level target is all to the good.

This is not the first year that prominent economists have announced support for targeting nominal GDP. For example, in late 2011 Paul Krugman expressed support when Christy Romer was urging Bernanke to channel his inner Paul Volcker and adopt nominal GDP level targeting. Therefore it was a bit odd when, as the keynote speaker at University of Chicago’s Annual Monetary Policy Forum, Krugman said that inflation rate targeting was insufficient, but that he did not have a replacement. Stranger still, given that Krugman was an advocate of nominal GDP targeting in the aftermath of the financial crisis.

Obviously, the year is young. But if these first two months are any indicator, 2018 may see a breakthrough for nominal GDP level targeting. As Scott Sumner likes to say, the Fed often follows the economics profession. The Fed is already discussing research on alternative frameworks, including level targeting. If more economists endorse a nominal GDP level target, then the Fed may move from researching it to adopting it. This would be welcome news.

[Cross-posted from Alt-M.org]

Despite the immense benefits that Americans have derived from free trade and globalization, as well as the far-reaching costs of protectionism, a “reciprocity” argument—that foreign protectionism against U.S. exports justifies current or even new U.S. protectionism against foreign imports—persists. Indeed, one of the primary justifications for President Trump’s proposed tariffs on steel and aluminum, as well as many other Trump administration trade policy proposals, rests on the notion that it is only “fair” that foreign trade barriers—low or high—be matched by America’s trade barriers. Leaving aside these arguments’ basic factual and historical errors (indeed, most countries’ average tariffs on U.S. iron and steel exports range between 0% and 5%—far less than Trump’s proposed 25%), the President’s reciprocity demands still suffer from many flaws: 

  • Reciprocity illogically demands the United States injure its own citizens because other countries injure theirs. There is overwhelming evidence that protectionism distorts markets and reduces economic welfare. For example, last year I documented “a vast repository of academic analyses and contemporaneous reporting that show that American trade protectionism—even in the periods most often cited as ‘successes’—not only has imposed immense economic costs on American consumers and the broader economy, but also has failed to achieve its primary policy aims and fostered political dysfunction along the way.” President Trump’s own Economic Report concludes that “trade and economic growth are strongly and positively correlated…. a 1-percentage-point increase in the ratio of trade to GDP raises per capita income by between 0.5 and 2 percent.” In terms of specific products, a 2006 International Trade Administration study of U.S. sugar trade barriers found that “[f]or each one sugar growing and harvesting job saved through high U.S. sugar prices, nearly three confectionery manufacturing jobs are lost.” The study also found that sugar trade barriers had caused many sugar-using companies to close or move to foreign markets (e.g., Canada and Mexico) where sugar prices were lower. A 2013 Iowa State University report found that getting rid of the sugar program would save consumers up to $3.5 billion per year. The list goes on and on and on. It therefore defies basic common sense to argue that, just because a foreign country harms its citizens through protectionist policies, the United States should do the same.
  • Reciprocity cedes control of U.S. economic policy to other countries. The reciprocity argument maintains that the United States should not unilaterally dismantle protectionist programs while other countries maintain similar (bad) policies, but this approach cedes U.S. control over its own economic decisions to countries like China or the European Union. Yet the United States should remain free to improve its economy, without the need to wait for other countries to do likewise. It’s particularly odd to hear such an argument from “America First” proponents who decry “globalist” policies that supposedly cede control of U.S. “sovereignty” to foreign powers.
  • Reciprocity undermines reform—including through reciprocal trade agreements. The reciprocity approach also would likely prohibit trade reform in the United States—contrary to popular belief, the United States still maintains numerous tariff and non-tariff barriers to trade—or elsewhere. The WTO’s “Doha Round” of global trade negotiations, meant to update and expand the body’s trade-liberalizing agreements, spent over a decade trying, and failing, to produce an agreement among Members to reduce their trade barriers and subsidies. Among the reasons for this stasis was an unwillingness of any WTO Member (including the United States) to expend the political capital necessary to lead the way—a classic “prisoners’ dilemma.” Reciprocity promises the same inaction in the future—which is why many U.S. politicians and industries advocating import protection argue so loudly for reciprocity! Furthermore, new tariff “reciprocity” threats from the Trump administration are threatening to unravel U.S. trade agreements, like NAFTA, that actually have created reciprocal, free trade regimes—to the great benefit of all parties.
  • Reciprocity ignores the many tools that the United States has to address other countries’ protectionism without harming its own citizens. The reciprocity argument also ignores the fact that the United States could unilaterally eliminate many of its trade barriers and still have ample legal tools at its disposal to encourage others to follow suit. WTO negotiations, for either all Members or a select group of them (“plurilateral” talks), could introduce new, binding caps on tariff and non-tariff barriers, and the United States would, unlike the current situation, be in a superior moral and diplomatic position to demand them. The United States could also seek to renegotiate its own tariffs through the procedures set forth in GATT Article XXVIII, which would simply require it to lower U.S. tariffs on other products (or allow other Members to raise some of their own). Current and future U.S. free trade agreements could provide another venue for reforms. Finally, WTO and FTA dispute settlement disciplines permit (1) consultations with a foreign government over many of its trade-distorting measures; and (2) if such consultations fail, investigation of the alleged protectionism and eventual imposition of remedial U.S. tariffs on imports from the offending government. WTO dispute settlement has been particularly effective in this regard, as the President’s own 2018 Economic Report documents.

There is no doubt that some (but not all) nations impose higher barriers to imports of U.S. goods and services than the United States does in return. This is not, however, a weakness in the current system but rather a testament to sound American economic policy, which has generated undeniable benefits for the vast majority of Americans. In short, the United States should not start impoverishing its citizens because other nations lack the economic insight or political strength to stop impoverishing theirs. To do so would not only ignore common sense and basic economics, but also lead to higher trade barriers, fewer reforms and therefore a lower standard of living for us all.

The views expressed herein are those of Scott Lincicome alone and do not necessarily reflect the views of his employers.

Apparently, if you want to learn about school choice success in the United States, your first move should be to look to countries that are thousands of miles away. Sarah Butrymowicz from The Hechinger Report did just that. She traveled to Sweden, New Zealand, and France, talked to people about their education systems, decided that they were not perfect, and concluded that “real-life examples from around the world provide little evidence that allowing families more freedom of choice improves achievement.”

The worst part is that the conclusion – that school choice does not improve achievement – is not at all scientific.

For example, Butrymowicz pointed out that Sweden enacted a universal voucher program in 1992 and that their international test scores have fallen since then. First, Sweden’s test scores most recently rose in math, reading, and science between 2012 and 2015. But more importantly, the simple pre-post observation does not in any way establish a causal relationship. Such a claim doesn’t even attempt to consider any control variables. For instance, immigration in Sweden also increased substantially from 1992 to 2012, which may have had a lot to do with their declining test scores. In fact, a study by Dr. Gabriel Heller-Sahlgren found that about 30 percent of the decline in Sweden’s overall PISA scores is explained by immigration patterns alone. Based on strong existing empirical evidence from Sweden, it is more likely that their PISA scores would have fallen even more without the academic benefits accrued from school choice.

We have a 2015 peer-reviewed study on what actually happened as a result of Sweden’s implementation of universal school choice. Böhlmark and Lindahl used strong econometric methods and found that school choice in Sweden improved “average short- and long-run outcomes.” Another rigorous study in a top academic journal, the Journal of Public Economics, found that Sweden’s voucher program improved academic outcomes due to competition.

Butrymowicz deserves credit for pointing out three studies that show positive effects of the Sweden program. But she points out that one of these studies finds “no positive long-term effects for students.” While that may be true, she failed to acknowledge that the same study found positive effects on student achievement.

I simply do not understand how one can cite several studies finding positive effects of school vouchers in Sweden, and then come up with an article titled “Betsy DeVos’ school choice ideas are a reality in Sweden, where student performance has suffered.” But that’s not all. The article goes on to pull quotes from Samuel Abrams regarding how something “went wrong in Sweden.” According to the evidence presented in the article – and elsewhere – the title and presentation are highly misleading at best.

In her article on New Zealand, Butrymowicz presents the country as a “school choice utopia.” I know she didn’t pull that quote from me, because basic economic theory compels me to prefer private school choice to the system of government school choice in New Zealand. 

Nonetheless, the article goes on to say that “one unintended consequence of more school choice is more segregation” in the United States and in New Zealand. Butrymowicz cited zero studies from New Zealand to support this claim. She should have considered the fact that 7 out of the 8 rigorous studies existing on the subject in the U.S. find that private school voucher programs improve racial integration. None of the U.S. studies find that private school choice leads to segregation.

Similarly, the article on France is titled “this country spends billions on private schools – and has a terrible learning gap between poor and wealthy,” as if somehow the two are connected. The article then goes on to point out that “France hasn’t erased all of the barriers that prevent lower-income families from accessing the best schools,” implying that perfection is the standard for success. Of course, it is arguably more plausible that achievement gaps would be even larger if the government did not allow poor people to attend the expensive private schools that rich people can already afford.

The France article also claims that even though children in private schools outperform those in public schools, “public schools would actually outperform private schools” if they served the same students. That claim comes from an OECD report, which is not a causal analysis. Absent rigorous econometric methodology, the OECD simply cannot determine whether private schooling in France – or any other country – has any effects on PISA scores. On the other hand, my 2018 study found that increases in the private share of schooling within countries actually leads to moderate increases in PISA math and reading scores, even after controlling for changes in factors such as GDP, government spending, population, and school enrollment.

The author also claims that the school choice research “shows mixed outcomes for the roughly 448,000 American students who attend private schools through taxpayer-funded programs.” This claim is simply false. And it needs to be put to bed.

Seventeen experimental studies of the effects of private school choice programs on student achievement exist in the U.S. today. The majority of the 17 studies find statistically significant positive effects on student test scores, and effects tend to be most beneficial for disadvantaged students. And it is important to note that the only two studies finding negative impacts are first-year evaluations.

But the academic benefits aren’t only limited to the students that exercise choice. Twenty-five of the 26 existing studies find that private school choice also improves achievement for the students remaining in traditional public schools due to competitive pressures.

Of course, these studies merely examine effects on standardized test scores, which may not be all that important for at least two reasons: (1) parents often care more about things like safety and culture than standardized test scores, and (2) a growing body of literature suggests that test scores may not be good proxies for long-term outcomes like graduation, crime, and income.

So why don’t we look at the non-test score outcomes?

Six of the seven rigorous studies linking private school choice to student educational attainment reveal positive effects overall or for subgroups. For example, the experimental evaluation of the DC voucher program found that private school choice increased the likelihood of high school graduation by 21 percentage points. None of the seven studies found negative effects. The majority of the eleven rigorous studies linking private school choice to civic outcomes also find positive effects. Again, none of the studies find negative effects.

Anyone see a pattern here? The most rigorous U.S. evidence is not mixed.

If we want to figure out if school choice works in the U.S., we ought to rely on the preponderance of the best evidence existing in the U.S. But if we really want to learn from school choice in other countries, we ought to look at the scientific evidence rather than anecdotes. We ought to examine the data rigorously rather than from a 10,000-foot view. The strongest evidence available tells us we need more private school choice in the United States, not less.

The 7th round of negotiations on the North American Free Trade Agreement (NAFTA) wrapped up early this week, and ended on a relatively positive note.  There was a noticeable change in tone in the joint press conference with USTR Lighthizer, Minister Freeland and Secretary Guajardo, which NAFTA watchers certainly must have noticed.

The first striking detail was actually something that was omitted. Though Lighthizer did say that the two major goals of the administration were to update and rebalance the deal, he didn’t once utter the phrase “trade deficit.” Instead, he highlighted discouraging outsourcing, likely referring to the U.S. proposals to eliminate the controversial Chapter 11 on investor-state dispute settlement (ISDS); strengthening rules of origin by increasing the content of North American inputs in automobile manufacturing; and adjusting the rules on government procurement, through a “more balanced” dollar-for-dollar procurement market. It should come as no surprise that he is still pushing in these areas, as there is much left to negotiate, and concessions on these issues will not likely be settled until the final rounds of the agreement take shape.

So far, the three countries have closed a total of 6 out of 30 chapters, most recently finishing the chapters on Good Regulatory Practices, Administration and Publication, and Sanitary and Phytosanitary Measures. Though the three ministers all stressed the importance of timing, considering the upcoming Presidential elections in Mexico, as well as mid-term elections in the U.S., Freeland made clear that Canada would not be satisfied with just any deal. Lighthizer seemed to suggest this as well, but said that while the U.S. preferred a tripartite agreement, he would conclude bilaterals, if necessary. This light jibe is in line with previous reports that Lighthizer thinks the talks are moving a lot more smoothly with Mexico than with Canada.

The overall positive tone was only briefly interrupted when Freeland addressed President Trump’s announcement last week that he would impose a 25% tariff on steel and 10% tariff on aluminum imports. She reiterated the message from her official statement that any tariffs on Canada would be “entirely inappropriate.” A comment on this was to be expected, not least because Canada would be the country most affected by the administration’s actions. In fact, a December 2017 report by the International Trade Administration noted that Canada leads in steel imports to the U.S., making up 16% of total imports. The Canadian and U.S. steel sectors are also highly integrated, with 50% of all American steel exports destined for Canada.

What remained unacknowledged, however, were recent comments by President Trump that the tariffs would be tied to satisfactory progress on the NAFTA negotiations. The three ministers seemed to signal that they prefer to keep the discussion on steel tariffs separate from NAFTA. This would be wise. First, linking the Section 232 actions to NAFTA undermines the overall national security argument put forward by the administration, as it is now being used by the president as a bargaining chip. Second, it would run counter to the spirit of NAFTA, which is of three neighbors working together to increase North American competitiveness. The Department of Defense even expressed its concern “about the negative impact on our key allies” that the steel and aluminum tariffs would bring about. Third, linking this issue to the ongoing negotiations could seriously threaten to derail the talks, which the administration simply cannot afford due to its tight negotiating timeline.

While the NAFTA negotiations have had their ups and downs in seven successive rounds, it is important to keep in mind that things can change very quickly. In Monday’s press conference Freeland, addressing Lighthizer, said “I think we’re becoming friends.” Let’s not upset the progress we’ve made so far on NAFTA, as well as the friends we’ve made along the way, and keep steel out of the discussions.

The assimilation of immigrants and their descendants is important to their long-run success and to maximize the benefits from immigration.  Current research indicates that today’s immigrants are assimilating well.  A massive 520-page literature survey by the National Academy of Sciences found that assimilation is proceeding apace in the United States, although some of those gains are masked by a phenomenon called “ethnic attrition,” whereby the most successful and integrated descendants of immigrants cease to self-identify as members of their ancestors’ ethnic groups.  Numerous OECD reports find greater economic integration of immigrants and their descendants in the United States relative to other developed countries, even when it comes to job matching.  Research by University of Washington economist Jacob Vigdor shows that modern immigrant civic and cultural assimilation is similar to that of immigrants from the early 20th century, to the extent that “[b]asic indicators of assimilation, from naturalization to English ability, are if anything stronger now than they were a century ago.”

However, John Fonte of the Hudson Institute argues that today’s immigrants are not assimilating well because our “patriotic assimilation system is broken.”  In a shorter piece explaining his reasoning, Fonte argues that the “assimilation of the Ellis Island generation succeeded only because American elites (progressive at the time) insisted upon ‘Americanization.’”  Elites at the time showed their support for Americanization through many government programs and non-profit assimilation efforts supported by states. 

Fonte and I disagreed about this (and other topics) on a panel in 2014 at Hudson.  I argued that there is no evidence from over 100 years ago that the Americanization Movement, a government program combined with support from non-profits to assimilate immigrants, actually encouraged or sped up the assimilation of the immigrants who were affected by it.  Fonte countered by saying [2:44:15]: “It’s true we don’t have data on how well assimilation worked, but I think we have plenty of anecdotal evidence that Americanization did help.”  Later, I wrote about several contrary anecdotes where new immigrants were offended and discouraged by the government’s efforts to forcibly assimilate them to a particular nationalistic definition of what it meant to be an American.

A revealing anecdote was printed in a Polish-language newspaper, which appealed to American traditions when it wrote that the Americanization Movement “smacks decidedly of Prussianism, and it is not at all in accordance with American ideals of freedom” (256).  A Russian-language newspaper made the more devastating claim that the Americanization Movement did not actually do much except insult immigrants:

Many Americanization Committees only exist on paper.  They make much noise, get themselves in newspapers, but do not do much good.  They mostly laugh at the poor foreigners.  If Americans want to help the immigrants, they must meet them with love.  The immigrant is by no means stupid.  He feels the patronizing attitude the American [Americanizers] adopts towards him, and therefore never opens his soul (258). 

At this point, Fonte and I had dueling anecdotes and it is not at all obvious whether these programs had an effect on assimilation regardless of the direction.  Since the 2014 Hudson event, a new empirical working paper called “Backlash: The Unintended Effects of Language Prohibition in US Schools after World War I” by Vasiliki Fouka found that anti-German language laws in the United States actually slowed down the assimilation of German Americans on several margins, lending support to my anecdotal evidence. 

The anti-German language laws have a nasty history.  Around World War I, many state governments passed anti-German laws that outlawed school instruction in the German language – even in private schools.  Beyond that, the government undertook an intense campaign to assimilate German Americans out of fear that they were a potential fifth column that would undermine the war effort.  Fouka’s paper employed expert research design methods to look at how these anti-German language laws in Ohio and Indiana affected the assimilation of German American children who were subject to them. 

The good news is that German Americans who were already well assimilated were not affected by the anti-German laws, so at least the laws did not reverse existing assimilation.  The bad news is that German Americans who were the least assimilated actually integrated at a much slower rate after these anti-German language laws went into effect.  They dropped out of school at younger ages, picked German names for their children, tended to marry other Germans rather than Americans of different European ethnic backgrounds, and were less likely to volunteer for military service during World War II.

Fouka’s assimilation model has two main components: peer effects and family effects.  Peer effects are assimilation pressures from the broader society, which includes schooling.  The supporters of the anti-German laws proceeded under the theory that cutting German out of schools would reduce children’s exposure to that language and culture, and help assimilation by boosting exposure to the English language.  Those advocates forgot that there were also family effects, whereby German-American families substituted more emphasis on preserving German culture inside the home to make up for the lack of German language and cultural instruction outside of it.  The net result was that German-American families vastly increased their production of German culture and language relative to the decline of German language and culture in school.  This backlash overwhelmed any potential pro-assimilation effects of the anti-German policies and actually worsened the rate of assimilation.

Most of the policies of the Progressive Era have had devastating effects on the United States, but we are just now beginning to understand how their insistence on assimilation through government schools and other programs slowed integration.  The failure of government assimilation programs through public education is not confined to the United States.  Even the totalitarian Chinese Communist education system could not make ethnic minorities in China feel more “Chinese.”  What hope is there for a comparable American assimilation program to succeed today where the Chinese Communists could not?

People do not assimilate or learn to love the United States because an American schoolteacher told them to.  Students barely even remember any lessons from school unless they use them frequently on the job, especially civics.  Immigrants and their descendants assimilate and become American because it is in their best interests to do so and they cannot help it.  Learning English, adopting most of our social norms, and understanding our culture spontaneously happens over time through exposure and because doing so increases their income.  Immigrants become patriotic (they really do) and love the United States because it is a lovable country – two things a government program cannot and should not teach.      

Donald Trump has talked up protectionism for decades, so his apparent decision to impose tariffs on steel/aluminum for (unconvincing) “national security” reasons may be something he truly believes in. If that’s the case, it’s very important for everyone to step forward and figure out a way to talk him out of it. And they are. Here’s a sampling:

Sen. John Thune, a South Dakota Republican who’s a member of GOP leadership, told reporters Monday night that Republicans are still looking at what legislative recourse they have to stop Trump’s action on trade, but first they are trying to convince him not to go through with it.   “First and foremost there is going to be an attempt to try to convince the President that he’s headed down the wrong track, and hopefully get him to a point where he’ll reconsider that decision,” Thune said.

Congress has ultimate constitutional power over trade, although it has delegated a good deal of that power by statute over the years. This is its opportunity to exercise that power in support of free trade.

West Wing aides led by Cohn, who directs the National Economic Council, are planning a White House meeting for later this week with executives from industries likely to be hurt by big tariffs on imported steel and aluminum, two officials familiar with the matter said. The meeting is tentative and the participants have not yet been set in stone, but industries that could be hit hard by the tariffs include automakers and beverage companies.

Trump announced the tariffs in front of the steel/aluminum companies who would benefit. It’s important he hear from those who would be hurt.

European Commission chief Jean-Claude Juncker has vowed to fight back against US President Donald Trump’s threat of a 25% tariff on steel and 10% on aluminum imports.

“So now we will also impose import tariffs. This is basically a stupid process, the fact that we have to do this. But we have to do it. We will now impose tariffs on motorcycles, Harley Davidson, on blue jeans, Levis, on Bourbon. We can also do stupid. We also have to be this stupid,” he said in Hamburg on Friday evening.

If Juncker’s threats lead to actual tariff retaliation, we are worse off than with Trump’s tariffs alone. But the idea behind Juncker’s response is to appear as “stupid” as Trump, in order to get him to back down, by giving other U.S. industries a reason to lobby against the steel/aluminum tariffs. (A Canadian journalist had a clever idea for retaliation without so much self-inflicted harm that I haven’t seen tried before: “Rather than raise tariffs on American exports, why not lower them on exports of the same goods from other countries, giving them a leg up over the Americans in our market?”)

Trade policy looks pretty bleak in the face of these tariffs, which would create a loophole in the system that others are sure to utilize as well. If the U.S. can impose these tariffs on steel and aluminum on the basis of “national security,” someone else is sure to try for tariffs on food, or clothing, or various other products on the same basis. But the tariffs haven’t been imposed yet. Until they are, everyone should push back in every way they can think of.

Our primary federal civil rights statute, colloquially called “Section 1983,” says that any state actor who violates someone’s constitutional rights may be sued in federal court. This remedy is crucial not just to secure relief for individuals whose rights are violated, but also to ensure accountability for government agents. Yet the Supreme Court has crippled the functioning of this statute through the judge-made doctrine of “qualified immunity.” This doctrine, invented by the Court out of whole cloth, immunizes public officials even when they commit illegal misconduct unless they violated “clearly established law.” That standard is incredibly difficult for civil rights plaintiffs to overcome because the courts have required not just a clear legal rule, but a prior case on the books with functionally identical facts.

In Pauly v. White, 874 F.3d 1197 (10th Cir. 2017), the Tenth Circuit used qualified immunity to shield three police officers who brutally killed an innocent man in his home. The officers had no probable cause to think Samuel Pauly had committed any crime, but they stormed his home with guns drawn and shouted that they had him surrounded—yet failed to identify themselves as police. Mr. Pauly and his brother reasonably believed they were in danger and retrieved two guns to defend themselves. After his brother Daniel fired two warning shots to scare away the unidentified attackers, Samuel was shot dead by one of the officers—Ray White—through the front window of his home.

The Tenth Circuit held that Officer White’s use of deadly force was objectively unreasonable and that it “violated Samuel Pauly’s constitutional right to be free from excessive force.” But the court still granted Officer White qualified immunity; there was no prior case with sufficiently similar facts, so the unreasonableness of his conduct was not “clearly established,” in the court’s view. What’s more, the court held that because Officer White had qualified immunity, the other two officers automatically received immunity as well, even though their own reckless conduct caused Officer White to commit the unlawful shooting.

This decision was erroneous even under existing precedent, but it also throws into sharp relief the shaky legal rationales for qualified immunity in general. The text of Section 1983 makes no mention of any sort of immunity, and the common-law background against which it was adopted did not include a freestanding defense for public officials who acted unlawfully; on the contrary, the historical rule was that public officials were strictly liable for constitutional violations. In short, qualified immunity has become nothing more than a “freewheeling policy choice” by the Court, at odds with Congress’s judgment in enacting Section 1983.

The Cato Institute has therefore filed an amicus brief urging the Court to hear Mr. Pauly’s case and to reconsider its misguided qualified immunity jurisprudence. This brief will be the first of many in an ongoing campaign to demonstrate to the courts that this doctrine lacks any legal basis, vitiates the power of individuals to vindicate their constitutional rights, and contributes to a culture of near-zero accountability for law enforcement and other public officials.

In his State of the Union address, President Trump expressed support for a Right to Try law that would allow terminally ill patients to test medicines not yet fully vetted by the FDA. This perspective recognizes the tradeoff between benefits and risks.

The administration is singing a different tune, however, regarding kratom, a medicinal herb grown in Southeast Asia that might help Americans who suffer from chronic pain and do not wish to, or cannot, rely on opioids.

The FDA recently announced that it is considering a ban on kratom and is working to prevent shipments to the United States. This announcement comes on the heels of the DEA’s attempted ban in 2016, which caused a public and Congressional backlash, forcing the DEA to back down.

Kratom, which appears to target opioid receptors in the brain, is used by many chronic pain sufferers. The FDA correctly notes that existing evidence is not conclusive on kratom’s efficacy, but numerous studies and a wealth of anecdotal evidence suggest kratom relieves pain with modest risks.

Kratom is also used to reduce opioid addiction. The FDA also has doubts about its effectiveness in this area, but again several studies support its value in easing withdrawal.

Doubts about effectiveness aside, the prima facie reasoning behind the FDA’s crackdown can be found in a press release from November 14, 2017, in which Commissioner Scott Gottlieb attributes 36 deaths to kratom. A recent study, however, found no evidence that kratom alone causes death.

And even if kratom can be dangerous, banning it violates the administration’s defense of Right to Try laws: potentially dangerous medicines are nevertheless valuable if their expected benefits exceed their risks.

Outlawing kratom, moreover, will mainly spawn a black market. This harms kratom consumers (by raising prices and diminishing quality control) and society generally (by generating violence and corruption, as occurs now for other banned substances).

The FDA may believe that kratom’s risks are so great that no rational person would ever accept them. But in a free society, individuals—not a government bureaucracy—decide what risks to take with their health.

The Trump White House is on the right track by supporting Right to Try. The administration should stick to this philosophy in its treatment of kratom.

Last month, Congress authorized a massive increase in defense spending as part of a two-year budget deal. In 2018 alone, the Pentagon will receive an additional $80 billion, increasing the topline number to $629 billion. War spending will push the total over $700 billion. Though such a windfall might prompt the Defense Department to ignore cost-saving measures, the White House pledged that “DOD will also pursue an aggressive reform agenda to achieve savings that it will reinvest in higher priority needs.” Noticeably absent, however, was another Base Realignment and Closure (BRAC), even though Secretary of Defense James Mattis, and at least four of his predecessors, have called for such authority in order to reduce the military’s excess overhead, most recently estimated at 19 percent.

Congress’s unwillingness to authorize a round of base closures should surprise no one. But congressional inaction doesn’t merely undermine military efficiency. In the most recent Strategic Studies Quarterly, the ranking member of the House Armed Services Committee, Rep. Adam Smith (D-WA), and I explain how the status quo is actually hurting military communities.

To be sure, closing a military base can be disruptive to surrounding economies, and for some communities it may be economically devastating. But such cases are the exception, not the rule. Evidence shows that most communities recover, and some do so quite rapidly. A 2005 study by the Pentagon Office of Economic Adjustment researched over 70 communities affected by a base closure and determined that nearly all civilian defense jobs lost were eventually replaced. The new jobs are in a variety of industries and fields, allowing communities to diversify their economies away from excessive reliance on the federal government.

Rep. Smith and I are not alone in our assessment of the impact that congressional inaction on BRAC has on local communities and our military. In June of last year, over 45 experts from various think tanks of differing ideological and political bents signed onto an open letter urging Congress to authorize a BRAC round.

In a 2016 letter to congressional leaders, then-Deputy Secretary of Defense Robert Work explained that “local communities will experience economic impacts regardless of a congressional decision regarding BRAC authorization. This has the harmful and unintended consequence of forcing the Military Departments to consider cuts at all installations, without regard to military value… . Without BRAC, local communities’ ability to plan and adapt to these changes is less robust and offers fewer protections than under BRAC law.”

Further, an overwhelming majority of the communities represented by the Association of Defense Communities would prefer a BRAC to the current alternative. This should not come as a shock because, as Smith and I note, “Local communities have been deprived of the support BRAC would provide and have been denied access to property that could be put to productive use.”

Just to recap, nearly everyone—from think tank experts to DOD officials and from presidents to local community leaders—wants a BRAC. Alas, a few key members of Congress stand in opposition.

BRAC has proven to be a fair and efficient process for making the difficult but necessary decisions related to reconfiguring our military infrastructure and defense communities. Rather than continuing to block base closures for parochial reasons, Congress should permit our military the authority to eliminate waste while providing vital defense resources where they are most needed, and give communities the clarity and financial support they need to convert former military bases to new purposes.

If you would like to hear more, Rep. Smith and I will be discussing the issue at the Cato Institute on March 14 at 9 am. Click here for more information and to register.
