Policy Institutes

Parker and Ollier (2015) set the tone for their new paper on sea level change along the coastline of India in the very first sentence of their abstract: “global mean sea level (GMSL) changes derived from modelling do not match actual measurements of sea level and should not be trusted” (emphasis added). In contrast, it is their position that “much more reliable information” can be obtained from analyses of individual tide gauges of sufficient quality and length. Thus, they set out to obtain such “reliable information” for the coast of India, a neglected region in many sea level studies, due in large measure to its lack of stations with continuous data of sufficient quality.

A total of eleven stations were selected by Parker and Ollier for their analysis, eight of which are archived in the PSMSL database (PSMSL, 2013) and ten in a NOAA sea level database (NOAA, 2012). The average record length of the eight PSMSL stations was 54 years, quite similar to the 53-year average of the ten NOAA stations.

Results indicated an average relative rate of sea level rise of 1.07 mm/year for all eleven Indian stations, with an average record length of 51 years. However, the two Australian researchers report this value is likely “overrated because of the short record length and the multi-decadal and interannual oscillations” of several of the stations in their Indian database. Indeed, as they further report, “the phase of the 60-year oscillation found in the tide gauge records is such that sea level in the North Atlantic, western North Pacific, Indian Ocean and western South Pacific has been increasing since 1985-1990,” an increase that almost certainly skews the trends of the shorter records above the actual long-term rate of rise.

One additional important finding of the study was gleaned from the longer records in the database, which revealed that the rates of sea level rise along the Indian coastline have been “decreasing since 1955.” This observed deceleration stands in direct opposition to model-based claims that sea level rise should be accelerating in recent decades in response to CO2-induced global warming.

In comparing their findings to those reported elsewhere, Parker and Ollier note there is a striking similarity between the trends they found for the Indian coastline and for other tide gauge stations across the globe. Specifically, they cite Parker (2014), who calculated a 1.04 ± 0.45 mm/year average relative rate of sea level rise from 560 tide gauges comprising the PSMSL global database. And when that database is restricted in analysis to the 170 tide gauges with a length of more than 60 years at the present time, the average relative rate of rise declines to a paltry 0.25 ± 0.19 mm/year, without any sign of positive or negative acceleration.
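For readers curious how such rates are obtained, the trend at each gauge comes from an ordinary least-squares fit of sea level against time. A minimal sketch of that computation (using synthetic data, not actual PSMSL records) might look like:

```python
# Sketch: estimating a tide-gauge trend (mm/yr) by ordinary least squares,
# the standard approach behind figures such as 1.04 mm/yr.
# The record below is synthetic, constructed for illustration only.

def linear_trend(years, levels_mm):
    """Least-squares slope of annual mean sea level (mm) against time (years)."""
    n = len(years)
    mean_t = sum(years) / n
    mean_y = sum(levels_mm) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(years, levels_mm))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den  # mm per year

# Synthetic 60-year record rising at exactly 1.0 mm/yr
years = list(range(1950, 2010))
levels = [7000.0 + 1.0 * (t - 1950) for t in years]
print(round(linear_trend(years, levels), 2))  # 1.0
```

Acceleration or deceleration is then assessed by asking whether the slope itself changes over time, for example by fitting a quadratic term or comparing trends over successive sub-periods.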

The significance of Parker and Ollier’s work lies in the “sharp contrast” they provide when comparing rates of sea level rise computed from tide gauge data with model-based sea level reconstructions produced from satellites, such as the 3.2 mm/year value reported by the CU Sea Level Research Group (2014), which Parker and Ollier emphatically claim “cannot be trusted because it is so far from observed data.” Furthermore, it is clear from the observational tide gauge data that there is nothing unusual, unnatural, or unprecedented about current rates of sea level rise, except that they appear to be decelerating rather than accelerating, despite a period of modern warmth that climate alarmists contend is unequaled over the past millennium and should be melting the polar ice caps and rapidly raising sea levels.


References

CU Sea Level Research Group. 2014. Global Mean Sea Level. sealevel.colorado.edu (retrieved May 30, 2014).

National Oceanic and Atmospheric Administration (NOAA). 2012. MSL global trend table, tidesandcurrents.noaa.gov/sltrends/MSL_global_trendtable.html (retrieved May 30, 2014).

Parker, A. 2014. Accuracy and reliability issues in computing absolute sea level rises. Submitted paper.

Parker, A. and Ollier, C.D. 2015. Sea level rise for India since the start of tide gauge records. Arabian Journal of Geosciences 8: 6483-6495.

Permanent Service for Mean Sea Level (PSMSL). 2013. Data. www.psmsl.org (retrieved October 1, 2013).

Leaders of the worldwide Anglican church are meeting at Canterbury Cathedral this week, with some observers predicting an open schism over homosexuality. There is fear that archbishops from six African countries – Uganda, Kenya, Nigeria, South Sudan, Rwanda and the Democratic Republic of the Congo – may walk out if the archbishop of Canterbury, the symbolic head of the worldwide Anglican Communion, won’t sanction the U.S. Episcopal Church for consecrating gay bishops. Since about 60 percent of the world’s Anglicans are in Africa, that would be a major break.

I am neither an Anglican nor a theologian, but I did reflect on the non-religious values that shape some of these disputes in the Guardian a few years ago:

The Anglican Archbishop of South Africa, Njongonkulu Ndungane, says his church should abandon its “practices of discrimination” and accept the gay Episcopal bishop V. Gene Robinson of New Hampshire. That makes him unusual in Africa, where other Anglican bishops have strongly objected to the ordination of practicing homosexuals.

The Nigerian primate, for instance, Archbishop Peter Akinola, condemned the consecration of Robinson as bishop, calling it a “satanic attack on the church of God.” According to the San Francisco Chronicle, “He even issued a statement on behalf of the ‘Primates of the Global South’ - a group of 20 Anglican primates from Africa, the West Indies, South America, India, Pakistan, and Southeast Asia - deploring the action and, along with Uganda and Kenya, formally severed relations with Robinson’s New Hampshire diocese.”

So what makes Ndungane different? He’s the successor to Nobel laureate Desmond Tutu, one might recall. And they both grew up in South Africa, where Enlightenment values always had a foothold, even during the era of apartheid. Ndungane studied at the liberal English-speaking University of Cape Town, where Sen. Robert F. Kennedy gave a famous speech in 1966.

Ndungane didn’t hear that speech, alas, because he was then imprisoned on Robben Island. But after he was released he decided to enter the church and took two degrees at King’s College, London. The arguments of the struggle against apartheid came from western liberalism - the dignity of the individual, equal and inalienable rights, political liberty, moral autonomy, the rule of law, the pursuit of happiness.

So it’s no surprise that a man steeped in that struggle and educated in the historic home of those ideas would see how they apply in a new struggle, the struggle of gay people for equal rights, dignity, and the pursuit of happiness as they choose.

The South African Anglicans remain in favor of gay marriage. And of course, such church schisms are not new. The Baptist, Methodist, and Presbyterian churches in the United States split over slavery. The Methodists and Presbyterians reunited a century later, but the Baptists remain separate bodies.

In 2009, Duracell, a subsidiary of Procter & Gamble, began selling “Duracell Ultra” batteries, marketing them as its longest-lasting variety. A class action was filed in 2012, arguing that the “longest-lasting” claim was fraudulent. The case was removed to federal court, where the parties reached a global settlement purporting to represent 7.26 million class members.

Attorneys for the class are to receive an award of $5.68 million, based on what the district court deemed to be an “illusory” valuation of the settlement at $50 million. In reality, the class received $344,850. Additionally, defendants agreed to make a donation of $6 million worth of batteries over the course of five years to various charities.

This redistribution of settlement money from the victims to other uses is referred to as cy pres. “Cy pres” means “as near as possible,” and courts have typically used the cy pres doctrine to reform the terms of a charitable trust when the stated objective of the trust is impractical or unworkable. The use of cy pres in class action settlements—particularly those that enable the defendant to control the funds—is an emerging trend that violates the due process and free speech rights of class members.

Accordingly, class members objected to the settlement, arguing that the district court abused its discretion in approving the agreement and failed to engage in the required rigorous analysis to determine whether the settlement was “fair, reasonable, and adequate.” The U.S. Court of Appeals for the Eleventh Circuit affirmed the settlement, however, noting the lack of “precedent prohibiting this type of cy pres award.”

Now an objecting class member has asked the Supreme Court to review the case, and Cato filed an amicus brief arguing that the use of cy pres awards in this manner violates the Fifth Amendment’s Due Process Clause and the First Amendment’s Free Speech Clause.

Specifically, due process requires—at a minimum—an opportunity for an absent plaintiff to remove himself, or “opt out,” from the class. Class members have little incentive or opportunity to learn of the existence of a class action in which they may have a legal interest, while class counsel is able to make settlement agreements that are unencumbered by an informed and participating class.

In addition, when a court approves a cy pres award as part of a class action settlement, it forces class members to endorse certain ideas, which constitutes a speech compulsion. When defendants receive money—essentially from themselves—to donate to a charity, the victim class members surrender the value of their legal claims. Class members are left uncompensated, while defendants are shielded from any future claims of liability and even look better than they did before the lawsuit given their display of “corporate social responsibility.”

The Supreme Court will consider whether to take up Frank v. Poertner later this winter.

If federal statutory law expressly commands that all covered federal employees shall be “free from any discrimination based on … race,” does that forbid the federal government from adopting race-based affirmative action plans? That is one of the important—and seemingly obvious—questions posed by Shea v. Kerry, a case brought by our friends at the Pacific Legal Foundation. William Shea is a white State Department Foreign Service Officer. In 1990, he applied for his Foreign Service Officer position and began working in 1992 at a junior-level post. At the time, the State Department operated a voluntary affirmative action plan (read: “voluntary” as in “mandated by Congress”) whereby minorities were able to bypass the junior levels and enter the mid-level service. The State Department attempted to justify its racial plan by noting that there were statistical imbalances at the senior Foreign Service levels, even though the path to the senior levels is unrelated to service at the lower levels.

In 2001, Shea filed an administrative complaint against the State Department for its disparate treatment of white applicants under its 1990-92 hiring plan, complaining that he did not enter at as high a grade as he may have and that the discrimination cost him in both advancement opportunities and earnings. After exhausting administrative remedies, Shea took his complaint to the courts, resulting in this case. The Cato Institute has joined with the Southeastern Legal Foundation, the Center for Equal Opportunity, and the National Association of Scholars to file an amici curiae brief calling for the Supreme Court to take up the case and reverse the federal appellate court below.

In fairness to the court below, Title VII jurisprudence, as it stands, is both unclear and unworkable. The text of Title VII expressly prohibits discrimination on the basis of race—what’s called “disparate treatment.” Indeed, in the specific provisions on federal hiring, Title VII employs very expansive language to ensure that disparate treatment is not permitted. But such a “literal construction” of the Title VII statute was eschewed by Justice William Brennan in 1979, writing for the Court in United Steelworkers v. Weber. Relying on cherry-picked legislative history, Brennan found that Title VII’s plain text did not prohibit collectively bargained, voluntary affirmative action programs that attempt to remedy disparate impact—statistical imbalances in the racial composition of employment groups—even if such plans used quota systems. Later, in Johnson v. Transportation Agency, Santa Clara County, Cal. (1987), the Court exacerbated the issue by extending the Weber rule from purely private hiring to municipal hiring. In Shea, the U.S. Court of Appeals for the D.C. Circuit extended the rule from Johnson and Weber to federal hiring, not just municipal and private employment.

To make matters more confusing, in Ricci v. DeStefano (2009) (in which Cato also filed a brief), the Supreme Court held that, for remedial disparate treatment to be permissible under Title VII, an employer must show a strong basis in evidence that it would face disparate-impact liability if it didn’t make discriminatory employment decisions. The Ricci Court held that the new strong-basis-in-evidence standard is “a matter of statutory construction to resolve any conflict between the disparate-treatment and disparate-impact provisions of Title VII.” While this holding implicitly jettisons the rubric created by Johnson and Weber, the Court did not expressly overrule those cases, leaving lower courts in disarray as to whether to apply Johnson and Weber or Ricci. Indeed, this conflict between precedents was noted by both the federal trial court and the federal appellate court below.

As amici point out in our brief, the outcome of Shea’s case hangs on the applicable standard of review. In Ricci, the Court noted that, standing alone, “a threshold showing of a significant statistical disparity” is “far from a strong basis in evidence.” Yet, as the federal trial court noted, “[the] State [Department] does rely solely on a statistical imbalance in the mid- and senior-levels” to justify its affirmative action plan. Had the lower courts applied Ricci, which superseded Johnson and Weber, Shea would have prevailed on his claims. The Court should take up this case and clarify the jurisprudence applicable in Title VII cases in light of Ricci.

Lost in all the hoopla over “y’all queda” and “VanillaISIS” is any basic history of how public rangelands in the West–and in eastern Oregon in particular–got to this point. I’ve seen no mention in the press of two laws that are probably more responsible than anything else for the alienation and animosity the Hammonds felt towards the government.

The first law, the Public Rangelands Improvement Act of 1978, set a formula for calculating grazing fees based on beef prices and rancher costs. When the law was written, most analysts assumed per capita beef consumption would continue to grow as it had the previous several decades. In fact, it declined from 90 pounds to 50 pounds per year. The formula quickly drove down fees to the minimum of $1.35 per cow-month, even as inflation increased the costs to the government of managing the range. 
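The mechanics of that fee formula can be sketched as follows. The structure (a 1966 base rate adjusted by forage-value, beef-price, and producer-cost indexes, with a $1.35 floor) follows published descriptions of the PRIA formula, but the index values below are hypothetical:

```python
# Sketch of the PRIA grazing-fee calculation. Structure follows published
# BLM/Forest Service descriptions of the formula; the index inputs here
# are made up for illustration.

def grazing_fee(fvi, bcpi, ppi, base=1.23, floor=1.35):
    """Fee per animal unit month (AUM), in dollars.

    fvi  -- Forage Value Index (private lease rates)
    bcpi -- Beef Cattle Price Index
    ppi  -- Prices Paid Index (rancher costs)
    """
    fee = base * (fvi + bcpi - ppi) / 100.0
    return max(fee, floor)  # statutory minimum of $1.35/AUM

# Falling beef prices (low BCPI) combined with rising rancher costs
# (high PPI) quickly drive the computed fee down to the floor:
print(grazing_fee(fvi=80, bcpi=90, ppi=120))  # 1.35 (the floor binds)
```

Because beef prices enter positively and rancher costs negatively, the decline in per capita beef consumption described above pushed the computed fee to its floor and kept it there, regardless of what it cost the government to manage the range.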

The 1978 law also allowed the Forest Service and Bureau of Land Management (BLM) to keep half of grazing fees for range improvements. Initially, this fund motivated the agencies to promote rancher interests. But as inflation ate away the value of the fee, agency managers began to view ranchers as freeloaders. Today, the fee contributes well under 1 percent of agency budgets and less than 10 percent of range management costs. Livestock grazing was once a profitable use of federal rangelands but now costs taxpayers nearly $10 for every dollar collected in fees.

Ranching advocates argue that the grazing fee is set correctly because it costs more to graze livestock on federal land than on state or private land. But the BLM and Forest Service represent the sellers, not the buyers, and the price they set should reflect the amount that a seller is willing to accept. Except in cases of charity, no seller would permanently accept less than cost, and costs currently average about $10 per animal unit month.

The second law, the Steens Mountain Cooperative Management and Protection Act of 2000, was an environmentalist effort that affected the lands around the Hammonds ranch. The “protection” portion of the law put 176,000 acres of land into wilderness, which ordinarily does not shut down grazing operations. The “cooperative” portion offered local ranchers other grazing lands if they would voluntarily give up their leases in the wilderness. Nearly 100,000 acres were closed to domestic livestock when every ranch family accepted the offer except one: the Hammonds.

The Hammonds came away from the bargaining table convinced the government was trying to take their land. The government came away convinced the Hammonds were unreasonable zealots. The minuscule effect of grazing fees on their budgets also probably led local officials to think domestic livestock were more of a headache than a contributor to the local or national economy. Dwight and Steven Hammond’s convictions for arson were due more to this deterioration in their relations with the BLM than to any specific action the Hammonds took.

It doesn’t take much scrutiny to see that domestic grazing is not a viable use of most federal lands. The Forest Service and BLM manage close to 240 million acres of rangelands that produce roughly 12 million cattle-years of feed. While the best pasturelands can support one cow per acre, federal lands require 200 acres for the same animal. This 240 million acres is nearly 10 percent of the nation’s land, yet it produces only about 2 percent of livestock feed in this country, while less than four percent of cattle or sheep ever step on federal lands.

The problem is that decisions are made through a political tug-of-war rather than through the market. One year, Congress passes a law favorable to ranchers; another year, it passes a law favorable to environmentalists. The result is polarization and strife.

If these lands were managed in response to market forces, the agencies would probably shed much of their bureaucratic bloat, but the costs of providing feed for domestic livestock would still be several times the current grazing fee. Ranchers unwilling to pay that higher price would find alternate sources of feed. If some ranchers continued to graze their cattle and sheep on public land and environmentalists didn’t like it, they could outbid the ranchers or pay them to manage their livestock in ways that reduced environmental impacts. 

The current system is far from a free market, and free-market advocates should not defend it. A true market system would reduce polarization, lead to better outcomes for both consumers and the environment, and would not have resulted in ranchers being sent to prison for accidentally burning a few acres of land.

It is an appalling story: A thoughtful academic uses his training and profession’s tools to analyze a major, highly controversial public issue. He reaches an important conclusion sharply at odds with the populist, “politically correct” view. Dutifully, the professor reports his findings to other academics, policymakers, and the public. But instead of being applauded for his insights and the quality of his work, he is vilified by his peers, condemned by national politicians, and trashed by the press. As a result, he is forced to resign his professorship and abandon his academic career.

Is this the latest development in today’s oppressive P.C. wars? The latest clash between “science” and powerful special interests?

Nope, it’s the story of Hugo Meyer, a University of Chicago economics professor in the early 1900s. His sad tale is told by University of Pisa economist Nicola Giocoli in the latest issue of Cato’s magazine, Regulation. Meyer is largely forgotten today, but his name and story should be known and respected by free-marketers and anyone who cherishes academic freedom and intellectual integrity.

Here’s a brief summary: At the turn of the 20th century, the U.S. economy was dramatically changing as a result of a new technology: nationwide railroading. Though small railroads had existed in America for much of the previous century, westward expansion and the rebuilding of southern U.S. railways after the Civil War resulted in the standardization, interconnection, and expansion of the nation’s rail network.

As a result, railroading firms would compete with each other vigorously over price for long-distance hauling because their networks provided different routes to move goods efficiently between major population centers. However, price competition for short-hauls over the same rail lines between smaller towns wasn’t nearly as vigorous, as it was unlikely that two different railroads, with different routes, would efficiently serve the same two locales. The result was that short-distance hauls could be nearly as expensive as long-distance hauls, which greatly upset many people, including powerful politicians and other societal leaders.

Meyer examined those phenomena carefully, ultimately determining that there was nothing amiss in the high prices for short hauls.

Railroads bear two types of costs, he explained: operating costs (e.g., paying the engineer and fireman, buying the coal and water) and fixed costs (e.g., laying and maintaining the rails and acquiring rolling stock and other fixed assets). Because of the heavy competition on long hauls, those freight prices mainly covered just the routes’ operating costs, while the less competitive short-haul prices covered both those routes’ operating costs and most (if not all) fixed costs.

This wasn’t bad for short-haul customers, Meyer reasoned, because if it weren’t for the long-haul revenues, railroads would provide less (and perhaps no) service to the short-haul towns. Thus, though the short-haul towns were not happy with their freight prices, they were nonetheless better-off because of this arrangement.
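Meyer’s pricing logic can be illustrated with a toy numerical example (the figures are invented, not from his monograph):

```python
# Hypothetical numbers illustrating Meyer's argument: competition drives the
# long-haul price toward operating cost, so fixed costs must be recovered on
# the less competitive short hauls -- yet the short-haul towns still come out
# ahead, because without long-haul revenue they would lose service entirely.

fixed_cost = 100.0       # track, rolling stock, other fixed assets (per period)
op_cost_long = 40.0      # operating cost of the long-haul run
op_cost_short = 10.0     # operating cost of the short-haul run

price_long = 42.0        # competition holds this near operating cost
price_short = 110.0      # covers its own operating cost plus most fixed cost

revenue = price_long + price_short
total_cost = fixed_cost + op_cost_long + op_cost_short
print(revenue - total_cost)  # 2.0 -- the railroad barely breaks even

# Without the long-haul traffic, the short haul alone only just covers
# fixed plus operating costs, leaving no margin to sustain the service:
print(price_short - (fixed_cost + op_cost_short))  # 0.0
```

The point of the arithmetic is the one Meyer drew: the short-haul towns pay more per mile, but the alternative is not cheaper service, it is no service.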

Meyer’s reasoning would today be associated with the field of Law & Economics, which uses economic thinking to analyze laws and regulations. Today, this type of analysis is highly respected by economists, policymakers, and U.S. courts, and is heavily linked to the University of Chicago (though it has roots in other places as well). But it hadn’t emerged as a discipline in Meyer’s era; sadly, he was a man too far ahead of his time.

And, for that, he paid a steep price. As Giocoli describes, when Meyer presented his analysis at a 1905 American Economic Association conference, he was set upon by other economists; indeed, the AEA had shamefully engineered his paper session as a trap. When he testified on his work to policymakers in Washington, he was publicly accused of corruption by a powerful bureaucrat, Interstate Commerce Commissioner Judson Clements, and a U.S. senator, Jonathan P. Dolliver. And when a monograph of his work appeared the following year, he was dismissed by the Boston Evening Transcript (then a prominent daily newspaper) as “partisan and untrustworthy.”

Meyer was disgraced. He resigned his position at Chicago and moved to Australia, where he continued his research on railroad economics, but he never worked for a university again.

Back at Chicago, his former department head, James Laurence Laughlin, took to the pages of the Journal of Political Economy to lament what had befallen his colleague:

In some academic circles the necessity of appearing on good terms with the masses goes so far that only the mass-point-of-view is given recognition; and the presentation of the truth, if it happens to traverse the popular case, is regarded as something akin to consternation. … It is not amiss to demand that measure of academic freedom that will permit a fair discussion of the rights of those who do not have the popular acclaim. It is going too far when a carefully reasoned argument which happens to support the contentions of the railways is treated as if necessarily the outcome of bribery by the money kings.

There is a happy ending for Meyer’s analysis. Academic research in the latter half of the century buttressed his view that market competition was enough to produce fairly honest, publicly beneficial railroad pricing, and that government intervention harmed public welfare. The empirical evidence marshaled by that research was so compelling that Congress deregulated the railroads and even abolished the Interstate Commerce Commission. Apparently, not all federal agencies have eternal life.

But though his analysis has triumphed, Meyer’s name and story have largely been forgotten. They shouldn’t be. Hopefully, Giocoli’s article will give Meyer the remembrance he deserves.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

What’s lost in much of the discussion about human-caused climate change is not whether the sum of human activities is leading to some warming of the earth’s temperature, but that the observed rate of warming (both at the earth’s surface and throughout the lower atmosphere) is considerably less than anticipated by the collection of climate models upon whose projections climate alarm (i.e., justification for strict restrictions on the use of fossil fuels) is built.

We highlight in this issue of You Ought to Have a Look a couple of articles that address this issue that we think are worth checking out.

First is this post from Steve McIntyre over at Climate Audit that we managed to dig out from among all the “record temperatures of 2015” stories. In his analysis, McIntyre places the 2015 global temperature anomaly not in real world context, but in the context of the world of climate models.

Climate model-world is important because that is the realm where climate change catastrophes play out, and those modeled catastrophes influence the actions of real-world people trying to keep them confined to model-world.

So how did the observed 2015 temperatures compare to model world expectations? Not so well.

In a series of tweets over the holidays, we pointed out that the El Niño-fueled, record-busting high temperatures of 2015 barely reached the temperatures of an average year expected by the climate models.

In his post, unconstrained by Twitter’s 140-character limit, McIntyre takes a bit more verbose and detailed look at the situation, and includes additional examinations of the satellite record of temperatures in the lower atmosphere as well as a comparison of observed trends and model expected trends in both the surface and lower atmospheric temperatures histories since 1979.

In that comparison for global average surface temperatures, the observed trend (the red dashed line in McIntyre’s figure) falls near or below the fifth percentile of the expected trends (the lower whisker) from a host of climate models.

McIntyre writes:

All of the individual models have trends well above observations…   There are now over 440 months of data and these discrepancies will not vanish with a few months of El Nino.

Be sure to check out the whole article here. We’re pretty sure you won’t read about any of this in the mainstream media.
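The kind of model-versus-observation comparison McIntyre makes can be sketched quite simply: ask where the observed trend falls within the spread of model trends. The trend values below are placeholders, not the actual model ensemble numbers:

```python
# Sketch of a model-vs-observation comparison: the percentile rank of an
# observed warming trend within a set of model-simulated trends. All trend
# values are hypothetical placeholders, not actual CMIP ensemble output.

def percentile_rank(value, sample):
    """Fraction of sample values at or below `value`."""
    return sum(1 for s in sample if s <= value) / len(sample)

# Hypothetical model trends and observed trend, degrees C per decade
model_trends = [0.18, 0.20, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.28, 0.30]
observed_trend = 0.11

print(percentile_rank(observed_trend, model_trends))  # 0.0 -- below every model
```

An observed trend sitting at or below the bottom of the model distribution is exactly the situation the whisker plot described above depicts.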

Next up is an analysis by independent climate researcher Nic Lewis, whose specialty these days is developing estimates of the earth’s climate sensitivity (how much the earth’s average surface temperature is expected to rise under a doubling of the atmospheric concentration of carbon dioxide) based upon observations of the earth’s temperature evolution over the past 100-150 years. Lewis’s general finding is that the climate sensitivity of the real world is quite a bit less than that of model world (a difference that could explain much of what McIntyre reported above).
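In its simplest form, the observation-based ("energy budget") approach Lewis uses reduces to a single ratio of observed quantities. The sketch below uses illustrative inputs, not Lewis's published values:

```python
# Minimal sketch of an energy-budget climate sensitivity estimate:
# ECS = F2x * dT / (dF - dN), inferring sensitivity from observed changes in
# temperature (dT, in K), radiative forcing (dF, W/m^2), and planetary heat
# uptake (dN, W/m^2). The input values below are illustrative placeholders.

def energy_budget_ecs(dT, dF, dN, F2x=3.7):
    """Equilibrium climate sensitivity (K per CO2 doubling).

    F2x is the forcing from a doubling of CO2, commonly taken near 3.7 W/m^2.
    """
    return F2x * dT / (dF - dN)

# Example: 0.8 K observed warming, 2.3 W/m^2 forcing change,
# 0.45 W/m^2 of heat still flowing into the oceans
print(round(energy_budget_ecs(0.8, 2.3, 0.45), 2))  # 1.6
```

The dispute between Lewis and the Marvel et al. team is, in large part, a dispute over what values of forcing (dF) should go into calculations of this kind.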

The current focus of Lewis’s attention is a recently published paper by a collection of NASA scientists, led by Kate Marvel, which concluded that observation-based estimates of the earth’s climate sensitivity, such as those performed by Lewis, greatly underestimate the actual sensitivity. After accounting for the supposed reasons why, Marvel and her colleagues conclude that climate models are, contrary to the assertions of Lewis and others, accurately portraying how sensitive the earth’s climate is to changing greenhouse gas concentrations (see here for details). It would then follow that these models serve as reliable indicators of the future evolution of the earth’s temperature.

As you may imagine, Lewis isn’t so quick to embrace this conclusion. He explains his reasons why in great detail in his lengthy (technical) article posted at Climate Audit, and provides a more easily digestible version over at Climate Etc.

After detailing a fairly long list of inconsistencies contained not only internally within the Marvel et al. study itself, but also between the Marvel study and other papers in the scientific literature, Lewis concludes:

The methodological deficiencies in and multiple errors made by Marvel et al., the disagreements of some of its forcing estimates with those given elsewhere for the same model, and the conflicts between the Marvel et al. findings and those by others – most notably by James Hansen using the previous GISS model, mean that its conclusions have no credibility.

Basically, Lewis suggests that Marvel et al.’s findings are based upon a single climate model (out of several dozen in existence) and seem to arise from improper application of analytical methodologies within that single model.

Certainly, the Marvel et al. study introduced some interesting avenues for further examination. But, despite how they’ve been touted–as freshly-paved highways to the definitive conclusion that climate models are working better than real-world observations seem to indicate–they are muddy, pot-holed backroads leading nowhere in particular.

Finally, we want to draw your attention to an online review of a paper recently published in the scientific literature which sought to dismiss the recent global warming “hiatus” as nothing but the result of a poor statistical analysis. The paper, “Debunking the climate hiatus,” was written by a group of Stanford University researchers led by Bala Rajaratnam and published in the journal Climatic Change last September.

The critique of the Rajaratnam paper was posted by Radford Neal, a statistics professor at the University of Toronto, on his personal blog that he dedicates to (big surprise) statistics and how they are applied in scientific studies.

In his lengthy, technically detailed critique, Neal pulls no punches:

The [Rajaratnam et al.] paper was touted in popular accounts as showing that the whole hiatus thing was mistaken — for instance, by Stanford University itself.

You might therefore be surprised that, as I will discuss below, this paper is completely wrong. Nothing in it is correct. It fails in every imaginable respect.

After tearing through the numerous methodological deficiencies and misapplied statistics contained in the paper, Neal is left shaking his head at the peer-review process that gave rise to the publication of this paper in the first place, and offered this warning:

Those familiar with the scientific literature will realize that completely wrong papers are published regularly, even in peer-reviewed journals, and even when (as for this paper) many of the flaws ought to have been obvious to the reviewers.  So perhaps there’s nothing too notable about the publication of this paper.  On the other hand, one may wonder whether the stringency of the review process was affected by how congenial the paper’s conclusions were to the editor and reviewers.  One may also wonder whether a paper reaching the opposite conclusion would have been touted as a great achievement by Stanford University. Certainly this paper should be seen as a reminder that the reverence for “peer-reviewed scientific studies” sometimes seen in popular expositions is unfounded.

Well said.

Since the Affordable Care Act’s dependent coverage provision took effect in September 2010, insurers have been required to allow children to remain on their parents’ plans until age twenty-six. According to some polls, it is the single most popular provision in the law, enjoying high levels of support from Independents, Democrats, and Republicans alike. That is a marked contrast to the law as a whole, which remains unpopular to this day. The Obama administration has pointed to increasing insurance coverage among young adults as a sign of the provision’s success. A new working paper might dampen some of this fanfare, as the researchers find evidence that the dependent coverage provision led to reduced wages of roughly $1,200 a year for affected workers. Policies and choices have trade-offs, and this mandate is no exception.

Popularity of Select Provisions of the Affordable Care Act

Source: Kaiser Health Tracking Poll: March 2014.

Prior to the ACA, some states had passed some form of extended dependent coverage mandates, while others had not. The researchers were able to exploit this variation, using the states with prior mandates as a control group. They estimate that the dependent coverage provision reduced wages by roughly $1,200 per year for workers 26 and older. As might be expected, firms that offer health insurance are more affected than those that do not. Somewhat more surprisingly, they find that workers with eligible children are not the only ones affected: there is some degree of pooling and childless workers see wage reductions as well. These findings also imply at least a moderate level of crowd-out, meaning that some proportion of the young adults gaining coverage shifted from other forms of coverage. The authors also failed to find any significant change in labor supply resulting from the reductions in wages, but suggest one possible explanation may be the timing of the provision’s implementation in the weak labor market in late 2010. Parents might value the extended insurance coverage the provision allows, and some young adults who gained coverage might be better off, but these changes come at a cost in the form of reduced wages.

The dependent coverage provision is one of the most broadly popular aspects of the health reform, and it does seem to have increased insurance coverage among the target population to some extent. These gains come with trade-offs, however, and this evidence indicates that workers at affected firms saw wage reductions of about $1,200 per year. Favorable reports citing increased health insurance coverage should take this new evidence into account. Once the costs are fully understood, the provision might not be quite as popular. 

Cato Institute Scholars Respond to Obama’s Final State of the Union Address

Cato Institute scholars Emma Ashford, Trevor Burrus, Benjamin Friedman, Dan Ikenson, Neal McCluskey, Pat Michaels, Aaron Powell, and Julian Sanchez respond to President Obama’s final State of the Union Address.

Video produced by Caleb O. Brown, Tess Terrible and Cory Cooper.

Presidential candidate Hillary Clinton is proposing to increase taxes on high earners with a four percent surtax on people making more than $5 million.

Imposing such a tax hike would:

  • Encourage more tax avoidance and evasion, the sort of behaviors that Clinton and other politicians bemoan.
  • Increase the deadweight losses, or economic damage, of taxation. This damage rises with the square of the tax rate, so, for example, a 40 percent rate causes four times the damage of a 20 percent rate. This property of taxation means that raising the top rate is the worst way to raise revenue, even if raising more revenue made sense, which it does not.
  • Discourage the work and investment efforts of the most productive people in the economy. Entrepreneurs, executives, angel investors, venture capitalists, and other high earners add enormous value to the economy. Politicians should focus on removing barriers to their efforts, rather than penalizing them. Besides, the income of high earners is more flexible and mobile than the income of other people, so raising their taxes causes the strongest behavioral responses and largest deadweight losses.
  • Raise taxes on the people already paying the highest rates. Average tax rates rise rapidly as income rises. In 2015 those earning more than $1 million paid an average federal tax rate (including income, payroll, and excise taxes) of 33 percent. That is twice the rate of people with middling incomes, and many times the rate of people at the bottom.
  • Push the top U.S. income tax rate substantially higher than those of our trading partners in the OECD (Table 1.7). The combined top U.S. federal and average state tax rate is already 46 percent, compared to the OECD average of 43 percent.
  • Move even further away from the fair and efficient ideal of a proportional, or flat, tax system. The United States already has the most graduated, or progressive, tax system in the OECD.

Perhaps the biggest problem with Clinton’s plan is that the federal government already taxes and spends too much. The American economy and average citizens would be better off if the size and scope of the government were reduced. Clinton’s tax increase would not solve any problems, but rather would add fuel to the fire of rampant bureaucratic failure in Washington.

Last week, the World Bank updated its commodity database, which tracks the price of commodities going back to 1960. Over the last 55 years, the world’s population has increased by 143 percent. Over the same time period, real average annual per capita income in the world rose by 163 percent. What happened to the price of commodities?

Out of the 15 indexes measured by the World Bank, 10 fell below their 1960 levels. The indexes that experienced absolute decline included the entire non-energy commodity group (-20 percent), agricultural index (-26 percent), beverages (-32 percent), food (-22 percent), oils and meals (-32 percent), grains or cereals (-32 percent), raw materials (-32 percent), “other” raw materials (-56 percent), metals and minerals (-4 percent) and base metals (-3 percent).

Five indexes rose in price between 1960 and 2015.  However, only two indexes, energy and precious metals, increased more than income, appreciating 451 percent and 402 percent respectively. Three indexes increased less than income. They included “other” food (7 percent), timber (7 percent) and fertilizers (38 percent).

Taken together, commodities rose by 43 percent. If energy and precious metals are excluded, they declined by 16 percent. Assuming that an average inhabitant of the world spent exactly the same fraction of her income on the World Bank’s list of commodities in 1960 and in 2015, she would be better off under either scenario, since her income rose by 163 percent over the same time period.
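The affordability claim in the paragraph above is simple arithmetic: divide the commodity price index by the income index, both relative to 1960. A quick sketch, using only the percentage changes quoted in the post:

```python
# Percentage changes, 1960-2015, as quoted in the post.
income_change = 1.63       # real per capita income: +163%
all_commodities = 0.43     # full commodity basket: +43%
ex_energy = -0.16          # excluding energy and precious metals: -16%

def relative_cost(price_change, income_change):
    """Price index divided by income index (1960 = 1.0); below 1 means
    the basket takes a smaller share of income than it did in 1960."""
    return (1 + price_change) / (1 + income_change)

print(round(relative_cost(all_commodities, income_change), 2))  # -> 0.54
print(round(relative_cost(ex_energy, income_change), 2))        # -> 0.32
```

In other words, even the full basket costs roughly half as much relative to income as it did in 1960, and the ex-energy basket costs about a third as much.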

This course of events was predicted by the contrarian economist Julian Simon some 35 years ago. In The Ultimate Resource, Simon noted that humans are intelligent animals, who innovate their way out of scarcity. In some cases, we have become more parsimonious in using natural resources. An aluminum can, for example, weighed about 3 ounces in 1959. Today, it weighs less than half an ounce. In other cases, we have replaced scarce resources with others. Instead of killing whales for lamp oil, for instance, we burn coal, oil and gas.

I will have a paper on this subject soon. In the meantime, please visit www.humanprogress.org.

(P.S.: This post appeared originally here.)

1. Cheaper oil lowers the cost of transporting people and products (including exports), and also the cost of producing energy-intensive goods and services.

For the seventh week in a row, the average price of a gallon of diesel declines, to $2.235 https://t.co/X3Zaq52YSO pic.twitter.com/vYMStMZ2mj

— St. Louis Fed (@stlouisfed) January 2, 2016  

2. Every upward spike in oil prices has been followed by recession, while sustained periods of low oil prices have been associated with relatively brisk growth of the U.S. economy (real GDP).

   

3. Far from being a grave danger (as news reports have frequently speculated), lower inflation since 2013 has significantly increased real wages and real consumer spending.

 

4. Cheaper energy helps explain why the domestic U.S. economy (less trade & inventories) has lately been growing faster than 3% despite the unsettling Obama tax shock of 2013.

   

Why do I keep harping on interest on reserves? Because, IMHO, the Fed’s decision to start paying interest on reserves contributed at least as much as the failure of Lehman Brothers or any previous event did to the liquidity crunch of 2008:Q4, which led to a deepening of the recession that had begun in December 2007.

That the liquidity crunch marked a turning point in the crisis is itself generally accepted. Bernanke himself (The Courage to Act, pp. 399ff.) thinks so, comparing the crunch to the monetary collapse of the early 1930s, while stating that the chief difference between them is that the more recent one involved, not a withdrawal of retail funding by panicking depositors, but the “freezing up” of short-term, wholesale bank funding. Between late 2006 and late 2008, Bernanke observes, such funding fell from $5.6 trillion to $4.5 trillion (p. 403). That banks altogether ceased lending to one another was, he notes, especially significant (p. 405). The decline in lending on the federal funds market alone accounted for about one-eighth of the overall decline in wholesale funding.

For Bernanke, the collapse of interbank lending was proof of a general loss of confidence in the banking system following Lehman Brothers’ failure. That same loss of confidence was still more apparent in the pronounced post-Lehman increase in the TED spread:

The skyrocketing cost of unsecured bank-to-bank loans mirrored the course of the crisis. Usually, a bank borrowing from another bank will pay only a little more (between a fifth and a half of a percentage point) than the U.S. government, the safest of all borrowers, has to pay on short-term Treasury securities. The spread between the interest rate on short-term bank-to-bank lending and the interest rate on comparable Treasury securities (known as the TED spread) remained in the normal range until the summer of 2007, showing that general confidence in banks remained strong despite the bad news about subprime mortgages. However, the spread jumped to nearly 2-1/2 percentage points in mid-August 2007 as the first signs of panic roiled financial markets. It soared again in March (corresponding to the Bear Stearns rescue), declined modestly over the summer, then shot up when Lehman failed, topping out at more than 4-1/2 percentage points in mid-October 2008 (pp. 404-5).

These developments, Bernanke continues, “had direct consequences for Main Street America. … During the last four months of 2008, 2.4 million jobs disappeared, and, during the first half of 2009, an additional 3.8 million were lost.” (pp. 406-7)

There you have it, straight from the horse’s mouth: the fourth-quarter, 2008 contraction in wholesale funding, as reflected in the collapse of interbank lending, led to the loss of at least 6.2 million jobs.

But was the collapse of interbank lending really evidence of a panic, brought on by Lehman’s bankruptcy? The timing of that collapse, as indicated in the following graph, tells a much different story.

The first of the three vertical lines is for September 15, 2008, when Lehman went belly-up. Interbank lending on the next reporting date — September 17th — was actually up from the previous week. Thereafter it declined a bit, and then rose some. But these variations weren’t all that unusual. As for the TED spread, although it rose sharply after Lehman’s failure, the rise reflected, not an actual increase in the effective federal funds rate (as the “panic” scenario would suggest), but the fact that that rate, though it declined rapidly, did not do so quite as rapidly as the Treasury Bill rate did:

OK, now on to those other vertical lines. They show the dates on which banks first began receiving interest payments on their excess reserves. There are two lines because back then two different sets of banks had different “reserve maintenance periods,” and therefore started getting paid at different dates. (The maintenance periods have since been made uniform.) Those (mostly smaller) banks with one-week reserve maintenance periods began earning interest on October 15th; the rest, with two-week maintenance periods, started getting paid on October 22nd. The collapse in interbank payments volume coincides with the latter date. Notice also that the collapse continues after the TED spread has returned to a level not so different from its levels before Lehman failed.

If you still aren’t convinced that IOR was the main factor behind the collapse in interbank lending, perhaps some more graphs will help. The first shows the progress of interbank lending over a somewhat longer period, along with the 3-month Treasury Bill rate and (starting in October 2008) the interest rate on excess reserves:

To understand this graph, think of the banks’ opportunity cost of holding excess reserves as being equal to the difference between the Treasury Bill rate and the rate of interest on excess reserves. Prior to October 15th, 2008, the opportunity cost, being simply equal to the Treasury Bill rate itself, is necessarily positive. But when IOR is first introduced, it becomes practically zero; and shortly thereafter it becomes, and remains, negative. Mere inspection of the chart should suffice to show that the volume of interbank lending tends to vary directly with this opportunity cost.
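The opportunity-cost reasoning here reduces to a one-line subtraction: the yield a bank forgoes by parking funds in excess reserves is the T-bill rate minus the rate paid on those reserves. A minimal sketch, using illustrative round-number rates rather than the actual historical series:

```python
def opportunity_cost_bp(tbill_bp, ioer_bp=0):
    """Forgone yield, in basis points, from holding excess reserves
    instead of T-bills: T-bill rate minus interest on excess reserves."""
    return tbill_bp - ioer_bp

# Before October 15, 2008 there was no IOER, so the cost was the T-bill yield itself:
print(opportunity_cost_bp(150))     # -> 150: positive, so banks lend reserves out

# Once IOER was in place and T-bill yields collapsed below it:
print(opportunity_cost_bp(10, 25))  # -> -15: negative, so holding reserves dominates
```

When that difference turns negative, the profit-maximizing choice is to sit on reserves rather than lend them, which is exactly the pattern the chart shows.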

Once the interest rate on excess reserves is fixed at 25 basis points after mid-December 2008, things get simpler, as the volume of interbank lending varies directly with the Treasury Bill rate. Here is a chart showing that period, with the opportunity cost itself (that is, the Treasury Bill rate minus 25 basis points) plotted along with the volume of interbank lending:

Now, it would be one thing if Bernanke were merely guilty of misunderstanding the cause of the decline in interbank lending, without having actually been responsible for that decline. But Bernanke was responsible, as was the rest of the Fed gang that took part in the misguided decision to start rewarding banks for holding excess reserves in the middle of a financial crisis.

What’s more, it is hard to see how Bernanke can insist that the Fed’s decision to pay IOR had nothing to do with the drying-up of the federal funds market given the justification he himself offers for that decision earlier in his memoir, which bears quoting once again, this time with emphasis added:

[W]e had been selling Treasury securities we owned to offset the effect of our lending on reserves… . But as our lending increased, that stopgap measure would at some point no longer be possible because we would run out of Treasuries to sell… . The ability to pay interest on reserves… would help solve this problem. Banks would have no incentive to lend to each other at an interest rate much below the rate they could earn, risk-free, on their reserves at the Fed (p. 325).

Yet when he turns to explain the causes of the collapse in interbank lending, just eighty pages after this passage, Bernanke never mentions interest on reserves. Instead, he blames the collapse on panicking private-market lenders, while treating the Fed — and, by implication, himself — as a White Knight, galloping to the rescue. “As the government’s policy response took effect,” he writes, “the TED spread declined toward normal levels by mid-2009” (p. 405). What rubbish. We’ve already seen why the TED spread went up and then declined again. And although interbank lending itself revived somewhat during the first half of 2009, it declined steadily thereafter, ultimately falling to lower levels than ever.

And the Fed’s “policy response”? According to Bernanke, it had “four main elements: lower interest rates to support the economy, emergency liquidity lending…and the stress-test disclosures of banks’ conditions” (409). Let Kevin Dowd tell you about those idiotic stress tests. As for “lower interest rates,” they were proof, not that the Fed was taking desirable steps, but that it was failing to do so, for although the Fed did get around to reducing its federal funds rate target, its doing so was a mere charade: the equilibrium federal funds rate had long since fallen well below the Fed’s target, and the subsequent moves merely amounted to a belated recognition of that fact, without making any other difference. Finally, although the Fed’s emergency lending aided the loans’ immediate recipients, as well as their creditors, it contributed not a jot to overall liquidity, the very point of IOR having been — as Bernanke himself admits, and as I explained in my first post on this topic — to prevent it from doing so!

As the next chart shows, IOR, besides contributing to the collapse of interbank lending, also played an important part in the dramatic increase in the banking system reserve ratio. The vertical lines represent the same three dates as those referred to in the very first chart. Although the ratio did rise considerably following Lehman’s failure, it rose even more dramatically — and, quite unlike the TED spread, never recovered again — after the Fed started paying interest on excess reserves:

To better understand what went on, here is another diagram, this one showing banks’ choice of optimal reserve and liquid asset ratios as a function of the interest paid on bank reserves:

In the diagram, the vertical axis represents the interest rate on reserve balances, in basis points, while the horizontal axis represents the reserve-deposit ratio. The picture shows two upward-sloping schedules. The first is for reserve balances at the Fed, while the second is for liquid assets more generally, here meaning (for simplicity’s sake) reserves plus T-bills. The horizontal line shows the yield on T-bills at the time of implementation of IOR, here assumed to be a constant 20 basis points. The two dots, finally, represent equilibrium ratios, the first (at the lower left) for before the crisis and IOR, the other for afterwards. Note that the high post-IOR ratio reflects not just the interest-sensitivity of reserve demand but also the fact that, with IOR set at 25 basis points, reserves dominate T-bills. Thus, although the demand for excess reserves may not be all that interest sensitive so long as the administered interest rate on reserves is less than the rate earned by other liquid assets, that demand can jump considerably if that rate is set above rates on liquid and safe securities.
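The jump between the two equilibrium dots can be mimicked with a stylized demand schedule: desired reserve holdings rise only gently with the administered rate while T-bills still yield more, then jump once IOR exceeds the T-bill yield and reserves dominate. The functional form and numbers below are purely illustrative, not estimates:

```python
TBILL_YIELD_BP = 20  # assumed constant T-bill yield, as in the diagram

def desired_reserve_ratio(ior_bp, tbill_bp=TBILL_YIELD_BP):
    """Stylized reserve-deposit ratio as a function of the IOR rate (bp)."""
    if ior_bp < tbill_bp:
        # modest interest sensitivity while T-bills still pay more than reserves
        return 0.02 + 0.001 * ior_bp
    # reserves dominate T-bills: demand jumps as banks substitute into reserves
    return 0.02 + 0.001 * ior_bp + 0.10

print(round(desired_reserve_ratio(0), 3))   # -> 0.02  (pre-IOR equilibrium)
print(round(desired_reserve_ratio(25), 3))  # -> 0.145 (IOR above the 20 bp yield)
```

The discontinuity at the T-bill yield is what makes a seemingly small administered rate of 25 basis points capable of producing a large shift in reserve demand.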

The last chart I’ll trouble you with today tracks changes in total commercial bank reserves, interbank loans, Treasury and agency securities, and commercial and industrial loans, from mid-2006 through mid-2009, this time with a single vertical line only, for October 22, 2008, when IOR was in full effect:

The chart shows clearly how the beginning of IOR coincided, not only with a substantial decline in interbank lending (green line), but also with a leveling-off of other sorts of bank lending, which later became a pronounced decline. For illustration’s sake, the chart shows the course of C & I lending only; other sorts of bank lending fell off even more.

Don’t get the wrong idea: I don’t wish to suggest that IOR was responsible for the post-2008 decline in bank lending, apart from overnight lending to other banks. There’s little doubt that that decline mainly reflects the effects of both a declining demand for credit and much stricter regulation of bank lending, especially as Dodd-Frank and Basel III came into play. Nor do I believe that merely eliminating IOR, as opposed to either reducing the regulatory burdens on bank lending, or resorting to negative IOR (as some European central banks have done), or both, would have sufficed to encourage any substantial increase in bank balance sheets, and especially in bank lending, after 2009, when most estimates (including the Fed’s own) have “natural” interest rates sliding into negative territory. But as I noted in my first post in this series, when IOR was first introduced, natural rates were, according to these same estimates, still positive. And one thing IOR certainly did do, both before 2009 and afterwards, was to allow banks, and some banks more than others, to treat trillions in new reserves created by the Fed starting in October 2008, not as an inducement to expand their balance sheets, but as a direct source of risk- and effort-free income. (Note, by the way, how, just before IOR was introduced, but after the Fed stopped sterilizing its emergency loans, bank loans and security holdings did in fact increase along with reserves.)

Moreover, it’s evident that the FOMC itself, rightly or wrongly, sees IOR as continuing to play a crucial part in limiting banks’ willingness to expand credit. Otherwise, how can one possibly understand that body’s decision last month to raise the rate of IOR (and, with it, the upper bound of its federal funds rate target range) from 25 to 50 basis points? That decision, recall, was aimed at making sure that bank credit expansion would not progress to the point of causing inflation to exceed the Fed’s 2 percent target:

The Committee judges that there has been considerable improvement in labor market conditions this year, and it is reasonably confident that inflation will rise, over the medium term, to its 2 percent objective. Given the economic outlook, and recognizing the time it takes for policy actions to affect future economic outcomes, the Committee decided to raise the target range for the federal funds rate to 1/4 to 1/2 percent. The stance of monetary policy remains accommodative after this increase, thereby supporting further improvement in labor market conditions and a return to 2 percent inflation.

Bernanke’s implementation and defense of IOR would be bad enough on their own; they are all the more remarkable given his professed determination to avoid repeating the mistakes the Fed made during the Great Depression. “[M]ost of my colleagues and I were determined,” he says, “not to repeat the blunder the Federal Reserve had committed in the 1930s when it refused to deploy its monetary tools to avoid a sharp deflation that substantially worsened the Great Depression” (p. 409). Among the Fed’s more notorious errors during that calamity were its failure to expand its balance sheet sufficiently, through open-market purchases or otherwise, to offset the dramatic, panic-driven collapse in the money multiplier during the early 1930s, and its recovery-scuttling decision to double reserve requirements in 1936-7.

Of course, Bernanke’s Fed didn’t commit the very same mistakes committed by the Fed of the 1930s. But, as David Beckworth had already recognized by October 29, 2008, it made remarkably similar ones that also resulted in a collapse of credit. “History,” Bernanke credits Mark Twain with saying, “does not repeat itself, but it rhymes” (p. 400). If you ask me, Bernanke himself was a far better versifier — and a far worse central banker — than he and his many champions realize.

[Cross-posted from Alt-M.org]

In an interview with the New York Times editorial board, Donald Trump said that he would impose a 45% tariff on all goods from China.  That is, to put it mildly, a really bad idea, and a number of commentators have already taken it to task. 

I’d like to focus on what Donald Trump said immediately after he proposed a 45% tax on Americans who buy things that were made in China.  According to the New York Times:

Mr. Trump added that he’s “a free trader,” but that “it’s got to be reasonably fair.”

Unfortunately, Trump is far from the first politician to adopt the “free but fair trade” line.  Many of the other candidates vying for the Republican nomination for President in 2016 have employed the trope in some form this election cycle. (See, for example, Mike Huckabee, Rick Santorum, Bobby Jindal, Carly Fiorina, Jeb Bush, Marco Rubio.)  Some of these candidates, like Trump, are economic nationalists who openly advocate mercantilist economic policy. 

Many of the others are merely politicians.  They probably believe in the value of free trade, but when departing from that principle is politically expedient, they need some way to defend themselves.  Vague appeals to “fairness” are uniquely well-suited to that task.

The alternative to free trade is not “fair trade.”  The choice is between free trade and protectionism.  If you think a small amount of protectionism is good, then you do not support free trade.  This is important because the economic case for free trade is a case for free trade, not partially free trade or mostly free trade. 

Tariffs, quotas, and subsidies are bad economic policy in each instance.  Some trade barriers are more harmful than others, but there is not a correct quantum of protectionism you can support while still being a “free trader.”

Perhaps it’s a good thing that even when espousing protectionism, political candidates feel the need to proclaim their support for economic liberty. Still, it would be better if their hypocrisy were called out more often.

Today, the Supreme Court heard oral argument in Friedrichs v. California Teachers Association, a challenge to public-sector unions’ ability to extract forced dues from non-members. As my colleague Ilya Shapiro writes, and Ian Millhiser at Think Progress agrees, the Court seems poised to strike down “fair share” fees for public-sector workers who do not want to join the union. This would essentially mean that “right to work” would be constitutionally mandated for public-sector workers.

Such a ruling would correct a 40-year-old mistake the Court made in Abood v. Detroit Board of Education. There, the Court ruled that public-sector union dues can be meaningfully separated into the “political” and the “non-political,” and that, while the First Amendment forbids forcing people to support political causes with which they disagree, public-sector unions can extract a “fair share” fee for non-political purposes.

From the very beginning, this distinction was under attack. As Justice Lewis Powell wrote in concurrence in Abood:

Collective bargaining in the public sector is “political” in any meaningful sense of the word. This is most obvious when public-sector bargaining extends … to such matters of public policy as the educational philosophy that will inform the high school curriculum. But it is also true when public-sector bargaining focuses on such “bread and butter” issues as wages, hours, vacations, and pensions.

In other words, public-sector unions are just another political special interest that seeks favors from the government, and what they can’t get at the ballot box they’ll get at the bargaining table.

Yet, if public-sector unions are just another special interest group, then why does the government give them the extraordinary privilege of extracting dues from non-members? Parents don’t get this privilege. Trade associations don’t get this privilege. Non-profits don’t get this privilege. In fact, unions are the only special interest group in American society that gets this privilege.

The primary argument in favor of forced agency fees is the “free rider” argument–namely, that those who don’t contribute to the union will be allowed to free ride on those who do. But, as pro-union Professor Clyde Summers once pointed out, this is essentially what happens in all types of private associations:

Why is it not applicable to a wide range of private associations? If a community association engages in a clean-up campaign or opposes encroachments by industrial development, no one suggests that all residents or property owners who benefit be required to contribute. If a parent-teacher association raises money for the school library, assessments are not levied on all parents. If an association of university professors has as a major function bringing pressure on universities to observe standards of tenure and academic freedom, most professors would consider it an outrage to be required to join. If a medical association lobbies against regulation of fees, not all doctors who share in the benefits share in the costs.

The government-thumb-on-the-scale, forced-dues privilege that unions enjoy should give us pause. It seems positively un-democratic for the government to grant such an extraordinary privilege to one group, or possibly just un-republican.

The Guarantee Clause, or the “Republican Form of Government Clause,” can be found in Article IV, Section 4 of the Constitution. It reads: “The United States shall guarantee to every State in this Union a Republican Form of Government…” Writing in the Heritage Guide to the Constitution, Rob Natelson, one of the foremost originalist scholars, writes that a “Republican Form of Government” means three things: 1) “popular rule, broadly understood,” 2) no monarch, and 3) the rule of law.

In practice, the Guarantee Clause is one of those constitutional clauses that is so vague it is rendered essentially unenforceable. On top of this, the Supreme Court, in one of its most interesting cases, ruled that the clause is not justiciable by the courts and is therefore only a political question for Congress. That case, Luther v. Borden, concerned the Dorr Rebellion, a virtual coup in 1840s Rhode Island (really).

So my comments here should be read in light of the fact that the Guarantee Clause is severely under-theorized. Yet, it seems not absurd to argue that, if “republican form of government” means anything, it means that the government cannot privilege one interest group over another. This would broadly accord with Natelson’s concept of “popular rule, broadly understood.”

The Guarantee Clause is mostly dormant. It was recently revived, however, by teachers unions and other organizations seeking to overturn Colorado’s Taxpayer Bill of Rights (TABOR). Kerr v. Hickenlooper, in which Cato has filed two briefs, is a challenge to Colorado’s method of raising taxes only through popular approval by the people. By taking tax hikes out of the hands of elected representatives, the argument goes, Colorado no longer has a “republican” form of government.

This seems like a stretch, but so is any argument based on the Guarantee Clause. If the Supreme Court preserves forced agency fees for public-sector unions, however, it may be worth looking into whether a Guarantee Clause argument might be made.

The U.S. is allied with every major industrialized power on the planet. America’s friends in Asia and Europe generally are prosperous and populous. Yet decades after the conflicts which led to Washington’s security guarantees for them, the allied gaggle remains a bunch of “losers,” to paraphrase Donald Trump.

Last week North Korea staged its fourth nuclear test. Naturally, South Korea and Japan reacted in horror. But it was America which acted.

The U.S. sent a Guam-based B-52 wandering across South Korean skies. “This was a demonstration of the ironclad U.S. commitment to our allies in South Korea, in Japan, and to the defense of the American homeland,” opined Adm. Harry B. Harris, Jr., head of Pacific Command.

Unfortunately, the message might not work as intended. CNN’s Will Ripley reported from Pyongyang that “A lot of North Korean military commanders find U.S. bombers especially threatening, given the destruction here in Pyongyang during the Korean War, when much of the city was flattened.” Which sounds like giving the North another justification for building nuclear weapons.

Worse, though, reported Reuters: “The United States and its ally South Korea are in talks toward sending further strategic U.S. assets to the Korean peninsula.” Weapons being considered include an aircraft carrier, B-2 bombers, F-22 stealth fighters, and submarines.

A better response would be for Seoul to announce a major military build-up. The Republic of Korea should boost its military outlays—which accounted for a paltry 2.4 percent of GDP in 2014, about one-tenth the estimated burden borne by the North. The ROK also should expand its armed forces from about 655,000 personnel today to a number much closer to the DPRK’s 1.2 million.

Doing so obviously would be a burden. But if the economic wreck to its north can create such a threatening military, why can’t the ROK, which enjoys a roughly 40-1 economic and 2-1 population advantage, meet the challenge?

South Korea is not alone. Japan has been another long-term defense welfare client of the U.S. Only under Prime Minister Shinzo Abe has Japan begun to do more, mostly because his government is no longer convinced that the U.S. will forever subsidize Japan’s defense.

Alas, the Europeans have not yet come to that conclusion. NATO sets a two percent of GDP standard for military outlays, yet the 2015 European member average was just 1.5 percent. Only four European states hit two percent.

Moscow’s aggressive behavior against Georgia and especially Ukraine set off all sorts of angst throughout Europe. U.S. officials and NATO leaders made their usual calls for members to hike military outlays, but most European states did what they usually do: continued to cut spending.

Under normal circumstances European behavior would be mystifying. The European Union demonstrates the continent’s ability to overcome historic national divisions and collaborate for a common purpose.

Collectively the Europeans enjoy around an 8-1 economic and 3-1 population advantage over Moscow. Even after its recent revival, Russia’s military today is a poor replica of that during the Soviet era.

Yet when Moscow acts against non-NATO members Europe’s eyes turn to Washington for military relief. Instead of acting in their presumed interests, they push for U.S. action.

Washington’s allies generally are a pathetic lot. Benefiting from sizeable and capable populations and enjoying large and advanced economies, they nevertheless can’t be bothered to invest heavily in their own defense.

When troubles arise U.S. friends expect the American cavalry, in the form of a B-52 in Korea this time, to arrive. As a result, I argue on National Interest online, “the U.S. is expected to defend much of the globe. And the bulk of Washington’s over-size military outlays are to project power for the benefit of its ne’er-do-well allies.”

In the years ahead Washington should take a page from the Trump play-book and choose as allies a few “winners,” nations whose friendship actually makes America more secure. The U.S. should stop treating national security as a form of welfare for other states.

Kim Jong-un’s gift to the world is North Korea’s fourth nuclear test. Washington should respond by backing away from a potential conflict that is not its own.

Although Western intelligence widely disbelieves the DPRK’s claim to have tested a thermonuclear device, or H-bomb, Kim Jong-un has clearly demonstrated that nothing will dissuade the regime from expanding and improving its nuclear arsenal.

The North’s action has led to widespread demands for action. Alas, no one has good ideas about what to do.

Pyongyang again ignored “the international community” because “the international community” has no cost-effective means to restrain the DPRK. Although Ashton Carter, as assistant secretary of defense, advocated military strikes against North Korean nuclear facilities, most people on and off the Korean peninsula don’t believe the answer to a potential war is to start an almost certain war.

Sanctions long have been the West’s go-to answer. Congress already is considering three different enhanced sanctions bills and the UN Security Council is planning new economic penalties.

But the North has never let public hardship get in the way of its political objectives. So far the People’s Republic of China has refused to encourage regime collapse by cutting economic ties and eliminating energy and food support. Moreover, Russia, with a newly revived relationship with the DPRK, insisted that any response be “appropriate” and “proportionate.”

Whether there ever was a chance to negotiate away the North’s nascent nuclear program may be impossible to know. But virtually no one believes the Kim regime is willing to eliminate existing weapons developed at high cost.

So what to do?

  1. Recognize that not every problem is America’s problem. North Korea matters a lot more to its neighbors than to the U.S. Indeed, Pyongyang wouldn’t be constantly tossing imprecations and threats toward Washington, if the U.S. didn’t have troops on its border and abundant air and naval forces pointed the DPRK’s way.
  2. Withdraw American conventional forces from the peninsula. The Republic of Korea, with twice the population and upwards of 40 times the economic strength of the North, is well able to provide for its own defense. U.S. troops act as nuclear hostages, unnecessarily put in harm’s way without constraining North Korean nuclear activities.
  3. Seek to persuade Beijing to pressure the North out of Beijing’s own interest. Washington’s only chance of enlisting China’s help is by addressing its concerns: the impact of a potentially violent implosion spurring conflict and refugees across the Yalu, the loss of China’s economically advantageous position in the North, and the creation of a united Korea allied with America that would aid Washington’s containment efforts. This requires negotiating with the PRC.
  4. Offer to establish diplomatic relations with North Korea. Engagement might not change anything, but then, we can be certain that nothing will change if we maintain the same policy toward the North.
  5. Indicate that continuing expansion of Pyongyang’s nuclear arsenal would force Washington to reconsider its position on proliferation. After all, the U.S. does not want to be left extending a nuclear umbrella over South Korea, Japan, Taiwan, Australia, and who knows who else against a nuclear-armed North Korea, China, and Russia. Better to extricate America from such a miasma and allow its allies to create their own nuclear deterrents. If that prospect bothers the PRC, then it should do more to prevent the DPRK from continuing on its present course.

North Korea has become a seemingly insoluble problem for Washington. Nothing the U.S. can do, at least at reasonable cost, is likely to create a democratic, friendly, non-nuclear DPRK.

But as I point out on National Interest: “Washington can share the nightmare, turning South Korea’s defense over to Seoul and nuclear proliferation over to the North’s neighbors, particularly China. Moreover, Washington can diminish North Korean fear and hostility by establishing diplomatic ties, just as America had official relations with the Soviet Union and its Eastern European allies during the Cold War.”

The geopolitics still would be messy. But no longer would it be America’s responsibility to clean up.

The conventional wisdom is that Justice Scalia is the swing vote in Friedrichs v. California Teachers Association, but he gave no indication at this morning’s argument that he was anywhere but on the plaintiffs’ side. Chief Justice Roberts and Justice Kennedy – other potential defectors from the pro-workers, anti-compelled-speech side – were similarly solid. With Justice Alito having written the two recent labor-related opinions, the most likely fifth vote for the unions (supported by California and the United States) becomes Justice Thomas, but only because he said nothing, as is his wont.

Not surprisingly, the biggest issue for the more conservative justices was the matter of compulsion: why should non-union members in the public sector be forced to pay “agency fees” for so-called collective bargaining when (a) all issues that are collectively bargained by public-sector unions are matters of public policy (not simply wages and conditions of labor as in the private sector), and (b) those workers disagree with the supposed “benefits” that the unions want them to pay for (e.g., tenure protections versus merit pay). “Is it even okay to force someone to contribute to a cause you do believe in?”, asked Justice Scalia. “We’re not talking about free riders, but compelled riders,” posited Justice Kennedy.

“Since public employment contracts are submitted for public comment, that suggests this is different than private-sector collective bargaining,” explained Chief Justice Roberts, who was silent during the plaintiffs’ half of the argument and an active questioner of the union and governments (typically a sign of agreement with the former and disagreement with the latter). 

While the progressive justices focused on the importance of stare decisis – respecting precedent and the reliance interests built up around it – that didn’t appear to be a major concern for anyone else, regardless of the age of the ruling that’s now under attack (Abood v. Detroit Board of Education from 1977). “Everything that’s collectively bargained [in the public sector] is necessarily a political question,” thundered Justice Scalia in describing why a ruling to strike down agency fees would even comport with Abood’s statement that states can’t force workers “to contribute to the support of an ideological cause [they] may oppose as a condition of holding a job.”

In other words, to the extent we can predict anything based solely on oral argument – take this with a mine of salt – I’d much rather be us (those who support the teachers) than them (those who support the teachers’ union and state and federal governments). If that’s how the case goes, it would be a huge victory for workers’ rights, the First Amendment, and educational freedom – and probably the most important ruling this term. 

We’ll find out by the end of June.

For background and commentary about the case, see this two-minute primer, Cato’s brief, my two recent op-eds, and this podcast.

When China joined the World Trade Organization in 2001, it agreed that other members would be allowed to apply “nonmarket economy methodology” in antidumping cases against Chinese goods for 15 years.  That deadline will soon pass in December 2016, but the Financial Times reported recently that U.S. officials are actively pressuring their European counterparts to continue using NME methodology indefinitely.  The report is disappointing but not at all surprising.

Given Washington’s long history of actively and intentionally violating WTO antidumping rules, most experts have guessed that the United States would not change its practices at the end of 2016 to comply with WTO rules.

It’s important to realize at the outset that the U.S. antidumping law is bad policy that exists to protect a handful of politically powerful U.S. industries from legitimate competition.  My colleague Dan Ikenson has thoroughly catalogued the numerous fallacies used to support antidumping in general, the myriad abuses of the U.S. government, and the particularly nonsensical nature of nonmarket economy treatment.

I wrote a Cato Policy Analysis in October 2014 explaining the history of NME status as an excuse for lawless protectionism.  I also spelled out some of the possible paths the United States could take following the 2016 expiration of China’s NME status at the WTO and what the legal consequences of each would be.  At the time, I thought the most likely outcome would be no change in practice, resulting in years of embarrassing trade litigation at the WTO in which the United States would be continually called out for violating trade rules.  According to the Financial Times, that’s exactly what they’re planning to do:

The Obama administration … is advocating a policy of inaction, which would force China to bring a challenge in the WTO and thus put the onus on Beijing to prove that its state-heavy economic model has met all the criteria for [market economy status].

Unfortunately, the article perpetuates a frustrating myth that advocates of the status quo use to misdirect the debate.  Whether China’s economic model meets the criteria laid out in U.S. or EU law for market economy or nonmarket economy treatment is irrelevant.  Those criteria are not part of WTO law. 

The WTO Antidumping Agreement lays out detailed rules for how members can implement antidumping measures, and the use of NME methodology is plainly inconsistent with those rules.  China’s accession protocol to the WTO exempts members from some of those rules until December 2016.  After that, the United States, the European Union, or any other WTO member that uses NME methodology against Chinese goods will be violating global trade rules.

To be blunt, the United States uses NME methodology against Chinese imports because it provides for more protectionist outcomes, not because China doesn’t have a market economy. Whether China qualifies as a market economy under any set of criteria will have no impact on WTO rules or U.S. practice.

What’s more, it’s clear that China considers resolving the NME issue to be an important international economic goal this year.  Ending NME treatment on time would smooth over relations and enable the United States to work on more important bilateral issues. 

Antidumping duties on imports from China harm American consumers and businesses by making the things we buy more expensive while privileging inefficient, rent-seeking domestic industries.  Rather than kowtowing to special interests, the U.S. government should promote economic growth and international peace by ending the NME charade as soon as possible.

In less than an hour, the U.S. Supreme Court will hear oral arguments in one of the most important cases of the year, Friedrichs v. California Teachers Association. The plaintiffs in Friedrichs are ten California teachers who are suing their union because they believe that laws forcing government employees to join a union or pay them “agency fees” as a condition of employment violate their First Amendment right to free speech, which includes the freedom not to speak, and not to be compelled to subsidize the speech of others.

SCOTUS has previously held that the agency fees may cover collective bargaining activities but not the unions’ political activities. However, as the plaintiffs argue, public-sector collective bargaining is inherently political. For example, more funding for teachers means higher taxes or less money for public parks, etc. The Cato Institute has filed an amicus brief in support of the plaintiffs, and several Cato legal eagles, such as Ilya Shapiro, Andrew Grossman, and Trevor Burrus, have already weighed in. 

Much of the constitutional analysis floating around the interwebs has focused on whether or not overcoming the supposed “free rider” problem constitutes sufficient grounds for states to grant unions the right to expropriate funds from non-members to cover collective bargaining activities that supposedly benefit them. Champions of free speech have generally attacked the other side’s strongest case; their arguments therefore assume that all teachers do, in fact, benefit from that collective bargaining, but that freedom of speech entails the freedom not to be forced to pay for someone else to advocate even on your supposed behalf. In an op-ed for the Orange County Register, however, Ilya Shapiro and I explain how collective bargaining can actually come at the expense of some teachers:

[E]ven if collective bargaining weren’t inherently political, it’s easy to see how workers could object to the supposed “benefits” negotiated on their behalf. For example, a teacher might prefer higher pay to tenure protections, or a defined-contribution pension plan – such as a 401(k) – to one that has defined benefits.

There are countless ways in which union-negotiated contracts or laws that the unions lobbied to enact can actually harm the interests of individual teachers. For example, “last-in, first-out” laws protect long-serving teachers regardless of ability at the expense of talented, young teachers. Worse, as we explain, such contracts and laws can harm the interests of the very children our education system is supposed to be designed to serve:

Collective bargaining also can come at the expense of students. When schools lack high-quality math teachers because the union contract requires they be paid the same amount as gym teachers, kids lose out. And when that contract has “last in, first out” (LIFO) rules that force a district to lay off a talented young teacher before a low-performing teacher with seniority, students suffer.

Last year, a judge in California struck down such tenure and LIFO rules after finding “compelling” evidence that making it hard to fire low-performing teachers had a negative impact on students, especially low-income and minority students. The judge pointed to research by Harvard professor Thomas Kane showing that Los Angeles Unified School District students who were taught by an English teacher in the bottom 5 percent of competence lose the equivalent of several days of learning in a single year relative to students with average teachers.

“Indeed,” the judge concluded, “it shocks the conscience.”

Sadly, the deleterious effects of collectively bargained tenure rules can be serious and long-lasting. In a 2012 study of more than 2.5 million students, Harvard professors Raj Chetty and John Friedman and Columbia professor Jonah Rockoff found that students who had just a single year in a classroom with a teacher in the bottom 5 percent of effectiveness lose approximately $50,000 in potential lifetime earnings relative to students assigned to average teachers.

If the Friedrichs plaintiffs win, it won’t solve all these problems. Some states will still have LIFO rules, teacher salary and benefits schedules, or related matters enshrined in statute. Nevertheless, if the Friedrichs plaintiffs prevail, it will mean that district school teachers will no longer be forced to support advocacy that they believe works against their interests or the interests of their students. In the long run, less funding for such advocacy may well translate into fewer policies that come at the expense of some teachers and students. Ultimately, a win for the plaintiffs in Friedrichs would be a victory for teachers and their students.