Cato Op-Eds

Individual Liberty, Free Markets, and Peace


Hillary Clinton and Sen. Bernie Sanders participate in a Democratic primary debate in Charleston, South Carolina, on Jan. 17, 2016.

In their final debate before they face Democratic primary voters, Hillary Clinton and Bernie Sanders traded sharp jabs on health care. Pundits focused on how the barbs would affect the horse race, whether Democrats should be bold and idealistic (Sanders) or shrewd and practical (Clinton), and how Sanders’ “Medicare for All” scheme would raise taxes by a cool $1.4 trillion. (Per. Year.) Almost no one noticed the obvious: the Clinton-Sanders spat shows that not even Democrats like the Affordable Care Act, and that the law remains very much in danger of repeal.

Hours before the debate, Sanders unveiled an ambitious plan to put all Americans in Medicare. According to his web site, “Creating a single, public insurance system will go a long way towards getting health care spending under control.” Funny, Medicare has had the exact opposite effect on health spending for seniors. But no matter. Sanders assures us, “The typical middle class family would save over $5,000 under this plan.” Remember how President Obama promised ObamaCare would reduce family premiums by $2,500? It’s like that, only twice as ridiculous.

Clinton portrayed herself as the protector of ObamaCare. She warned that Sanders would “tear [ObamaCare] up…pushing our country back into that kind of a contentious debate.” She proposed instead to “build on” the law by imposing limits on ObamaCare’s rising copayments, and by imposing price controls on prescription drugs. Sanders countered, “No one is tearing this up, we’re going to go forward,” and so on.

Such rhetoric obscured the fact that the candidates’ differences are purely tactical. Clinton doesn’t oppose Medicare for All. Indeed, her approach would probably reach that goal much sooner. Because ObamaCare punishes whichever insurers provide the highest-quality coverage, it forces health insurers into a race to the bottom, where they compete to avoid providing quality coverage to the sick. That’s terrible if you or a family member has a high-cost, chronic health condition—or even just an ounce of humanity. But if you want to discredit “private” health insurance in the service of Medicare for All, it’s an absolute boon. After a decade of such misery, voters will beg President (Chelsea) Clinton for a federal takeover. But if President Sanders demands a $1.4 trillion tax hike without first making voters suffer under ObamaCare, he will overplay his hand and set back his cause.

The rhetoric obscured something much larger, too. Clinton and Sanders inadvertently revealed that not even Democrats like ObamaCare all that much, and Democrats know there’s a real chance the law may not be around in four years.

During the debate, Sanders repeatedly noted ObamaCare’s failings: “29 million people still have no health insurance. We are paying the highest prices in the world for prescription drugs, getting ripped off…even more are underinsured with huge copayments and deductibles…we are spending almost three times more than the British, who guarantee health care to all of their people…Fifty percent more than the French, more than the Canadians.”

Sure, he also boasted, repeatedly, that he helped write and voted for the ACA. Nonetheless, Sanders was indicting ObamaCare for failing to achieve universal coverage, contain prices, reduce barriers to care, or eliminate wasteful spending. At least one of the problems he lamented—“even more [people] are underinsured with huge copayments and deductibles”—ObamaCare has made worse. (See “race to the bottom” above, and here.)

When Sanders criticized the U.S. health care system, he was criticizing ObamaCare. His call for immediate adoption of Medicare for All shows that the Democratic party’s left wing is simply not that impressed with ObamaCare, which they have always (correctly) viewed as a giveaway to private insurers and drug companies.

Clinton’s proposals to outlaw some copayments and impose price controls on prescription drugs are likewise an implicit acknowledgement that ObamaCare has not made health care affordable. In addition, her attacks on Sanders reveal that she and many other Democrats know ObamaCare’s future remains in jeopardy.

Seriously, does anyone really think Clinton is worried that something might “push[] our country back into that kind of a contentious debate” over health care? America has been stuck in a nasty, tribal health care debate every day of the six years since Democrats passed ObamaCare despite public disapproval. Or that Republicans would be able to repeal ObamaCare over President Sanders’ veto?

Clinton knows that if the next president is a Republican, all the wonderful, magical powers that ObamaCare bestows upon the elites in Washington, D.C., might disappear.

If we elect a Republican, they’ll roll back all of the progress we’ve made on expanding health coverage. #DemDebate pic.twitter.com/wyeac9unZE

— The Briefing (@TheBriefing2016) January 18, 2016

“I don’t want to see us start over again. I want us to defend and build on the Affordable Care Act and improve it.” —Hillary #DemDebate

— Hillary Clinton (@HillaryClinton) January 18, 2016

And she wants Democratic primary voters to believe she is the only Democrat who can win the White House. “The Republicans just voted last week to repeal the Affordable Care Act,” she warned, “and thank goodness, President Obama vetoed it.”

Clinton’s attacks on Sanders’ health care plan, like her warning about “pushing our country back into that kind of a contentious debate,” are just a sly way of telling Democratic voters: Bernie can’t win. Nominate me and I will protect ObamaCare. Nominate him, and ObamaCare dies.

We can’t afford to undo @POTUS’ progress. Health care for millions of Americans is too important. https://t.co/mdt5rTDsNN

— Hillary Clinton (@HillaryClinton) January 18, 2016

Health care should be a right for every American. We should build on the progress we’ve made with the ACA—not go back to square one.

— Hillary Clinton (@HillaryClinton) January 14, 2016

Perhaps that prediction is correct. Perhaps it isn’t. But it’s plausible.

Either way, ObamaCare was the biggest loser in this Democratic presidential debate.

Ross Douthat and Reihan Salam, two of the smartest conservative thinkers today, have spilt much ink worrying over immigrant assimilation.  Salam is more pessimistic, choosing titles like “The Melting Pot is Broken” and “Republicans Need a New Approach to Immigration” (with the descriptive url: “Immigration-New-Culture-War”) while relying on a handful of academic papers for support.  Douthat presents a more nuanced, Burkean think-piece reacting to assimilation’s supposed decline, relying more on Salam for evidence. 

Their worries fly in the face of recent evidence that immigrant assimilation is proceeding quickly in the United States.  There has never been a greater quantity of expert, timely quantitative research showing that immigrants are still assimilating.

The first piece of research is the National Academy of Sciences’ (NAS) September 2015 book titled The Integration of Immigrants into American Society.  At 520 pages, it’s a thorough, brilliant summation of the relevant academic literature on immigrant assimilation that ties the different strands of research into a coherent story.  Bottom line:  Assimilation is never perfect and always takes time, but it’s going very well.

One portion of the NAS book finds that much assimilation occurs through a process called ethnic attrition, which is driven by immigrant intermarriage with natives of either the same or different ethnic groups.  Assimilation also quickens when second- or third-generation Americans marry those from other, longer-settled ethnic or racial groups.  The children of these intermarriages are much less likely to identify ethnically with their more recent immigrant ancestors and, due to spousal self-selection, tend to be more economically and educationally integrated as well.  Ethnic attrition is one reason why the much-hyped decline of the white majority is greatly exaggerated.

In an earlier piece, Salam focused on ethnic attrition but exaggerated the degree to which it has declined by confusing stocks of ethnics in the United States with the flow of new immigrants.  He also emphasized the decrease in immigrant intermarriage caused by the 1990-2000 influx of Hispanic and Asian immigrants.  That decrease is less dire than he reports.  According to another 2007 paper, 32 percent of Mexican-American men married outside of their race or ethnicity, while 33 percent of Mexican-American women did (I write about this in more detail here).  That’s close to the 1990 rate of intermarriage reported for all Hispanics in the study Salam favored.  The “problem” disappeared.

The second set of research is a July 2015 book entitled Indicators of Immigrant Integration 2015, which analyzes immigrant and second-generation integration on 27 measurable indicators across the OECD and EU countries.  This report finds more problems with immigrant assimilation in Europe, especially for immigrants from outside of the EU, but the findings for the United States are quite positive.

The third work, by University of Washington economist Jacob Vigdor, offers a historical perspective.  He compares modern immigrant civic and cultural assimilation to that of immigrants from the early 20th century (an earlier draft of his book chapter is here; the published version is available in this collection).  For those of us who think early 20th century immigrants from Italy, Russia, Poland, Eastern Europe, and elsewhere assimilated successfully, Vigdor’s conclusion is reassuring:

“While there are reasons to think of contemporary migration from Spanish-speaking nations as distinct from earlier waves of immigration, evidence does not support the notion that this wave of migration poses a true threat to the institutions that withstood those earlier waves.  Basic indicators of assimilation, from naturalization to English ability, are if anything stronger now than they were a century ago” [emphasis added].

American identity in the United States (similar to Australia, Canada, and New Zealand) is not based on nationality or race nearly as much as it is in the old nation states of Europe, likely explaining some of the better assimilation and integration outcomes here.       

Besides ignoring the huge and positive new research on immigrant assimilation, there are a few other issues with Douthat’s piece.

Douthat switches back and forth between Europe and the United States when discussing assimilation, giving the impression that the challenges are similar.  They are not: treating the two as interchangeable adds confusion rather than clarity, and cherry-picking outcomes from Europe to support skepticism about assimilation in the United States misleads.  Assimilation is a vitally important outcome for immigrants and their descendants, but Europe and the United States have vastly different experiences.

Douthat also argues that immigrant cultural differences can persist, just as various regional cultures have persisted in the United States.  That idea, used most memorably in David Hackett Fischer’s Albion’s Seed, is called the Doctrine of First Effective Settlement (DFES).  Under that theory, the creation and persistence of regional cultural differences requires the near-total displacement of the local population by a foreign one, as happened in the early settlement of the United States.

However, DFES actually gives reasons to be optimistic about immigrant assimilation, because Douthat misses a few crucial details when he briefly mentions it.  First, as Fischer and others have noted, waves of immigrants have continuously assimilated into the settled regional American cultures since the initial settlement – that is the point of DFES.  The first effective settlements set the regional cultures going forward, and new immigrants assimilate into those cultures.

Second, DFES predicts that today’s immigrants will assimilate into America’s regional cultures (unless almost all Americans quickly die and are replaced by immigrants).  The American regional cultures that immigrants are settling into are already set so they won’t be able to create persistent new regional cultures here.  America’s history with DFES is not a reason to worry about immigrant assimilation today and should supply comfort to those worried about it.

Immigrants and their children are assimilating well into American society.  We shouldn’t let assimilation issues in Europe overwhelm the vast empirical evidence that it’s proceeding as it always has in the United States.

Just when you thought the Syrian civil war couldn’t get any messier, developments last week proved that it could.  For the first time in the armed conflict that has raged for nearly five years, militia fighters from the Assyrian Christian community in northern Iraq clashed with Kurdish troops. What made that incident especially puzzling is that both the Assyrians and the Kurds are vehement adversaries of ISIS—which is also a major player in that region of Syria.  Logically, they should be allies who cooperate regarding military moves against the terrorist organization.

But in Syria, very little is simple or straightforward.   Unfortunately, that is a point completely lost on the Western (especially American) news media.  From the beginning, Western journalists have portrayed the Syrian conflict as a simplistic melodrama, with dictator Bashar al-Assad playing the role of designated villain and the insurgents playing the role of plucky proponents of liberty.  Even a cursory examination of the situation should have discredited that narrative, but it continues largely intact to this day.

There are several layers to the Syrian conflict.  One involves an effort by the United States and its allies to weaken Assad as a way to undermine Iran by depriving Tehran of its most significant regional ally.  Another is a bitter Sunni-Shiite contest for regional dominance, and Syria is just one theater in that contest.  We see other manifestations in Bahrain, where Iran backs a seething majority Shiite population against a repressive Sunni royal family that is kept in power largely by Saudi Arabia’s military support.  Saudi Arabia and other Gulf powers backed Sunni tribes in western Iraq against the Shiite-dominated government in Baghdad; some of those groups later coalesced to become ISIS.  In Yemen, Saudi Arabia and Riyadh’s smaller Sunni Gulf allies have intervened militarily, determined to prevent a victory by the Iranian-backed Houthis.

The war in Syria is yet another theater in that regional power struggle.  It is no accident that the Syrian insurgency is overwhelmingly Sunni in composition and receives strong backing from major Sunni powers, including Saudi Arabia, Qatar, and Turkey.  Assad leads an opposing “coalition of religious minorities,” which includes his Alawite base (a Shiite offshoot), various Christian sects, and the Druze.  But there is an added element of complexity.  The Kurds form yet a third faction, seeking to create a self-governing (quasi-independent) region in northern and northeastern Syria inhabited by their ethnic brethren.  In other words, Syrian Kurds are trying to emulate what Iraqi Kurds have enjoyed for many years in Iraqi Kurdistan, where Baghdad’s authority is little more than a legal fiction.  That explains the clash between Assyrian Christians and Kurds.  Both hate ISIS, but the former supports an intact Syria (presumably with Assad or someone else acceptable to the coalition in charge), while the latter does not.

Such incidents underscore just how complex the Syrian struggle is and how vulnerable to manipulation well-meaning U.S. mediation efforts might become.  Our news media need to do a far better job of conveying what is actually taking place in that part of the world, not what wannabe American nation builders wish were the case.

Surprise! Venezuela, the world’s most miserable country (according to my misery index), has just released an annualized inflation estimate for the quarter that ended in September 2015. That release is late on two counts. First, it has been nine months since the last estimate was released. Second, September 2015 is not January 2016. So, the newly released inflation estimate of 141.5% is out of date.

I estimate that the current implied annual inflation rate in Venezuela is 392%. That’s almost three times higher than the latest official estimate.

Venezuela’s notoriously incompetent central bank is producing lying statistics – just as the Soviets used to do. In the Soviet days, we approximated reality by developing lie coefficients. We would apply these coefficients to the official data in an attempt to reach reality. The formula is: (official data) X (lie coefficient) = reality estimate. At present, the lie coefficient for the Central Bank of Venezuela’s official inflation estimate is 3.0.
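For readers who want the arithmetic spelled out, here is a minimal sketch in Python using only the figures quoted in this post (the variable names are mine, purely for illustration):

```python
# A minimal sketch of the lie-coefficient arithmetic described above,
# using only the figures quoted in this post.

official_rate = 141.5   # the central bank's annualized inflation estimate, percent
implied_rate = 392.0    # my estimate of the implied annual inflation rate, percent

# (official data) X (lie coefficient) = reality estimate, so the coefficient
# is simply the ratio of the reality estimate to the official figure.
lie_coefficient = implied_rate / official_rate
print(f"Lie coefficient: {lie_coefficient:.2f}")  # ~2.77, i.e., roughly 3.0 as stated above
```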

Some constitutional conservatives, including Texas Gov. Greg Abbott and Rob Natelson for the American Legislative Exchange Council, have been promoting the idea of getting two-thirds of the states to call for an Article V convention to propose amendments to the U.S. Constitution. Florida senator and presidential candidate Marco Rubio recently made headlines by endorsing the notion. But I fear that it’s not a sound one under present conditions, as I argue in a new piece this week (originally published at The Daily Beast, now reprinted at Cato).  It begins:

In his quest to catch the Road Runner, the Coyote in the old Warner Brothers cartoons would always order supplies from the ACME Corporation, but they never performed as advertised. Either they didn’t work at all, or they blew up in his face.

Which brings us to the idea of a so-called Article V convention assembled for the purpose of proposing amendments to the U.S. Constitution, an idea currently enjoying some vogue at both ends of the political spectrum.

Jacob Sullum at Reason offers a quick tour of some of the better and worse planks in Gov. Abbott’s “Texas Plan” (as distinct from the question of whether a convention is the best way of pursuing them).  In using the phrase “Texas Plan,”  Gov. Abbott recognizes that in a convention scenario where any and all ideas for amendments are on the table, other states would be countering with their own plans; one can readily imagine a “California Plan” prescribing limits on campaign speech and affirmative constitutional rights to health and education, a “New Jersey Plan” to narrow the Second Amendment and broaden the General Welfare clause, and so forth. Much more on the convention idea in this Congressional Research Service report from 2014 (post adapted and expanded from Overlawyered).

Cato has published often in the past on the difficulties and inefficiencies of the constitutional amendment process, including Tim Lynch’s 2011 call for amending the amendment process itself and Michael Rappaport’s Policy Analysis No. 691 in 2012, which offered proposals of similar intent. This past December’s Cato Unbound discussion led by Prof. Sanford Levinson included a response essay by Richard Albert describing the founding document as “constructively unamendable” at present, although as a consequence of current political conditions and “not [as] a permanent feature of the Constitution.” And to be fair, I should also note that Ilya Shapiro had a 2011 post in this space with a perspective (or at least a choice of emphasis) different from mine.

I’m not known for my clairvoyance – it would be impossible to make a living predicting what the Supreme Court will do – but as the latest round of birtherism continues into successive news cycles, I do have an odd sense of “deja vu all over again.” Two and a half years ago, I looked into Ted Cruz’s presidential eligibility and rather easily came to the conclusion that, to paraphrase a recent campaign slogan, “yes, he can.” Here’s the legal analysis in a nutshell:

In other words, anyone who is a citizen at birth — as opposed to someone who becomes a citizen later (“naturalizes”) or who isn’t a citizen at all — can be president.

So the one remaining question is whether Ted Cruz was a citizen at birth. That’s an easy one. The Nationality Act of 1940 outlines which children become “nationals and citizens of the United States at birth.” In addition to those who are born in the United States or born outside the country to parents who were both citizens — or, interestingly, found in the United States without parents and no proof of birth elsewhere — citizenship goes to babies born to one American parent who has spent a certain number of years here.

That single-parent requirement has been amended several times, but under the law in effect between 1952 and 1986 — Cruz was born in 1970 — someone must have a citizen parent who resided in the United States for at least 10 years, including five after the age of 14, in order to be considered a natural-born citizen. Cruz’s mother, Eleanor Darragh, was born in Delaware, lived most of her life in the United States, and gave birth to little Rafael Edward Cruz in her 30s. Q.E.D.
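As a rough, purely illustrative sketch (not a legal tool), the statutory test described above reduces to a simple check. The residence figures plugged in below for Cruz’s mother are hypothetical stand-ins, since the column notes only that she was born in Delaware and lived most of her life in the United States:

```python
# Toy sketch of the 1952-1986 rule described above, for a child born abroad
# to one U.S.-citizen parent. The function name and figures are illustrative only.

def citizen_at_birth(parent_is_us_citizen, years_resident_in_us, years_after_age_14):
    """A citizen parent with at least 10 years of U.S. residence,
    at least 5 of them after age 14."""
    return (parent_is_us_citizen
            and years_resident_in_us >= 10
            and years_after_age_14 >= 5)

# Eleanor Darragh: U.S.-born, lived most of her life in the United States.
print(citizen_at_birth(True,
                       years_resident_in_us=30,   # hypothetical figure
                       years_after_age_14=15))    # hypothetical figure -> True
```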

We all know that this wouldn’t even be a story if it weren’t being pushed by the current Republican frontrunner (though Cruz is beating Trump in the latest Iowa polls). Nevertheless, here we are.

For more analysis and a comprehensive set of links regarding this debate, see Jonathan Adler’s excellent coverage at the Volokh Conspiracy.

Of course we’re referring to Hurricane Alex here, which blew up in far eastern Atlantic waters thought to be way too cold to spin up such a storm.  Textbook meteorology says hurricanes, which feed off the heat of the ocean, won’t form over waters cooler than about  80°F.  On the morning of January 14, Alex exploded over waters that were a chilly 68°.

Alex is (at least) the third hurricane observed in January, with others in 1938 and 1955.  The latter one, Hurricane Alice2, was actually alive on New Year’s Day.

The generation of Alex was very complex.  First, a garden-variety low pressure system formed over the Bahamas late last week and slowly drifted eastward.  It was derived from the complicated, but well-understood processes associated with the jet stream and a cold front, and that certainly had nothing to do with global warming.

The further south cold fronts push into the tropical Atlantic, the more likely they are to simply dissipate, and that’s what happened last week, too.  Normally the associated low pressure would also wash away.  But after it initially formed near the Bahamas and drifted eastward, the low sat in a region where sea-surface temperatures (SSTs) are running about 3°F above the long-term average, consistent with a warmer world. This may have been just enough to fuel the persistent remnant cluster of thunderstorms that meandered in the direction of Spain.

Over time, the National Hurricane Center named this collection “Alex” as a “subtropical” cyclone, which is what we call a tropical low pressure system that doesn’t have the characteristic warm core of a hurricane.

(Trivia note:  the vast majority of cyclones in temperate latitudes have a cold core at their center.  Hurricanes have a warm core.  There was once a move to call the subtropical hybrids “himicanes” (we vote for that!), then “neutercanes” (not bad, either) but the community simply adopted the name “subtropical.”)

In the early hours of January 14, thanks to a cold low pressure system propagating through the upper atmosphere, temperatures above the storm plummeted to a rather astounding -76°F.  So even though the SSTs were a mere 68°, far too cold to promote a hurricane, the difference between the surface and the upper atmosphere was a phenomenal 144°, large enough that a hurricane could form.

Vertical motion, which is what builds the big storm clouds that form the core of a hurricane, is greatest when the temperature difference between the surface and the upper atmosphere is largest. That 144° differential exploded the storms within subtropical Alex, quickly creating a warm core and a hurricane eyewall.

A far-south invasion of such cold air over the Atlantic subtropics is less likely in a warmer world, as the pole-to-equator temperature contrast lessens.  Everything else being equal, that would tend to confine such an event to higher latitudes.

So, yes, warmer surface temperatures may have kept the progenitor storms of Alex alive, but warmer temperatures would have made the necessary outbreak of extremely cold air over the storm less likely.

Consequently, it’s really not right to blame global warming for Hurricane Alex, though it may have contributed to subtropical storm Alex.

On December 1, 2015, the Bank of England released the results of its second round of annual stress tests, which aim to measure the capital adequacy of the UK banking system. This exercise is intended to function as a financial health check for the major UK banks, and purports to test their ability to withstand a severe adverse shock and still come out in good financial shape.

The stress tests were billed as severe. Here are some of the headlines:

“Bank of England stress tests to include feared global crash”
“Bank of England puts global recession at heart of doomsday scenario”
“Banks brace for new doomsday tests”

This all sounds pretty scary. Yet the stress tests appeared to produce a comforting result: despite one or two small problems, the UK banking system as a whole came out of the process rather well. As the next batch of headlines put it:

“UK banks pass stress tests as Britain’s ‘post-crisis period’ ends”
“Bank shares rise after Bank of England stress tests”
“Bank of England’s Carney says UK banks’ job almost done on capital”

At the press conference announcing the stress test results, Bank of England Governor Mark Carney struck an even more reassuring note:

The key point to take is that this [UK banking] system has built capital steadily since the crisis. It’s within sight of [its] resting point, of what the judgement of the FPC is, how much capital the system needs. And that resting point — we’re on a transition path to 2019, and we would really like to underscore the point that a lot has been done, this is a resilient system, you see it through the stress tests.[1] [italics added]

But is this really the case? Let’s consider the Bank’s headline stress test results for the seven financial institutions involved: Barclays, HSBC, Lloyds, the Nationwide Building Society, the Royal Bank of Scotland, Santander UK and Standard Chartered.

In this test, the Bank sets its minimum pass standard equal to 4.5%: a bank passes the test if its capital ratio as measured by the CET1 ratio — the ratio of Common Equity Tier 1 capital to Risk-Weighted Assets (RWAs) — is at least 4.5% after the stress scenario is accounted for; it fails the test otherwise.
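The mechanics of that test are simple enough to sketch in code. The Python below implements the pass/fail rule with made-up post-stress ratios; the bank names and numbers are hypothetical, not the Bank’s results:

```python
# Minimal sketch of the Bank's first hurdle: a bank passes if its post-stress
# ratio of Common Equity Tier 1 capital to risk-weighted assets is at least 4.5%.
# The banks and ratios below are hypothetical.

PASS_STANDARD = 4.5  # percent

def stress_result(post_stress_cet1_ratio):
    """Return (passed, surplus over the pass standard in percentage points)."""
    surplus = post_stress_cet1_ratio - PASS_STANDARD
    return surplus >= 0.0, surplus

for bank, ratio in {"Bank A": 7.6, "Bank B": 5.9, "Bank C": 5.4}.items():
    passed, surplus = stress_result(ratio)
    print(f"{bank}: {'pass' if passed else 'fail'} (surplus {surplus:+.1f} pp)")
```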

The outcomes are shown in Chart 1:

Chart 1: Stress Test Outcomes for the CET1 Ratio with a 4.5% Pass Standard

Note: The data are obtained from Annex 1 of the Bank’s stress test report (Bank of England, December 2015).

Based solely on this test, the UK banking system might indeed look to be in reasonable shape. Every bank passes the test, although one (Standard Chartered) does so by a slim margin of under 100 basis points and another (RBS) does not perform much better. Nonetheless, according to this test, the UK banking system looks broadly healthy overall.

Unfortunately, that is not the whole story.

One concern is that the RWA measure used by the Bank is essentially nonsense — as its own (now) chief economist demonstrated a few years back. So it is important to consider the second set of stress tests reported by the Bank, which are based on the leverage ratio. This is defined by the Bank as the ratio of Tier 1 capital to leverage exposure, where the leverage exposure attempts to measure the total amount at risk. We can think of this measure as similar to total assets.

In this test, the pass standard is set at 3% — the bare minimum leverage ratio under Basel III.

The outcomes for this stress test are given in the next chart:

Chart 2: Stress Test Outcomes Using the Tier 1 Leverage Ratio with a 3% Pass Standard

Based on this test, the UK banking system does not look so healthy after all. The average post-stress leverage ratio across the banks is 3.5%, making for an average surplus of 0.5%. The best performing institution (Nationwide) has a surplus (that is, the outcome minus the pass standard) of only 1.1%, while four banks (Barclays, HSBC, Lloyds and Santander) have surpluses of less than one hundred basis points, and the remaining two don’t have any surpluses at all — their post-stress leverage ratios are exactly 3%.

To make matters worse, this stress test also used a soft measure of core capital — Tier 1 capital — which includes various soft capital instruments (known as additional Tier 1 capital) that are of questionable usefulness to a bank in a crisis.

The stress test would have been more convincing had the Bank used a harder capital measure. And, in fact, the ideal such measure would have been the CET1 capital measure it used in the first stress test. So what happens if we repeat the Bank’s leverage stress test but with CET1 instead of Tier 1 in the numerator of the leverage ratio?

Chart 3: Stress Test Outcomes Using the CET1 Leverage Ratio with a 3% Pass Standard

In this test, one bank fails, four have wafer-thin surpluses, and only two clear the pass standard by more than an insignificant margin.

Moreover, this 3% pass standard is itself very low. A bank with a 3% leverage ratio will still be rendered insolvent if it makes a loss of 3% of its assets.
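A toy balance sheet makes the point; the numbers here are illustrative, not drawn from any of the seven banks:

```python
# Why a 3% leverage ratio is so thin: with $100 of assets and a 3% leverage
# ratio, the bank holds $3 of capital, so a loss of 3% of assets wipes it out.

assets = 100.0
leverage_ratio = 0.03
capital = assets * leverage_ratio      # 3.0

loss = 0.03 * assets                   # a loss equal to 3% of assets
remaining_capital = capital - loss
print(remaining_capital)               # 0.0 -> any larger loss leaves the bank insolvent
```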

The 3% minimum is also well below the potential minimum that will be applied in the UK when Basel III is fully implemented — about 4.2% by my calculations — let alone the 6% minimum leverage ratio that the Federal Reserve is due to impose in 2018 on the federally insured subsidiaries of the eight globally systemically important banks in the United States.

Here is what we would get if the Bank of England had carried out the leverage stress test using both the CET1 capital measure and the Fed’s forthcoming minimum standard of 6%:

Chart 4: Stress Test Outcomes for the CET1 Leverage Ratio with a 6% Pass Standard

Oh my! Every bank now fails and the average deficit is nearly 3 percentage points.

Nevertheless, I leave the last word to Governor Carney: “a lot has been done, this is a resilient system, you see it through the stress tests.”

——

[1] Bank of England Financial Stability Report Q&A, 1st December 2015, p. 11.

[Cross-posted from Alt-M.org]

As part of his 2017 budget proposal, Secretary of Transportation Anthony Foxx proposes to spend $4 billion on self-driving vehicle technology. This proposal comes late to the game, as private companies and university researchers have already developed that technology without government help. Moreover, the technology Foxx proposes is both unnecessary and an intrusion on people’s privacy.

In 2009, President Obama said he wanted to be remembered for promoting a new transportation network the way President Eisenhower was remembered for the Interstate Highway System. Unfortunately, Obama chose high-speed rail, a 50-year-old technology that has been successful only in places where most travel was by low-speed trains. In contrast with interstate highways, which cost taxpayers nothing (because they were paid for out of gas taxes and other user fees) and carry 20 percent of all passenger and freight travel in the country, high-speed rail would have cost taxpayers close to a trillion dollars and carried no more than 1 percent of passengers and virtually no freight.

The Obama administration has also promoted a 120-year-old technology, streetcars, as some sort of panacea for urban transportation. When first developed in the 1880s, streetcars averaged 8 miles per hour. Between 1910 and 1966, all but six American cities replaced streetcars with buses that were faster, cost half as much to operate, and cost almost nothing to start up on new routes. Streetcars funded by the Obama administration average 7.3 miles per hour (see p. 40), cost twice as much to operate as buses, and typically cost $50 million per mile to start up.

The point is that this administration, if not government in general, has been very poor at choosing transportation technologies for the twenty-first century. While I’ve been a proponent of self-driving cars since 2010, I believe the administration is making as big a mistake with its latest $4 billion proposal as it made with high-speed rail and streetcars.

The problem is that the technology the government wants is very different from the technology being developed by Google, Volkswagen, Ford, and other companies. The cars designed by these private companies rely on GPS, on-board sensors, and extremely precise maps of existing roadways and other infrastructure. A company called HERE, which was started by Nokia but recently purchased by BMW, Daimler, and Volkswagen, has already mapped about two-thirds of the paved roads in the United States and makes millions of updates to its maps every day.

Foxx proposes to spend most of the $4 billion on a very different technology, called “connected vehicle” or vehicle-to-infrastructure communications. In this system, the government would have to install new electronic infrastructure in all streets and highways to help guide self-driving cars. But states and cities today can’t fill potholes or keep traffic lights coordinated, so they are unlikely to be able to install an entirely new infrastructure system in any reasonable amount of time.

Moreover, the fixed infrastructure used for connected corridors will quickly become obsolete. Your self-driving car will be able to download software upgrades while sitting in your garage overnight–Teslas already do so. However, upgrading the hardware for a connected vehicle system could take years and might never happen due to the expense of converting from one technology to another. Thus, Foxx’s plan would lock us into a system that will be obsolete long before it is fully implemented.

Privacy advocates should also worry that connected roads would connect cars to government command centers. The government will be able to monitor everyone’s travel and even, if you drive more than some planner thinks is the appropriate amount, remotely turn your car off to “save the planet.” Of course, Foxx will deny that this is his goal. Yet the Washington legislature has passed a law mandating a 50 percent reduction in per capita driving by 2050; California and Oregon have similar if not quite-so-draconian rules; and it is easy to imagine that the states, if not the feds, will take advantage of Foxx’s technology to enforce their targets. No such monitoring or control is possible in the Google-like self-driving cars.

Foxx’s infrastructure is entirely unnecessary for self-driving cars, as Google, Audi, Delphi, and other companies have all proven that their cars can work without it. Not to worry: Foxx also promises that his department will write national rules that all self-driving cars must follow. No doubt these rules will mandate that the cars work on connected streets, whether they need to or not.

Some press reports suggest that Foxx’s plan will make Google happy, but it is more likely to disappoint. Google is already disappointed with self-driving car rules written by the California Department of Motor Vehicles. But what are the chances that federal rules will be any better–especially if the federal government is dead-set on its own technology that is very different from Google’s? If the states come up with 50 different sets of rules, some of them are likely to be better than the others, and the others can follow the best examples.

If Congress approves Foxx’s program, the best we can hope for is that Google and other private companies are able to ignore the new technology. The worst case is that the department’s new rules not only mandate that cars be able to use connected streets, but that they work in self-driving mode only on roads that have connected-streets technology. In that case, the benefits of self-driving cars will be delayed for the decades it takes to install that technology, and may never arrive at all if people won’t pay the extra cost for cars that can drive themselves only on a few selected roads and streets.

All government needs to do for the next transportation revolution to happen is keep the potholes filled, the stripes painted, and otherwise get out of the road. Foxx’s plan, in contrast, is a costly way of doing more harm than good.

Americans often move between different income brackets over the course of their lives. As covered in an earlier blog post, over 50 percent of Americans find themselves among the top 10 percent of income-earners for at least one year during their working lives, and over 11 percent of Americans will be counted among the top 1 percent of income-earners for at least one year.   

Fortunately, a great deal of what explains this income mobility is a set of choices that are largely within an individual’s control. While people tend to earn more in their “prime earning years” than in their youth or old age, other key factors that explain income differences are education level, marital status, and number of earners per household.  As HumanProgress.org Advisory Board member Mark Perry recently wrote:

The good news is that the key demographic factors that explain differences in household income are not fixed over our lifetimes and are largely under our control (e.g. staying in school and graduating, getting and staying married, etc.), which means that individuals and households are not destined to remain in a single income quintile forever.  

According to the U.S. economist Thomas Sowell, whom Perry cites, “Most working Americans, who were initially in the bottom 20% of income-earners, rise out of that bottom 20%. More of them end up in the top 20% than remain in the bottom 20%.”  

While people move between income groups over their lifetime, many worry that income inequality between different income groups is increasing. The growing income inequality is real, but its causes are more complex than the demagogues make them out to be. 

Consider, for example, the effect of “power couples,” or people with high levels of education marrying one another and forming dual-earner households. In a free society, people can marry whoever they want, even if it does contribute to widening income disparities. 

Or consider the effects of regressive government regulations on exacerbating income inequality. These include barriers to entry that protect incumbent businesses and stifle competition. To name one extreme example, Louisiana recently required a government-issued license to become a florist. Lifting more of these regressive regulations would aid income mobility and help to reduce income inequality, while also furthering economic growth. 

Chaos and conflict have become constants in the Middle East. Frustrated U.S. policymakers tend to blame ancient history. As President Barack Obama put it in his State of the Union speech, the region’s ongoing transformation is “rooted in conflicts that date back millennia.”

Of course, war is a constant of human history. But while today’s most important religious divisions go back thousands of years, bitter sectarian conflict does not. The Christian Crusades and Muslim conquests into Europe ended long ago.

All was not always calm within the region, of course. Sectarian antagonism existed. Yet religious divisions rarely caused the sort of hateful slaughter we see today.

Tolerance lived on even under political tyranny. The Baath Party, which long ruled Iraq and still rules Syria, was founded by a Christian. Christians played a leading role in the Palestinian movement.

The fundamental problem today is politics. Religion has become a means to forge political identities and rally political support.

As I point out in Time: “Blame is widely shared. Artificial line-drawing by the victorious allies after World War I, notably the Sykes-Picot agreement, created artificial nation states for the benefit of Europeans, not Arabs. Dynasties were created with barely a nod to the desires of subject peoples.”

Lebanon’s government was created as a confessional system, which exacerbated political reliance on religion. The British/American-backed overthrow of Iran’s democratic government in 1953 empowered the Shah, an authoritarian, secular-minded modernizer. His rule was overturned by the Islamic Revolution.

This seminal event greatly sharpened the sectarian divide, which worsened through the Iran-Iraq war and after America’s invasion of Iraq. Out of the latter emerged the Islamic State. The collapse of Syria into civil war has provided yet another political opportunity for radical movements.

Nothing about the history of the Middle East makes conflict inevitable. To reverse the process both Shiites and Sunnis must reject the attempt of extremists to misuse their faith for political advantage. And Western nations, especially the United States, must stay out of Middle East conflicts.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

We realize that we are 180° out of sync with the news cycle when we discuss heat-related death in the middle of Northern Hemisphere winter, but we’ve come across a recent paper that can’t wait for the heat and hype of next summer.

The paper, by Arizona State University’s David Hondula and colleagues, is a review of the recent scientific literature on “human health impacts of observed and projected increases in summer temperature.”

This topic is near and dear to our hearts, as we have ourselves contributed many papers to the scientific literature on this matter (see here).  We are especially interested in seeing how the literature has evolved over the past several years, and Hondula and colleagues’ paper, which specifically looked at findings published in the 2012-2015 timeframe, fills this interest nicely.

Here’s how they summed up their analysis:

We find that studies based on projected changes in climate indicate substantial increases in heat-related mortality and morbidity in the future, while observational studies based on historical climate and health records show a decrease in negative impacts during recent warming. The discrepancy between the two groups of studies generally involves how well and how quickly humans can adapt to changes in climate via physiological, behavioral, infrastructural, and/or technological adaptation, and how such adaptation is quantified.

Did you get that? When assessing what actually happens to heat-related mortality rates in the face of rising temperatures, researchers find that “negative impacts” decline. But, when researchers attempt to project the impacts of rising temperature in the future on heat-related mortality, they predict “substantial increases.”

In other words, in the real world, people adapt to changing climate conditions (e.g., rising temperatures), but in the modeled world of the future, adaptation can’t keep up. 

But rather than assert this as a problem with model world behavior that needs serious attention, most assessments of the projected impacts of climate change (such as the one produced by our federal government as a foundation for its greenhouse gas mitigation policies) embrace model world forecasts and run with storylines like “global warming set to greatly increase deaths from heat waves.”

We’ve been railing against this fact for years. But it never seems to gain any traction with federal climatologists.

Interestingly, in all the literature surveyed by Hondula’s group, they cite only one study suggesting that climate change itself may be aiding and abetting the adaptive processes. The idea forwarded in that study was that since people adapt to heat waves, and since global warming may be, in part, leading to more heat waves, global warming itself may be helping to drive the adaptive response.  Rather than leading to more heat-related deaths, global warming may actually be leading to fewer.

Who were the authors of that study? Perhaps they are familiar to you: Chip Knappenberger, Pat Michaels, and Anthony Watts.

While Hondula and colleagues seem to be amenable to our premise, they point out that putting an actual magnitude on this effect is difficult:

If changing climate is itself a modifier of the relationship between temperature and mortality (e.g., increasing heat wave frequency or severity leads to increasing public awareness and preventative measures), a quantitative approach for disentangling these effects has yet to be established.

We concur with this, but, as we point out in our paper using the history of heat-related mortality in Stockholm as an example, it doesn’t take much of a positive influence from climate change to offset any negatives:

[R]aised awareness from climate change need only be responsible for 288 out of 2,304 (~13%) deaths saved through adaptation to have completely offset the climate-related increase in heat-related mortality [there].  For any greater contribution, climate change would have resulted in an overall decline in heat-related mortality in Stockholm County despite an increase in the frequency of extreme-heat events.

We went on to say (in somewhat of an understatement):

Our analysis highlights one of the many often overlooked intricacies of the human response to climate change.

Hondula’s team adds this, from their conclusion:

By directing our research efforts to best understand how reduction in heat mortality and morbidity can be achieved, we have the opportunity to improve societal welfare and eliminate unnecessary health consequences of extreme weather—even in a hotter future.

Well said.

References:

Hondula, D. M., R. C. Balling, J. K. Vanos, and M. Georgescu, 2015. Rising temperatures, human health, and the role of adaptation. Current Climate Change Reports, 1, 144-154.

Knappenberger, P. C., P. J. Michaels, and A. Watts, 2014. Adaptation to extreme heat in Stockholm County, Sweden. Nature Climate Change, 4, 302–303.

On January 14th, the White House announced that Gen. Joseph Votel, the current head of U.S. Special Operations Command, will take over as the head of U.S. Central Command, a position that will place him in charge of America’s wars in Iraq, Syria, and Afghanistan. The symbolism of the appointment could not be clearer. As Foreign Policy noted,

“With 3,000 special operations troops currently hunting down Taliban militants in Afghanistan, and another 200 having just arrived on the ground in Iraq to take part in kill or capture missions against Islamic State leadership, Votel’s nomination underscores the central role that the elite troops play in the wars that President Barack Obama is preparing to hand off to the next administration.”

The growing use of special operations forces has been a hallmark of the Obama administration’s foreign policy, an attempt to thread the needle between growing public opposition to large-scale troop deployments and public demands for the United States to ‘do more’ against terrorist threats, all while dancing around the definition of the phrase ‘boots on the ground.’ But the increasing use of such non-traditional forces – particularly since the start of the Global War on Terror – is also reshaping how we think about U.S. military intervention overseas.

It’s not just the growing use of special operations forces. New technologies like drones permit America’s military to strike terrorist training camps and high value targets abroad with limited risk to operators. The diffusion of terrorist groups and non-state actors across the globe enables terrorist groups and their affiliates to be present in many states. And the breadth of the 2001 Congressional Authorization to Use Military Force (AUMF) – which permits attacks on any forces ‘associated’ with Al Qaeda – has permitted the executive branch to engage in numerous small military interventions around the globe without congressional approval or much public debate.

The result has been a series of conflicts that are effectively invisible to the public. Indeed, depending on your definition, America is currently fighting between three and nine wars. Iraq, Syria, and Afghanistan are obvious. But U.S. troops are also actively fighting in counterterrorism operations in Somalia, Nigeria, and Uganda. The United States is conducting drone strikes in Pakistan, Libya, and Somalia. And our commitment to the Saudi-led campaign in Yemen is even more ambiguous: though the U.S. is not engaged in the fighting itself, it is certainly providing material support in the form of logistics and intelligence.

On January 25th, Cato is hosting a panel discussion on the issues raised by the growth of these small, ‘invisible’ wars, and by the growing ubiquity of U.S. military intervention around the world. Moderated by Mark Mazzetti of the New York Times, and featuring Bronwyn Bruton of the Atlantic Council, Charles Schmitz of Towson University and Moeed Yusuf of the United States Institute of Peace, the event will seek to explore three key ‘invisible wars’ - Yemen, Pakistan, and Somalia – and the broader questions they raise. What is the nature and scope of America’s involvement in such conflicts? Does lack of public awareness impact U.S. national security debates? And does U.S. involvement actually serve U.S. interests?

The event will be held on January 25th at 11am. You can register here.

Parker and Ollier (2015) set the tone for their new paper on sea level change along the coastline of India in the very first sentence of their abstract: “global mean sea level (GMSL) changes derived from modelling do not match actual measurements of sea level and should not be trusted” (emphasis added). In contrast, it is their position that “much more reliable information” can be obtained from analyses of individual tide gauges of sufficient quality and length. Thus, they set out to obtain such “reliable information” for the coast of India, a neglected region in many sea level studies, due in large measure to its lack of stations with continuous data of sufficient quality.

A total of eleven stations were selected by Parker and Ollier for their analysis, eight of which are archived in the PSMSL database (PSMSL, 2014) and ten in a NOAA sea level database (NOAA, 2012). The average record length of the eight PSMSL stations was 54 years, quite similar to the average record length of 53 years for the ten NOAA stations.

Results indicated an average relative rate of sea level rise of 1.07 mm/year for all eleven Indian stations, with an average record length of 51 years. However, the two Australian researchers report that this value is likely “overrated because of the short record length and the multi-decadal and interannual oscillations” of several of the stations comprising their Indian database. Indeed, as they further report, “the phase of the 60-year oscillation found in the tide gauge records is such that sea level in the North Atlantic, western North Pacific, Indian Ocean and western South Pacific has been increasing since 1985-1990,” an increase that almost certainly skews the trends of the shorter records over the most recent period above the actual long-term rate of rise.

One additional important finding of the study was gleaned from the longer records in the database, which revealed that rates of sea level rise along the Indian coastline have been “decreasing since 1955.” That observation of deceleration stands in direct opposition to model-based claims that sea level rise should be accelerating in recent decades in response to CO2-induced global warming.

In comparing their findings to those reported elsewhere, Parker and Ollier note there is a striking similarity between the trends they found for the Indian coastline and for other tide gauge stations across the globe. Specifically, they cite Parker (2014), who calculated a 1.04 ± 0.45 mm/year average relative rate of sea level rise from 560 tide gauges comprising the PSMSL global database. And when that database is restricted in analysis to the 170 tide gauges with a length of more than 60 years at the present time, the average relative rate of rise declines to a paltry 0.25 ± 0.19 mm/year, without any sign of positive or negative acceleration.

The significance of Parker and Ollier’s work lies in the “sharp contrast” they provide when comparing the rates of sea level rise computed from tide gauge data with model-based sea level reconstructions produced from satellites, such as the 3.2 mm/year value reported by the CU Sea Level Research Group (2014), which Parker and Ollier emphatically claim “cannot be trusted because it is so far from observed data.” Furthermore, it is clear from the observational tide gauge data that there is nothing unusual, unnatural, or unprecedented about current rates of sea level rise, with the exception that they appear to be decelerating, as opposed to accelerating, despite a period of modern warmth that climate alarmists contend is unequaled over the past millennium and which should be melting away the polar ice caps and rapidly raising sea levels.

 

References

CU Sea Level Research Group. 2014. Global Mean Sea Level. sealevel.colorado.edu (retrieved May 30, 2014).

National Oceanic and Atmospheric Administration (NOAA). 2012. MSL global trend table, tidesandcurrents.noaa.gov/sltrends/MSL_global_trendtable.html (retrieved May 30, 2014).

Parker, A. 2014. Accuracy and reliability issues in computing absolute sea level rises. Submitted paper.

Parker, A. and Ollier, C.D. 2015. Sea level rise for India since the start of tide gauge records. Arabian Journal of Geosciences 8: 6483-6495.

Permanent Service for Mean Sea Level (PSMSL). 2013. Data, www.psmsl.org (retrieved October 1, 2013).

Leaders of the worldwide Anglican church are meeting at Canterbury Cathedral this week, with some observers predicting an open schism over homosexuality. There is fear that archbishops from six African countries – Uganda, Kenya, Nigeria, South Sudan, Rwanda and the Democratic Republic of the Congo – may walk out if the archbishop of Canterbury, the symbolic head of the worldwide Anglican Communion, won’t sanction the U.S. Episcopal Church for consecrating gay bishops. Since about 60 percent of the world’s Anglicans are in Africa, that would be a major break.

I am neither an Anglican nor a theologian, but I did reflect on the non-religious values that shape some of these disputes in the Guardian a few years ago:

The Anglican Archbishop of South Africa, Njongonkulu Ndungane, says his church should abandon its “practices of discrimination” and accept the gay Episcopal bishop V. Gene Robinson of New Hampshire. That makes him unusual in Africa, where other Anglican bishops have strongly objected to the ordination of practicing homosexuals.

The Nigerian primate, for instance, Archbishop Peter Akinola, condemned the consecration of Robinson as bishop, calling it a “satanic attack on the church of God.” According to the San Francisco Chronicle, “He even issued a statement on behalf of the ‘Primates of the Global South’ - a group of 20 Anglican primates from Africa, the West Indies, South America, India, Pakistan, and Southeast Asia - deploring the action and, along with Uganda and Kenya, formally severed relations with Robinson’s New Hampshire diocese.”

So what makes Ndungane different? He’s the successor to Nobel laureate Desmond Tutu, one might recall. And they both grew up in South Africa, where enlightenment values always had a foothold, even during the era of apartheid. Ndungane studied at the liberal English-speaking University of Cape Town, where Sen. Robert F. Kennedy gave a famous speech in 1966.

Ndungane didn’t hear that speech, alas, because he was then imprisoned on Robben Island. But after he was released he decided to enter the church and took two degrees at King’s College, London. The arguments of the struggle against apartheid came from western liberalism - the dignity of the individual, equal and inalienable rights, political liberty, moral autonomy, the rule of law, the pursuit of happiness.

So it’s no surprise that a man steeped in that struggle and educated in the historic home of those ideas would see how they apply in a new struggle, the struggle of gay people for equal rights, dignity, and the pursuit of happiness as they choose.

The South African Anglicans remain in favor of gay marriage. And of course, such church schisms are not new. The Baptist, Methodist, and Presbyterian churches in the United States split over slavery. The Methodists and Presbyterians reunited a century later, but the Baptists remain separate bodies.

In 2009, Duracell, a subsidiary of Procter & Gamble, began selling “Duracell Ultra” batteries, marketing them as their longest-lasting variety. A class action was filed in 2012, arguing that the “longest-lasting” claim was fraudulent. The case was removed to federal court, where the parties reached a global settlement purporting to represent 7.26 million class members.

Attorneys for the class are to receive an award of $5.68 million, based on what the district court deemed to be an “illusory” valuation of the settlement at $50 million. In reality, the class received $344,850. Additionally, defendants agreed to make a donation of $6 million worth of batteries over the course of five years to various charities.

This redistribution of settlement money from the victims to other uses is referred to as cy pres. “Cy pres” means “as near as possible,” and courts have typically used the cy pres doctrine to reform the terms of a charitable trust when the stated objective of the trust is impractical or unworkable. The use of cy pres in class action settlements—particularly those that enable the defendant to control the funds—is an emerging trend that violates the due process and free speech rights of class members.

Accordingly, class members objected to the settlement, arguing that the district court abused its discretion in approving the agreement and failed to engage in the required rigorous analysis to determine whether the settlement was “fair, reasonable, and adequate.” The U.S. Court of Appeals for the Eleventh Circuit affirmed the settlement, however, noting the lack of “precedent prohibiting this type of cy pres award.”

Now an objecting class member has asked the Supreme Court to review the case, and Cato filed an amicus brief arguing that the use of cy pres awards in this manner violates the Fifth Amendment’s Due Process Clause and the First Amendment’s Free Speech Clause.

Specifically, due process requires—at a minimum—an opportunity for an absent plaintiff to remove himself, or “opt out,” from the class. Class members have little incentive or opportunity to learn of the existence of a class action in which they may have a legal interest, while class counsel is able to make settlement agreements that are unencumbered by an informed and participating class.

In addition, when a court approves a cy pres award as part of a class action settlement, it forces class members to endorse certain ideas, which constitutes a speech compulsion. When defendants receive money—essentially from themselves—to donate to a charity, the victim class members surrender the value of their legal claims. Class members are left uncompensated, while defendants are shielded from any future claims of liability and even look better than they did before the lawsuit given their display of “corporate social responsibility.”

The Supreme Court will consider whether to take up Frank v. Poertner later this winter.

If federal statutory law expressly commands that all covered federal employees shall be “free from any discrimination based on … race,” does that forbid the federal government from adopting race-based affirmative action plans? That is one of the important—and seemingly obvious—questions posed by Shea v. Kerry, a case brought by our friends at the Pacific Legal Foundation. William Shea is a white State Department Foreign Service Officer. He applied for his Foreign Service Officer position in 1990 and began working at a junior-level post in 1992. At the time, the State Department operated a voluntary affirmative action plan (“voluntary” in the sense of mandated by Congress) under which minorities could bypass the junior levels and enter the mid-level service directly. The State Department attempted to justify this race-based plan by pointing to statistical imbalances at the senior Foreign Service levels, even though the path to the senior levels is unrelated to service at the lower levels.

In 2001, Shea filed an administrative complaint against the State Department over its disparate treatment of white applicants under the 1990-92 hiring plan, arguing that he entered at a lower grade than he otherwise would have and that the discrimination cost him both advancement opportunities and earnings. After exhausting administrative remedies, Shea took his complaint to the courts, resulting in this case. The Cato Institute has joined with the Southeastern Legal Foundation, the Center for Equal Opportunity, and the National Association of Scholars to file an amici curiae brief urging the Supreme Court to take up the case and reverse the federal appellate court below.

In fairness to the court below, Title VII jurisprudence, as it stands, is both unclear and unworkable. The text of Title VII expressly prohibits discrimination on the basis of race—what’s called “disparate treatment.” Indeed, in the specific provisions on federal hiring, Title VII employs very expansive language to ensure that disparate treatment is not permitted. But such a “literal construction” of the Title VII statute was eschewed by Justice William Brennan in 1979, writing for the Court in United Steelworkers v. Weber. Relying on cherry-picked statutory history, Brennan found that Title VII’s plain text did not prohibit collectively bargained, voluntary affirmative action programs that attempt to remedy disparate impact—statistical imbalances in the racial composition of employment groups—even if such plans used quota systems. Later, in Johnson v. Transportation Agency, Santa Clara County, Cal. (1987), the Court exacerbated the issue by extending the Weber rule from purely private hiring to municipal hiring. In Shea, the U.S. Court of Appeals for the D.C. Circuit extended the rule from Johnson and Weber to federal hiring, not just municipal and private employment.

To make matters more confusing, in Ricci v. DeStefano (2009) (in which Cato also filed a brief), the Supreme Court held that, for remedial disparate treatment to be permissible under Title VII, an employer must show a strong basis in evidence that it would face disparate-impact liability if it did not make the discriminatory employment decision. The Ricci Court held that the new strong-basis-in-evidence standard is “a matter of statutory construction to resolve any conflict between the disparate-treatment and disparate-impact provisions of Title VII.” While this holding implicitly jettisons the rubric created by Johnson and Weber, the Court did not expressly overrule those cases, leaving lower courts in disarray over whether to apply Johnson and Weber or Ricci. Indeed, this conflict between precedents was noted by both the federal trial and appellate courts below.

As amici point out in our brief, the outcome of Shea’s case hangs on the applicable standard of review. In Ricci, the Court noted that, standing alone, “a threshold showing of a significant statistical disparity” is “far from a strong basis in evidence.” Yet, as the federal trial court noted, “[the] State [Department] does rely solely on a statistical imbalance in the mid- and senior-levels” to justify its affirmative action plan. Had the lower courts applied Ricci, which superseded Johnson and Weber, Shea would have prevailed on his claims. The Court should take up this case and clarify the jurisprudence applicable to Title VII claims in light of Ricci.

Lost in all the hoopla over “y’all queda” and “VanillaISIS” is any basic history of how public rangelands in the West–and in eastern Oregon in particular–got to this point. I’ve seen no mention in the press of two laws that are probably more responsible than anything else for the alienation and animosity the Hammonds felt towards the government.

The first law, the Public Rangelands Improvement Act of 1978, set a formula for calculating grazing fees based on beef prices and rancher costs. When the law was written, most analysts assumed per capita beef consumption would continue to grow as it had the previous several decades. In fact, it declined from 90 pounds to 50 pounds per year. The formula quickly drove down fees to the minimum of $1.35 per cow-month, even as inflation increased the costs to the government of managing the range. 
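To see mechanically how a formula like that gets pinned to its floor, here is a minimal Python sketch of a PRIA-style fee calculation. The structure (a base rate scaled by indices of forage value, beef prices, and rancher costs, with a $1.35 floor and a cap on year-to-year changes) follows our reading of the statute and later executive orders; the index values in the example are invented purely for illustration.

# Minimal sketch of a PRIA-style grazing fee (dollars per animal unit month).
# Index values below are made up; the real FVI, BCPI, and PPI are published annually.
def pria_fee(base, fvi, bcpi, ppi, prior_fee, floor=1.35, max_change=0.25):
    fee = base * (fvi + bcpi - ppi) / 100.0          # base rate scaled by the indices
    lo, hi = prior_fee * (1 - max_change), prior_fee * (1 + max_change)
    fee = max(lo, min(fee, hi))                      # limit year-over-year swings
    return max(fee, floor)                           # never below the statutory minimum

# When beef prices stagnate while rancher costs (PPI) climb, the computed fee
# collapses to the $1.35 floor, regardless of what it costs the agencies to manage the range.
print(pria_fee(1.23, fvi=80, bcpi=90, ppi=160, prior_fee=1.35))   # prints 1.35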

The 1978 law also allowed the Forest Service and Bureau of Land Management (BLM) to keep half of grazing fees for range improvements. Initially, this fund motivated the agencies to promote rancher interests. But as inflation ate away the value of the fee, agency managers began to view ranchers as freeloaders. Today, the fee contributes well under 1 percent of agency budgets and less than 10 percent of range management costs. Livestock grazing was once a profitable use of federal rangelands but now costs taxpayers nearly $10 for every dollar collected in fees.

Ranching advocates argue that the grazing fee is set correctly because it costs more to graze livestock on federal land than on state or private land. But the BLM and Forest Service represent the sellers, not the buyers, and the price they set should reflect the amount that a seller is willing to accept. Except in cases of charity, no seller would permanently accept less than cost, and costs currently average about $10 per animal unit month.

The second law, the Steens Mountain Cooperative Management and Protection Act of 2000, was an environmentalist effort that affected the lands around the Hammonds ranch. The “protection” portion of the law put 176,000 acres of land into wilderness, which ordinarily does not shut down grazing operations. The “cooperative” portion offered local ranchers other grazing lands if they would voluntarily give up their leases in the wilderness. Nearly 100,000 acres were closed to domestic livestock when every ranch family accepted the offer except one: the Hammonds.

The Hammonds came away from the bargaining table convinced the government was trying to take their land. The government came away convinced the Hammonds were unreasonable zealots. The minuscule contribution of grazing fees to agency budgets probably also led local officials to view domestic livestock as more of a headache than a contributor to the local or national economy. Dwight and Steven Hammond’s convictions for arson owed more to this deterioration in their relations with the BLM than to any specific action the Hammonds took.

It doesn’t take much scrutiny to see that domestic grazing is not a viable use of most federal lands. The Forest Service and BLM manage close to 240 million acres of rangelands that produce roughly 12 million cattle-months (animal unit months) of feed. While the best pasturelands can support one cow per acre, federal lands require about 200 acres for the same animal. Those 240 million acres amount to nearly 10 percent of the nation’s land, yet they supply only about 2 percent of the livestock feed in this country, and less than 4 percent of the nation’s cattle and sheep ever set foot on federal lands.
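As a rough order-of-magnitude check on those figures, here is a short Python calculation using only the acreage numbers above; the national herd size mentioned in the comment is our own ballpark figure, not something from the agencies.

# Rough arithmetic on the acreage figures above.
federal_acres = 240_000_000
acres_per_animal = 200            # approximate carrying capacity of federal rangeland
animal_years = federal_acres / acres_per_animal
animal_months = animal_years * 12
print(f"{animal_years:,.0f} animal-years of feed, or about {animal_months:,.0f} animal-months")
# On the order of a million animal-years: a sliver of the feed consumed by the
# nation's roughly 90 million cattle, consistent with the small shares cited above.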

The problem is that decisions are made through a political tug-of-war rather than through the market. One year, Congress passes a law favorable to ranchers; another year, it passes a law favorable to environmentalists. The result is polarization and strife.

If these lands were managed in response to market forces, the agencies would probably shed much of their bureaucratic bloat, but the costs of providing feed for domestic livestock would still be several times the current grazing fee. Ranchers unwilling to pay that higher price would find alternate sources of feed. If some ranchers continued to graze their cattle and sheep on public land and environmentalists didn’t like it, they could outbid the ranchers or pay them to manage their livestock in ways that reduced environmental impacts. 

The current system is far from a free market, and free-market advocates should not defend it. A true market system would reduce polarization, lead to better outcomes for both consumers and the environment, and would not have resulted in ranchers being sent to prison for accidentally burning a few acres of land.

It is an appalling story: A thoughtful academic uses his training and profession’s tools to analyze a major, highly controversial public issue. He reaches an important conclusion sharply at odds with the populist, “politically correct” view. Dutifully, the professor reports his findings to other academics, policymakers, and the public. But instead of being applauded for his insights and the quality of his work, he is vilified by his peers, condemned by national politicians, and trashed by the press. As a result, he is forced to resign his professorship and abandon his academic career.

Is this the latest development in today’s oppressive P.C. wars? The latest clash between “science” and powerful special interests?

Nope, it’s the story of Hugo Meyer, a University of Chicago economics professor in the early 1900s. His sad tale is told by University of Pisa economist Nicola Giocoli in the latest issue of Cato’s magazine, Regulation. Meyer is largely forgotten today, but his name and story should be known and respected by free-marketers and anyone who cherishes academic freedom and intellectual integrity.

Here’s a brief summary: At the turn of the 20th century, the U.S. economy was dramatically changing as a result of a new technology: nationwide railroading. Though small railroads had existed in America for much of the previous century, westward expansion and the rebuilding of southern U.S. railways after the Civil War resulted in the standardization, interconnection, and expansion of the nation’s rail network.

As a result, railroading firms competed vigorously on price for long-distance hauling, because their networks offered different routes for moving goods efficiently between major population centers. Price competition for short hauls between smaller towns along a single line was far less vigorous, however, since it was unlikely that two railroads with different routes would both serve the same two locales efficiently. The result was that short-distance hauls could be nearly as expensive as long-distance hauls, which greatly upset many people, including powerful politicians and other societal leaders.

Meyer examined those phenomena carefully, ultimately determining that there was nothing amiss in the high prices for short hauls.

Railroads bear two types of costs, he explained: operating costs (e.g., paying the engineer and fireman, buying the coal and water) and fixed costs (e.g., laying and maintaining the track, the rolling stock, and other fixed assets). Because of the heavy competition on long hauls, those freight prices mainly covered just the routes’ operating costs, while the less competitive short-haul prices covered both those routes’ operating costs and most (if not all) of the fixed costs.

This wasn’t bad for short-haul customers, Meyer reasoned, because without the long-haul revenues, railroads would provide less (and perhaps no) service to the short-haul towns. Thus, though the short-haul towns were unhappy with their freight prices, they were nonetheless better off under this arrangement.
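A stylized numerical illustration, with entirely hypothetical costs and traffic volumes rather than anything drawn from Meyer’s own work, makes the logic concrete.

# Hypothetical railroad: heavy fixed costs, modest per-shipment operating costs.
fixed_costs = 1_000_000                   # annual cost of track, rolling stock, etc.
long_hauls, short_hauls = 10_000, 5_000   # shipments per year
op_long, op_short = 50, 20                # operating cost per shipment

price_long = 55                           # competition holds this near operating cost
margin_from_long = (price_long - op_long) * long_hauls

# Short-haul prices must recover their own operating cost plus whatever fixed
# cost the long hauls do not cover.
price_short = op_short + (fixed_costs - margin_from_long) / short_hauls
print(f"short-haul price with long-haul traffic:    {price_short:.2f}")

# If the railroad served only the short-haul towns, they would bear all fixed costs.
standalone = op_short + fixed_costs / short_hauls
print(f"short-haul price without long-haul traffic: {standalone:.2f}")

In this toy example the short-haul shippers pay far more than their route’s operating cost, but less than they would if the long-haul traffic, and its contribution to fixed costs, disappeared. That is precisely Meyer’s point.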

Meyer’s reasoning would today be associated with the field of Law & Economics, which uses economic thinking to analyze laws and regulations. Today, this type of analysis is highly respected by economists, policymakers, and U.S. courts, and is heavily linked to the University of Chicago (though it has roots in other places as well). But it hadn’t emerged as a discipline in Meyer’s era; sadly, he was a man too far ahead of his time.

And, for that, he paid a steep price. As Giocoli describes, when Meyer presented his analysis at a 1905 American Economic Association conference, he was set upon by other economists; indeed, the AEA had shamefully engineered his paper session as a trap. When he testified on his work to policymakers in Washington, he was publicly accused of corruption by a powerful bureaucrat, Interstate Commerce Commissioner Judson Clements, and a U.S. senator, Jonathan P. Dolliver. And when a monograph of his work appeared the following year, he was dismissed by the Boston Evening Transcript (then a prominent daily newspaper) as “partisan and untrustworthy.”

Meyer was disgraced. He resigned his position at Chicago and moved to Australia, where he continued his research on railroad economics, but he never worked for a university again.

Back at Chicago, his former department head, James Laurence Laughlin, took to the pages of the Journal of Political Economy to lament what had befallen his colleague:

In some academic circles the necessity of appearing on good terms with the masses goes so far that only the mass-point-of-view is given recognition; and the presentation of the truth, if it happens to traverse the popular case, is regarded as something akin to consternation. … It is not amiss to demand that measure of academic freedom that will permit a fair discussion of the rights of those who do not have the popular acclaim. It is going too far when a carefully reasoned argument which happens to support the contentions of the railways is treated as if necessarily the outcome of bribery by the money kings.

There is a happy ending for Meyer’s analysis. Academic research in the latter half of the century buttressed his view that market competition was enough to produce fairly honest, publicly beneficial railroad pricing, and that government intervention harmed public welfare. The empirical evidence marshaled by that research was so compelling that Congress deregulated the railroads and even abolished the Interstate Commerce Commission. Apparently, not all federal agencies have eternal life.

But though his analysis has triumphed, Meyer’s name and story have largely been forgotten. They shouldn’t be. Hopefully, Giocoli’s article will give Meyer the remembrance he deserves.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

What gets lost in a lot of the discussion about human-caused climate change is not the point that the sum of human activities is leading to some warming of the earth’s temperature, but rather the fact that the observed rate of warming (both at the earth’s surface and throughout the lower atmosphere) is considerably less than what has been anticipated by the collection of climate models upon whose projections climate alarm (i.e., the justification for strict restrictions on the use of fossil fuels) is built.

We highlight in this issue of You Ought to Have a Look a couple of articles that address this issue that we think are worth checking out.

First is this post from Steve McIntyre over at Climate Audit that we managed to dig out from among all the “record temperatures of 2015” stories. In his analysis, McIntyre places the 2015 global temperature anomaly not in real world context, but in the context of the world of climate models.

Climate model-world is important because that is the realm where climate change catastrophes play out, and those model projections influence what real-world people do in trying to keep the catastrophes confined to model-world.

So how did the observed 2015 temperatures compare to model world expectations? Not so well.

In a series of tweets over the holidays, we pointed out that the El Niño-fueled, record-busting high temperatures of 2015 barely reached the temperatures of an average year expected by the climate models.

In his post, unconstrained by Twitter’s 140-character limit, McIntyre takes a more verbose and detailed look at the situation, including additional examinations of the satellite record of temperatures in the lower atmosphere as well as a comparison of observed and model-expected trends in both the surface and lower-atmosphere temperature histories since 1979.

The latter comparison for global average surface temperatures looks like this, with the observed trend (the red dashed line) falling near or below the fifth percentile of the expected trends (the lower whisker) from a host of climate models:
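For readers who want to replicate the flavor of that comparison, here is a minimal Python sketch. The trend values are placeholders chosen only to illustrate the calculation, not the actual model runs or observational datasets McIntyre uses.

import numpy as np

# Hypothetical model-simulated surface trends (degC/decade) and an observed trend.
rng = np.random.default_rng(0)
model_trends = rng.normal(loc=0.22, scale=0.05, size=100)
observed_trend = 0.11

# Where does the observation fall within the distribution of model trends?
percentile = (model_trends < observed_trend).mean() * 100
print(f"the observed trend is lower than {100 - percentile:.0f}% of the model trends")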

McIntyre writes:

All of the individual models have trends well above observations…   There are now over 440 months of data and these discrepancies will not vanish with a few months of El Nino.

Be sure to check out the whole article here. We’re pretty sure you won’t read about any of this in the mainstream media.

Next up is an analysis by independent climate researcher Nic Lewis, whose specialty these days is developing estimates of the earth’s climate sensitivity (how much the earth’s average surface temperature is expected to rise under a doubling of the atmospheric concentration of carbon dioxide) based upon observations of the earth’s temperature evolution over the past 100-150 years. Lewis’s general finding is that the climate sensitivity of the real world is quite a bit less than that of model world (which could explain much of what McIntyre reported above).
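As we understand it, the observation-based approach Lewis uses is essentially an energy-budget calculation: scale the observed warming by the ratio of the forcing from doubled carbon dioxide to the forcing change net of ocean heat uptake. A hedged sketch in Python, with placeholder inputs rather than the numbers from any particular paper:

# Energy-budget sensitivity estimate (all inputs are placeholders for illustration).
F2x = 3.7      # forcing from a doubling of CO2, W/m^2 (commonly used value)
dT  = 0.85     # observed warming between base and final periods, K
dF  = 2.3      # change in total radiative forcing, W/m^2
dQ  = 0.65     # change in planetary (mostly ocean) heat uptake, W/m^2

ecs = F2x * dT / (dF - dQ)
print(f"implied equilibrium climate sensitivity: {ecs:.1f} K")   # about 1.9 K with these inputs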

The current focus of Lewis’s attention is a recently published paper by a group of NASA scientists, led by Kate Marvel, which concluded that observation-based estimates of the earth’s climate sensitivity, such as those performed by Lewis, greatly underestimate the actual sensitivity. After accounting for the reasons why, Marvel and her colleagues conclude that climate models, contrary to the assertions of Lewis and others, accurately portray how sensitive the earth’s climate is to changing greenhouse gas concentrations (see here for details). It would then follow that the models serve as reliable indicators of the future evolution of the earth’s temperature.

As you may imagine, Lewis isn’t so quick to embrace this conclusion. He explains his reasons why in great detail in his lengthy (technical) article posted at Climate Audit, and provides a more easily digestible version over at Climate Etc.

After detailing a fairly long list of inconsistencies contained not only internally within the Marvel et al. study itself, but also between the Marvel study and other papers in the scientific literature, Lewis concludes:

The methodological deficiencies in and multiple errors made by Marvel et al., the disagreements of some of its forcing estimates with those given elsewhere for the same model, and the conflicts between the Marvel et al. findings and those by others – most notably by James Hansen using the previous GISS model, mean that its conclusions have no credibility.

Basically, Lewis suggests that Marvel et al.’s findings are based upon a single climate model (out of several dozen in existence) and seem to arise from improper application of analytical methodologies within that single model.

Certainly, the Marvel et al. study introduced some interesting avenues for further examination. But despite being touted as freshly paved highways to the definitive conclusion that climate models are working better than real-world observations seem to indicate, those avenues are muddy, potholed backroads leading nowhere in particular.

Finally, we want to draw your attention to an online review of a paper recently published in the scientific literature which sought to dismiss the recent global warming “hiatus” as nothing but the result of a poor statistical analysis. The paper, “Debunking the climate hiatus,” was written by a group of Stanford University researchers led by Bala Rajaratnam and published in the journal Climatic Change last September.

The critique of the Rajaratnam paper was posted by Radford Neal, a statistics professor at the University of Toronto, on his personal blog that he dedicates to (big surprise) statistics and how they are applied in scientific studies.

In his lengthy, technically detailed critique, Neal pulls no punches:

The [Rajaratnam et al.] paper was touted in popular accounts as showing that the whole hiatus thing was mistaken — for instance, by Stanford University itself.

You might therefore be surprised that, as I will discuss below, this paper is completely wrong. Nothing in it is correct. It fails in every imaginable respect.

After tearing through the numerous methodological deficiencies and misapplied statistics contained in the paper, Neal is left shaking his head at the peer-review process that allowed the paper to be published in the first place, and he offers this warning:

Those familiar with the scientific literature will realize that completely wrong papers are published regularly, even in peer-reviewed journals, and even when (as for this paper) many of the flaws ought to have been obvious to the reviewers.  So perhaps there’s nothing too notable about the publication of this paper.  On the other hand, one may wonder whether the stringency of the review process was affected by how congenial the paper’s conclusions were to the editor and reviewers.  One may also wonder whether a paper reaching the opposite conclusion would have been touted as a great achievement by Stanford University. Certainly this paper should be seen as a reminder that the reverence for “peer-reviewed scientific studies” sometimes seen in popular expositions is unfounded.

Well said.
