Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Of course we’re referring to Hurricane Alex here, which blew up in far eastern Atlantic waters thought to be way too cold to spin up such a storm.  Textbook meteorology says hurricanes, which feed off the heat of the ocean, won’t form over waters cooler than about  80°F.  On the morning of January 14, Alex exploded over waters that were a chilly 68°.

Alex is (at least) the third hurricane observed in January, with others in 1938 and 1955.  The latter, Hurricane Alice (the second storm to bear that name in the 1954 season), was actually alive on New Year’s Day.

The generation of Alex was very complex.  First, a garden-variety low pressure system formed over the Bahamas late last week and slowly drifted eastward.  It was derived from the complicated but well-understood processes associated with the jet stream and a cold front, and that certainly had nothing to do with global warming.

The further south cold fronts push into the tropical Atlantic, the more likely they are to simply dissipate, and that’s what happened last week, too.  Normally the associated low-pressure system would also wash away.  But after it initially formed near the Bahamas and drifted eastward, it was in a region where sea-surface temperatures (SSTs) are running about 3°F above the long-term average, consistent with a warmer world. This may have been just enough to fuel the persistent remnant cluster of thunderstorms that meandered in the direction of Spain.

Over time, the National Hurricane Center named this collection “Alex” as a “subtropical” cyclone, which is what we call a tropical low pressure system that doesn’t have the characteristic warm core of a hurricane.

(Trivia note:  the vast majority of cyclones in temperate latitudes have a cold core at their center.  Hurricanes have a warm core.  There was once a move to call the subtropical hybrids “himicanes” (we vote for that!), then “neutercanes” (not bad, either) but the community simply adopted the name “subtropical.”)

In the early hours of January 14, thanks to a cold low pressure system propagating through the upper atmosphere, temperatures plummeted above the storm to a rather astounding -76°F.  So even though the SSTs were a mere 68°, far too cold to promote a hurricane, the difference between the surface and high altitudes, a phenomenal 144°, was so large that one could form.

Vertical motion, which is what causes the big storm clouds that form the core of a hurricane, is greatest when the temperature difference between the surface and the upper atmosphere is largest, and that 144° differential exploded the storms within subtropical Alex, quickly creating a warm core and a hurricane eyewall.

A far-south invasion of such cold air over the Atlantic subtropics is less likely in a warmer world, as the pole-to-equator temperature contrast lessens.  Everything else being equal, that would tend to confine such an event to higher latitudes.

So, yes, warmer surface temperatures may have kept the progenitor storms of Alex alive, but warmer temperatures would have made the necessary outbreak of extremely cold air over the storm less likely.

Consequently, it’s really not right to blame global warming for Hurricane Alex, though it may have contributed to subtropical storm Alex.

On December 1, 2015, the Bank of England released the results of its second round of annual stress tests, which aim to measure the capital adequacy of the UK banking system. This exercise is intended to function as a financial health check for the major UK banks, and purports to test their ability to withstand a severe adverse shock and still come out in good financial shape.

The stress tests were billed as severe. Here are some of the headlines:

“Bank of England stress tests to include feared global crash”
“Bank of England puts global recession at heart of doomsday scenario”
“Banks brace for new doomsday tests”

This all sounds pretty scary. Yet the stress tests appeared to produce a comforting result: despite one or two small problems, the UK banking system as a whole came out of the process rather well. As the next batch of headlines put it:

“UK banks pass stress tests as Britain’s ‘post-crisis period’ ends”
“Bank shares rise after Bank of England stress tests”
“Bank of England’s Carney says UK banks’ job almost done on capital”

At the press conference announcing the stress test results, Bank of England Governor Mark Carney struck an even more reassuring note:

The key point to take is that this [UK banking] system has built capital steadily since the crisis. It’s within sight of [its] resting point, of what the judgement of the FPC is, how much capital the system needs. And that resting point — we’re on a transition path to 2019, and we would really like to underscore the point that a lot has been done, this is a resilient system, you see it through the stress tests.[1] [italics added]

But is this really the case? Let’s consider the Bank’s headline stress test results for the seven financial institutions involved: Barclays, HSBC, Lloyds, the Nationwide Building Society, the Royal Bank of Scotland, Santander UK and Standard Chartered.

In this test, the Bank sets its minimum pass standard equal to 4.5%: a bank passes the test if its capital ratio as measured by the CET1 ratio — the ratio of Common Equity Tier 1 capital to Risk-Weighted Assets (RWAs) — is at least 4.5% after the stress scenario is accounted for; it fails the test otherwise.
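
To make the mechanics of the test concrete, here is a minimal sketch of the pass/fail rule; the capital and RWA figures in it are hypothetical and are not numbers from the Bank’s report:

```python
# Minimal sketch of the CET1 pass/fail rule described above.
# All figures are hypothetical; they are not taken from the Bank's report.

PASS_STANDARD = 0.045  # 4.5% minimum post-stress CET1 ratio

def cet1_ratio(cet1_capital, risk_weighted_assets):
    """Common Equity Tier 1 capital divided by risk-weighted assets."""
    return cet1_capital / risk_weighted_assets

def passes_test(post_stress_cet1, post_stress_rwas):
    """A bank passes if its post-stress CET1 ratio is at least 4.5%."""
    return cet1_ratio(post_stress_cet1, post_stress_rwas) >= PASS_STANDARD

# Illustrative only: 30bn of post-stress CET1 capital against 550bn of RWAs
# gives a ratio of about 5.5%, which clears the 4.5% hurdle.
print(passes_test(30e9, 550e9))  # True
```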

The outcomes are shown in Chart 1:

Chart 1: Stress Test Outcomes for the CET1 Ratio with a 4.5% Pass Standard

Note: The data are obtained from Annex 1 of the Bank’s stress test report (Bank of England, December 2015).

Based solely on this test, the UK banking system might indeed look to be in reasonable shape. Every bank passes the test, although one (Standard Chartered) does so by a slim margin of under 100 basis points and another (RBS) does not perform much better. Nonetheless, according to this test, the UK banking system looks broadly healthy overall.

Unfortunately, that is not the whole story.

One concern is that the RWA measure used by the Bank is essentially nonsense — as its own (now) chief economist demonstrated a few years back. So it is important to consider the second set of stress tests reported by the Bank, which are based on the leverage ratio. This is defined by the Bank as the ratio of Tier 1 capital to leverage exposure, where the leverage exposure attempts to measure the total amount at risk. We can think of this measure as similar to total assets.

In this test, the pass standard is set at 3% — the bare minimum leverage ratio under Basel III.

The outcomes for this stress test are given in the next chart:

Chart 2: Stress Test Outcomes Using the Tier 1 Leverage Ratio with a 3% Pass Standard

Based on this test, the UK banking system does not look so healthy after all. The average post-stress leverage ratio across the banks is 3.5%, making for an average surplus of 0.5%. The best performing institution (Nationwide) has a surplus (that is, the outcome minus the pass standard) of only 1.1%, while four banks (Barclays, HSBC, Lloyds and Santander) have surpluses of less than one hundred basis points, and the remaining two don’t have any surpluses at all — their post-stress leverage ratios are exactly 3%.
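
For readers who want to see how those surpluses are computed, here is a small sketch of the leverage-ratio test; again, the figures are hypothetical rather than taken from Annex 1:

```python
# Small sketch of the leverage-ratio test and the surplus calculation.
# The figures are hypothetical, not those reported in Annex 1.

PASS_STANDARD = 0.03  # 3% minimum post-stress Tier 1 leverage ratio

def leverage_ratio(tier1_capital, leverage_exposure):
    """Tier 1 capital divided by the leverage exposure (roughly total assets)."""
    return tier1_capital / leverage_exposure

def surplus(post_stress_ratio, pass_standard=PASS_STANDARD):
    """Surplus = outcome minus pass standard; a negative value means failure."""
    return post_stress_ratio - pass_standard

# Illustrative only: a post-stress ratio of 3.5% yields a surplus of
# 0.5 percentage points over the 3% pass standard.
ratio = leverage_ratio(35e9, 1000e9)   # 0.035
print(round(surplus(ratio) * 100, 1))  # 0.5
```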

To make matters worse, this stress test also used a soft measure of core capital — Tier 1 capital — which includes various soft capital instruments (known as additional Tier 1 capital) that are of questionable usefulness to a bank in a crisis.

The stress test would have been more convincing had the Bank used a harder capital measure. And, in fact, the ideal such measure would have been the CET1 capital measure it used in the first stress test. So what happens if we repeat the Bank’s leverage stress test but with CET1 instead of Tier 1 in the numerator of the leverage ratio?

Chart 3: Stress Test Outcomes Using the CET1 Leverage Ratio with a 3% Pass Standard

In this test, one bank fails, four have wafer-thin surpluses and only two banks are more than insignificantly over the pass standard.

Moreover, this 3% pass standard is itself very low. A bank with a 3% leverage ratio will still be rendered insolvent if it makes a loss of 3% of its assets.

The 3% minimum is also well below the potential minimum that will be applied in the UK when Basel III is fully implemented — about 4.2% by my calculations — let alone the 6% minimum leverage ratio that the Federal Reserve is due to impose in 2018 on the federally insured subsidiaries of the eight globally systemically important banks in the United States.

Here is what we would get if the Bank of England had carried out the leverage stress test using both the CET1 capital measure and the Fed’s forthcoming minimum standard of 6%:

Chart 4: Stress Test Outcomes for the CET1 Leverage Ratio with a 6% Pass Standard

Oh my! Every bank now fails and the average deficit is nearly 3 percentage points.

Nevertheless, I leave the last word to Governor Carney: “a lot has been done, this is a resilient system, you see it through the stress tests.”

——

[1] Bank of England Financial Stability Report Q&A, 1st December 2015, p. 11.

[Cross-posted from Alt-M.org]

As part of his 2017 budget proposal, Secretary of Transportation Anthony Foxx proposes to spend $4 billion on self-driving vehicle technology. This proposal comes late to the game, as private companies and university researchers have already developed that technology without government help. Moreover, the technology Foxx proposes is both unnecessary and an intrusion on people’s privacy.

In 2009, President Obama said he wanted to be remembered for promoting a new transportation network the way President Eisenhower was remembered for the Interstate Highway System. Unfortunately, Obama chose high-speed rail, a 50-year-old technology that has only been successful in places where most travel was by low-speed trains. In contrast with interstate highways, which cost taxpayers nothing (because they were paid for out of gas taxes and other user fees) and carry 20 percent of all passenger and freight travel in the country, high-speed rail would have cost taxpayers close to a trillion dollars and carried no more than 1 percent of passengers and virtually no freight.

The Obama administration has also promoted a 120-year-old technology, streetcars, as some sort of panacea for urban transportation. When first developed in the 1880s, streetcars averaged 8 miles per hour. Between 1910 and 1966, all but six American cities replaced streetcars with buses that were faster, cost half as much to operate, and cost almost nothing to start up on new routes. Streetcars funded by the Obama administration average 7.3 miles an hour (see p. 40), cost twice as much to operate as buses, and typically cost $50 million per mile to start up.

The point is that this administration, if not government in general, has been very poor at choosing transportation technologies for the twenty-first century. While I’ve been a proponent of self-driving cars since 2010, I believe the administration is making as big a mistake with its latest $4 billion proposal as it made with high-speed rail and streetcars.

The problem is that the technology the government wants is very different from the technology being developed by Google, Volkswagen, Ford, and other companies. The cars designed by these private companies rely on GPS, on-board sensors, and extremely precise maps of existing roadways and other infrastructure. A company called HERE, which was started by Nokia but recently purchased by BMW, Daimler, and Volkswagen, has already mapped about two-thirds of the paved roads in the United States and makes millions of updates to its maps every day.

Foxx proposes to spend most of the $4 billion on a very different technology called “connected vehicle” or vehicle-to-infrastructure communications. In this system, the government would have to install new electronic infrastructure in all streets and highways that would help guide self-driving cars. But states and cities today can’t fill potholes or keep traffic lights coordinated, so they are unlikely to be able to install an entirely new infrastructure system in any reasonable amount of time.

Moreover, the fixed infrastructure used for connected corridors will quickly become obsolete. Your self-driving car will be able to download software upgrades while sitting in your garage overnight–Teslas already do so. However, upgrading the hardware for a connected vehicle system could take years and might never happen due to the expense of converting from one technology to another. Thus, Foxx’s plan would lock us into a system that will be obsolete long before it is fully implemented.

Privacy advocates should also worry that connected roads would connect cars to government command centers. The government will be able to monitor everyone’s travel and even, if you drive more than some planner thinks is the appropriate amount, remotely turn your car off to “save the planet.” Of course, Foxx will deny that this is his goal. Yet the Washington legislature has passed a law mandating a 50 percent reduction in per capita driving by 2050, California and Oregon have similar if not quite-so-draconian rules, and it is easy to imagine that the states, if not the feds, will take advantage of Foxx’s technology to enforce their targets. No such monitoring or control is possible in the Google-like self-driving cars.

Foxx’s infrastructure is entirely unnecessary for self-driving cars, as Google, Audi, Delphi, and other companies have all proven that their cars can work without it. Not to worry: Foxx also promises that his department will write national rules that all self-driving cars must follow. No doubt these rules will mandate that the cars work on connected streets, whether they need to or not.

Some press reports suggest that Foxx’s plan will make Google happy, but it is more likely to disappoint. Google is already disappointed with self-driving car rules written by the California Department of Motor Vehicles. But what are the chances that federal rules will be any better–especially if the federal government is dead-set on its own technology that is very different from Google’s? If the states come up with 50 different sets of rules, some of them are likely to be better than the others, and the others can follow the best examples.

If Congress approves Foxx’s program, the best we can hope for is that Google and other private companies are able to ignore the new technology. The worst case is that the department’s new rules not only mandate that cars be able to use connected streets, but that they work in self-driving mode only on roads that have connected-streets technology. In that case, the benefits of self-driving cars will be delayed for the decades it takes to install that technology–and may never happen at all if people won’t pay the extra cost for cars that can drive themselves only on a few selected roads and streets.

All government needs to do for the next transportation revolution to happen is keep the potholes filled, the stripes painted, and otherwise get out of the road. In contrast, Foxx’s plan is a costly way of doing more harm than good.

Americans often move between different income brackets over the course of their lives. As covered in an earlier blog post, over 50 percent of Americans find themselves among the top 10 percent of income-earners for at least one year during their working lives, and over 11 percent of Americans will be counted among the top 1 percent of income-earners for at least one year.   

Fortunately, much of what explains this income mobility comes down to choices that are largely within an individual’s control. While people tend to earn more in their “prime earning years” than in their youth or old age, other key factors that explain income differences are education level, marital status, and number of earners per household.  As HumanProgress.org Advisory Board member Mark Perry recently wrote:

The good news is that the key demographic factors that explain differences in household income are not fixed over our lifetimes and are largely under our control (e.g. staying in school and graduating, getting and staying married, etc.), which means that individuals and households are not destined to remain in a single income quintile forever.  

According to the U.S. economist Thomas Sowell, whom Perry cites, “Most working Americans, who were initially in the bottom 20% of income-earners, rise out of that bottom 20%. More of them end up in the top 20% than remain in the bottom 20%.”  

While people move between income groups over their lifetime, many worry that income inequality between different income groups is increasing. The growing income inequality is real, but its causes are more complex than the demagogues make them out to be. 

Consider, for example, the effect of “power couples,” or people with high levels of education marrying one another and forming dual-earner households. In a free society, people can marry whoever they want, even if it does contribute to widening income disparities. 

Or consider the effects of regressive government regulations on exacerbating income inequality. These include barriers to entry that protect incumbent businesses and stifle competition. To name one extreme example, Louisiana recently required a government-issued license to become a florist. Lifting more of these regressive regulations would aid income mobility and help to reduce income inequality, while also furthering economic growth. 

Chaos and conflict have become constants in the Middle East. Frustrated U.S. policymakers tend to blame ancient history. Said President Barack Obama in his State of the Union speech, the region’s ongoing transformation was “rooted in conflicts that date back millennia.”

Of course, war is a constant of human history. But while today’s most important religious divisions go back thousands of years, bitter sectarian conflict does not. The Christian Crusades and Muslim conquests into Europe ended long ago.

All was not always calm within the region, of course. Sectarian antagonism existed. Yet religious divisions rarely caused the sort of hateful slaughter we see today.

Tolerance lived on even under political tyranny. The Baathist Party, which ruled Iraq and Syria until recently, was founded by a Christian. Christians played a leading role in the Palestinian movement.

The fundamental problem today is politics. Religion has become a means to forge political identities and rally political support.

As I point out in Time: “Blame is widely shared. Artificial line-drawing by the victorious allies after World War I, notably the Sykes-Picot agreement, created artificial nation states for the benefit of Europeans, not Arabs. Dynasties were created with barely a nod to the desires of subject peoples.”

Lebanon’s government was created as a confessional system, which exacerbated political reliance on religion. The British/American-backed overthrow of Iran’s democratic government in 1953 empowered the Shah, an authoritarian, secular-minded modernizer. His rule was overturned by the Islamic Revolution.

This seminal event greatly sharpened the sectarian divide, which worsened through the Iran-Iraq war and after America’s invasion of Iraq. Out of the latter emerged the Islamic State. The collapse of Syria’s Assad regime has provided another political opportunity for radical movements.

Nothing about the history of the Middle East makes conflict inevitable. To reverse the process both Shiites and Sunnis must reject the attempt of extremists to misuse their faith for political advantage. And Western nations, especially the United States, must stay out of Middle East conflicts.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

We realize that we are 180° out of sync with the news cycle when we discuss heat-related death in the middle of Northern Hemisphere winter, but we’ve come across a recent paper that can’t wait for the heat and hype of next summer.

The paper, by Arizona State University’s David Hondula and colleagues, is a review of the recent scientific literature on “human health impacts of observed and projected increases in summer temperature.”

This topic is near and dear to our hearts, as we have ourselves contributed many papers to the scientific literature on this matter (see here).  We are especially interested in seeing how the literature has evolved over the past several years and Hondula and colleagues’ paper, which specifically looked at findings published in the 2012-2015 timeframe, fills this interest nicely.

Here’s how they summed up their analysis:

We find that studies based on projected changes in climate indicate substantial increases in heat-related mortality and morbidity in the future, while observational studies based on historical climate and health records show a decrease in negative impacts during recent warming. The discrepancy between the two groups of studies generally involves how well and how quickly humans can adapt to changes in climate via physiological, behavioral, infrastructural, and/or technological adaptation, and how such adaptation is quantified.

Did you get that? When assessing what actually happens to heat-related mortality rates in the face of rising temperatures, researchers find that “negative impacts” decline. But, when researchers attempt to project the impacts of rising temperature in the future on heat-related mortality, they predict “substantial increases.”

In other words, in the real world, people adapt to changing climate conditions (e.g., rising temperatures), but in the modeled world of the future, adaptation can’t keep up. 

But rather than assert this as a problem with model world behavior that needs serious attention, most assessments of the projected impacts of climate change (such as the one produced by our federal government as a foundation for its greenhouse gas mitigation policies) embrace model world forecasts and run with storylines like “global warming set to greatly increase deaths from heat waves.”

We’ve been railing against this fact for years. But it never seems to gain any traction with federal climatologists.

Interestingly, in all the literature surveyed by Hondula’s group, they cite only one study suggesting that climate change itself may be aiding and abetting the adaptive processes. The idea put forward in that study was that since people adapt to heat waves, and since global warming may be, in part, leading to more heat waves, global warming itself may be helping to drive the adaptive response.  Rather than leading to more heat-related deaths, global warming may actually be leading to fewer.

Who were the authors of that study? Perhaps they are familiar to you: Chip Knappenberger, Pat Michaels, and Anthony Watts.

While Hondula and colleagues seem to be amenable to our premise, they point out that putting an actual magnitude on this effect is difficult:

If changing climate is itself a modifier of the relationship between temperature and mortality (e.g., increasing heat wave frequency or severity leads to increasing public awareness and preventative measures), a quantitative approach for disentangling these effects has yet to be established.

We concur with this, but, as we point out in our paper using the history of heat-related mortality in Stockholm as an example, it doesn’t take much of a positive influence from climate change to offset any negatives:

[R]aised awareness from climate change need only be responsible for 288 out of 2,304 (~13%) deaths saved through adaptation to have completely offset the climate-related increase in heat-related mortality [there].  For any greater contribution, climate change would have resulted in an overall decline in heat-related mortality in Stockholm County despite an increase in the frequency of extreme-heat events.
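
A quick back-of-the-envelope check of the arithmetic in that passage, using only the two figures quoted above:

```python
# Back-of-the-envelope check of the Stockholm break-even figure quoted above.
deaths_saved_by_adaptation = 2304  # total heat-related deaths avoided through adaptation
climate_related_increase = 288     # added heat deaths from more frequent extreme-heat events

# Share of the adaptation benefit that must come from climate-change-raised
# awareness for the climate-related increase to be completely offset:
break_even_share = climate_related_increase / deaths_saved_by_adaptation
print(f"{break_even_share:.1%}")   # 12.5%, i.e. the ~13% cited in the quotation
```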

We went on to say (in somewhat of an understatement):

Our analysis highlights one of the many often overlooked intricacies of the human response to climate change.

Hondula’s team adds this, from their conclusion:

By directing our research efforts to best understand how reduction in heat mortality and morbidity can be achieved, we have the opportunity to improve societal welfare and eliminate unnecessary health consequences of extreme weather—even in a hotter future.

Well said.

References:

Hondula, D. M., R. C. Balling, J. K. Vanos, and M. Georgescu, 2015. Rising temperatures, human health, and the role of adaptation. Current Climate Change Reports, 1, 144-154.

Knappenberger, P. C., P. J. Michaels, and A. Watts, 2014. Adaptation to extreme heat in Stockholm County, Sweden. Nature Climate Change, 4, 302–303.

On January 14th, the White House announced that Gen. Joseph Votel, the current head of U.S. Special Operations Command, will take over as the head of U.S. Central Command, a position that will place him in charge of America’s wars in Iraq, Syria, and Afghanistan. The symbolism of the appointment could not be clearer. As Foreign Policy noted,

“With 3,000 special operations troops currently hunting down Taliban militants in Afghanistan, and another 200 having just arrived on the ground in Iraq to take part in kill or capture missions against Islamic State leadership, Votel’s nomination underscores the central role that the elite troops play in the wars that President Barack Obama is preparing to hand off to the next administration.”

The growing use of special operations forces has been a hallmark of the Obama administration’s foreign policy, an attempt to thread the needle between growing public opposition to large-scale troop deployments and public demands for the United States to ‘do more’ against terrorist threats, all while dancing around the definition of the phrase ‘boots on the ground.’ But the increasing use of such non-traditional forces – particularly since the start of the Global War on Terror – is also reshaping how we think about U.S. military intervention overseas.

It’s not just the growing use of special operations forces. New technologies like drones permit America’s military to strike terrorist training camps and high value targets abroad with limited risk to operators. The diffusion of terrorist groups and non-state actors across the globe enables terrorist groups and their affiliates to be present in many states. And the breadth of the 2001 Congressional Authorization to Use Military Force (AUMF) – which permits attacks on any forces ‘associated’ with Al Qaeda – has permitted the executive branch to engage in numerous small military interventions around the globe without congressional approval or much public debate.

The result has been a series of conflicts that are effectively invisible to the public. Indeed, depending on your definition, America is currently fighting between three and nine wars. Iraq, Syria, and Afghanistan are obvious. But U.S. troops are also actively fighting in counterterrorism operations in Somalia, Nigeria, and Uganda. The United States is conducting drone strikes in Pakistan, Libya, and Somalia. And our commitment to the Saudi-led campaign in Yemen is even more ambiguous: though the U.S. is not engaged in fighting, it is certainly providing material support in the form of logistical and intelligence assistance.

On January 25th, Cato is hosting a panel discussion on the issues raised by the growth of these small, ‘invisible’ wars, and by the growing ubiquity of U.S. military intervention around the world. Moderated by Mark Mazzetti of the New York Times, and featuring Bronwyn Bruton of the Atlantic Council, Charles Schmitz of Towson University and Moeed Yusuf of the United States Institute of Peace, the event will seek to explore three key ‘invisible wars’ - Yemen, Pakistan, and Somalia – and the broader questions they raise. What is the nature and scope of America’s involvement in such conflicts? Does lack of public awareness impact U.S. national security debates? And does U.S. involvement actually serve U.S. interests?

The event will be held on January 25th at 11am. You can register here.

Parker and Ollier (2015) set the tone for their new paper on sea level change along the coastline of India in the very first sentence of their abstract: “global mean sea level (GMSL) changes derived from modelling do not match actual measurements of sea level and should not be trusted” (emphasis added). In contrast, it is their position that “much more reliable information” can be obtained from analyses of individual tide gauges of sufficient quality and length. Thus, they set out to obtain such “reliable information” for the coast of India, a neglected region in many sea level studies, due in large measure to its lack of stations with continuous data of sufficient quality.

A total of eleven stations were selected by Parker and Ollier for their analysis, eight of which are archived in the PSMSL database (PSMSL, 2014) and ten in a NOAA sea level database (NOAA, 2012). The average record length of the eight PSMSL stations was 54 years, quite similar to the average record length of 53 years for the eleven NOAA stations.

Results indicated an average relative rate of sea level rise of 1.07 mm/year for all eleven Indian stations, with an average record length of 51 years. However, the two Australian researchers report this value is likely “overrated because of the short record length and the multi-decadal and interannual oscillations” of several of the stations comprising their Indian database. Indeed, as they further report, “the phase of the 60-year oscillation found in the tide gauge records is such that sea level in the North Atlantic, western North Pacific, Indian Ocean and western South Pacific has been increasing since 1985-1990,” an increase that most certainly skews the rate trends of the shorter records, over the most recent period of record, above the actual long-term rate of rise.

One additional important finding of the study was gleaned from the longer records in the database, which revealed that rates of sea level rise along the Indian coastline have been “decreasing since 1955.” This observed deceleration stands in direct opposition to model-based claims that sea level rise should be accelerating in recent decades in response to CO2-induced global warming.

In comparing their findings to those reported elsewhere, Parker and Ollier note there is a striking similarity between the trends they found for the Indian coastline and for other tide gauge stations across the globe. Specifically, they cite Parker (2014), who calculated a 1.04 ± 0.45 mm/year average relative rate of sea level rise from 560 tide gauges comprising the PSMSL global database. And when that database is restricted in analysis to the 170 tide gauges with a length of more than 60 years at the present time, the average relative rate of rise declines to a paltry 0.25 ± 0.19 mm/year, without any sign of positive or negative acceleration.
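
The calculation behind such averages is straightforward to sketch; the gauge trends below are invented for illustration and are not the PSMSL or NOAA station values:

```python
# Sketch of averaging per-gauge relative sea-level trends.
# The trends below are made up for illustration; they are not the PSMSL data.
import statistics

# Hypothetical linear trends (mm/year) fitted to individual long tide-gauge records.
gauge_trends_mm_per_yr = [0.4, 1.2, 0.9, 1.6, 0.7, 1.5, 1.1, 1.3]

mean_trend = statistics.mean(gauge_trends_mm_per_yr)
stdev_trend = statistics.stdev(gauge_trends_mm_per_yr)
print(f"average relative rise: {mean_trend:.2f} +/- {stdev_trend:.2f} mm/yr")
```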

The significance of Parker and Ollier’s work is noted in the “sharp contrast” they provide when comparing the rates of sea level rise computed from tide gauge data with model-based sea level reconstructions produced from satellites, such as the 3.2 mm/year value observed by the CU Sea Level Research Group (2014), which Parker and Ollier emphatically claim “cannot be trusted because it is so far from observed data.” Furthermore, it is clear from the observational tide gauge data that there is nothing unusual, unnatural, or unprecedented about current rates of sea level rise, with the exception that they appear to be decelerating, as opposed to accelerating, despite a period of modern warmth that climate alarmists contend is unequaled over the past millennium, and which should be melting away the polar ice caps and rapidly raising sea levels.

 

References

CU Sea Level Research Group. 2014. Global Mean Sea Level. sealevel.colorado.edu (retrieved May 30, 2014).

National Oceanic and Atmospheric Administration (NOAA). 2012. MSL global trend table, tidesandcurrents.noaa.gov/sltrends/MSL_global_trendtable.html (retrieved May 30, 2014).

Parker, A. 2014. Accuracy and reliability issues in computing absolute sea level rises. Submitted paper.

Parker, A. and Ollier, C.D. 2015. Sea level rise for India since the start of tide gauge records. Arabian Journal of Geosciences 8: 6483-6495.

Permanent Service for Mean Sea Level (PSMSL). 2013. Data, www.psmsl.org (retrieved October 1, 2013).

Leaders of the worldwide Anglican church are meeting at Canterbury Cathedral this week, with some observers predicting an open schism over homosexuality. There is fear that archbishops from six African countries – Uganda, Kenya, Nigeria, South Sudan, Rwanda and the Democratic Republic of the Congo – may walk out if the archbishop of Canterbury, the symbolic head of the worldwide Anglican Communion, won’t sanction the U.S. Episcopal Church for consecrating gay bishops. Since about 60 percent of the world’s Anglicans are in Africa, that would be a major break.

I am neither an Anglican nor a theologian, but I did reflect on the non-religious values that shape some of these disputes in the Guardian a few years ago:

The Anglican Archbishop of South Africa, Njongonkulu Ndungane, says his church should abandon its “practices of discrimination” and accept the gay Episcopal bishop V. Gene Robinson of New Hampshire. That makes him unusual in Africa, where other Anglican bishops have strongly objected to the ordination of practicing homosexuals.

The Nigerian primate, for instance, Archbishop Peter Akinola, condemned the consecration of Robinson as bishop, calling it a “satanic attack on the church of God.” According to the San Francisco Chronicle, “He even issued a statement on behalf of the ‘Primates of the Global South’ - a group of 20 Anglican primates from Africa, the West Indies, South America, India, Pakistan, and Southeast Asia - deploring the action and, along with Uganda and Kenya, formally severed relations with Robinson’s New Hampshire diocese.”

So what makes Ndungane different? He’s the successor to Nobel laureate Desmond Tutu, one might recall. And they both grew up in South Africa, where enlightenment values always had a foothold, even during the era of apartheid. Ndungane studied at the liberal English-speaking University of Cape Town, where Sen. Robert F. Kennedy gave a famous speech in 1966.

Ndungane didn’t hear that speech, alas, because he was then imprisoned on Robben Island. But after he was released he decided to enter the church and took two degrees at King’s College, London. The arguments of the struggle against apartheid came from western liberalism - the dignity of the individual, equal and inalienable rights, political liberty, moral autonomy, the rule of law, the pursuit of happiness.

So it’s no surprise that a man steeped in that struggle and educated in the historic home of those ideas would see how they apply in a new struggle, the struggle of gay people for equal rights, dignity, and the pursuit of happiness as they choose.

The South African Anglicans remain in favor of gay marriage. And of course, such church schisms are not new. The Baptist, Methodist, and Presbyterian churches in the United States split over slavery. The Methodists and Presbyterians reunited a century later, but the Baptists remain separate bodies.

In 2009, Duracell, a subsidiary of Procter & Gamble, began selling “Duracell Ultra” batteries, marketing them as their longest-lasting variety. A class action was filed in 2012, arguing that the “longest-lasting” claim was fraudulent. The case was removed to federal court, where the parties reached a global settlement purporting to represent 7.26 million class members.

Attorneys for the class are to receive an award of $5.68 million, based on what the district court deemed to be an “illusory” valuation of the settlement at $50 million. In reality, the class received $344,850. Additionally, defendants agreed to make a donation of $6 million worth of batteries over the course of five years to various charities.

This redistribution of settlement money from the victims to other uses is referred to as cy pres. “Cy pres” means “as near as possible,” and courts have typically used the cy pres doctrine to reform the terms of a charitable trust when the stated objective of the trust is impractical or unworkable. The use of cy pres in class action settlements—particularly those that enable the defendant to control the funds—is an emerging trend that violates the due process and free speech rights of class members.

Accordingly, class members objected to the settlement, arguing that the district court abused its discretion in approving the agreement and failed to engage in the required rigorous analysis to determine whether the settlement was “fair, reasonable, and adequate.” The U.S. Court of Appeals for the Eleventh Circuit affirmed the settlement, however, noting the lack of “precedent prohibiting this type of cy pres award.”

Now an objecting class member has asked the Supreme Court to review the case, and Cato filed an amicus brief arguing that the use of cy pres awards in this manner violates the Fifth Amendment’s Due Process Clause and the First Amendment’s Free Speech Clause.

Specifically, due process requires—at a minimum—an opportunity for an absent plaintiff to remove himself, or “opt out,” from the class. Class members have little incentive or opportunity to learn of the existence of a class action in which they may have a legal interest, while class counsel is able to make settlement agreements that are unencumbered by an informed and participating class.

In addition, when a court approves a cy pres award as part of a class action settlement, it forces class members to endorse certain ideas, which constitutes a speech compulsion. When defendants receive money—essentially from themselves—to donate to a charity, the victim class members surrender the value of their legal claims. Class members are left uncompensated, while defendants are shielded from any future claims of liability and even look better than they did before the lawsuit given their display of “corporate social responsibility.”

The Supreme Court will consider whether to take up Frank v. Poertner later this winter.

If federal statutory law expressly commands that all covered federal employees shall be “free from any discrimination based on … race,” does that forbid the federal government from adopting race-based affirmative action plans? That is one of the important—and seemingly obvious—questions posed by Shea v. Kerry, a case brought by our friends at the Pacific Legal Foundation. William Shea is a white State Department Foreign Service Officer. In 1990, he applied for his Foreign Service Officer position and began working in 1992 at a junior-level post. At the time, the State Department operated a voluntary affirmative action plan (read: “voluntary” as in mandated by Congress) whereby minorities were able to bypass the junior levels and enter the mid-level service. The State Department attempted to justify its racial plan by noting that there were statistical imbalances at the senior Foreign Service levels, even though the path to the senior levels is unrelated to service at the lower levels.

In 2001, Shea filed an administrative complaint against the State Department for its disparate treatment of white applicants under its 1990-92 hiring plan, complaining that he did not enter at as high a grade as he otherwise might have and that the discrimination cost him in both advancement opportunities and earnings. After exhausting administrative remedies, Shea took his complaint to the courts, resulting in this case. The Cato Institute has joined with the Southeastern Legal Foundation, the Center for Equal Opportunity, and the National Association of Scholars to file an amici curiae brief calling for the Supreme Court to take up the case and reverse the federal appellate court below.

In fairness to the court below, Title VII jurisprudence, as it stands, is both unclear and unworkable. The text of Title VII expressly prohibits discrimination on the basis of race—what’s called “disparate treatment.” Indeed, in the specific provisions on federal hiring, Title VII employs very expansive language to ensure that disparate treatment is not permitted. But such a “literal construction” of the Title VII statute was eschewed by Justice William Brennan in 1979, writing for the Court in United Steelworkers v. Weber. Relying on cherry-picked statutory history, Brennan found that Title VII’s plain text did not prohibit collectively bargained, voluntary affirmative action programs that attempt to remedy disparate impact—statistical imbalances in the racial composition of employment groups—even if such plans used quota systems. Later, in Johnson v. Transportation Agency, Santa Clara County, Cal. (1987), the Court exacerbated the issue by extending the Weber rule from purely private hiring to municipal hiring. In Shea, the U.S. Court of Appeals for the D.C. Circuit extended the rule from Johnson and Weber to federal hiring, not just municipal and private employment.

To make matters more confusing, in Ricci v. DeStefano (2009) (in which Cato also filed a brief), the Supreme Court held that, in order for remedial disparate treatment to be permissible under Title VII, an employer must show a strong basis in evidence that it would face disparate-impact liability if it didn’t make the discriminatory employment decision. The Ricci Court held that the new strong-basis-in-evidence standard is “a matter of statutory construction to resolve any conflict between the disparate-treatment and disparate-impact provisions of Title VII.” While this holding implicitly jettisons the rubric created by Johnson and Weber, the Court did not expressly overrule those cases, leaving lower courts in disarray as to whether to apply Johnson and Weber or Ricci. Indeed, this conflict between precedents was noted by both the federal trial court and the federal appellate court below.

As amici point out in our brief, the outcome of Shea’s case hangs on the applicable standard of review. In Ricci, the Court noted that, standing alone, “a threshold showing of a significant statistical disparity” is “far from a strong basis in evidence.” Yet, as the federal trial court noted, “[the] State [Department] does rely solely on a statistical imbalance in the mid- and senior-levels” in order to justify its affirmative action plan. Had the lower courts applied Ricci, which superseded Johnson and Weber, Shea would have prevailed on his claims. The Court should take up this case and clarify the jurisprudence applicable in Title VII cases in light of Ricci.

Lost in all the hoopla over “y’all queda” and “VanillaISIS” is any basic history of how public rangelands in the West–and in eastern Oregon in particular–got to this point. I’ve seen no mention in the press of two laws that are probably more responsible than anything else for the alienation and animosity the Hammonds felt towards the government.

The first law, the Public Rangelands Improvement Act of 1978, set a formula for calculating grazing fees based on beef prices and rancher costs. When the law was written, most analysts assumed per capita beef consumption would continue to grow as it had the previous several decades. In fact, it declined from 90 pounds to 50 pounds per year. The formula quickly drove down fees to the minimum of $1.35 per cow-month, even as inflation increased the costs to the government of managing the range. 

The 1978 law also allowed the Forest Service and Bureau of Land Management (BLM) to keep half of grazing fees for range improvements. Initially, this fund motivated the agencies to promote rancher interests. But as inflation ate away the value of the fee, agency managers began to view ranchers as freeloaders. Today, the fee contributes well under 1 percent of agency budgets and less than 10 percent of range management costs. Livestock grazing was once a profitable use of federal range lands but now costs taxpayers nearly $10 for every dollar collected in fees.

Ranching advocates argue that the grazing fee is set correctly because it costs more to graze livestock on federal land than on state or private land. But the BLM and Forest Service represent the sellers, not the buyers, and the price they set should reflect the amount that a seller is willing to accept. Except in cases of charity, no seller would permanently accept less than cost, and costs currently average about $10 per animal unit month.

The second law, the Steens Mountain Cooperative Management and Protection Act of 2000, was an environmentalist effort that affected the lands around the Hammonds ranch. The “protection” portion of the law put 176,000 acres of land into wilderness, which ordinarily does not shut down grazing operations. The “cooperative” portion offered local ranchers other grazing lands if they would voluntarily give up their leases in the wilderness. Nearly 100,000 acres were closed to domestic livestock when every ranch family accepted the offer except one: the Hammonds.

The Hammonds came away from the bargaining table convinced the government was trying to take their land. The government came away convinced the Hammonds were unreasonable zealots. The minuscule effect of grazing fees on their budgets also probably led local officials to think domestic livestock were more of a headache than a contributor to the local or national economy. Dwight and Steven Hammond’s convictions for arson were due more to this deterioration in their relations with the BLM than to any specific action the Hammonds took.

It doesn’t take much scrutiny to see that domestic grazing is not a viable use of most federal lands. The Forest Service and BLM manage close to 240 million acres of rangelands that produce roughly 12 million cattle-years of feed. While the best pasturelands can support one cow per acre, federal lands require 200 acres for the same animal. This 240 million acres is nearly 10 percent of the nation’s land, yet it produces only about 2 percent of livestock feed in this country, while less than four percent of cattle or sheep ever step on federal lands.

The problem is that decisions are made through a political tug-of-war rather than through the market. One year, Congress passes a law favorable to ranchers; another year, it passes a law favorable to environmentalists. The result is polarization and strife.

If these lands were managed in response to market forces, the agencies would probably shed much of their bureaucratic bloat, but the costs of providing feed for domestic livestock would still be several times the current grazing fee. Ranchers unwilling to pay that higher price would find alternate sources of feed. If some ranchers continued to graze their cattle and sheep on public land and environmentalists didn’t like it, they could outbid the ranchers or pay them to manage their livestock in ways that reduced environmental impacts. 

The current system is far from a free market, and free-market advocates should not defend it. A true market system would reduce polarization, lead to better outcomes for both consumers and the environment, and would not have resulted in ranchers being sent to prison for accidentally burning a few acres of land.

It is an appalling story: A thoughtful academic uses his training and profession’s tools to analyze a major, highly controversial public issue. He reaches an important conclusion sharply at odds with the populist, “politically correct” view. Dutifully, the professor reports his findings to other academics, policymakers, and the public. But instead of being applauded for his insights and the quality of his work, he is vilified by his peers, condemned by national politicians, and trashed by the press. As a result, he is forced to resign his professorship and abandon his academic career.

Is this the latest development in today’s oppressive P.C. wars? The latest clash between “science” and powerful special interests?

Nope, it’s the story of Hugo Meyer, a University of Chicago economics professor in the early 1900s. His sad tale is told by University of Pisa economist Nicola Giocoli in the latest issue of Cato’s magazine, Regulation. Meyer is largely forgotten today, but his name and story should be known and respected by free-marketers and anyone who cherishes academic freedom and intellectual integrity.

Here’s a brief summary: At the turn of the 20th century, the U.S. economy was dramatically changing as a result of a new technology: nationwide railroading. Though small railroads had existed in America for much of the previous century, westward expansion and the rebuilding of southern U.S. railways after the Civil War resulted in the standardization, interconnection, and expansion of the nation’s rail network.

As a result, railroading firms would compete with each other vigorously over price for long-distance hauling because their networks provided different routes to move goods efficiently between major population centers. However, price competition for short-hauls over the same rail lines between smaller towns wasn’t nearly as vigorous, as it was unlikely that two different railroads, with different routes, would efficiently serve the same two locales. The result was that short-distance hauls could be nearly as expensive as long-distance hauls, which greatly upset many people, including powerful politicians and other societal leaders.

Meyer examined those phenomena carefully, ultimately determining that there was nothing amiss in the high prices for short hauls.

Railroads bear two types of costs, he explained: operating costs (e.g., paying the engineer and fireman, buying the coal and water, etc.) and fixed costs (e.g., the cost of laying and maintaining the rails, rolling stock, and fixed assets). Because of the heavy competition on long hauls, those freight prices mainly covered just the routes’ operating costs, while less competitive short-haul prices covered both those routes’ operating costs and most (if not all) fixed costs.
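
A stylized numerical example may help make that cost logic concrete; every figure below is invented purely for illustration:

```python
# Stylized, entirely hypothetical illustration of Meyer's pricing argument:
# competitive long hauls are priced near operating cost, while short hauls
# must also recover the railroad's fixed costs.

fixed_costs = 1_000_000        # annual cost of track, rolling stock, etc.
operating_cost_per_ton = 2.0   # incremental cost of hauling one ton

long_haul_tons = 400_000       # heavy, competitive traffic
short_haul_tons = 100_000      # thin, largely captive traffic

long_haul_price = operating_cost_per_ton                                   # ~operating cost only
short_haul_price = operating_cost_per_ton + fixed_costs / short_haul_tons  # operating + fixed costs

revenue = long_haul_price * long_haul_tons + short_haul_price * short_haul_tons
total_cost = fixed_costs + operating_cost_per_ton * (long_haul_tons + short_haul_tons)
print(short_haul_price, revenue >= total_cost)  # 12.0 True: the road breaks even
```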

This wasn’t bad for short-haul customers, Meyer reasoned, because if it weren’t for the long-haul revenues, railroads would provide less (and perhaps no) service to the short-haul towns. Thus, though the short-haul towns were not happy with their freight prices, they were nonetheless better-off because of this arrangement.

Meyer’s reasoning would today be associated with the field of Law & Economics, which uses economic thinking to analyze laws and regulations. Today, this type of analysis is highly respected by economists, policymakers, and U.S. courts, and is heavily linked to the University of Chicago (though it has roots in other places as well). But it hadn’t emerged as a discipline in Meyer’s era; sadly, he was a man too far ahead of his time.

And, for that, he paid a steep price. As Giocoli describes, when Meyer presented his analysis at a 1905 American Economic Association conference, he was set upon by other economists; indeed, the AEA had shamefully engineered his paper session as a trap. When he testified on his work to policymakers in Washington, he was publicly accused of corruption by a powerful bureaucrat, Interstate Commerce Commissioner Judson Clements, and a U.S. senator, Jonathan P. Dolliver. And when a monograph of his work appeared the following year, he was dismissed by the Boston Evening Transcript (then a prominent daily newspaper) as “partisan and untrustworthy.”

Meyer was disgraced. He resigned his position at Chicago and moved to Australia, where he continued his research on railroad economics, but he never worked for a university again.

Back at Chicago, his former department head, James Laurence Laughlin, took to the pages of the Journal of Political Economy to lament what had befallen his colleague:

In some academic circles the necessity of appearing on good terms with the masses goes so far that only the mass-point-of-view is given recognition; and the presentation of the truth, if it happens to traverse the popular case, is regarded as something akin to consternation. … It is not amiss to demand that measure of academic freedom that will permit a fair discussion of the rights of those who do not have the popular acclaim. It is going too far when a carefully reasoned argument which happens to support the contentions of the railways is treated as if necessarily the outcome of bribery by the money kings.

There is a happy ending for Meyer’s analysis. Academic research in the latter half of the century buttressed his view that market competition was enough to produce fairly honest, publicly beneficial railroad pricing, and that government intervention harmed public welfare. The empirical evidence marshaled by that research was so compelling that Congress deregulated the railroads and even abolished the Interstate Commerce Commission. Apparently, not all federal agencies have eternal life.

But though his analysis has triumphed, Meyer’s name and story have largely been forgotten. They shouldn’t be. Hopefully, Giocoli’s article will give Meyer the remembrance he deserves.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

What’s lost in a lot of the discussion about human-caused climate change is not that the sum of human activities is leading to some warming of the earth’s temperature, but that the observed rate of warming (both at the earth’s surface and throughout the lower atmosphere) is considerably less than has been anticipated by the collection of climate models upon whose projections climate alarm (i.e., justification for strict restrictions on the use of fossil fuels) is built.

We highlight in this issue of You Ought to Have a Look a couple of articles that address this issue that we think are worth checking out.

First is this post from Steve McIntyre over at Climate Audit that we managed to dig out from among all the “record temperatures of 2015” stories. In his analysis, McIntyre places the 2015 global temperature anomaly not in real world context, but in the context of the world of climate models.

Climate model-world is important because it is in that realm where climate change catastrophes play out, and that influences the actions of real-world people to try to keep them contained in model-world.

So how did the observed 2015 temperatures compare to model world expectations? Not so well.

In a series of tweets over the holidays, we pointed out that the El Niño-fueled, record-busting high temperatures of 2015 barely reached the temperatures of an average year expected by the climate models.

In his post, unconstrained by Twitter’s 140-character limit, McIntyre takes a bit more verbose and detailed look at the situation, and includes additional examinations of the satellite record of temperatures in the lower atmosphere as well as a comparison of observed trends and model expected trends in both the surface and lower atmospheric temperatures histories since 1979.

The latter comparison for global average surface temperatures looks like this, with the observed trend (the red dashed line) falling near or below the fifth percentile of the expected trends (the lower whisker) from a host of climate models:

 McIntyre writes:

All of the individual models have trends well above observations…   There are now over 440 months of data and these discrepancies will not vanish with a few months of El Nino.
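
As a rough illustration of the kind of trend comparison involved, here is a minimal sketch using synthetic temperature series; nothing below comes from McIntyre’s analysis or from actual climate-model output:

```python
# Rough sketch of comparing an observed warming trend with a distribution of
# modeled trends. All series here are synthetic; they are not McIntyre's data
# or real climate-model runs.
import numpy as np

rng = np.random.default_rng(0)
months = 444  # roughly the "over 440 months" of data since 1979

def trend_per_decade(series):
    """Least-squares slope, converted from per-month to per-decade."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0] * 120

# Synthetic "observed" series warming about 0.11 C/decade, plus noise.
observed = (0.11 / 120) * np.arange(months) + rng.normal(0, 0.1, months)

# Synthetic "model" series, each warming roughly 0.2-0.3 C/decade.
model_trends = [trend_per_decade((rate / 120) * np.arange(months) + rng.normal(0, 0.1, months))
                for rate in rng.uniform(0.2, 0.3, 40)]

obs_trend = trend_per_decade(observed)
fifth_percentile = np.percentile(model_trends, 5)
print(f"observed: {obs_trend:.2f} C/decade; model 5th percentile: {fifth_percentile:.2f} C/decade")
```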

Be sure to check out the whole article here. We’re pretty sure you won’t read about any of this in the mainstream media.

Next up is an analysis by independent climate researcher Nic Lewis, whose specialty these days is developing estimates of the earth’s climate sensitivity (how much the earth’s average surface temperature is expected to rise under a doubling of the atmospheric concentration of carbon dioxide) based upon observations of the earth’s temperature evolution over the past 100-150 years. Lewis’s general findings are that the climate sensitivity of the real world is quite a bit less than it is in model world (a reason that could explain much of what McIntyre reported above).

The current focus of Lewis’s attention is a recently published paper by a collection of NASA scientists, led by Kate Marvel, that concluded that observation-based estimates of the earth’s climate sensitivity, such as those performed by Lewis, greatly underestimate the actual sensitivity. After accounting for the reasons why, Marvel and her colleagues conclude that climate models are, contrary to the assertions of Lewis and others, accurately portraying how sensitive the earth’s climate is to changing greenhouse gas concentrations (see here for details). It would thus follow, in their view, that these models serve as reliable indicators of the future evolution of the earth’s temperature.

As you may imagine, Lewis isn’t so quick to embrace this conclusion. He explains his reasons why in great detail in his lengthy (technical) article posted at Climate Audit, and provides a more easily digestible version over at Climate Etc.

After detailing a fairly long list of inconsistencies contained not only internally within the Marvel et al. study itself, but also between the Marvel study and other papers in the scientific literature, Lewis concludes:

The methodological deficiencies in and multiple errors made by Marvel et al., the disagreements of some of its forcing estimates with those given elsewhere for the same model, and the conflicts between the Marvel et al. findings and those by others – most notably by James Hansen using the previous GISS model, mean that its conclusions have no credibility.

Basically, Lewis suggests that Marvel et al.’s findings are based upon a single climate model (out of several dozen in existence) and seem to arise from improper application of analytical methodologies within that single model.

Certainly, the Marvel et al. study introduced some interesting avenues for further examination. But, despite how they’ve been touted–as freshly-paved highways to the definitive conclusion that climate models are working better than real-world observations seem to indicate–they are muddy, pot-holed backroads leading nowhere in particular.

Finally, we want to draw your attention to an online review of a paper recently published in the scientific literature which sought to dismiss the recent global warming “hiatus” as nothing but the result of a poor statistical analysis. The paper, “Debunking the climate hiatus,” was written by a group of Stanford University researchers led by Bala Rajaratnam and published in the journal Climatic Change last September.

The critique of the Rajaratnam paper was posted by Radford Neal, a statistics professor at the University of Toronto, on his personal blog that he dedicates to (big surprise) statistics and how they are applied in scientific studies.

In his lengthy, technically detailed critique, Neal pulls no punches:

The [Rajaratnam et al.] paper was touted in popular accounts as showing that the whole hiatus thing was mistaken — for instance, by Stanford University itself.

You might therefore be surprised that, as I will discuss below, this paper is completely wrong. Nothing in it is correct. It fails in every imaginable respect.

After tearing through the numerous methodological deficiencies and misapplied statistics contained in the paper, Neal is left shaking his head at the peer-review process that allowed it to be published in the first place, and offers this warning:

Those familiar with the scientific literature will realize that completely wrong papers are published regularly, even in peer-reviewed journals, and even when (as for this paper) many of the flaws ought to have been obvious to the reviewers.  So perhaps there’s nothing too notable about the publication of this paper.  On the other hand, one may wonder whether the stringency of the review process was affected by how congenial the paper’s conclusions were to the editor and reviewers.  One may also wonder whether a paper reaching the opposite conclusion would have been touted as a great achievement by Stanford University. Certainly this paper should be seen as a reminder that the reverence for “peer-reviewed scientific studies” sometimes seen in popular expositions is unfounded.

Well said.

Since it took effect in September 2010, the Affordable Care Act’s dependent coverage provision has mandated that insurers allow children to remain on their parents’ plans until age twenty-six. According to some polls, it is the single most popular provision in the law, enjoying high levels of support from Independents, Democrats, and Republicans alike. This is a marked contrast to the law as a whole, which remains unpopular to this day. The Obama administration has pointed to increasing insurance coverage among young adults as a sign of the provision’s success. A new working paper might dampen some of this fanfare, as the researchers find evidence that the dependent coverage provision led to wage reductions of roughly $1,200 a year for affected workers. Policies and choices have trade-offs, and this mandate is no exception.

Figure: Popularity of Select Provisions of the Affordable Care Act. Source: Kaiser Health Tracking Poll, March 2014.

Prior to the ACA, some states had passed some form of extended dependent coverage mandate, while others had not. The researchers were able to exploit this variation, using the states with prior mandates as a control group. They estimate that the dependent coverage provision reduced wages by roughly $1,200 per year for workers 26 and older. As might be expected, firms that offer health insurance are more affected than those that do not. Somewhat more surprisingly, they find that workers with eligible children are not the only ones affected: there is some degree of pooling, and childless workers see wage reductions as well. These findings also imply at least a moderate level of crowd-out, meaning that some proportion of the young adults gaining coverage shifted from other forms of coverage. The authors also failed to find any significant change in labor supply resulting from the reductions in wages, but suggest that one possible explanation may be the timing of the provision’s implementation, which coincided with the weak labor market of late 2010. Parents might value the extended insurance coverage the provision allows, and some young adults who gained coverage might be better off, but these changes come at a cost in the form of reduced wages.
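For readers curious how an estimate like that is identified, here is a minimal sketch of the difference-in-differences logic the paragraph describes: compare wage changes in states that had no prior mandate (newly treated by the federal provision) with wage changes in states that already had one (the control group). The data, variable names, and specification below are hypothetical illustrations, not the working paper’s actual analysis.

    # A minimal difference-in-differences sketch with synthetic data (hypothetical
    # numbers and variable names; not the working paper's actual data or specification).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 4000
    no_prior_mandate = rng.integers(0, 2, n)   # 1 = worker's state had no pre-ACA mandate
    post = rng.integers(0, 2, n)               # 1 = observation after September 2010
    # Build wages with a stipulated -$1,200 effect on newly treated workers post-2010.
    wage = (45000 + 2000 * no_prior_mandate - 500 * post
            - 1200 * no_prior_mandate * post + rng.normal(0, 3000, n))

    df = pd.DataFrame({"wage": wage, "no_prior_mandate": no_prior_mandate, "post": post})

    # The coefficient on the interaction term is the difference-in-differences
    # estimate of the provision's wage effect (about -1,200 by construction here).
    model = smf.ols("wage ~ no_prior_mandate * post", data=df).fit()
    print(model.params["no_prior_mandate:post"])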

The dependent coverage provision is one of the most broadly popular aspects of the health reform, and it does seem to have increased insurance coverage among the target population to some extent. These gains come with trade-offs, however, and this evidence indicates that workers at affected firms saw wage reductions of about $1,200 per year. Favorable reports citing increased health insurance coverage should take this new evidence into account. Once the costs are fully understood, the provision might not be quite as popular. 

Cato Institute Scholars Respond to Obama’s Final State of the Union Address

Cato Institute scholars Emma Ashford‎, Trevor Burrus‎, Benjamin Friedman‎, Dan Ikenson,‎ Neal McCluskey‎, Pat Michaels‎, Aaron Powell‎, and Julian Sanchez‎ respond to President Obama’s final State of the Union Address.

Video produced by Caleb O. Brown, Tess Terrible and Cory Cooper.

Presidential candidate Hillary Clinton is proposing to increase taxes on high earners with a four percent surtax on people making more than $5 million.

Imposing such a tax hike would:

  • Encourage more tax avoidance and evasion, the sort of behaviors that Clinton and other politicians bemoan.
  • Increase the deadweight losses, or economic damage, of taxation. This damage rises with the square of the tax rate, so, for example, a 40 percent rate causes four times the damage of a 20 percent rate (see the sketch after this list). This feature of our tax system means that raising the top rate is the worst way to raise revenue, even if raising more revenue made sense, which it does not.
  • Discourage the work and investment efforts of the most productive people in the economy. Entrepreneurs, executives, angel investors, venture capitalists, and other high earners add enormous value to the economy. Politicians should focus on removing barriers to their efforts, rather than penalizing them. Besides, the income of high earners is more flexible and mobile than the income of other people, so raising their taxes causes the strongest behavioral responses and largest deadweight losses.
  • Raise taxes on the people already paying the highest rates. Average tax rates rise rapidly as income rises. In 2015 those earning more than $1 million paid an average federal tax rate (including income, payroll, and excise taxes) of 33 percent. That is twice the rate of people with middling incomes, and many times the rate of people at the bottom.
  • Push the top U.S. income tax rate substantially higher than the rates of our trading partners in the OECD (Table 1.7). The combined top U.S. federal and average state tax rate is already 46 percent, compared with the OECD average of 43 percent.
  • Move even further away from the fair and efficient ideal of a proportional, or flat, tax system. The United States already has the most graduated, or progressive, tax system in the OECD.
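Here, as noted in the list above, is a minimal sketch of the square rule for deadweight loss, using the textbook Harberger-style approximation; the elasticity and tax-base figures are illustrative assumptions, not numbers from the text.

    # Illustrative only: Harberger-style approximation, DWL ~ 0.5 * elasticity * rate^2 * base.
    # The elasticity (0.4) and taxable base ($100 billion) are made-up numbers for the sketch.

    def deadweight_loss(rate, elasticity=0.4, base=100e9):
        """Approximate deadweight loss of a tax at the given rate."""
        return 0.5 * elasticity * rate ** 2 * base

    for rate in (0.20, 0.40):
        print(f"rate {rate:.0%}: deadweight loss of roughly ${deadweight_loss(rate) / 1e9:.1f} billion")

    # Doubling the rate from 20 to 40 percent quadruples the loss (0.40**2 / 0.20**2 == 4),
    # which is the point made in the list above.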

Perhaps the biggest problem with Clinton’s plan is that the federal government already taxes and spends too much. The American economy and average citizens would be better off if the size and scope of the government were reduced. Clinton’s tax increase would not solve any problems, but rather would add fuel to the fire of rampant bureaucratic failure in Washington.

Last week, the World Bank updated its commodity database, which tracks the price of commodities going back to 1960. Over the last 55 years, the world’s population has increased by 143 percent. Over the same time period, real average annual per capita income in the world rose by 163 percent. What happened to the price of commodities?

Out of the 15 indexes measured by the World Bank, 10 fell below their 1960 levels. The indexes that experienced absolute decline included the entire non-energy commodity group (-20 percent), agricultural index (-26 percent), beverages (-32 percent), food (-22 percent), oils and meals (-32 percent), grains or cereals (-32 percent), raw materials (-32 percent), “other” raw materials (-56 percent), metals and minerals (-4 percent) and base metals (-3 percent).

Five indexes rose in price between 1960 and 2015.  However, only two indexes, energy and precious metals, increased more than income, appreciating 451 percent and 402 percent respectively. Three indexes increased less than income. They included “other” food (7 percent), timber (7 percent) and fertilizers (38 percent).

Taken together, commodities rose by 43 percent. If energy and precious metals are excluded, they declined by 16 percent. Assuming that an average inhabitant of the world spent exactly the same fraction of her income on the World Bank’s list of commodities in 1960 and in 2015, she would be better off under either scenario, since her income rose by 163 percent over the same time period.
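As a rough check on that claim, here is a back-of-the-envelope sketch using only the percentage changes quoted above (not the underlying World Bank series); “affordability” here is simply the income index divided by the price index, with both set to 1 in 1960.

    # Back-of-the-envelope affordability check using only the percentage changes quoted above.
    income_change = 1.63   # real average per capita income rose 163 percent, 1960-2015

    scenarios = {
        "all commodities (+43%)": 0.43,
        "commodities excluding energy and precious metals (-16%)": -0.16,
    }

    for name, price_change in scenarios.items():
        affordability = (1 + income_change) / (1 + price_change)
        print(f"{name}: income buys {affordability:.2f} times as much as in 1960")

    # Both ratios exceed 1, so the average person could afford more of the basket
    # in 2015 than in 1960, which is the point made in the paragraph above.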

This course of events was predicted by the contrarian economist Julian Simon some 35 years ago. In The Ultimate Resource, Simon noted that humans are intelligent animals, who innovate their way out of scarcity. In some cases, we have become more parsimonious in using natural resources. An aluminum can, for example, weighed about 3 ounces in 1959. Today, it weighs less than half an ounce. In other cases, we have replaced scarce resources with others. Instead of killing whales for lamp oil, for instance, we burn coal, oil and gas.

I will have a paper on this subject soon. In the meantime, please visit www.humanprogress.org.

(P.S.: This post appeared originally here.)

1. Cheaper oil lowers the cost of transporting people and products (including exports), and also the cost of producing energy-intensive goods and services.

For the seventh week in a row, the average price of a gallon of diesel declines, to $2.235 https://t.co/X3Zaq52YSO pic.twitter.com/vYMStMZ2mj

— St. Louis Fed (@stlouisfed) January 2, 2016  

2. Every upward spike in oil prices has been followed by recession, while sustained periods of low oil prices have been associated with relatively brisk growth of the U.S. economy (real GDP).

   

3. Far from being a grave danger (as news reports have frequently speculated), lower inflation since 2013 has significantly increased real wages and real consumer spending.

 

4. Cheaper energy helps explain why the domestic U.S. economy (less trade & inventories) has lately been growing faster than 3% despite the unsettling Obama tax shock of 2013.

   

Why do I keep harping on interest on reserves? Because, IMHO, the Fed’s decision to start paying interest on reserves contributed at least as much as the failure of Lehman Brothers or any previous event did to the liquidity crunch of 2008:Q4, which led to a deepening of the recession that had begun in December 2007.

That the liquidity crunch marked a turning point in the crisis is itself generally accepted. Bernanke himself (The Courage to Act, pp. 399ff.) thinks so, comparing the crunch to the monetary collapse of the early 1930s, while stating that the chief difference between them is that the more recent one involved, not a withdrawal of retail funding by panicking depositors, but the “freezing up” of short-term, wholesale bank funding. Between late 2006 and late 2008, Bernanke observes, such funding fell from $5.6 trillion to $4.5 trillion (p. 403). That banks altogether ceased lending to one another was, he notes, especially significant (p. 405). The decline in lending on the federal funds market alone accounted for about one-eighth of the overall decline in wholesale funding.

For Bernanke, the collapse of interbank lending was proof of a general loss of confidence in the banking system following Lehman Brothers’ failure. That same loss of confidence was still more apparent in the pronounced post-Lehman increase in the TED spread:

The skyrocketing cost of unsecured bank-to-bank loans mirrored the course of the crisis. Usually, a bank borrowing from another bank will pay only a little more (between a fifth and a half of a percentage point) than the U.S. government, the safest of all borrowers, has to pay on short-term Treasury securities. The spread between the interest rate on short-term bank-to-bank lending and the interest rate on comparable Treasury securities (known as the TED spread) remained in the normal range until the summer of 2007, showing that general confidence in banks remained strong despite the bad news about subprime mortgages. However, the spread jumped to nearly 2-1/2 percentage points in mid-August 2007 as the first signs of panic roiled financial markets. It soared again in March (corresponding to the Bear Stearns rescue), declined modestly over the summer, then shot up when Lehman failed, topping out at more than 4-1/2 percentage points in mid-October 2008 (pp. 404-5).

These developments, Bernanke continues, “had direct consequences for Main Street America. … During the last four months of 2008, 2.4 million jobs disappeared, and, during the first half of 2009, an additional 3.8 million were lost.” (406-7)

There you have it, straight from the horse’s mouth: the fourth-quarter, 2008 contraction in wholesale funding, as reflected in the collapse of interbank lending, led to the loss of at least 6.2 million jobs.

But was the collapse of interbank lending really evidence of a panic, brought on by Lehman’s bankruptcy? The timing of that collapse, as indicated in the following graph, tells a much different story.

The first of the three vertical lines is for September 15, 2008, when Lehman went belly-up. Interbank lending on the next reporting date — September 17th — was actually up from the previous week. Thereafter it declined a bit, and then rose some. But these variations weren’t all that unusual. As for the TED spread, although it rose sharply after Lehman’s failure, the rise reflected, not an actual increase in the effective federal funds rate (as the “panic” scenario would suggest), but the fact that that rate, though it actually declined rapidly, did not do so quite as rapidly as the Treasury Bill rate did:
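A quick numerical sketch (with made-up rates, not the actual 2008 series) shows how a spread can widen even while both of its components fall, so long as the Treasury Bill rate falls faster:

    # Hypothetical rates, in percent, purely to illustrate the arithmetic.
    before = {"interbank": 2.00, "tbill": 1.70}
    after = {"interbank": 1.00, "tbill": 0.20}

    for label, rates in (("before", before), ("after", after)):
        spread = rates["interbank"] - rates["tbill"]
        print(f"{label}: spread = {spread:.2f} percentage points")
    # Both rates fell, yet the spread widened from 0.30 to 0.80 percentage points.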

OK, now on to those other vertical lines. They show the dates on which banks first began receiving interest payments on their excess reserves. There are two lines because back then two different sets of banks had different “reserve maintenance periods,” and therefore started getting paid at different dates. (The maintenance periods have since been made uniform.) Those (mostly smaller) banks with one-week reserve maintenance periods began earning interest on October 15th; the rest, with two-week maintenance periods, started getting paid on October 22nd. The collapse in interbank lending volume coincides with the latter date. Notice also that the collapse continues after the TED spread has returned to a level not so different from its levels before Lehman failed.

If you still aren’t convinced that IOR was the main factor behind the collapse in interbank lending, perhaps some more graphs will help. The first shows the progress of interbank lending over a somewhat longer period, along with the 3-month Treasury Bill rate and (starting in October 2008) the interest rate on excess reserves:

To understand this graph, think of the banks’ opportunity cost of holding excess reserves as being equal to the difference between the Treasury Bill rate and the rate of interest on excess reserves. Prior to October 15th, 2008, the opportunity cost, being simply equal to the Treasury Bill rate itself, is necessarily positive. But when IOR is first introduced, it becomes practically zero; and shortly thereafter it becomes, and remains, negative. Mere inspection of the chart should suffice to show that the volume of interbank lending tends to vary directly with this opportunity cost.
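A minimal sketch of that opportunity-cost reading follows; the rates are rough, illustrative values rather than the actual series plotted in the chart.

    # Opportunity cost of holding excess reserves = T-bill rate minus the rate paid on
    # excess reserves. The figures below (in percent) are illustrative, not actual data.
    episodes = [
        ("before IOR", 1.80, 0.00),
        ("IOR introduced", 0.80, 0.75),
        ("IOR fixed at 25 bp", 0.10, 0.25),
    ]

    for label, tbill, ioer in episodes:
        cost = tbill - ioer
        print(f"{label}: opportunity cost = {cost:+.2f} percentage points")

    # Once the cost turns negative, a bank gives up nothing by parking funds at the
    # Fed instead of lending them overnight to another bank.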

Once the interest rate on excess reserves is fixed at 25 basis points after mid-December 2008, things get simpler, as the volume of interbank lending varies directly with the Treasury Bill rate. Here is a chart showing that period, with the opportunity cost itself (that is, the Treasury Bill rate minus 25 basis points) plotted along with the volume of interbank lending:

Now, it would be one thing if Bernanke were merely guilty of misunderstanding the cause of the decline in interbank lending, without having actually been responsible for that decline. But Bernanke was responsible, as was the rest of the Fed gang that took part in the misguided decision to start rewarding banks for holding excess reserves in the middle of a financial crisis.

What’s more, it is hard to see how Bernanke can insist that the Fed’s decision to pay IOR had nothing to do with the drying-up of the federal funds market given the justification he himself offers for that decision earlier in his memoir, which bears quoting once again, this time with emphasis added:

[W]e had been selling Treasury securities we owned to offset the effect of our lending on reserves… . But as our lending increased, that stopgap measure would at some point no longer be possible because we would run out of Treasuries to sell….The ability to pay interest on reserves…would help solve this problem. Banks would have no incentive to lend to each other at an interest rate much below the rate they could earn, risk-free, on their reserves at the Fed (p. 325).

Yet when he turns to explain the causes of the collapse in interbank lending, just eighty pages after this passage, Bernanke never mentions interest on reserves. Instead, he blames the collapse on panicking private-market lenders, while treating the Fed — and, by implication, himself — as a White Knight, galloping to the rescue. “As the government’s policy response took effect,” he writes, “the TED spread declined toward normal levels by mid-2009” (p. 405). What rubbish. We’ve already seen why the TED spread went up and then declined again. And although interbank lending itself revived somewhat during the first half of 2009, it declined steadily thereafter, ultimately falling to lower levels than ever.

And the Fed’s “policy response”? According to Bernanke, it had “four main elements: lower interest rates to support the economy, emergency liquidity lending…and the stress-test disclosures of banks’ conditions” (409). Let Kevin Dowd tell you about those idiotic stress tests. As for “lower interest rates,” they were proof, not that the Fed was taking desirable steps, but that it was failing to do so, for although the Fed did get around to reducing its federal funds rate target, its doing so was a mere charade: the equilibrium federal funds rate had long since fallen well below the Fed’s target, and the subsequent moves merely amounted to a belated recognition of that fact, without making any other difference. Finally, although the Fed’s emergency lending aided the loans’ immediate recipients, as well as their creditors, it contributed not a jot to overall liquidity, the very point of IOR having been — as Bernanke himself admits, and as I explained in my first post on this topic — to prevent it from doing so!

As the next chart shows, IOR, besides contributing to the collapse of interbank lending, also played an important part in the dramatic increase in the banking system’s reserve ratio. The vertical lines represent the same three dates as those referred to in the very first chart. Although the ratio did rise considerably following Lehman’s failure, it rose even more dramatically — and, quite unlike the TED spread, never recovered again — after the Fed started paying interest on excess reserves:

To better understand what went on, here is another diagram, this one showing banks’ choice of optimal reserve and liquid asset ratios as a function of the interest paid on bank reserves:

In the diagram, the vertical axis represents the interest rate on reserve balances, in basis points, while the horizontal axis represents the reserve-deposit ratio. The picture shows two upward-sloping schedules. The first is for reserve balances at the Fed, while the second is for liquid assets more generally, here meaning (for simplicity’s sake) reserves plus T-bills. The horizontal line shows the yield on T-bills at the time of implementation of IOR, here assumed to be a constant 20 basis points. The two dots, finally, represent equilibrium ratios, the first (at the lower left) for before the crisis and IOR, the other for afterwards. Note that the high post-IOR ratio reflects, not just the interest-sensitivity of reserve demand, but that, with IOR set at 25 basis points, reserves dominate T-bills. Thus, although the demand for excess reserves may not be all that interest sensitive so long as the administered interest rate on reserves is less than the rate earned by other liquid assets, that demand can jump considerably if that rate is set above rates on liquid and safe securities.
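Here is a stylized sketch of the logic the diagram describes. The functional form and numbers are made up; they are chosen only to reproduce the qualitative story of reserve demand that is mildly interest-sensitive below the T-bill yield and then jumps once reserves dominate T-bills.

    # Stylized sketch of the diagram's logic; the demand schedule is a made-up
    # functional form, not an estimate of actual bank behavior.
    TBILL_YIELD_BP = 20  # assumed T-bill yield, in basis points, when IOR was introduced

    def desired_reserve_ratio(ior_bp):
        """A bank's chosen reserve-to-deposit ratio as a function of IOR (basis points)."""
        reserve_demand = 0.02 + 0.0004 * ior_bp   # mild interest sensitivity
        if ior_bp < TBILL_YIELD_BP:
            return reserve_demand                 # reserves held for liquidity purposes only
        # Once IOR matches or exceeds the T-bill yield, reserves dominate T-bills,
        # so banks hold their other liquid assets as reserves too.
        other_liquid_assets = 0.10 + 0.0004 * ior_bp
        return reserve_demand + other_liquid_assets

    for ior in (0, 10, 20, 25):
        print(f"IOR = {ior:2d} bp -> desired reserve ratio = {desired_reserve_ratio(ior):.3f}")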

The last chart I’ll trouble you with today tracks changes in total commercial bank reserves, interbank loans, Treasury and agency securities, and commercial and industrial loans, from mid-2006 through mid-2009, this time with a single vertical line only, for October 22, 2008, when IOR was in full effect:

The chart shows clearly how the beginning of IOR coincided, not only with a substantial decline in interbank lending (green line), but also with a leveling-off of other sorts of bank lending, which later becomes a pronounced decline. For illustration’s sake, the chart shows the course of C & I lending only; other sorts of bank lending fell off even more.

Don’t get the wrong idea: I don’t wish to suggest that IOR was responsible for the post-2008 decline in bank lending, apart from overnight lending to other banks. There’s little doubt that that decline mainly reflects the effects of both a declining demand for credit and much stricter regulation of bank lending, especially as Dodd-Frank and Basel III came into play. Nor do I believe that merely eliminating IOR, as opposed to either reducing the regulatory burdens on bank lending, or resorting to negative IOR (as some European central banks have done), or both, would have sufficed to encourage any substantial increase in bank balance sheets, and especially in bank lending, after 2009, when most estimates (including the Fed’s own) have “natural” interest rates sliding into negative territory. But as I noted in my first post in this series, when IOR was first introduced, natural rates were, according to these same estimates, still positive. And one thing IOR certainly did do, both before 2009 and afterwards, was to allow banks, and some banks more than others, to treat trillions in new reserves created by the Fed starting in October 2008, not as an inducement to expand their balance sheets, but as a direct source of risk- and effort-free income. (Note, by the way, how, just before IOR was introduced, but after the Fed stopped sterilizing its emergency loans, bank loans and security holdings did in fact increase along with reserves.)

Moreover, it’s evident that the FOMC itself, rightly or wrongly, sees IOR as continuing to play a crucial part in limiting banks’ willingness to expand credit. Otherwise, how can one possibly understand that body’s decision last month to raise the rate of IOR (and, with it, the upper bound of its federal funds rate target range) from 25 to 50 basis points? That decision, recall, was aimed at making sure that bank credit expansion would not progress to the point of causing inflation to exceed the Fed’s 2 percent target:

The Committee judges that there has been considerable improvement in labor market conditions this year, and it is reasonably confident that inflation will rise, over the medium term, to its 2 percent objective. Given the economic outlook, and recognizing the time it takes for policy actions to affect future economic outcomes, the Committee decided to raise the target range for the federal funds rate to 1/4 to 1/2 percent. The stance of monetary policy remains accommodative after this increase, thereby supporting further improvement in labor market conditions and a return to 2 percent inflation.

Bernanke’s implementation and defense of IOR would be bad enough on their own; they are all the harder to excuse given his professed determination to avoid repeating the mistakes the Fed made during the Great Depression. “[M]ost of my colleagues and I were determined,” he says, “not to repeat the blunder the Federal Reserve had committed in the 1930s when it refused to deploy its monetary tools to avoid a sharp deflation that substantially worsened the Great Depression” (p. 409). Among the Fed’s more notorious errors during that calamity were its failure to expand its balance sheet sufficiently, through open-market purchases or otherwise, to offset the dramatic, panic-driven collapse in the money multiplier during the early 1930s, and its recovery-scuttling decision to double reserve requirements in 1936-7.
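To see why offsetting a collapse in the money multiplier demands aggressive balance-sheet expansion, a quick arithmetic sketch helps; the numbers are illustrative, not 1930s data. The money stock is roughly the multiplier times the monetary base, so when the multiplier falls, the base must rise proportionally just to keep the money stock from shrinking.

    # Illustrative arithmetic: money stock M is roughly multiplier m times monetary base B.
    # The figures are made up to show the scale of the offset required, not 1930s data.
    m_before, m_after = 8.0, 5.0   # multiplier falls as banks and the public hoard cash
    base_before = 100.0            # index the initial monetary base at 100

    money_before = m_before * base_before
    base_needed = money_before / m_after   # base required to keep M unchanged

    print(f"The base must rise from {base_before:.0f} to {base_needed:.0f} "
          f"(up {base_needed / base_before - 1:.0%}) just to hold the money stock constant.")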

Of course, Bernanke’s Fed didn’t commit the very same mistakes as the Fed of the 1930s. But, as David Beckworth had already recognized by late October 2008, it made remarkably similar ones that also resulted in a collapse of credit. “History,” Bernanke credits Mark Twain with saying, “does not repeat itself, but it rhymes” (p. 400). If you ask me, Bernanke himself was a far better versifier — and a far worse central banker — than he and his many champions realize.

[Cross-posted from Alt-M.org]
