
Just days before the Trans-Pacific Partnership is scheduled to be signed by its 12 member governments, an official expert from the UN Human Rights Council released a statement criticizing the agreement for being incompatible with the goals of the UN human rights regime.  The criticism isn’t about the TPP in particular so much as the modern model of trade agreements as an inadequate vehicle for furthering wealth redistribution and massive regulatory intervention to pursue progressive goals.  That is, it’s a complaint about what the TPP doesn’t do.

There are, of course, lots of things the TPP doesn’t do.  Critics have complained that the TPP doesn’t prevent climate change, doesn’t eliminate human trafficking, and doesn’t reform repressive regimes in Vietnam and Brunei.  But these are not things the TPP was ever supposed to do.  It’s like complaining that Obamacare doesn’t end the drug war.

There are legitimate criticisms to be leveled against the TPP—things it does but shouldn’t and things it doesn’t do as well as it should.  There’s also a lot to like.  But debates over trade agreements often get bogged down with unrelated controversies that are easier to argue about.  Not one of the complaints the UN expert makes is explicitly about trade liberalization.  

The statement includes two specific criticisms of the TPP.  One is the secrecy of the negotiations, and the other is investor-state dispute settlement.  These are well-worn, standard complaints opponents of the TPP have been making for years.  The persuasiveness of both arguments relies on reflexive fear of the unknown—opponents can hint at what horrible things might happen from the TPP rather than looking at specific, measurable impacts.

These issues have become so controversial, in fact, that eliminating ISDS from future trade agreements and increasing transparency in negotiations would probably result in more free trade.

The proliferation and prominence of non-trade arguments against trade agreements show that agreements like the TPP have strayed too far away from their core mission.  Using “human rights” as an argument against trade agreements will be harder to do if they focus more on simply eliminating tariffs, quotas, and subsidies.  A debate over the value of protectionism in promoting national and global welfare sounds very appealing and would surely lead to better policy.

I’ve been quite hard on President Obama for his abuse of executive power – and will soon file another brief in the 26-state challenge to his immigration action – but there are certainly things that he or any president can do to protect and secure our liberty without violating the Constitution. One such executive action would be to “declassify” marijuana: remove it from the list of controlled substances (or at least move it further down the list, which would have significant positive legal effects). I explain in this video:

What the President Should Do: Declassify Marijuana

In case you don’t have time to watch, here’s a transcript:

While legalizing marijuana as a matter of federal law would take an act of Congress, President Obama can decriminalize it. He can do this by moving it out of Schedule I of the Controlled Substances Act, which is reserved for substances with no accepted medical use and a high potential for abuse, and which therefore carry high criminal penalties for mere possession.

Virtually all marijuana-related arrests are handled by state and local law enforcement. The federal Drug Enforcement Administration (DEA) simply lacks the resources to enforce the federal ban across all 50 states. That’s why the Justice Department decided not to fight the legalization of marijuana in the handful of states that have taken that step.

President Obama — without rewriting any laws or going outside of his constitutional authority — can direct the attorney general to start the process of reclassifying marijuana as a Schedule IV or V substance, or declassifying it altogether.

Reclassifying marijuana as a Schedule III substance or lower would have significant benefits for the budding marijuana industry and individual users.

Declassifying marijuana would solve all of these problems.

But even merely reclassifying it would make it easier for legal businesses to access the full economy and reduce violent crime.

Marijuana deregulation sits squarely within the control of the executive. The president should use his executive powers to allow for intelligent enforcement of drug policy without eroding the rule of law.

I guarantee that if President Obama does this, he won’t be impeached for high times crimes and misdemeanors.

The majority of federally insured savings and loans failed in the 1980s, wiping out the Federal Savings and Loan Insurance Corporation in 1989.  The fiasco ultimately cost taxpayers around $150 billion to make savings depositors whole.  Two years later, the failures of hundreds of commercial banks put the Federal Deposit Insurance Corporation in the red.  (The FDIC got a bridge loan from the US Treasury, which it eventually repaid.)  It became clear that deposit insurance had fostered immense moral hazard, enabling the growth of unsound S&Ls and commercial banks.

For many reformers these events raised the question of how the core services of banks (intermediation and payments) might be provided without the expense of tax-funded guarantees, and yet without the danger of runs that had prompted the creation of the FSLIC and FDIC.  A number of economists (myself included) pointed to checkable money-market mutual funds (MMMFs) as an alternative to bank deposits that is not run-prone and therefore needs no taxpayer-funded guarantees.

MMMFs, like other mutual funds and unlike banks, offer savers not debt claims promising specified dollar payouts on specified dates but rather equity claims (shares) in the dollar value of a portfolio.  Like other mutual funds, a MMMF buys back shares on demand at the current “net asset value” or NAV.  The modifier “money-market” means that a fund invests only in fixed-income securities with less than a year in remaining maturity, which means that present-value losses will be negligible from a rise in interest rates.  A fund can keep default and liquidity risks low by maintaining a diversified portfolio of highly rated securities with active secondary markets.

In 1976 Merrill Lynch introduced a MMMF that allowed customers to write checks against their account balances, an innovation which was quickly copied by other funds.  Money-market share accounts now combined the services of checking accounts with much higher returns, because they were not subject to the binding interest-rate ceiling (under the Fed’s Regulation Q) then constraining bank accounts.  To make them seem more like bank accounts, fund providers adopted the convention of pegging the share redemption value or NAV at $1, and varying the number of shares in an account, rather than varying the share price to reflect changes in the value of portfolio assets.  The popularity of MMMFs soared.  MMMFs that hold only Treasury obligations are called “government” funds.  Those that hold mostly commercial paper and jumbo bank CDs are called “prime” funds.

J. Huston McCulloch put the case for MMMFs not needing government guarantees well in a 1993 article: “[E]ven though MMMFs invest in financial instruments that may not come due for many weeks or months, they are entirely run-proof.  Should the volume of withdrawals be high enough” to require net sales that shrink the asset portfolio, “the fund’s liability to its remaining depositors simply falls in the same proportion.”  That is, each MMMF share is a claim not to a fixed dollar sum, but only to a percentage of the portfolio’s value.  A fall in the total value of the asset portfolio, whether from redemptions or from bad-news events that reduce assets’ market prices, immediately reduces the total value of shares so that they never over-claim the available assets.  Any bad-news net market value loss is immediately spread evenly over shareholders rather than being concentrated “on the last unlucky depositors in line, as occurs in a run on a traditional bank.”  With no greater losses falling on the person last in line to withdraw, there is no incentive to run to withdraw ahead of others.  Thus, “as long as MMMFs behave like true mutual funds,” continuously marking portfolio assets and shares to market value, the problem of the me-first incentive to run “cannot arise.”

I made essentially the same argument in chapter 6 of my text The Theory of Monetary Institutions.  There I argued that a run arises from the combination of three conditions: (1) claims are redeemable in pre-specified dollar amounts (i.e. are debts), (2) redemption is unconditionally available on demand, with a first-come first-served rule for meeting redemption demands, and (3) the last claim in line has a lower expected value.  Mutual funds eliminate the first element (claims are equity rather than debt), which is sufficient to eliminate the run problem.  It’s no use rushing to redeem when bad news about the asset portfolio arrives, because your account balance has already been marked down.  They also eliminate the third element (because every share redemption receives the same percentage of the portfolio value) when assets are liquid enough or the fund is small enough to make “fire-sale” losses from asset sales negligible.
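The three-condition argument can be sketched numerically. Below is a toy illustration with hypothetical numbers; the two helper functions are mine, not anything from the book:

```python
def bank_payouts(deposits, asset_value):
    """Debt claims paid first-come first-served: each depositor gets the
    pre-specified dollar amount until assets run out, so any shortfall
    lands entirely on those last in line (conditions 1-3 above)."""
    payouts, remaining = [], asset_value
    for d in deposits:
        paid = min(d, remaining)
        payouts.append(paid)
        remaining -= paid
    return payouts


def fund_payouts(deposits, asset_value):
    """Equity claims: every share redeems for the same fraction of the
    marked-to-market portfolio, regardless of place in line."""
    total = sum(deposits)
    return [d * asset_value / total for d in deposits]


# Four claimants of $100 each; bad news knocks the portfolio down to $360.
claims = [100, 100, 100, 100]
print(bank_payouts(claims, 360))  # [100, 100, 100, 60]: rush to be first
print(fund_payouts(claims, 360))  # [90.0, 90.0, 90.0, 90.0]: no reason to run
```

Because the fund’s marked-down payout is identical at every place in line, condition (3) fails and the me-first incentive disappears.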

But wait — doesn’t this argument assume that MMMFs vary the price of their shares like ordinary mutual funds?  Doesn’t it matter that the share redemption value is pegged at $1?  McCulloch explained why it should not matter: “Some MMMFs offer investors a variable number of shares of fixed value instead of a fixed number of shares of variable value.  This is merely a cosmetic difference with no substance, however.”  The problem of claims exceeding portfolio value “arises [only] when funds try to offer investors a fixed number of shares of fixed value.”  In other words, so long as $1 shares are promptly subtracted from each account in proportion to any decline in total portfolio value, or alternatively promptly marked below $1 (an event called “breaking the buck”), there remains no incentive to run.
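McCulloch’s “cosmetic difference” point can be checked with a one-account example (the dollar figures are hypothetical, chosen by me): marking down the share price and marking down the share count leave the account worth exactly the same.

```python
# Portfolio suffers a 12.5% loss (numbers are illustrative only).
portfolio_before, portfolio_after = 1_000_000.0, 875_000.0
loss_ratio = portfolio_after / portfolio_before  # 0.875

account_shares, share_price = 50_000, 1.00  # a $50,000 account

# Variable-price convention: share count fixed, NAV marked down to $0.875.
value_variable_price = account_shares * (share_price * loss_ratio)

# Fixed-$1 convention: price pegged, share count marked down instead.
value_variable_shares = (account_shares * loss_ratio) * share_price

assert value_variable_price == value_variable_shares == 43_750.0
```

Under either convention the account claims exactly its proportional slice of the diminished portfolio, so neither convention, promptly applied, creates an incentive to run.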

In practice, subtracting $1 shares is not done (for reasons not immediately obvious), and breaking the buck has become an occasion to liquidate the fund.  Accordingly, parent companies, to keep a MMMF alive and preserve its brand-name capital, almost always choose to eat losses and maintain the $1 share value.  A 2010 report by Moody’s identified 147 occasions over the period 1980-2007 when a MMMF suffered a net decline in portfolio assets that, without a rescue, would require breaking the buck.  Only one fund actually broke the buck.  (It was then liquidated, with shareholders receiving 96.1 cents per share.)  In 146 cases the parent firm stepped in, absorbing losses to keep the share value at $1.  If a parent firm acts immediately, upon news of critical asset losses, either to break the buck or instead to pitch in to preserve the par value, then running to get a better payoff than other shareholders remains either impossible or pointless.

Fast-forward to September 2008.  In the early hours of Monday the 15th, insolvent and without a rescuer, Lehman Brothers filed for bankruptcy.  A money-market fund called The Reserve Primary Fund was caught holding $785 million in Lehman paper, about 1.3% of its $62.5 billion in assets under management.  (This size put it in the top twenty, but outside the top ten.)  An immediate 20% write-down on Lehman paper meant that a $157 million gap needed to be filled immediately if the fund was to have the asset value necessary to maintain its $1 share price.  For the next 24 hours, shareholders ran on the fund.  They did not believe, for good reason as it turned out, reassurances from the fund’s sales reps, repeating what The Reserve’s ownership had said but not done, that the parent company would pitch in to support the price.  By 1pm Tuesday (the 16th) shareholders had redeemed a bit more than a quarter of their claims at $1 per share.  The ownership had dithered and did not fill the hole in the balance sheet.  The fund’s custodian State Street Bank finally refused to make further payouts, and the fund broke the buck.  The Reserve also imposed daily withdrawal limits on its other funds.
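The write-down arithmetic can be checked in a few lines; this is a back-of-the-envelope sketch using the dollar figures quoted above, with variable names of my own choosing:

```python
# Back-of-the-envelope check of the Reserve Primary figures.
aum = 62.5e9        # assets under management
lehman = 785e6      # Lehman paper held (~1.3% of AUM)
writedown = 0.20    # immediate 20% haircut on the Lehman paper

gap = lehman * writedown            # hole to fill to keep the $1 peg
shadow_nav = (aum - gap) / aum      # mark-to-market value per $1 share

print(f"gap: ${gap / 1e6:.0f} million")   # gap: $157 million
print(f"shadow NAV: ${shadow_nav:.4f}")   # shadow NAV: $0.9975, below par
```

Unfilled, the gap left the mark-to-market value of each pegged $1 share below par, which is what made running at $1 per share profitable for those who got out first.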

During that Monday, and again on Tuesday and Wednesday, other prime funds experienced heavier than normal redemption outflows.  Other MMMF parent firms, by contrast to The Reserve, immediately supported their prime funds that had Lehman-related losses, and continued to redeem at $1 per share.  No other funds broke the buck.  By the 19th the industry-wide dollar value of assets under management by MMMFs was down by $247 billion, a bit less than 7 percent of the value held ten days earlier.

After these three days of relatively heavy net redemptions following the Lehman bankruptcy and Reserve Primary buck-breaking, on Friday the 19th, the US Treasury stepped in to stanch the redemptions, which it considered equivalent to runs, with something that it considered equivalent to federal deposit insurance.  It announced what Secretary Hank Paulson described as a “temporary guaranty program for the U.S. money market mutual fund industry,” assuring shareholders in participating funds that their shares would be redeemed at $1 even if their fund’s net asset value fell below par.  The Federal Reserve pitched in on September 22 by creating a special “Asset-Backed Commercial Paper Money Market Mutual Fund Liquidity Facility” to lend funds to banks for acquiring the commercial paper assets that MMMFs were shedding.

As later described by Philip Swagel, who was a Treasury official at the time, the MMMF guarantee program was initially funded, in an unprecedented and legally dubious move, from the Treasury’s Exchange Stabilization Fund:

The US Department of the Treasury (2008) used the $50 billion Exchange Stabilization Fund—originally established back in the 1930s to address issues affecting the exchange rate of the US dollar—to set up an insurance program to insure depositors in money market funds. … Use of the Exchange Stabilization Fund for this purpose was plausibly legal—after all, a panicked flight from US dollar-denominated securities could be seen as posing a threat to the exchange value of the dollar—but its use in this way was without precedent.

It should be noted that there was in fact no panicked flight from US dollar-denominated securities in general.  US Treasury securities rose in value during the crisis as investors worldwide considered them a safe haven.  The trade-weighted US dollar index actually rose sharply in the six months after Lehman fell and the Reserve Primary Fund broke the buck.  In its indifference to the rule of law, the US Treasury acted much like the Federal Reserve System did during the crisis.

After one year, the Treasury ended its MMMF guarantee program.  The SEC has since imposed new pricing restrictions, liquidity requirements, and accounting rules on the funds in the name of reducing the problem of runs.  (I will discuss these regulatory changes in my next Alt-M post.)

So what happened in September 2008?  Is the run on Reserve Primary and heavy redemptions at other prime funds evidence that, contrary to McCulloch’s and my argument, prime MMMFs with a fixed $1 share price are in fact inherently fragile?

Stephen G. Cecchetti, former Director of Research at the Federal Reserve Bank of New York, and co-blogger Kermit L. Schoenholtz have said so:

The fundamental problem facing U.S. regulators is that money market funds are banks in everything but their outward legal form.  They perform liquidity and credit functions that are identical to those of chartered banks; in particular, they offer the equivalent of bank checking deposits, making them vulnerable to a run.

This argument won’t do.  It completely fails to engage the basic counter-argument that checkable equity claims (MMMFs) are not run-prone because they distribute portfolio asset losses in an essentially different way from checkable debt claims (bank deposits).

Useful analysis of the run-proneness of MMMFs is provided by a 2013 comment on SEC rule proposals by the Squam Lake Group, a committee of 13 center-left to center-right financial economists.  They note that a MMMF (like a bank) will be run-prone whenever the aggregate redemption value of its shares or NAV exceeds the actual market value of the fund’s assets, so that early redeemers can expect to get more than late redeemers.  Under current accounting rules for money-market mutual funds (which they abbreviate MMFs), they point out, this can happen for two reasons:

First, mutual funds have the option to account for assets at amortized cost if they have a maturity of 60 days or less.  With that option, the [total redemption value of shares] is not a true reflection of the fair market value of fund assets.  Whenever investors can redeem at a NAV that is higher than the fair value of the assets, investors have incentives to run.

Second, and more fundamentally, prime MMFs invest substantially in assets without a liquid secondary market.  This creates an incentive for fund investors to run during a period of financial stress, because even “fair market value” may exceed by a significant amount the value at which the fund can quickly sell assets to meet investor redemptions.  Therefore, … the first MMF investors to redeem their shares during a crisis are likely to receive a higher price for their shares than those who follow once the fund is forced to meet redemption demands by selling assets that have not yet matured. … This first-to-redeem advantage, which is exacerbated by amortized cost accounting, creates an incentive for MMF shareholders to run.

In other words, MMMFs in September 2008 did not exhibit the immunity to runs that McCulloch and I expected in cases where the accounting rules did not, as we assumed they generally do, rule out an excess of aggregate share redemption value over actual asset portfolio value.  Some funds used accounting rules that allowed them not to mark 60-days-or-fewer assets to market at all, and not to mark other assets to a market price that corresponded to their actual immediate liquidation value.
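The first-to-redeem advantage the Squam Lake Group describes can be made concrete with a toy example (hypothetical numbers of my own, not from their comment letter): an early redeemer who gets out at the stale pegged NAV leaves the entire loss with whoever stays.

```python
# Stale-NAV run incentive: amortized-cost accounting keeps redeeming
# shares at $1.00 even after the portfolio's fair value has slipped.
total_shares = 1_000_000
fair_value = 990_000.0   # true portfolio value after bad news: $0.99/share
stale_nav = 1.00         # amortized cost still reports $1.00 per share

# An early redeemer pulls 200,000 shares out at the stale $1.00 NAV...
fair_value -= 200_000 * stale_nav
remaining_shares = total_shares - 200_000

# ...so the whole $10,000 loss now falls on those who stayed.
print(fair_value / remaining_shares)  # ≈ 0.9875, worse than the $0.99 before
```

Anticipating this, every shareholder wants to redeem before fair value is recognized, which is exactly the me-first incentive that prompt mark-to-market pricing removes.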

In summary, we learned in September 2008 that MMMFs using certain accounting rules are not run-proof.  For 24 hours The Reserve Primary Fund carried a diminished asset portfolio without either topping it up or diminishing the claims against it, and consequently was rationally run upon.  We did not learn that MMMFs are inherently fragile, but rather that run-proneness depends on the accounting practices that a fund uses.

From this diagnosis, no policy intervention is indicated.  What follows is rather that in a market where losses remain private, investors can be expected to consider the relative fragility under certain circumstances of funds that opt to use potentially run-incentivizing accounting practices.  Such funds, if they do not offer some fully compensating advantage, should be expected to lose their market share.  Money-market mutual funds that instead credibly bind themselves to thoroughgoing mark-to-market accounting and other run-proofing practices (such as perhaps a pre-funded commitment by the parent company to shelter shareholders from losses), and advertise that fact, should be expected to flourish in the marketplace.  Such MMMFs remain an available payment mechanism that is not susceptible to runs and therefore has no need for guarantees at taxpayer expense.

To come in a later post: What to make of the SEC’s new restrictions on MMMFs?


*Acknowledgment: I thank Kyle Davidson for research assistance.

[Cross-posted from Alt-M.org]

Global Science Report is a weekly feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Second only to incidences of high temperature, supporters of government action to restrict energy choice like to say “extreme” precipitation events–be they in the form of rain, sleet, snow, or hail falling from tropical cyclones, mid-latitude extratropical storms, or summer thunderstorm complexes–are evidence that greenhouse gas emissions from human activities make our climate and daily weather worse.

The federal government encourages and promotes such associations. Take, for example, the opening stanzas of its 2014 National Climate Assessment: Climate Change Impacts in the United States, a document regularly cited by President Obama in support of his climatic perseverations:

This National Climate Assessment concludes that the evidence of human-induced climate change continues to strengthen and that impacts are increasing across the country.

Americans are noticing changes all around them. Summers are longer and hotter, and extended periods of unusual heat last longer than any living American has ever experienced. Winters are generally shorter and warmer. Rain comes in heavier downpours.

President Obama often calls out the extreme rain meme when he is running through his list of climate change evils. His Executive Order “Preparing for the Impacts of Climate Change” includes:

The impacts of climate change – including…more heavy downpours… – are already affecting communities, natural resources, ecosystems, economies, and public health across the Nation.

So, certainly the science must be settled demonstrating a strong greenhouse-gas altered climate signal in the observed patterns of extreme precipitation trends and variability across the United States in recent decades, right?


Here are the conclusions of a freshly minted study, titled “Characterizing Recent Trends in U.S. Heavy Precipitation,” from a group of scientists led by Dr. Martin Hoerling of NOAA’s Earth System Research Laboratory in Boulder, Colorado:

Analysis of the seasonality in heavy daily precipitation trends supports physical arguments that their changes during 1979-2013 have been intimately linked to internal decadal ocean variability, and less to human-induced climate change…Analysis of model ensemble spread reveals that appreciable 35-yr trends in heavy daily precipitation can occur in the absence of forcing, thereby limiting detection of the weak anthropogenic influence at regional scales [emphasis added].

Basically, after reviewing observations of heavy rains across the country and comparing them to climate model explanations/expectations, Hoerling and colleagues determined that natural variability acting through variations in sea surface temperature patterns, not global warming, is the main driver of the observed changes in heavy precipitation.

They summed up their efforts and findings this way (emphasis also added):

In conclusion, the paper sought to answer the question whether the recent observed trends in heavy daily precipitation constitute a strongly constrained outcome, either of external radiative forcing alone [i.e., greenhouse gas increase], or from a combination of radiative and internal ocean boundary forcing. We emphasized that the overall spatial pattern and seasonality of US trends has been more consistent with internally driven ocean-related forcing than with external radiative forcing. Yet, the magnitude of these forced changes since 1979 was at most equal to the magnitude of observed trends (e.g. over the Far West), and in areas such as the Far Northeast where especially large upward trends have occurred, the forced signals were several factors smaller. From the perspective of external forcing alone [i.e., changes in atmospheric carbon dioxide], the observed trends appear not to have been strongly constrained, and apparently much less so than the efficacy of an external driving mechanism surmised in the National Climate Assessment.

Hoerling’s team tried to say it nicely, but basically they’re saying that the federal government’s assessment of the impacts of climate change greatly overstates the case for linking dreaded carbon dioxide emissions to extreme precipitation events across the United States. (Note: We weren’t as nice when saying that, in fact, the National Assessment Report overstates the case for linking carbon dioxide emissions to darn near everything.)

This is not to say that Hoerling and colleagues think that an increasing atmospheric concentration of carbon dioxide isn’t supposed to lead to an enhancement of heavy precipitation over the course of the 21st century. (If they didn’t say that, they’d probably be exiled to the federal climatologist rubber room.) Rather, they think that folks (including the president and the authors of the National Climate Assessment) are far too premature in linking observed changes to date with our reliance on coal, oil, and natural gas as primary fuels for our energy production.

Whether or not at some later date a definitive and sizeable (actionable) anthropogenic signal is identifiable in the patterns and trends in heavy precipitation occurrence across the United States is a question whose answer will have to wait—most likely until much closer to the end of the century or beyond.


Hoerling, M., J. Eischeid, J. Perlwitz, X. Quan, K. Wolter, and L. Cheng, 2016. Characterizing Recent Trends in U.S. Heavy Precipitation. Journal of Climate. doi:10.1175/JCLI-D-15-0441.1, in press.


In the 1990s, the Clinton administration proposed restructuring our air traffic control (ATC) system, creating a self-funded organization outside of the Federal Aviation Administration (FAA). The idea went nowhere in Congress at the time.

Since then, numerous countries have successfully privatized their ATC systems, including Britain and Canada. Meanwhile, our ATC is still trapped inside the FAA bureaucracy, and it continues to fall short on crucial technology upgrade projects.

The good news is that major restructuring is back on the agenda in Congress. House Transportation and Infrastructure Committee Chairman Bill Shuster is expected soon to unveil a major reform proposal, perhaps along the lines of Canada’s non-profit ATC corporation, Nav Canada. The FAA must be reauthorized by the end of March, which gives some momentum to reform. If President Obama wants an important pro-growth legacy in his final year in office, he should get behind this effort.

Canada’s ATC privatization has been a huge success. In a recent Wall Street Journal interview, the head of Nav Canada, John Crichton said, “This business of ours has evolved long past the time when government should be in it … Governments are not suited to run … dynamic, high-tech, 24-hour businesses.” Exactly—and for all the reasons I discuss here.

Please join us Thursday for a Capitol Hill forum to discuss these issues (Rayburn B-354, noon). We will hear from two top experts. Dorothy Robyn was a top economic advisor to both Presidents Clinton and Obama, and she wrote an excellent study on ATC reform for Brookings. Stephen Van Beek is a long-time aviation industry expert.  

A popular knock against vouchers and other school choice programs is that private schools do not serve many students with disabilities, whereas public schools serve everyone. If that’s true, then the vast majority of public schools in New York City must actually be private.

According to a federal investigation just rejected by the de Blasio administration, the large majority of New York City elementary schools – 83 percent – are not “fully accessible” to students with disabilities. That forces many disabled students to travel far afield from their local public schools, which are supposed to serve every zoned child. The U.S. Department of Justice’s letter to the city laying all this out contains this anecdote:

In the course of our investigation, we spoke to one family who went to extreme measures to keep their child enrolled in their zoned local school, rather than subject the child to a lengthy commute to the closest “accessible” school. A parent of this elementary school child was forced to travel to the school multiple times a day, every school day, in order to carry her child up and down stairs to her classroom, to the cafeteria, and to other areas of the school in which classes and programs were held.

Of course, it is unrealistic to expect that every school will be able to provide the best possible education for every child – all kids learn different things at different rates and have different strengths and weaknesses – and that is especially true for children with disabilities. Yet while the public schools often fall light-years short of that goal, it is the standard to which public schooling advocates love to hold schools in choice programs. And not only is the standard unrealistic, but vouchers usually provide only a fraction of the funding public schools get, averaging around $7,000, versus New York City’s nearly $19,000 per pupil.

The scope of NYC’s failure to live up to the ideal is sobering, but revelations of double standards on this front are not new. School districts often pay for kids with the most challenging disabilities to attend private institutions, and there are several choice programs that are, in fact, specifically designed for children with disabilities. But maybe now, before choice opponents attack private schools again, they’ll at least try to get their own house in order. Or in New York City, their hundreds of houses not fully serving disabled children.

Jeb Bush spent at least $14.9 million trying to win the Iowa Republican caucuses, the most of any candidate in either party. He finished sixth.

Will this persuade people that money does not buy elections? Probably not. The belief that “money buys elections” is not really falsifiable. It is a matter of faith.

But perhaps those who believe that money buys elections will now think it is somewhat less probable they are correct.

On Wednesday, February 3, the Senate Environment and Public Works Committee will hold a hearing on a new “Stream Protection Rule” proposed by the Department of the Interior’s Office of Surface Mining (OSM) that looks to be another nail hammered into the coal industry’s coffin by the Obama Administration.

Energy and mineral resource development in the U.S. is being thwarted by a wave of agenda-driven federal agency rulemakings being rushed through before the end of this administration. Oil, natural gas, and coal have been targeted for replacement by renewable energy sources. For the coal industry, the vehicle is the OSM’s fast-tracked “Stream Protection Rule” (SPR).

The new SPR would supersede the existing Stream Buffer Zone Rule, enacted in 2008 to control the increasingly few negative effects of surface coal mining on aquatic environments in the nation’s three largest coal mining areas: Appalachia, the Illinois Basin/Midwest, and the Rocky Mountains/Northern Great Plains.  But, as is so often the case in the world of environmental regulation, that was not sufficient for the OSM, and over the past seven years it has continued to press for more and stricter regulations on coal mining all across the United States.  The agency seems to prefer a nationwide, one-size-fits-all regulatory enforcement scenario, even though local geology, geochemistry, and terrain vary widely between states and basins.  As it is, these concerns are more efficiently addressed by the states and policed by the industry.

That aside, the real impacts of the SPR, openly acknowledged by OSM, would leave tens of billions of dollars’ worth of coal in the ground with no chance of future development—“stranded reserves,” as OSM terms them in the rule. Those coal deposits, according to OSM, “…are technically and economically minable, but unavailable for production given new requirements and restrictions included in the proposed rule.”  Yet OSM’s engineering analysis, cited by a Congressional Research Service study, states that there will be no increase in “stranded reserves” under the SPR. In other words, the same volume of coal will be mined under the proposed rule as under the current rule…an OSM oversight, no doubt.

The proposed rulemaking rests on questionable geoscience and mining engineering: it overemphasizes the importance of ephemeral streams in order to limit mining activities in all areas, requires needless increases in subsurface drilling and geologic sampling, redefines accepted technical terms such as “approximate original contour” and “material damage to hydrologic balance,” and creates unfamiliar new terms such as “hydrological form” and “ecological function.”

But OSM likely is not focused on technical issues as much as its main concern: making the new rule as much more stringent than the existing 2008 rule as possible, and applying it nationally. Hence, the rule appears to be more for the benefit of regulators and places undue burden and expense on coal miners. Neither is OSM overly concerned with the three big tangible adverse impacts of its proposed rulemaking: lost jobs, lost resources, and lost tax revenue, with Appalachia hit hardest. Consensus estimates—not OSM’s—of the number of mining-related jobs lost nationally due to the SPR run from in excess of 100,000 to upwards of 300,000. The decrease in coal tonnage recovered: roughly 30 to 65 percent. The annual value of coal left in the ground because of the rule: between $14 billion and $29 billion. The estimated decrease in federal and coal-state tax bases: between $3.1 billion and $6.4 billion. These are not encouraging statistics for an industry currently responsible for supplying 40 percent of U.S. electrical power generation.

Interior’s Office of Surface Mining has failed to adequately justify its proposed Stream Protection Rule in light of the federal and state rules and regulations already in place. Rather, OSM has embarked on a seven-year odyssey of agenda-driven rulemaking that would force-fit coal mining operations with widely varying regional and local characteristics to a nationwide template. However, Congress and the courts had already established that a uniform nationwide federal standard for coal mining would not be workable, given the significant differences in regional and local geology, hydrology, topography, and environmental factors related to mining operations. On the non-technical side, OSM does not retreat from its admission in the preamble to the proposed rule that the SPR is politically motivated. Press reports have quoted an OSM official as acknowledging that there was pressure to get the SPR done in this administration’s last year.

Enacting the new SPR would be an ominous threat to a coal mining industry that deserves much better from this or any other future administration. This is one reason why OSM’s proposed SPR has been tagged by the National Mining Association as “a rule in search of a problem.” However, to paraphrase a more appropriate quote: the voluminous Stream Protection Rule is not the solution to the coal industry’s problems—rather the Stream Protection Rule is the problem.

It will be interesting to see how this all plays out in the Senate on Wednesday.

The U.S. Department of Labor’s Occupational Safety and Health Administration (OSHA) is soon set to release new exposure limits to air-borne silica dust. The rulemaking has been in the works for about three years with a final rule scheduled to be announced this year. The silica industry is not enthused.

Silica dust is known to cause respiratory illnesses (e.g., silicosis, lung cancer, and other airways diseases) that may contribute to or lead directly to death when the dust is breathed in high enough concentrations over long enough periods of time.

OSHA explains that exposure to respirable silica “occurs in operations involving cutting, sawing, drilling and crushing of concrete, brick, block and other stone products and in operations using sand products, such as in glass manufacturing, foundries and sand blasting.”

OSHA’s proposal, generally, is to lower the existing permissible exposure limits (adopted in 1971) by about 50%, dropping them from around 0.1 mg/m3 to 0.05 mg/m3 (specific details here). OSHA explains:

The agency currently enforces 40-year-old permissible exposure limits (PELs) for crystalline silica in general industry, construction and shipyards that are outdated, inconsistent between industries and do not adequately protect worker health. The proposed rule brings protections into the 21st century.

And, as the government likes to claim with all of its regulations, the added restrictions will save lots of lives, and in doing so, will save lots of money:

OSHA estimates that the proposed rule will save nearly 700 lives and prevent 1,600 new cases of silicosis per year once the full effects of the rule are realized.

The proposed rule is estimated to provide average net benefits of about $2.8 to $4.7 billion annually over the next 60 years.

Interestingly, a visit to the Centers for Disease Control in search of deaths from silica inhalation produces this chart graphing silicosis mortality over time. The numbers have dropped considerably over the past 40+ years, and by 2010 had fallen to about 100 or so deaths per year (U.S. residents over the age of 15) attributed to silicosis as either the underlying or contributing cause.

Figure 1. Silicosis: Number of deaths, crude and age-adjusted death rates, U.S. residents age 15 and over, 1968–2010 (Source: CDC).

The CDC data show that silicosis deaths have been declining, and although the decline has slowed, the number continues to drop under the current OSHA guidelines. Further, the 100 or so deaths occurring annually are only a fraction of the nearly 700 lives per year that OSHA predicts its new regulations will save. That’s a pretty neat trick: the new regs are going to save several times more lives than silicosis actually takes!

This means not only that the OSHA mortality benefits from the new regulations are questionable, but so too must be the economic benefits (as they are tied directly to the mortality savings).

The silica industry isn’t taking this lightly.

They contend that the OSHA mortality projections are based on dose-response relationships that are not truly indicative of what is really going on, primarily because they are built upon poor and inadequate data and analyses.

Dose-response curves used by the federal government are notorious for forecasting much greater health benefits than actually occur. One reason is that the federal dose-response “curves” often aren’t curves at all, but straight lines, which means the response is assumed to be the same for every dosage increment. This allows government regulatory agencies to claim that continually cranking down the exposure limits will continually produce positive health outcomes.

But more and more research is showing that this is not the case. As the dose gets lower, the response often flattens out (i.e., there is a threshold below which no harm occurs) or may even reverse direction (low dosages are actually good for you). While this sounds like common sense to most of us (consider sunshine or alcohol), it is a ground-breaking notion in the field. Much of the research on this theory is being done by Dr. Edward Calabrese of the University of Massachusetts, an adjunct scholar of Cato’s Center for the Study of Science.
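To make the distinction concrete, here is a minimal sketch of how a linear no-threshold model and a threshold model diverge at low doses. The slope and threshold values are entirely hypothetical, chosen only to show the shapes; they are not OSHA’s or NISA’s figures.

```python
# Illustrative comparison of a linear no-threshold dose-response model
# and a threshold model. All numbers are hypothetical, chosen only to
# show the shape difference -- they are not OSHA or NISA estimates.

def linear_response(dose, slope=100.0):
    """Linear no-threshold: every dose increment adds the same
    increment of predicted excess deaths."""
    return slope * dose

def threshold_response(dose, slope=100.0, threshold=0.05):
    """Threshold model: doses at or below the threshold produce
    no predicted excess deaths at all."""
    return slope * max(0.0, dose - threshold)

# Predicted excess deaths at the current (0.10 mg/m3) and
# proposed (0.05 mg/m3) exposure limits under each model.
for dose in (0.10, 0.05):
    print(f"dose {dose}: linear {linear_response(dose):.1f}, "
          f"threshold {threshold_response(dose):.1f}")
```

Under the linear model, halving the limit halves the predicted deaths; under the threshold model, the proposed limit already sits at the threshold, so tightening it buys no predicted benefit at all. Which shape is correct is exactly the uncertainty the industry says the rulemaking should resolve first.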

A leading industry group (the National Industrial Sand Association, NISA) makes a strong case that the existing OSHA standards are quite effective at greatly reducing, or even eliminating, the occurrence of silicosis. So really, all OSHA needs to do is unify the existing regulations and better ensure that they are enforced.

NISA has commissioned a scientific study of the dose-response behavior that is wider in scope and includes a greater and more detailed amount of epidemiological data than the existing studies relied upon by OSHA. From a NISA report:

While the association between silicosis and exposure to respirable crystalline silica is indisputable, there is still considerable uncertainty regarding the dose/response relationship of this association, particularly in the case of chronic simple silicosis, which is the most common form of silicosis.  It is unclear, for example, whether there is an effect threshold, or whether instead the dose/response curve is linear at even the lowest doses.  The slope of that curve is also uncertain.  As a result, there is uncertainty regarding the degree of risk remaining at various 8 hour time-weighted average exposures, including, most importantly, the current PEL of [0.10 mg/m3].  

In NISA’s view, this degree of uncertainty is unacceptable for a rulemaking of this magnitude.

The industry is being up-front about its commissioning of the study (which involves scientists at major universities engaged in silica research) and has taken steps to be open and transparent about its involvement—or rather lack of involvement—in the study’s outcome.

But the results of the new study, commissioned several years ago, are still forthcoming, and consequently the industry has asked OSHA to wait until they are available (expected later this year) before issuing its final rule.

It’ll be interesting to see how this plays out. Preliminarily, though, this looks like another case of the government solving a problem that doesn’t really exist: the existing regulations are sufficient to address the health concerns if they were better applied and enforced, and the promise of the new regulations is greatly overplayed, claiming to save many times more lives than the CDC says silicosis takes away.

But that’s the government at work—if some regulations are good, more must be better. Sadly, for the taxpayers and the regulated industries, it doesn’t always work out that way.

As I recall from my time in the Senate, there’s nothing like an energy bill to attract misguided proposals.  This week the Senate begins consideration of S.2012, the Energy Policy Modernization Act of 2015.  Among the almost two hundred filed amendments is a proposal (Amendment #3042) from Senator Isakson, a former real estate broker, to mandate that the Federal Housing Administration (FHA) reduce the quality of its loans in order to encourage more efficient energy use.

The two most concerning aspects of Amdt 3042 are: 1) it would allow “estimated energy savings” to be used to increase the allowable debt-to-income (DTI) ratio for the loan; and 2) it would require “that the estimated energy savings…be added to the appraised value…”

These changes might not be so bad in the abstract, but when combined with existing FHA standards they set the borrower up for failure and leave the taxpayer holding the bag. Let’s recall that borrowers can already get an FHA mortgage at a loan-to-value (LTV) ratio of 96.5%, and that’s assuming an accurate appraisal.  If borrowers were required to put 20 percent down, then this amendment would be a minor problem, but under existing standards borrowers would most likely leave the table with an LTV over 100%, that is, already underwater before they’ve even moved in.  Did Congress learn nothing from the crisis?

The increase in DTI might not matter if FHA did not already allow a DTI as high as 43% of income.  Under Amdt 3042 borrowers could easily leave the closing table devoting over half their income to their mortgage.  Again, did Congress learn nothing from the crisis?

To illustrate that the intent of the proposal is to have the taxpayer take more risk, Amdt 3042 actually prohibits FHA from imposing any standards that would offset this risk.  If these new loans perform worse, as one would expect, FHA cannot put them back to the lenders.  And let’s not forget FHA allows the borrower to have a credit history deep in the subprime range.  So you could have a subprime borrower with a FICO score as low as 580, an LTV over 100%, and a DTI over 43%. What could go wrong?
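A back-of-the-envelope sketch shows how adding “estimated energy savings” to the appraisal pushes the borrower underwater from day one. The home price and energy-savings figure below are invented for illustration; the 96.5% LTV limit is FHA’s existing standard.

```python
# Hypothetical example of how Amdt 3042's appraisal add-on interacts
# with FHA's existing 96.5% maximum loan-to-value ratio. The home
# price and "estimated energy savings" figures are invented.

price = 200_000            # actual market value of the home
energy_savings = 10_000    # "estimated energy savings" added to appraisal

appraised = price + energy_savings   # inflated appraisal: $210,000
loan = 0.965 * appraised             # FHA lends 96.5% of the appraisal
true_ltv = loan / price              # LTV measured against actual value

print(f"loan amount: ${loan:,.0f}")              # $202,650
print(f"LTV vs. actual value: {true_ltv:.1%}")   # 101.3%
```

On these (hypothetical) numbers, the borrower walks away owing about 101% of what the house is actually worth before the first mortgage payment is due.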

If indeed energy savings actually increased the value of the home, that would be reflected in the price.  There would be no need to mandate such.  Not only does this proposal weaken FHA standards, and expose the taxpayer to greater risk, it takes us further down the path of an already politicized housing policy, where instead of relying on market prices, values are dictated by Soviet-style bureaucratic guesswork.

Afghanistan is a bust. The Taliban is expanding its control. The number of “security incidents” was up a fifth in the last months of 2015 over the previous year. Popular confidence is at its lowest level in a decade. U.S. military officers now speak of a “goal line” defense of Kabul.

While the deadly geopolitical game is not yet over, it is hard to see how the current regime can survive without Washington’s continued combat support. The nation-building mission always was quixotic.

Indeed, the latest report from the Special Inspector General for Afghanistan Reconstruction shows how far this Central Asian land was and remains from developed status. And how ineffective U.S. aid programs have been in transforming it.

While Afghanistan enjoyed some boom years in the flood of Western cash, the foreign money also inflamed the problem of corruption. The Stockholm International Peace Research Institute explained: “The significant amount of aid and vast international military spending post-2001 has re-ingrained a culture of aid-rentierism: the Afghan elite competes internally for political rents from the international community.”

Tougher times have not increased honesty. In its latest quarterly report, SIGAR noted that a recent Afghan task force “reportedly found that millions of dollars were being embezzled while Afghanistan pays for numerous nonexistent ‘ghost’ schools, ‘ghost’ teachers, and ‘ghost’ students.”

Even worse, the same practice apparently afflicts the security forces. SIGAR cited an Associated Press investigation: “In that report, a provincial council member estimated 40% of the security forces in Helmand do not exist, while a former provincial deputy police chief said the actual number was ‘nowhere near’ the 31,000 police on the registers, and an Afghan official estimated the total ANDSF number at around 120,000—less than half the reported 322,638.”

Security never has been good during the conflict. Today it is worse than ever.

Explained SIGAR: “The Taliban now controls more territory than at any time since 2001. Vicious and repeated attacks in Kabul this quarter shook confidence in the national-unity government. A year after the Coalition handed responsibility for Afghan security to the Afghan National Defense and Security Forces (ANDSF), American and British forces were compelled on several occasions to support ANDSF troops in combat against the Taliban.”

Yet the failure of U.S. aid programs reaches well beyond insecurity. Despite pouring $113.1 billion into Afghanistan, Washington has surprisingly few sustainable, long-term benefits to show for it.

Citing just a few of its earlier audits, SIGAR reported on Afghan government agencies suffering from “divergent approaches and a lack of overall strategy, poor coordination and limited information sharing,” and unable to “handle contract research, awards, and management.” U.S.-funded “power and water systems [were] inoperable for lack of fuel” while an industrial park had minimal occupancy.

Its latest audits have yielded scarcely better results.

USAID devoted $488 million to develop Afghanistan’s oil, gas, and minerals industries. SIGAR found “limited progress overall.” Afghan ministries weren’t committed to reforms, “many mining operations are still controlled by political elites, warlords, military personnel, and the police,” transportation networks were inadequate, and several projects showed no results.

Tens of millions of dollars went for training and equipping an Afghan National Engineer Brigade. The NEB was hampered by “army staff on leave for holidays, political events, low literacy levels, and security concerns.” The brigade “lacked initiative” and “was not capable of carrying out its mission.”

Some $2.3 billion in USAID money went for stability programs, yet, said SIGAR, “villages that received USAID assistance showed a marked decrease in their stability scores relative to the overall decrease in stability scores for both villages that did and those that did not receive USAID assistance.”

The official line remains positive. On one of my visits to Afghanistan a Marine Corps officer warned me that “everyone is selling something.” Private reports were different from the glowing reviews I got from my NATO handlers.

As I point out on Forbes: “The U.S. has been fighting in Afghanistan for more than 14 years. It’s time to bring home the troops. No more Americans should die in Afghanistan for nothing.”

Secretary John Kerry went to Beijing to again lecture his hosts about the need for China to pressure North Korea over the latter’s nuclear program. As expected, his mission failed. The Xi government again proved unwilling to threaten the survival of the Kim dynasty.

Immediately after Pyongyang’s fourth nuclear test Kerry attacked Beijing’s policy: it “has not worked and we cannot continue business as usual.” Even before Kerry arrived the PRC made clear it disagreed. “The origin and crux of the nuclear issue on the Korean Peninsula has never been China,” said a Ministry of Foreign Affairs spokeswoman: “The key to solving the problem is not China.”

While he was in Beijing she cited the behavior of other parties as “one major reason why the denuclearization process on the peninsula has run into difficulties.” Beijing officialdom has shown plenty of irritation with the Democratic People’s Republic of Korea, but China has yet to be convinced that it should destroy its own ally and strengthen America’s position in Northeast Asia.

Kerry made the best of an embarrassing situation when he announced that the two sides agreed to an “accelerated effort” by the UN Security Council to approve a “strong resolution that introduces significant new measures” against the DPRK. No one should hold their breath as to the nature of those “measures,” however.

Foreign Minister Wang Yi echoed Kerry in supporting passage of “a new resolution,” but added the devastating caveat: “In the meantime, we must point out that the new resolution should not provoke new tensions in the situation, still less destabilize the Korean peninsula.” Wang explained that “Sanctions are not an end in themselves” but should encourage negotiation, not punish.

As I point out in National Interest: “If Kerry wants the Chinese to follow U.S. priorities, he must convince them that America’s proposals advance Chinese interests. Which means explain to them why they should risk destroying their one military ally in the region, with the possibility of creating chaos and conflict next door and adding the entire peninsula to America’s anti-China alliance network.”

Good luck.

In 1950, the PRC went to war with the U.S. to preserve the North Korean state and prevent American forces from advancing to the Yalu River. Even today Beijing wants to see a united Korea allied with Washington about as much as it desires to have a nuclear North Korea.

Indeed, even without a U.S. garrison, a more powerful ROK would pose a challenge to the PRC. Moreover, Beijing’s favored economic position in the North would disappear as South Korean money swept away Chinese concessions.

Worse, the process of getting to a reunified Korea likely could be disastrous. Nothing in the DPRK’s history suggests a willingness to gently yield to foreign dictates. In the late 1990s the regime allowed a half million or more people to starve to death. Whatever threats China made, Kim Jong-un might say no and continue in power, irrespective of the human cost.

If China ended up breaking a recalcitrant Kim dynasty by sanctioning oil and food, the result could be extraordinary hardship and armed factional combat followed by mass refugee flows across the Yalu—multiply the desperation and number of Syrians heading to Europe. Then toss in loose nuclear weapons and a possible South Korean/U.S. military push across the Demilitarized Zone to force reunification.

The result would be a first-rate nightmare for Chinese President Xi Jinping. He might well ask: so, explain to me again, Secretary Kerry, why my country should ruin its geopolitical position to further Washington’s ends?

If John Kerry’s private message was the same as his public pronouncements, he had no hope of winning Chinese support for taking decisive action against the DPRK. Next time he visits he should employ the art of persuasion—or stay home.                      

Possibly the strangest foreign policy decision the Obama administration has made was its decision to support the Saudi-led war in Yemen. The White House has made quiet counterterrorism operations a key plank of its foreign policy agenda, and the administration includes a number of officials best known for their work on human rights issues, most notably Samantha Power. As such, the president’s decision to supply logistical, intelligence, and targeting support for the Saudi-led coalition’s military campaign – a campaign that has been horrifically damaging to human rights inside Yemen, as well as detrimental to U.S. counterterrorism goals – was deeply surprising.

Less surprising was the fact that the conflict has turned into a disastrous quagmire. Yemen was already arguably a failed state when the intervention began in April 2015. The power transition negotiated in the aftermath of the Arab Spring was weak and failing, with Yemen’s perpetual insurgencies worsening the situation. Since the intervention began, the United Nations estimates that over 21 million Yemenis have been deprived of life’s basic necessities. Thousands have been killed. Even more concerning, United Nations monitors reported to the Security Council that they believed the Saudi-led coalition may be guilty of crimes against humanity for its indiscriminate air strikes on civilians.

Strategically, the coalition has made few gains. Despite the terrible loss of life, the coalition has stalled south of the capital, Sanaa. Further advances will be exceedingly difficult. At the same time, Al Qaeda inside Yemen has grown in strength and size, benefiting from the conflict and even presenting itself as a viable partner for the Saudi coalition. It is hard to see how U.S. strategic interests (counterterrorism, human rights, or even regional stability) are being served by this conflict.

So what should the president do? In his last few months in office, President Obama should take advantage of his executive power to end U.S. support for the war in Yemen and direct America’s diplomats to aggressively pursue a diplomatic settlement. This war is a humanitarian disaster and a strategic failure; ending our support for it should be a no-brainer.


Gabriel Roth, who turns 90 years young today, is a rock star among transportation economists, and a special inspiration for those of us who support reducing the federal government’s role in transportation. According to his C.V., Roth earned degrees in engineering from London’s Imperial College in 1948 and economics from Cambridge in 1954.

In 1959, he began research into improved road pricing systems. This led to his appointment to a Ministry of Transport commission that published a 1964 report advocating pricing congested roads in order to end that congestion.

In 1966, the Institute for Economic Affairs published his paper, A Self-Financing Road System, which argued that user fees should pay for all roads, and not just be used to relieve congestion. Roads should be expanded, Roth noted, wherever user fees exceeded the cost of providing a particular road, but not elsewhere.

In 1967, Roth moved to the United States to work for the World Bank, where he did road pricing studies for many developing countries and cities, including Bangkok, Manila, and Singapore. After leaving the World Bank in 1987, he continued to work as a consultant until 2000, among other things helping design the Dulles Toll Road and writing Roads in a Market Economy, a book published in 1996.

Since then, he has been a regular participant in transportation conferences, meetings, and hearings. He edited a 2006 book, Street Smart, co-authored a 2008 paper showing how electronic tolling could be done without invading people’s privacy, and made a presentation about tolling at the 2010 American Dream conference.

My home state of Oregon is now experimenting with mileage-based user fees, and I’m one of the volunteers in this experiment. If it goes well, we may see the realization of Roth’s ideas before he turns 100.

I hope to see Gabe on my next trip to DC. I know I’ll be able to find him by looking for the nearest transportation conference.

Quite a number of media fact-checkers tripped over Ted Cruz’s claim in last night’s debate that Barack Obama had “dramatically degraded our military,” and Marco Rubio’s related pledge to rebuild a U.S. military that is “being diminished.”

The Dallas Morning News noted that “amounts spent on weapons modernization are about the same as they were when Republican George W. Bush was president.” Meanwhile, to the extent that the military’s budget “is being squeezed,” they wrote, it is because of “the insistence of lawmakers in both parties that money be spent on bases and equipment that the Pentagon says it doesn’t need.”

Politico’s Bryan Bender (accessible to Politico Pro subscribers) concluded that while Cruz’s “facts may hold up to scrutiny…they are nonetheless misleading.” Bender pointed out that “Military technology has advanced significantly in the last quarter century and combat aircraft and warships are much more precise and pack a more powerful punch.” Politifact agreed, rating Cruz’s claim “Mostly False.”

Ultimately, alas, whether the U.S. military has been severely degraded is a judgment call. Relative to what? And when? And what does that mean for U.S. security?

But while the answers to such questions are subjective, the facts on spending are not. Undaunted by the realization that committed partisans are unlikely to be converted by them, I’m also doing my part to try to inject some facts into the debate over the Pentagon’s budget. A few weeks ago, I posed five sets of questions to the candidates at The National Interest’s Blog, The Skeptics, including, for those calling for more military spending:

Why would you spend more? What is the United States unable to do right now to preserve its security because it isn’t spending enough? To what extent is insufficient military strength the critical factor explaining America’s inability to achieve satisfactory results with respect to an array of challenges, from destroying ISIS, to repairing failed states, to halting North Korea’s nuclear program?

This morning at TNI, I offered my take on whether lower military spending as a share of GDP is to blame for the U.S. military’s supposed precipitous decline. I’m skeptical.

For one thing, the Pentagon’s base budget, excluding the costs of our recent wars, remains near historic highs. Under the bipartisan Budget Control Act passed in 2011, and as amended in 2013 and late 2015, U.S. taxpayers will spend more on the military in each of the next five years ($510 billion) than we spent, on average, during the Cold War ($460 billion). Those figures are adjusted for inflation. And the actual gap between what we spend now, and what we spent then, will be larger, because the BCA doesn’t cover war costs.

Meanwhile, it isn’t even true that spending under Barack Obama is lower than under George W. Bush. In inflation-adjusted dollars, military spending – both war and non-war – averaged $606 billion per year during Bush’s two terms in office; under Obama, it has averaged $668 billion. The United States will have spent nearly $500 billion more in the period 2009-2016 than from 2001-2008 ($5.3 trillion vs. $4.8 trillion).
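The arithmetic behind those totals is simple enough to check from the averages quoted above; a quick sketch:

```python
# Check the inflation-adjusted spending comparison from the text:
# average annual military spending (war plus non-war), in billions
# of constant dollars, over each president's eight years in office.

bush_avg, obama_avg, years = 606, 668, 8

bush_total = bush_avg * years    # 4,848 -> roughly $4.8 trillion
obama_total = obama_avg * years  # 5,344 -> roughly $5.3 trillion

print(f"difference: ${obama_total - bush_total} billion")  # $496 billion
```

That $496 billion gap is the “nearly $500 billion more” figure in the text.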

So the most important question, it seems, is why more spending is leading to less capability, in the estimation of Cruz (and Rubio and Jeb Bush and any other candidate who wants to spend more on the military). A smaller Army. A smaller Navy. Fewer Air Force planes.

Do fewer troops and ships and planes imply that the military is dramatically degraded? Not necessarily. The troops are better trained than a generation ago. The ships are more capable. The weapons are more accurate.

We should not assume that less military spending – if spending did decline – would necessarily lead to a less capable military. Meanwhile, there are many possible explanations for why militaries degrade over time – for example, fighting foolish, unnecessary wars. Far fewer American troops are being killed and wounded in Iraq and Afghanistan now than in 2008.

I conclude at TNI:

it isn’t obvious that a more costly force is needed to preserve U.S. security and protect vital U.S. interests. That we are spending less as a share of GDP than at some points in U.S. history does not necessarily mean that we should spend more. It could also be true that we are spending less and getting more, or that we could safely get by with less. Once we get beyond the confusion over different ways to measure our spending, let’s examine what the U.S. military truly must do in order to keep Americans safe, and how much that will cost.

Read the whole thing here.

Did our message finally get through? (See “How ADA-for-the-Web Regulations Menace Online Freedom,” 2013). Or that of other commentators like Eric Goldman, who warned (of a related court case) that “all hell will break loose” if the law defines websites as public accommodations and makes them adopt “accessibility”? At any rate, the U.S. Department of Justice, after years of declaring that it was getting ready any day now to label your website and most others you encounter every day as out of compliance with the ADA, has suddenly turned around and done this:

In an astonishing move, the Department of Justice (DOJ) announced that it will not issue any regulations for public accommodations websites until fiscal year 2018 — eight years after it started the rulemaking process with an Advanced Notice of Proposed Rulemaking (ANPRM).

Yes, eight years is a very long time for a rulemaking, especially one pursuing issues that have been in play for many years (that link discusses testimony I gave in 2000). And predictably, some disabled interest-group advocates are already charging that the latest delay is “outrageous” and shows “indifference.” More likely, it shows that even an administration that has launched many audacious and super-costly initiatives in regulation has figured out that this one is so audacious and super-costly that it should be – well, not dropped, but left as a problem for a successor administration.

Besides, as so often happens, for regulated parties the issue is (to borrow a phrase) not freedom from obligation, but freedom from specification as to what that obligation might be. Court decisions, which for years ran mostly against ADA advocates’ “public accommodations” claim, now point confusingly in both directions. And in the mean time both private litigants and DoJ itself continue to sue online providers and fasten on them new settlements and decrees, as when Amazon lately agreed to caption more videos for the deaf; Harvard and MIT, meanwhile, were still being sued for the audacity of having offered uncaptioned online courses to the public. Minh Vu and Kristina Launey of Seyfarth Shaw:

…since issuing that [2010] ANPRM, DOJ’s enforcement attorneys have investigated numerous [entities claimed to be] public accommodations, pressuring them to make their websites accessible. DOJ even intervened in recent lawsuits (e.g., here, here, and here) taking the position that the obligation to have an accessible website has existed all this time in the absence of any new regulations.

The next administration – or better yet Congress – should summon the courage to give a firm and final No.

In recent years, politicians set impossibly high mandates for the amount of ethanol motorists must buy in 2022 while also setting impossibly high standards for the fuel economy of cars sold in 2025.  In pursuit of these conflicting goals, motorists are now offered tax credits to drive heavily subsidized electric cars, even as they will supposedly be required to buy more and more ethanol-laced fuel each year.

Why have such blatantly contradictory laws received so little criticism, if not outrage? Probably because ethanol mandates and electric car subsidies are lucrative sources of federal grants, loans, subsidies and tax credits for “alternative fuels” and electric cars.  Those on the receiving end lobby hard to keep the gravy train rolling while those paying the bills lack the same motivation to become informed, or to organize and lobby. 

With farmers, ethanol producers and oil companies all sharing the bounty, using subsidies and mandates to pour ever-increasing amounts of ethanol into motorists’ gas tanks has been a win-win deal for politicians and the interest groups that support them and a lose-lose deal for consumers and taxpayers.

The political advantage of advocating contradictory future mandates is that the goals usually prove ridiculous only after their promoters are out of office.  This is a bipartisan affliction.  In his 2007 State of the Union Address, for example, President Bush called for mandating 35 billion gallons of biofuels by 2017, an incredible target equal to one-fourth of all gasoline consumed in the United States in 2006.  Not to be outdone, “President Obama said during the presidential campaign that he favored a 60 billion gallon-a-year target.”

The Energy Independence and Security Act of 2007 (EISA) did not go quite as far as Bush or Obama, at least in the short run.  It required 15 billion gallons of corn-based ethanol by 2015 (about 2 billion more than were actually sold), but 36 billion gallons of all biofuels by 2022 (which would be more than double last year’s sales). The 2007 energy law also raised corporate average fuel economy (CAFE) standards for new cars to 35 miles per gallon by 2020, which President Obama in 2012 ostensibly raised to 54.5 mpg by 2025 (a comically precise guess, since requirements are based on the size of vehicles we buy).

The 36 billion-gallon biofuel mandate for 2022 is the mandate Iowa Governor Terry Branstad (and Donald Trump) now vigorously defend against the rather gutsy opposition of Sen. Ted Cruz.  But it is impossible to defend the impossible: ethanol consumption can’t possibly double as fuel consumption falls.

From 2004 to 2013, cars and light trucks consumed 11% less fuel.  The Energy Information Administration likewise predicts that fuel consumption of light vehicles will fall by another 10.1% from 2015 to 2022.  So long as ethanol is no more than 10% of a gallon (a blend limit already higher than those in Canada or Europe), ethanol use must fall as we use less gasoline, rather than rise as the mandates require. If we ever buy many electric cars or switch from corn to cellulosic sources of ethanol, as other impossible mandates pretend, then corn-based ethanol must fall even faster.
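The blend-wall arithmetic behind that claim is easy to check. A minimal sketch (the 140-billion-gallon gasoline figure is a rough placeholder of my own, not a number from the post):

```python
# Blend-wall arithmetic: with ethanol capped at 10% of each gallon,
# total ethanol use can never exceed 10% of gasoline consumption.

GASOLINE_2015 = 140e9   # gallons/year -- rough placeholder figure
EIA_DECLINE = 0.101     # projected 10.1% drop in light-vehicle fuel use, 2015-2022
BLEND_CAP = 0.10        # the "blend wall": maximum ethanol share per gallon

gasoline_2022 = GASOLINE_2015 * (1 - EIA_DECLINE)
max_ethanol_2022 = gasoline_2022 * BLEND_CAP   # ceiling on ethanol use
mandate_2022 = 36e9                            # EISA's 2022 biofuel mandate, gallons

print(f"Max ethanol at 10% blend in 2022: {max_ethanol_2022 / 1e9:.1f}B gallons")
print(f"EISA biofuel mandate for 2022:    {mandate_2022 / 1e9:.1f}B gallons")
print(f"Mandate exceeds the ceiling by a factor of "
      f"{mandate_2022 / max_ethanol_2022:.1f}")
```

Even on generous assumptions, the 10% blend wall caps ethanol at roughly a third of the 36-billion-gallon mandate, and the ceiling shrinks further as fuel consumption falls.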

If raising ethanol’s mandated share above 10% is any politician’s secret plan, nobody dares admit it.  Most pre-2007 cars can’t handle more than 10 percent ethanol without damage, and drivers of older cars often lack the income or wealth to buy a new one.  Since ethanol is a third less efficient than gasoline, adding more ethanol would also make it even more impossible for car companies to comply with Obama’s wildly-ambitious fuel economy standards (which must also reduce ethanol use, if they work).

The 2007 law also mandated an astonishing 16 billion gallons of nonexistent “cellulosic” ethanol by 2022 from corn husks or whatever.  We were already supposed to be using a billion gallons of this marvelous snake oil by 2013. Despite lavish taxpayer subsidies, however, production of cellulosic biofuel was only about 7.8 million gallons a month by April 2015 (about 94 million a year).  The Environmental Protection Agency (EPA) mandate issued on June 10, 2015, was 230 million gallons for 2016, which is more fantasy.

It doesn’t help that the Spanish firm Abengoa – which received $229 million from U.S. taxpayers to produce just 1.7 million gallons of ethanol – is trying to sell its plant in Kansas to avoid the bankruptcy fate of cellulosic producer KiOR. It also doesn’t help that a $500,000 federally-funded study finds that biofuels made with corn residue release 7% more greenhouse gases than gasoline.

The contradictory, fantastic and often scandalous history of ethanol mandates illustrates the increasing absurdity of mandates from Congress and the EPA. 

The 2007 biofuel mandate was not just bad policy.  It was and remains an impossible, bizarre policy.

The Obama administration has been easing restrictions on travel, exports, and export financing for Cuba. Commerce Secretary Penny Pritzker spoke of “building a more open and mutually beneficial relationship.”

However, the administration expressed concern over Havana’s dismal human rights practices. Despite the warm reception given Pope Francis last fall, the Castro regime has been on the attack against Cubans of faith.

In a new report the group Christian Solidarity Worldwide warned of “an unprecedented crackdown on churches across the denominational spectrum,” which has “fueled a spike in reported violations of freedom of religion or belief.” There were 220 specific violations of religious liberties in 2014, but 2,300 last year, many of which “involved entire churches or, in the cases of arrests, dozens of victims.”

Even in the best of times the Castros have never been friends of faith in anything other than themselves. The State Department’s 2014 report on religious liberty noted that “the government harassed outspoken religious leaders and their followers, including reports of beating, threats, detentions, and restrictions on travel. Religious leaders reported the government tightened controls on financial resources.”

Last year the U.S. Commission on International Religious Freedom was similarly critical. The Commission explained: “Serious religious freedom violations continue in Cuba, despite improvements for government-approved religious groups.” Never mind the papal visit, “the government continues to detain and harass religious leaders and laity, interfere in religious groups’ internal affairs, and prevent democracy and human rights activists from participating in religious activities.”

Now CSW has issued its own report. Last year’s increase in persecution “was largely due to the government declaring 2,000 Assemblies of God (AoG) churches illegal, ordering the closure or demolition of 100 AoG churches in three provinces, and expropriating the properties of a number of other denominations, including the Methodist and Baptist Conventions.”

This wide-ranging campaign was led by the Office of Religious Affairs. Noted CSW: “In 2015, the ORA continued to deny authorization for a number of religious activities and in cooperation with other government agencies, issued fines and threats of confiscation to dozens of churches and religious organizations.”

Through the ORA the Communist Party exercises control over religious activities. Indeed, reported CSW, the Office “exists solely to monitor, hinder and restrict the activities of religious groups.”

The regime also has increasingly targeted church leaders and congregants, jailing a church leader for the first time in years. In early January two churches were destroyed, church members arrested, and three church leaders held incommunicado. One of the government’s more odious practices, according to CSW, has been to threaten churches with closure if they “do not comply with government demands to expel and shun specific individuals.”

The regime’s destructive activities have been justified as enforcement of zoning laws. But in practice those laws are a subterfuge for shutting down churches.

Other legislation threatens house churches. While it has not been consistently implemented in the past, “church leaders have repeatedly expressed concern at its potential to close down a large percentage of house churches.”

CSW concluded that the ongoing crackdown was an attempt to limit calls for social reform which would complement ongoing, though limited, economic changes. Detentions initially were concentrated on “Cubans considered by the government to be political dissidents,” including a group of Catholic women called the Ladies in White. The regime crackdown later “expanded to include other individuals associated with independent civil society, including human rights and democracy activists.”

The Obama administration was right to engage Cuba. After more than 50 years, the embargo serves no useful purpose.

However, even lifting all economic restrictions won’t turn Cuba into a democracy. Only sustained pressure from within and without Cuba is likely to force the Castro regime to yield control to the Cuban people.

As I wrote in Forbes: “Americans should forthrightly encourage freedom in Cuba. Religious believers should be particularly vocal in supporting people seeking to live out their faith under Communist oppression. Some day autocracy will give way to liberty even in Cuba.”

In the past two decades, much scientific research has been conducted to examine the uniqueness (or non-uniqueness) of Earth’s current climate in an effort to discern whether or not rising atmospheric CO2 concentrations are having any measurable impact. Recent work by Thapa et al. (2015) adds to the growing list of such studies with respect to temperature.

According to this team of Nepalese and Indian researchers, meteorological stations in Nepal are few (particularly in the mountain regions) and sparsely distributed across the country, making it “difficult to estimate the rate and geographic extent of recent warming” and to place it within a broader historical context. Thus, in an attempt to address this significant data void, Thapa et al. set out “to further extend the existing climate records of the region.”

The fruits of their labors are shown in the figure below, which presents a nearly four-century-long (AD 1640-2012) reconstruction of spring (Mar-May) temperatures based on tree-ring width chronologies acquired in the far-western Nepalese Himalaya. This temperature reconstruction identifies several periods of warming and cooling relative to its long-term mean (1897-2012). Of particular interest are the red and blue lines shown on the figure, which demarcate the peak warmth experienced during the past century and the temperature anomaly expressing the current warmth, respectively. As indicated by the red line, the warmest interval of the 20th century is not unique, having been eclipsed four times previously (see the shaded red circles) in the 373-year record – once in the 17th century, twice in the 18th century and once in the 19th century. Furthermore, the blue line reveals that current temperatures are uncharacteristically cold. Only two times in the past century have temperatures been colder than they are now!

Figure 1. Reconstructed spring (March-May) temperature anomalies of the far western Nepal Himalaya, filtered using a smoothing spline with a 50% frequency cutoff of 10 years. The red line indicates the peak temperature anomaly of the past century, the blue line indicates the current temperature anomaly, the shaded red circles indicate periods in which temperatures were warmer than the peak warmth of the past century, and the shaded blue circles indicate periods during the past century that were colder than present. Adapted from Thapa et al. (2015).

In light of the above facts, it is clear there is nothing unusual, unnatural or unprecedented about modern spring temperatures in the Nepalese Himalaya. If rising concentrations of atmospheric CO2 are having any impact at all, that impact is certainly not manifest in this record.



Thapa, U.K., Shah, S.K., Gaire, N.P. and Bhuju, D.R. 2015. Spring temperatures in the far-western Nepal Himalaya since AD 1640 reconstructed from Picea smithiana tree-ring widths. Climate Dynamics 45: 2069-2081.


An editorial in today’s New York Times calls for a financial transactions tax – a charge of a few tenths of a percent on the market value of every trade of a stock, bond, or derivative. My Working Papers column two years ago described the pitfalls of such a tax.  While tax rates in the range of tenths of a percent sound small, they would have large effects on stock values.  Bid-ask spreads are now 1 cent for large-cap stocks. A 0.10 percent tax would add 5 cents to the spread for a $50 stock.
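The arithmetic in that last sentence can be spelled out; a quick sketch using only the figures in the paragraph:

```python
# Back-of-the-envelope: a 0.10% per-trade tax dwarfs the current
# 1-cent bid-ask spread on a typical large-cap stock.

price = 50.00       # share price from the example, in dollars
tax_rate = 0.0010   # 0.10% financial transactions tax
spread = 0.01       # current large-cap bid-ask spread, in dollars

tax_per_share = price * tax_rate   # tax owed per share, per trade
print(f"Tax per share: {tax_per_share * 100:.0f} cents "
      f"({round(tax_per_share / spread)}x the current spread)")
```

A nominally tiny rate thus multiplies the effective cost of trading a large-cap share several times over, which is why such taxes hit market liquidity far harder than the headline rate suggests.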

The alleged purpose of such a tax is to reduce the arms race among high-frequency traders who exploit differences in the timing of bids and offers across exchanges at the level of thousandths of a second to engage in price arbitrage.  In the Fall 2015 issue I review a paper that demonstrates that this arms race is the result of stock exchanges’ use of a “continuous-limit-order-book” design (that is, orders are taken continuously and executed when the asset reaches the order’s stipulated price). The authors use actual trading data to show that the prices of two securities that track the S&P 500 are perfectly correlated at hourly and minute intervals, but at the 10- and 1-millisecond level the correlation breaks down, creating mechanical arbitrage opportunities even in a perfectly symmetrical information environment.  In a “frequent batch” auction design (where trades are executed, by auction, at stipulated times that can be as little as a fraction of a second apart), the advantage of incremental speed improvements disappears. In order to end the arbitrage “arms race,” the authors propose that exchanges switch to batch auctions conducted every tenth of a second.  No need for a tax.
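To illustrate why batching neutralizes small speed advantages, here is a toy simulation of my own (an illustration of the general idea, not the model from the paper under review): two arbitrageurs react to the same mispricing, one with a 1-millisecond latency edge.

```python
import random

# Toy comparison of continuous vs. frequent-batch order processing.
# Two arbitrageurs see the same mispricing; the "fast" one reacts 1 ms
# sooner than the "slow" one.

FAST, SLOW = 1.0, 2.0      # reaction latencies, in milliseconds
BATCH_INTERVAL = 100.0     # a batch auction clears every 100 ms
TRIALS = 10_000

fast_wins_continuous = 0
fast_wins_batch = 0
for _ in range(TRIALS):
    event = random.uniform(0, 1000)     # time the mispricing appears
    fast_arrival = event + FAST
    slow_arrival = event + SLOW
    # Continuous limit order book: first order to arrive takes the trade.
    if fast_arrival < slow_arrival:
        fast_wins_continuous += 1
    # Batch auction: orders landing in the same interval compete on price,
    # so speed decides only if a batch boundary separates the two arrivals.
    if fast_arrival // BATCH_INTERVAL < slow_arrival // BATCH_INTERVAL:
        fast_wins_batch += 1

print(f"Continuous book: fast trader wins {fast_wins_continuous / TRIALS:.0%}")
print(f"Batch auctions:  fast trader wins {fast_wins_batch / TRIALS:.0%}")
```

Under continuous processing the faster trader captures every opportunity; under 100-millisecond batching, speed matters only when a batch boundary happens to fall inside the 1-millisecond gap between arrivals, so the two traders are forced to compete on price instead.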