Cato Op-Eds

Individual Liberty, Free Markets, and Peace

E-Verify is an electronic employment-eligibility verification system run by the federal government. It is supposed to check the identity information of new hires against government databases to see whether they are legally eligible to work. The government created E-Verify to deny employment to illegal immigrants and thereby turn off the wage magnet that attracts so many here in the first place, but the program has serious and unsolvable problems. Four states have mandated E-Verify for all new hires: Arizona, Alabama, South Carolina, and Mississippi. Their experiences with a state-level E-Verify mandate offer several lessons about how the program would likely function if Congress ever mandated it nationwide.

The first lesson is that E-Verify data is insufficiently detailed to gauge the program’s effectiveness. As I recently wrote about South Carolina, there were several quarters from 2012 through 2014 in which there were more E-Verify checks than new hires (Table 1). Under the smooth and lawful operation of E-Verify, as its architects intended, that is not supposed to happen by the wide margins reported below, except in communities where the number of illegal immigrants is grossly disproportionate to the size of the population (not the case here). Rather than running a single E-Verify check for each new hire, employers in South Carolina ran more checks than were required. There are innocent explanations for this, such as simple corrections of human error, but also potentially destructive ones, such as employers pre-screening applicants. Knowing the number of E-Verify checks run per individual new hire would help estimate accurate compliance rates. Regardless, because the number of E-Verify checks exceeds the number of hires, we cannot know how many new hires are actually run through the system in states where 100 percent of them are supposed to be. That makes accurately measuring compliance rates impossible.

Table 1

E-Verify Checks as a Percent of All New Hires

| Year | Quarter | Arizona | Alabama | South Carolina | Mississippi |
|------|---------|---------|---------|----------------|-------------|
| 2008 | 2 | 43.06% | NM | NM | NM |
| 2008 | 3 | 50.75% | NM | NM | NM |
| 2008 | 4 | 49.90% | NM | NM | NM |
| 2009 | 1 | 52.80% | NM | NM | NM |
| 2009 | 2 | 45.48% | NM | NM | NM |
| 2009 | 3 | 49.89% | NM | NM | NM |
| 2009 | 4 | 39.28% | NM | NM | NM |
| 2010 | 1 | 47.01% | NM | NM | NM |
| 2010 | 2 | 48.10% | NM | NM | NM |
| 2010 | 3 | 82.48% | NM | NM | NM |
| 2010 | 4 | 59.52% | NM | NM | NM |
| 2011 | 1 | 61.18% | NM | NM | NM |
| 2011 | 2 | 52.13% | NM | NM | NM |
| 2011 | 3 | 55.21% | NM | NM | NM |
| 2011 | 4 | 55.05% | NM | NM | 55.75% |
| 2012 | 1 | 55.51% | NM | NM | 34.19% |
| 2012 | 2 | 53.68% | NM | 80.68% | 41.03% |
| 2012 | 3 | 58.96% | NM | 110.32% | 45.57% |
| 2012 | 4 | 59.26% | 54.68% | 117.53% | 57.62% |
| 2013 | 1 | 63.09% | 45.42% | 112.93% | 44.34% |
| 2013 | 2 | 57.66% | 40.26% | 87.91% | 42.15% |
| 2013 | 3 | 64.12% | 52.99% | 118.09% | 46.45% |
| 2013 | 4 | 60.74% | 52.90% | 126.95% | 51.48% |
| 2014 | 1 | 60.98% | 41.75% | 123.51% | 38.34% |
| 2014 | 2 | 59.18% | 37.48% | 89.93% | 36.09% |
| 2014 | 3 | 66.72% | 47.02% | 125.32% | 42.73% |
| 2014 | 4 | 64.85% | 51.40% | 77.14% | 48.22% |
| 2015 | 1 | 69.38% | 44.29% | 73.27% | 42.36% |
| 2015 | 2 | 61.64% | 37.02% | 53.27% | 35.64% |
| 2015 | 3 | 73.61% | 47.23% | 69.89% | 39.77% |
| 2015 | 4 | 80.27% | 51.60% | 73.00% | 44.78% |
| 2016 | 1 | 84.61% | 44.22% | 69.86% | 39.55% |
| 2016 | 2 | 73.17% | 39.63% | 54.34% | 38.46% |
| 2016 | 3 | 83.98% | 46.94% | 69.47% | 39.29% |
| 2016 | 4 | 92.08% | 52.71% | 83.80% | 49.62% |
| 2017 | 1 | 97.87% | 43.38% | 71.37% | 39.30% |
| 2017 | 2 | 81.71% | 41.27% | 62.57% | 38.19% |

Sources: Department of Homeland Security and Longitudinal Employer-Household Dynamics Survey

Note: NM means “no E-Verify mandate.”

The second lesson is that compliance rates are likely very low. Figure 1 shows E-Verify compliance rates, measured as the percent of new hires run through E-Verify in each quarter in states where the program is mandated. As mentioned before, these compliance rates count every E-Verify check by participating employers in these states, and many checks are for the same new hire, so the rates all look higher than they are on the ground. Yet even according to that rosy data, only about 61 percent of all new hires who were supposed to be run through E-Verify in these states in the second quarter of 2017 were actually checked (Figure 1). That is a low level of compliance.

Figure 1

E-Verify Compliance Rates in States with Mandated E-Verify

Sources: Department of Homeland Security and Longitudinal Employer-Household Dynamics Survey
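
To make the arithmetic behind Table 1 and Figure 1 concrete: the rate in question is simply total E-Verify checks divided by total new hires in a quarter. Below is a minimal sketch of that calculation; the counts used are hypothetical placeholders, not DHS or LEHD figures.

```python
def compliance_rate(everify_checks: int, new_hires: int) -> float:
    """E-Verify checks as a percent of all new hires in a quarter.

    A value above 100 percent means more checks were run than workers
    were hired. Because some hires are checked more than once
    (corrections, re-runs, pre-screening), the ratio can only
    overstate the share of distinct new hires actually checked.
    """
    return 100 * everify_checks / new_hires

# Hypothetical quarterly counts, for illustration only:
print(compliance_rate(everify_checks=127_000, new_hires=100_000))  # 127.0
print(compliance_rate(everify_checks=61_000, new_hires=100_000))   # 61.0
```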

The third lesson is that laws do not enforce themselves. It is easy for any level of government to pass a law declaring that E-Verify is now mandatory for all new hires. Enforcing that labor market regulation is a different challenge that requires workplace visits, inspections, and audits. Other than South Carolina, which has a remote audit program run by the state Department of Labor, Licensing, and Regulation, there is no enforcement of E-Verify by states that mandate its use for all new hires. To make matters worse, South Carolina’s audits might make the E-Verify data less reliable as employers check the same employees multiple times just to make sure they pass remote audits. Worst of all, we do not even know how often E-Verify succeeds in denying employment to illegal immigrants. South Carolina’s audits only measure if E-Verify is used, not whether it succeeds in its goal.

E-Verify is an expensive and intrusive labor market regulation that mostly affects American citizens, as every person needs to be run through it for it to have any hope of working. Members of Congress introduce bills to mandate it nationwide nearly every session. It is time for policymakers in Washington, DC to look at the states where E-Verify is mandatory to judge how it works in reality, rather than relying on Pollyannaish odes about its intended effects. The low E-Verify compliance rates in states where the program is mandated point to serious problems that its cheerleaders must directly address.

Yesterday I was on a panel at Heritage looking at a Swiss-style debt brake and whether it was appropriate for the US.

US federal debt is now at its highest level as a proportion of GDP since 1950. Even prior to the recent tax cuts and budget-cap-busting omnibus spending deal, debt was forecast to rise to 150 percent of GDP over the next three decades, primarily due to an aging population interacting with existing entitlement promises. Next week the Congressional Budget Office will publish its economic and fiscal outlook, which will show much higher deficits over the coming years following recent policy changes, and hence an even worse debt baseline from which to face this fiscal headwind. Analysts expect the annual deficit could rise to around 5.3 percent of GDP in the next year or so.

Why does this matter from an economic perspective?

There are good economic reasons why we should desire a lower long-term debt-to-GDP ratio. For starters, a lower debt burden is insurance against the kind of “earthquake” debt crisis that John Cochrane and his Hoover Institution colleagues wrote about in the Washington Post last week. There are also obviously significant intergenerational consequences for taxpayers stemming from continually kicking the can down the road with ever-rising accumulated debt, with rising debt interest payments taking up a much higher proportion of government spending.

But a new Dallas Fed Economic Letter builds on previous research suggesting potentially the most damaging consequence: a rising debt trajectory seems to be associated with slower economic growth.

Back in the early part of this decade there was a huge debate about this. Ken Rogoff and Carmen Reinhart published a paper suggesting that growth across countries tended to slow substantially when government debt exceeded 90 percent of GDP. This “threshold effect” was taken by some commentators and politicians as gospel, but economists were more skeptical of thinking 90 percent represented a magical threshold beyond which disaster would strike. Then mistakes were found in the Reinhart-Rogoff work, and that hook was used to discredit the idea that there was a negative transmission mechanism between high debt and low growth at all.

This was an overreaction. Reinhart and Rogoff were not the only ones to find such an association. In fact, there was a lot of evidence out there that high debts were associated with slower growth. Stephen Cecchetti, M. S. Mohanty, and Fabrizio Zampolli identified a debt-to-GDP threshold of about 85 percent as a point beyond which growth tends to slow. Even Thomas Herndon, Michael Ash, and Robert Pollin, who replicated Reinhart and Rogoff’s work correcting for the errors, found that, on average, growth was 1 percentage point per year lower when government debt exceeded 90 percent of GDP than when debt levels were between 60 and 90 percent.

That’s what makes the new Dallas Fed note so interesting. Its authors acknowledge, in line with basic intuition, that “the debt–growth relationship is complex, varying across countries and affected by global factors.” They also highlight the problem of disentangling the two-way causality between debt and growth, and the possibility of discontinuities. Nevertheless, looking at a panel of advanced and emerging economies, they conclude:

persistent accumulation of public debt over long periods is associated with a lower level of economic activity. Moreover, the evidence suggests that debt trajectory can have more important consequences for economic growth than the level of debt to gross domestic product (GDP).

Although there is no universally applicable threshold beyond which growth slows, countries with “rising debt-to-GDP ratios exceeding 60 percent tend to have lower real output growth rates.” What’s more, persistent accumulations of debt are associated with worse long-run growth outcomes:

These estimates are all negative and in the range of -5.7 to -9.4 percent, suggesting that a persistent accumulation in the debt-to-GDP ratio at an annual pace of 3 percent is eventually associated with annual GDP growth outcomes that are 0.2 to 0.3 percentage points lower on average.
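
As a back-of-the-envelope check on those figures, applying the quoted coefficients to a persistent 3-percentage-point annual pace of debt accumulation reproduces the stated 0.2 to 0.3 point growth effect. A sketch, assuming the estimates scale proportionally with the pace of accumulation:

```python
# Coefficients quoted above from the Dallas Fed note: -5.7% to -9.4%.
PACE = 3.0  # assumed annual rise in the debt-to-GDP ratio, percentage points

for coefficient in (-5.7, -9.4):
    effect = PACE * coefficient / 100  # growth effect, percentage points
    print(f"coefficient {coefficient}% -> {effect:+.2f} pp annual growth")

# Prints -0.17 pp and -0.28 pp: roughly 0.2 to 0.3 percentage points
# lower annual GDP growth, matching the passage quoted above.
```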

Though the authors are careful to point out that this does not prove causality, the study does present evidence that, if there is a transmission mechanism from high debt to low growth, the key to overcoming it is credible commitments and action to ensure debt increases are temporary phenomena. For the US federal government, rising debt looks like a permanent reality as far as the eye can see.

For more on how fiscal rules could help play a part in changing this, read here.

And here’s the full discussion at Heritage from yesterday.

In the May/June 1983 issue of Regulation, economist Bruce Yandle outlined a theory of regulation he referred to as “bootleggers and Baptists.” Using the example of laws forcing bars to close on Sundays, Yandle explains how well-intentioned Baptists worried about the “dangers” of alcohol and self-interested bootleggers hoping to profit when bars are closed both support the law for fundamentally different reasons.

While public discourse over regulation frequently identifies the “Baptists,” the role of “bootleggers” is often ignored. For example, a recent New York Times article discussing the Gun Control Act of 1968 overlooks the influence of U.S. gun manufacturers. The article describes the Act as a response to the assassinations of Martin Luther King Jr. and Robert F. Kennedy, which motivated President Lyndon Johnson to call for gun control legislation.

While the assassinations did propel the gun control movement, the Gun Control Act was also a chapter in a long-running battle between U.S. gun manufacturers and importers of foreign firearms. In the fall 2015 issue of Regulation, Joseph Michael Newhard recounts the history of gun control legislation and contends that U.S. manufacturers have frequently benefitted from gun control legislation that reduces their competition.

The 1968 Gun Control Act was no different. Limits were imposed on foreign imports, and manufacturers, importers, and dealers were forced to be licensed, reducing the ability of individuals to sell firearms. Rather than reducing firearms sales in the United States, the law simply transferred profits from importers to domestic manufacturers. As the Times reported on May 4, 1969, the Act,

which barred the importation of cheap, concealable-type handguns, is being defeated by the domestic manufacture of the guns or by the importation of foreign parts for assembly in this country….A domestic industry is thus blossoming to meet the brisk demand and will soon be turning out about 500,000 cheap pistols a year, compared to roughly 75,000 made here before the import restriction.

The recent Times article, however, ignores the role of gun manufacturers and the benefits the Act conferred on them. The article focuses on provisions that were ultimately left out of the act—namely, licensing and registration requirements—because of political pressure and lobbying by the NRA. The fact that provisions that directly benefitted gun manufacturers survived political opposition and remained in the final bill is a testament to the power of bootleggers and Baptists.

Written with research assistance from David Kemp.

The Surgeon General issued an “Advisory on Naloxone and Opioid Overdose” today, drawing attention to the effectiveness of the opioid overdose antidote naloxone. The drug, approved for use since 1971, is an effective remedy that can be safely administered by lay personnel who receive basic instructions. The Advisory cites research demonstrating that community-based overdose education and naloxone distribution reduces overdose deaths, and points out that first responders in most states and communities are now equipped with the drug.

Because naloxone is available only by prescription, most states have developed workarounds to make it more available to patients and, in some cases, to third parties in proximity to medical and non-medical opioid users, so that witnesses to an overdose can rescue the victim. This usually involves a state authorizing pharmacists to prescribe the drug or, in many cases, the state health director, acting as the state’s physician, issuing a “standing order” to pharmacists to distribute it.

The Advisory lists a number of conditions and situations that might place a person at risk of opioid overdose and encourages such people, or people who know them, to avail themselves of naloxone. It supports efforts at wider distribution at the community level.

Unfortunately, because of the stigma that has developed around opioid use, many opioid patients are reluctant to approach the pharmacist and request a naloxone prescription. In some states, naloxone will not be prescribed to third parties who know an opioid user. Numerous instances have also been reported in which pharmacists were reluctant to prescribe the antidote, believing they would be “enabling” a drug abuser.

Recognizing this obstacle to naloxone distribution, Australia made it available over-the-counter in 2016, making it as easy to purchase as cold remedies or antacids. This way medical and nonmedical opioid users can discreetly make a purchase and check out at the cash register without having to answer any questions or face scrutiny from a pharmacist. The drug has been over-the-counter in Italy for over 20 years.

Judging from an August 2016 blog post, the Food and Drug Administration thinks it is reasonable to consider reclassifying naloxone as over-the-counter, and the FDA Deputy Director stated in the post that the agency would be willing to assist manufacturers in submitting applications for reclassification. But FDA regulations allow the commissioner to order a rescheduling review and allow petitions for OTC rescheduling from “any interested person”—not just drug manufacturers.

I’ve argued here that the FDA Commissioner should order an expedited reclassification review, and, if the Commissioner is unwilling to do so, then the Secretary of Health and Human Services—or even Congress—can see that it occurs. I’ve also contended that state legislatures—or even governors—as interested parties, can formally request an FDA review.

The Surgeon General’s April 5th Advisory, singing the praises of naloxone and calling for its wider distribution in communities across the nation, makes it obvious that the Surgeon General is an interested party. Because he thinks naloxone should be used more widely to reduce overdose deaths, the Surgeon General should formally request that the FDA Commissioner order an expedited review of naloxone, in hopes that it will be available over-the-counter as quickly as possible.

The Federal Reserve Bank of New York has made its decision. On June 18 of this year John Williams, currently serving as the President of the Federal Reserve Bank of San Francisco, will succeed William Dudley as New York Fed President. This decision will have many ramifications in the years to come, but several key points immediately stand out.

It means that Jerome Powell, the newly minted chair of the Fed, who is a lawyer, not an economist, will have someone steeped in monetary policy by his side on the Federal Open Market Committee (FOMC) — the NY Fed President serves as the vice chair of the FOMC and is a permanent voter.

Williams has spent nearly his entire career within the Federal Reserve System. A student of John Taylor, Williams earned his PhD in 1994 at Stanford and immediately went to work at the Board of Governors. He stayed at the Board until 2002 when he moved to the San Francisco Fed. In 2011, Williams succeeded Janet Yellen as president there, having served as her research director.

The move to NY continues Williams’ ascension through the ranks of the Fed. While the NY Fed is one of a dozen regional banks, it is far and away the most important.

As president of the San Francisco Fed, Williams was already a voter on this year’s FOMC, but leading the NY Fed gives him a vote every year — the San Francisco Fed President votes only once every three years.

Because the System’s trading desk is housed at the NY Fed, that bank is responsible for executing the monetary policy decisions of the FOMC. The NY Fed’s special placement makes its president one of the top three officials within the Federal Reserve System — along with the Vice Chair of the Board and, of course, the Fed Chair.

While Williams has never worked directly in financial markets, that fact should not be seen as a criticism. His career in monetary policy complements Chair Powell’s business background, which is important as long as the Vice Chair of the Board remains a vacant seat. Furthermore, and perhaps because he has not worked directly in markets, Williams understands that monetary policy should not overreact to short-term data, particularly drops in financial markets. In his most recent speech, he said that there is no reason to expect a “knee-jerk reaction” from the Fed in response to recent events.

Williams’ specific monetary policy expertise is important in additional ways.

For nearly twenty years Williams has been discussing ways to measure the natural rate of interest (r*), which is a key input into many monetary policy rules, including the Taylor Rule. Though r* had been declining shortly before the Great Recession, it fell sharply during the financial crisis and has not recovered. This line of research has led Williams to the conclusion that the Fed is quite likely to be at the Zero Lower Bound again during future downturns. Because of this, Williams has become one of the most prominent advocates inside the Fed for finding an alternative target for conducting monetary policy. His preferred alternative to the Fed’s current inflation rate target is price-level targeting.

Targeting the inflation rate means that policy errors are ignored going forward. When the Fed undershoots its inflation target, as it has almost without exception since adopting the 2 percent target in 2012, it takes no corrective action to atone for those errors. Price level targeting, on the other hand, does correct for these mistakes by returning the price level to its overall trend. Level targeting offers a stability that inflation rate targeting cannot.
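
A toy example makes the difference concrete. Suppose the target is 2 percent and inflation comes in at 1 percent for a year: under inflation targeting the miss is forgiven, while under price-level targeting the next year’s inflation must make it up. A minimal sketch with illustrative numbers:

```python
TARGET = 0.02      # 2 percent annual target (illustrative)
ACTUAL_Y1 = 0.01   # year-1 inflation undershoots at 1 percent

# Inflation targeting: bygones are bygones; aim for 2 percent from
# wherever the price level ended up after the miss.
level_inflation_targeting = (1 + ACTUAL_Y1) * (1 + TARGET)

# Price-level targeting: return to the original 2-percent-per-year path.
level_on_path = (1 + TARGET) ** 2
required_y2_inflation = level_on_path / (1 + ACTUAL_Y1) - 1

print(f"After 2 years, inflation targeting:  {level_inflation_targeting:.4f}")
print(f"After 2 years, targeted price path:  {level_on_path:.4f}")
print(f"Year-2 inflation needed to catch up: {required_y2_inflation:.2%}")  # ~3.01%
```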

While Williams should be commended for his willingness to rethink the central bank’s inflation target from within the Fed, a price level target is not the answer.

Level targeting is certainly preferable to targeting the growth rate of a nominal variable, like the inflation rate, because level targeting obliges a central bank to make up for past policy errors. But price level targeting can sometimes contribute to the business cycle.

For example, when there is a negative real shock to the economy, like a decline in the output of oil, the price level will naturally tend to rise. Yet Fed tightening under these circumstances would only cause output to fall still further. (Williams, to his credit, admitted that adverse supply shocks would pose a challenge to a price level targeting central bank when I asked him about this at a recent Shadow Open Market Committee event).

A central bank that keeps the price level stable despite rapid productivity growth may, on the other hand, overheat the economy. Partly for these reasons, there is little reason to believe that price level targeting would have improved the Fed’s performance during the financial crisis and Great Recession, even if it might have improved upon the Fed’s actual policies at other times.

That said, John Williams is a monetary policy expert steeped in decades of research, and his thinking has continued to evolve — which, over the years, has earned him a reputation as a data-driven Fed official. As more economists call for a new target, we can hope he joins them in seeing the benefits of a nominal GDP level target over a price level target.

And critically important, at the NY Fed he will see the new operating framework up close and speak daily with those in charge of executing it. Janet Yellen called the new operating framework, particularly the interest rate paid on banks’ reserves, the key tool for monetary policy. At his first press conference, Powell said that essentially no decision had been made, or was forthcoming, on whether the Fed will return to the corridor system used before the crisis.

Let us hope that Williams uses his expertise to critically examine the new operating framework during his leadership of the NY Fed — particularly the role this operating system may be playing in the Fed’s inability to reach its inflation target for years.

There are of course still many open questions regarding the future conduct of monetary policy, including how much the balance sheet will shrink and whether the Fed will ever return to its pre-crisis operating framework based upon a market-determined federal funds rate. The New York Fed will play a lead role in deciding how such questions are answered. Williams’ openness to new ideas and his willingness to engage colleagues, scholars, and the public on monetary policy issues are good signs for those interested in the debate about how the Fed should conduct monetary policy.

[Cross-posted from Alt-M.org]

Conservative groups including the Heritage Foundation are circulating a proposal that builds on legislation by Sens. Lindsey Graham (R-SC) and Bill Cassidy (R-LA) to overhaul ObamaCare. Even though I don’t know whether Graham and Cassidy have endorsed these updates, I will go ahead and call this proposal Graham-Cassidy 2.0. The proposal seems ill-advised, particularly since there is an alternative that is not only far superior in terms of policy but also an easier political lift that would deliver more political benefit.

The key to evaluating any proposal to overhaul ObamaCare is to understand that the law’s centerpiece is its preexisting-conditions provisions. Those provisions are actually a bundle of regulations, including a requirement that insurers offer coverage to all comers, restrictions on underwriting on the basis of age, an outright prohibition on underwriting on the basis of health, and a requirement that insurers treat different market segments as part of a single risk pool. ObamaCare’s preexisting-conditions provisions have the unintended and harmful effect of penalizing high-quality coverage and rewarding low-quality coverage. ObamaCare contains other harmful regulations, but its preexisting-conditions provisions are by far the worst. Unless a proposal would repeal or completely free consumers from ObamaCare’s preexisting-conditions provisions, it is simply nibbling around the edges.

From what I have seen, Graham-Cassidy 2.0 nibbles around the edges.

To its credit, Graham-Cassidy 2.0 would zero-out funding for and repeal the entitlements to both ObamaCare’s premium-assistance tax credits (read: Exchange subsidies) and benefits under the Medicaid expansion. Unfortunately, it would not repeal that spending. Instead, it would take that money and send it to states in the form of block grants. The aggregate spending level for those block grants would grow more slowly over time than Exchange subsidies and federal Medicaid-expansion grants would under current law.

Limiting the growth of those spending streams seems like a better idea than letting them grow without limit, as current law allows. However, there is more downside than upside here.

First, Graham-Cassidy 2.0 would transform a purely federal spending stream into an intergovernmental transfer. At present, Exchange subsidies are payments the federal government makes to private insurance companies. Under Graham-Cassidy 2.0 (and 1.0), the feds would send those funds to states, which would use them to subsidize health insurance in various ways. Roping in a second layer of government diffuses responsibility and reduces accountability, regardless of whether the feds send those funds to states in the form of a block grant. Voters who don’t like how those funds are being spent would have difficulty knowing which level of government to blame, and whichever level of government is actually responsible could avoid accountability by blaming the other. Intergovernmental transfers are so inherently corrupting, there should be a constitutional amendment prohibiting them. And yet Graham-Cassidy 2.0 would substitute an intergovernmental transfer for spending with clearer lines of accountability.

Second, Graham-Cassidy 2.0 also diffuses accountability for ObamaCare’s preexisting-conditions provisions. Those provisions would continue to operate (with slight modifications). As a result, they would continue to destabilize the individual market, punish high-quality coverage, and reward low-quality coverage. The purpose of the Exchange subsidies is to mitigate that instability. Today, it is clear that Congress is responsible for any harm those provisions inflict, and the success or failure of the Exchange subsidies to mitigate those harms. Graham-Cassidy 2.0 would give that money to states and task them with mitigating those harms. When states fail to do so, as at least some states inevitably will, whom should voters blame? Congress, which started the fire? Or states, to whom Congress handed the fire extinguisher?

Third, while Graham-Cassidy 2.0 would eliminate two federal entitlements, eliminating entitlements is desirable only to the extent it limits government control over economic resources—in this case, spending. And while Graham-Cassidy 2.0 proposes to hold the growth of this repurposed ObamaCare spending below what it would be under current law, there is reason to doubt such a spending limitation would hold.

When examining the merits of any policy proposal, one must also consider the political dynamics the proposal would unleash. Generally speaking, states are a more politically powerful and sympathetic constituency than the current recipients of Exchange subsidies (private insurance companies). States have been able to use that political clout to get Congress to disregard the spending limits it imposed on SCHIP, for example, when so-called emergencies led states to blow through their initial allotments. Moreover, since Graham-Cassidy 2.0 would preserve ObamaCare’s preexisting-conditions provisions, it would come with its own built-in emergencies. As sure as the sun rises in the East, states will come to Congress and claim their block-grant allocations were insufficient to mitigate the resulting harms. Congress would be unlikely to say no—members rely on state officials for political support, after all—which means the spending restraints in Graham-Cassidy 2.0 are less than guaranteed.

Fourth, also pushing in the direction of bigger government, Graham-Cassidy 2.0 would expand the constituency for ObamaCare spending. At present, the money the federal government spends on ObamaCare’s Medicaid expansion does not enjoy the support of the 19 states that have not implemented the expansion. The block grants in Graham-Cassidy 2.0, by contrast, would go to all states. As a result, non-expansion states like Texas would go from not caring about whether that federal spending continues to insisting that it does. At the same time, Graham-Cassidy 2.0 would expand the constituency of voters who want to preserve that spending. At present, able-bodied, childless adults in non-expansion states receive no benefit from ObamaCare’s Medicaid expansion or its Exchange subsidies. Graham-Cassidy 2.0 would allow (and in some cases require) states to provide subsidies to such adults below the poverty level, thereby creating another constituency that will reliably vote to expand those subsidies.

Fifth, a provision of Graham-Cassidy 2.0 that supporters consider a selling point would expand the constituency for more spending yet again. The proposal would require all states to allow all able-bodied, non-elderly Medicaid enrollees to use their Medicaid subsidy to purchase private insurance. Since greater choice would make Medicaid enrollment more valuable, and since roughly one third of people who are eligible for Medicaid are not enrolled, this would perversely lead to a large “woodwork effect,” where people who were previously eligible for Medicaid but not enrolled begin to enroll in the program. When Medicaid enrollment increases, so will Medicaid spending, and so will the population of voters who are willing to vote for higher Medicaid spending and the higher taxes required to finance it.

Since Graham-Cassidy 2.0 would preserve ObamaCare’s preexisting-conditions provisions, it is hard to see what would justify taking these one or two uncertain steps forward and multiple steps backward.

This is particularly true since there is a much better alternative on the table: strongly encouraging the Trump administration to allow insurers to offer short-term health insurance plans with renewal guarantees that protect enrollees from having their premiums increase because they got sick. Doing so would allow consumers to avoid all of ObamaCare’s unwanted regulatory costs, particularly those imposed by its preexisting-conditions provisions. The Trump administration can create this “freedom option” by administrative rulemaking—comments on the administration’s proposed rule are due April 23—which is a much easier political lift than garnering 217 votes in the House and 51 votes in the Senate. Expanding short-term plans would also create salutary political dynamics that would force Democrats to begin negotiating a permanent overhaul of ObamaCare.

As of today, Graham-Cassidy 2.0 just can’t compete with that cost-benefit ratio. Every ounce of energy spent on it, rather than on expanding short-term plans, is a waste.

Despite over a century of Supreme Court decisions holding that a state cannot force wholly out-of-state entities to collect taxes for it, South Dakota wants to do just that. In 2017, South Dakota passed Senate Bill 106, which attempts to force out-of-state sellers that ship to South Dakota residents to collect and remit South Dakota’s sales tax. The law is in direct contravention of the 1992 case of Quill Corp. v. North Dakota, which held that states could not compel any entity to collect taxes unless the entity has a physical presence within the state. South Dakota sued Wayfair, a popular home goods vendor, among other retailers, in an attempt to enforce its law and overturn Quill in the process.

South Dakota’s law is at odds with the Constitution. Quill’s physical-presence requirement stemmed from decades of developments in tax law that struck an important balance between due process and the Commerce Clause of our Constitution. Due process requires some definite link—some minimum contacts—between the state and any person, property, or transaction that a state seeks to tax or regulate. Wayfair does not own property in South Dakota, elects no representatives in South Dakota, and was afforded no protection by South Dakota’s police. South Dakota’s only justification for binding a foreign entity to its law is that some of Wayfair’s many customers happen to live there. To allow South Dakota to compel Wayfair’s collection of its state taxes raises serious concerns of taxation without representation. If states can directly compel people who live outside state boundaries to adhere to state standards—standards the people had no chance to influence—the concept of statehood itself is undermined.

Cato has filed an amicus brief in support of Wayfair, because, as the Supreme Court once said, it is a “principle of universal application, recognized in all civilized states, that the statutes of one state have…no force or effect in another.” A federal constitutional structure inevitably poses difficulties like South Dakota is experiencing, especially where trade is flowing freely between states. South Dakota may have to think of other ways to raise revenue, but that is not a justification for undermining our constitutional structure. Governments around the world are prone to complain about the difficulties of collecting taxes, but our Constitution was not written to bend to the states’ desires to raise revenue.

As Washington Post readers know, there has been extensive corruption in the District of Columbia government for many years.

But D.C. is a city of 700,000 people within a metro area of 6.1 million. So I’ve been surprised by the relative dearth of news articles on corruption in the suburbs, particularly the Virginia suburbs.

Is that because there is: a) less corruption in VA, b) less media interest in covering it, or c) fewer auditors in VA digging for it?

Where there is government spending, there is corruption. There is a lot of federal, state, and local government spending in VA, so I’ve wondered whether “c” might be the right answer.

Well, how about that—the Post just reported on major corruption in VDOT:

During a snowstorm two years ago, Virginia Department of Transportation official Anthony Willie decided he should book a hotel room in Northern Virginia for the night. After all, he was in charge of snowplowing for the Burke area of Fairfax County.

While he was there, he wanted to have some fun. So he tried to get contractors and a co-worker to send women up to his room, according to court documents.

… Willie, of Culpeper, was sentenced this year to seven years in prison, having pleaded guilty to public corruption charges. He is among seven people convicted in a sweeping investigation.

Willie and his deputy, Kenneth Adams of Fairfax, demanded bribes from snowplow drivers in exchange for work. For six years they picked up payoffs at Outback Steakhouse and McDonald’s restaurants in the Washington suburbs. Along the way they increased their demands: Contractors said they were threatened when they balked at paying more.

… All seven defendants said in court that the corruption at the Virginia Department of Transportation is endemic to the culture and more extensive than the scheme that put them behind bars.

“It is happening now, it will happen in the future,” contractor John Williamson said before being sentenced to three months in jail. “It is rampant, and it is part of the culture of the agency.”

Prosecutor Samantha Bateman acknowledged in court that “this is a more pervasive problem in the Virginia Department of Transportation than is known.”

…Their crimes came to light only because another snowplow contractor complained to FBI agents looking into yet more alleged corruption at the agency involving falsified vehicle registrations. It is unclear where that investigation stands, but when FBI agents came to search Rolando Pineda Moran’s home, he told them they were missing the big picture.

“You’re looking at the trees. There’s a big forest out there,” Moran told them.

Adams also was selling cocaine — including to his boss, who was videotaped snorting the drug in his office, according to court documents. In addition to a public corruption charge, Adams pleaded guilty to possession with intent to distribute cocaine.

Both agency officials took between $200,000 and $300,000 in bribes, court documents said.

Isn’t that interesting? Corruption is alleged to be “rampant” and “endemic” in a major Virginia government agency, and it has apparently been going on for years.

These crimes came to light because of a tip to federal investigators who were looking into a different crime at VDOT.

Where were state auditors and investigators? And why has VDOT corruption festered so long?

Virginia has an Office of State Inspector General, which is supposed to investigate fraud, waste, and corruption in state agencies and contractors. Was this office aware of the alleged VDOT corruption? The Office does not post its investigative reports, and the last VDOT performance review in 2015 does not mention the problems.

Also, why is the FBI so involved in state and local public corruption? Look at this long list of public corruption investigations. Are the Feds doing work that state governments should be doing? Is federal help making the states helpless in policing themselves?

David C. Hendrickson will be at Cato on Monday, April 16, at 11am to discuss his new book, Republic in Peril: American Empire and the Liberal Tradition, in which he contends that American foreign policy is well overdue for “renovation.”

He argues that:

  • The United States does not and cannot function as the legitimate umpire of the international system.
  • Its ostensibly liberal ends have concealed highly illiberal means.
  • Its doctrine holding the world’s states to one standard has been destabilizing.
  • Its military policy has been overly aggressive.

He further contends that the people who loudly praise “the liberal world order” have lost touch with “critical elements of the liberal tradition,” and he seeks to revive that tradition in what he calls “a new internationalism” emphasizing “restraint rather than braggadocio” and the acceptance by the United States of “its role as a nation among nations” rather than arrogantly “extolling its exceptional virtue and superior wisdom.”

And that’s all on just two pages.

He also suggests, quite unfashionably, that foreign policy elites might consider containing their congenital hysteria over Russian assertiveness in its area and over China’s (rather screwball) islands in the seas to its south.

Hendrickson is the author of eight books and is a professor of political science at Colorado College where he has enjoyed the view since 1983.

Commenting will be Michael Mandelbaum of the Johns Hopkins University’s School of Advanced International Studies and the author of even more books than Hendrickson. I will be moderating. And there’s a free lunch afterward.

Click here to register or to learn more.

Last August, a notice in the Federal Register opened a period of public comment on information germane to a new set of Corporate Average Fuel Economy (CAFE) standards for the 2022-2025 period. The extant standard, roughly 50 miles per gallon (MPG) for passenger cars and other light vehicles, was put in place in January 2017, right at the end of the Obama Administration.

It is not surprising that EPA Administrator Scott Pruitt announced that the Obama standards will not stand; we hope our extensive public comments, submitted on September 27, exerted some influence on this decision.

We noted:

There is a paradigm shift occurring in global warming that is highly relevant… It began with the revelation of remarkable and increasing discrepancies between the climate models…in the most recent report of the U.N.’s Intergovernmental Panel on Climate Change (IPCC), and observations in the bulk atmosphere over vast swaths of the planet.

Figure 1 (below) shows this discrepancy.

Figure 1. Average of the IPCC computer model projections for the tropical mid-troposphere versus three standard sets of observations: weather balloons, temperature sensed from satellites, and “reanalysis” data used to initialize the daily weather map. The growing discrepancy is obvious, even with the 2016 El Nino warm spike at the end.

Figure 2 shows the problem in the vertical dimension of the tropics, where, since the satellites became operational in 1979, as much as seven times more warming has been predicted at high altitude than has been observed.

Figure 2 has enormous consequences for weather. It is the vertical distribution of temperature that determines how much moisture is wafted skyward in the fetid tropics. Much of the extratropical temperate zone depends upon this juice.

It is noteworthy that models in general predict the greatest amounts of future warming, while observationally-based studies, often about interglacial-glacial transitions, or differences between geological eras, tend to come up with less warming. Given data or a model, most folks will pick the former.

We detail our reasons for re-examining the 2022-25 CAFE standards here. Have a look, and we think you’ll agree that Administrator Pruitt has pretty sound science behind the need to revisit standards that were generated absent some of this very important data.

The Economic Development Administration (EDA) is a Department of Commerce agency that subsidizes local business activities. The EDA cost taxpayers $287 million in 2017.

The EDA funds activities that should be funded by local governments and the private sector. In the photo below, a federal official and a congressman are handing a check to local representatives to pay for road and water facilities at a new Home Depot store.

President Trump proposed abolishing the EDA in his recent federal budget. That would be a good reform, and a new study at DownsizingGovernment.org discusses the EDA’s history and reasons to eliminate it.

Republican Hal Rogers of Kentucky worries that starvation may increase in his district without the program. That claim defies logic, but congressional Republicans recently sided with Rogers and increased EDA spending by $25 million.

A long time ago, legendary anti-pork Democrat Senator William Proxmire targeted the EDA with his “Golden Fleece Awards.” He pointed to wasteful EDA spending such as this “cursed pyramid” in Indiana.

Proxmire argued that the EDA “deserves to die,” and he was right.

Trump will have to try again next year. If the Republicans can’t cut the EDA, they can’t cut anything.

My analysis, with Tad DeHaven, of the EDA is here.

Forty years ago, in the spring of 1978, I had no intention of becoming an economist. Instead, I was studying marine biology at Duke University’s Marine Lab at Pivers Island, on the beautiful North Carolina Coast. There, when the wind was up, my classmate Alan Kahana and I enjoyed going out on his Hobie 16, with Alan manning the tiller and me hiked out on the trapeze. We weren’t, truth be told, especially prudent sailors. On the contrary: we were so inclined to push things to the limit that one day we took the Hobie out just as a gale was getting up, and ended up…well, that’s a long, sad story. Suffice to say that it doesn’t take much to capsize a Hobie, and that on that day we capsized Alan’s boat once and for all.

What, you are no doubt wondering, has any of this to do with Interest on Excess Reserves? I’m getting there. You see, although it doesn’t take much to capsize a Hobie — a little over-trimming of the sail will suffice — once one capsizes, it’s likely to start to turn turtle as its mast fills with water. And as that’s happening, it may be all that two reasonably trim lads can do — by pulling for dear life on a righting line attached to the boat’s mast, whilst leaning backwards on its uppermost hull — to lever the thing back upright. The more the mast fills, the harder it gets. And the same sort of thing goes for letting a central bank slip into, and then trying to wrest it out of, a floor system of monetary control: the more liquidity the banking system takes in while that system’s in place, the more effort it takes to pull out of it.

As faithful readers know, I’ve long insisted that the modest interest rates the Fed began paying on excess reserves in October 2008 were enough to encourage bankers, who had long made do with only the slimmest of excess reserve cushions, to hoard all the reserves they could lay their hands on. That modest little bit of Fed sail-trimming was enough to overturn the Fed’s traditional monetary control system. The Fed had long relied on a sort of asymmetrical “corridor” system, with a target fed funds rate set somewhere between zero and the Fed’s discount rate, and the effective federal funds rate kept near that target by means of small-scale open market operations. Now it had flipped over to a “floor” system, with changes in the Fed-administered IOER rate serving as its chief instrument of monetary control.

Some economists, to be sure, refuse to believe that the modest IOER rates the Fed paid in the early stages of the crisis could account for the switch in question, or for banks’ subsequent tendency to hoard reserves. Paul Krugman even accused those who thought so of failing a “reality test,” by overlooking how, in the U.S. in the 1930s and in Japan more recently, banks hoarded non-interest-bearing reserves. But Professor Krugman himself might be said to have failed a “logic test,” one calling for an understanding of the difference between a necessary and a sufficient cause. Then there’s the pesky fact that Ben Bernanke and other Fed officials secured permission to pay interest on bank reserves for the express purpose of getting banks to hoard them. Had they done so for no reason? Had Ben Bernanke himself forgotten about the 1930s? A page from his 2014 textbook is illuminating (HT: Alex Schibuola).

But allowing that a modest above-zero return on bank reserves was indeed all it took to establish a floor system, and to get banks to stock up on excess reserves, it doesn’t follow that restoring the IOER rate to zero would have the opposite effects. The reason has to do with the immense growth in the total supply of reserve balances that has since taken place. That growth matters because, under a floor system, the greater the nominal stock of bank reserves, the lower the IOER rate must be to reduce the quantity of excess reserves demanded to zero. The accumulation of liquidity in a “floored” monetary control system thus acts like the accumulation of water in the mast of a capsized Hobie Cat, making it much harder to revert from a floor to a corridor system than it was to switch to the floor system in the first place. Call it the Hobie Cat effect.

The Hobie Cat effect can be illustrated formally using a diagram showing the supply of and demand for bank reserves or federal funds under a floor system. The supply schedule for federal funds is, as usual, a vertical line, the position of which varies with changes in the size of the Fed’s balance sheet. The reserve demand schedule, on the other hand, slopes downward, but only until it reaches the going IOER rate, here initially assumed to be set at 25 basis points. At that point the demand schedule becomes horizontal, because banks would rather accumulate excess reserves that yield the IOER rate than lend reserves overnight for an even lower return.

For the initial stock of reserves R(1), starting at the equilibrium point “a,” a slight reduction in the IOER rate would suffice to get the banking system back onto the sloped part of its reserve demand schedule, at point “b,” where reserves are again scarce at the margin. But once the stock of reserves has increased to R(2), it takes a much more substantial reduction in the IOER rate — perhaps, as the move in the illustration from “c” to “d” suggests, even into negative territory — to make reserves scarce at the margin again, and to thereby make switching to a corridor system, by a modest further reduction in the IOER rate, possible without any need for central bank asset sales.[1]
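
For readers who prefer code to diagrams, the kinked reserve demand schedule is easy to write down. The following sketch uses an entirely made-up linear demand curve and reserve quantities; it is meant only to illustrate the Hobie Cat effect, not to estimate actual magnitudes:

```python
def equilibrium_funds_rate(reserves, ioer, intercept=0.75, slope=0.0011):
    """Federal funds rate (percent) implied by a downward-sloping reserve
    demand schedule floored at the IOER rate: banks won't lend reserves
    overnight for less than they earn by simply holding them."""
    return max(ioer, intercept - slope * reserves)

R1, R2 = 500, 2500  # hypothetical reserve stocks, $ billions

# At R1 the floor just binds: demand alone would price reserves at 0.20%,
# slightly below a 0.25% IOER (equilibrium "a"). A small IOER cut, to
# 0.10%, makes reserves scarce at the margin again (point "b").
print(equilibrium_funds_rate(R1, ioer=0.25))  # 0.25 -> on the floor
print(equilibrium_funds_rate(R1, ioer=0.10))  # 0.20 -> back on the slope

# At R2 demand alone would price reserves at -2.0%, so even a zero IOER
# leaves the system on the floor ("c"); only a deeply negative rate ("d"),
# or asset sales that shrink the reserve stock, restores scarcity.
print(equilibrium_funds_rate(R2, ioer=0.0))    # 0.0  -> still floored
print(equilibrium_funds_rate(R2, ioer=-2.25))  # -2.0 -> scarce again
```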

Does this mean that the Fed can never hope to escape from its current floor system unless it reduces its IOER rate substantially, and that it might even have to resort to a negative rate? It doesn’t. And it’s here that the Hobie Cat analogy fails, for while you can’t drain the mast of a Hobie Cat that’s turned turtle, the Fed can drain the banking system of any or all of the reserves it created after 2008. The obvious, and most prudent, way out of the floor system is, therefore, for the Fed to retrace the steps that got it into that system, by first shrinking its balance sheet far enough to return the stock of reserves to a point close to the kink in the federal funds demand schedule, and then reducing the IOER rate enough to make reserves scarce at the margin, thereby reviving interbank lending and establishing a corridor system.

Considering how many excess reserves banks are now holding, all of this is still a tall order. But it beats having an operating framework that leaves our monetary system sodden and adrift.

____________________________

[1] Because the move from “c” to “d,” like that from “a” to “b,” involves no change in the total stock of bank reserves, readers may be tempted to assume that it also involves no reduction in the quantity of excess reserves, and hence no change in banks’ inclination to hoard such reserves. The temptation should be resisted: although banks hold the same total quantity of reserves at “d” as at “c,” the former equilibrium involves a higher quantity of bank lending and deposit creation, hence a higher value of required reserves, with a correspondingly lower value of excess reserves.

[Cross-posted from Alt-M.org]

President Trump recently said that he would deploy troops to the Mexican border in response to the over-hyped story of about 1,000 Central Americans who are walking to the U.S. border to ask for asylum, which is their right under American law. “Until we can have a wall and proper security, we’re going to be guarding our border with the military,” President Trump said on Tuesday. “That’s a big step. We really haven’t done that before, or certainly not very much before.” On the contrary, American Presidents have ordered troops to the border to assist in immigration enforcement several times, and all of those deployments came when the flow of illegal immigrants was significantly greater than it is today.

When the old Immigration and Naturalization Service (INS) launched Operation Wetback in 1954 (yes, that is what the government called it), then-Attorney General Herbert Brownell asked the U.S. Army to help round up and remove illegal immigrants. According to Matt Matthews in his “The US Army on the Mexican Border: A Historical Perspective,” the Army refused to deploy troops for that purpose because it would disrupt training, cost too much money at a time of budget cuts, and require at least a division of troops to secure the border. According to Matthews, INS head General Swing remarked in 1954 that deploying U.S. Army troops on the border was a “perfectly horrible” idea that would “destroy relations with Mexico.” It was also unnecessary.

In 1954, the 1,079 Border Patrol agents made 1,028,246 illegal immigrant apprehensions, or 953 apprehensions per agent that year. For the entire border, Border Patrol agents collectively made 2,817 apprehensions per day in 1954 with a force that was 95 percent smaller than today’s Border Patrol. In other words, the average Border Patrol agent apprehended 2.6 illegal immigrants per day in 1954. Neither President Eisenhower nor the military considered that inflow of illegal immigrants to be large enough to warrant the deployment of troops along the border. The expansion of the Bracero guest worker visa program increased the opportunity for legal migration to such an extent that it drove virtually all would-be illegal immigrants into the legal market, crashing the number of apprehensions by 93 percent by 1956.

In 2018, President Trump has ordered troops to the border to help the current force of 19,437 Border Patrol agents apprehend the roughly 1,000 Central American asylum seekers who are slowly making their way north (but probably won’t make it all the way to the border). There are currently about 19 Border Patrol agents for each Central American asylum seeker in this caravan. In 2017, Border Patrol apprehended about 360,000 illegal immigrants, or about 18 per Border Patrol agent over the entire year, which works out to one apprehension per agent every 20 days. By that measure, Border Patrol agents in 1954 individually apprehended an average of 53 times as many illegal immigrants as Border Patrol agents did in 2017. If the current caravan makes it to the United States border, it would add about a single day’s worth of apprehensions. Border Patrol should be able to handle this comparatively small number of asylum seekers without military aid, as it has many times before.
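
The per-agent comparisons above follow directly from the cited totals; here is a quick check of the arithmetic, using the rounded counts given in the text:

```python
# 1954: Operation Wetback era
apprehensions_1954, agents_1954 = 1_028_246, 1_079
per_agent_1954 = apprehensions_1954 / agents_1954  # ~953 per agent-year
per_day_1954 = apprehensions_1954 / 365            # ~2,817 per day, border-wide
print(per_agent_1954 / 365)                        # ~2.6 per agent per day

# 2017, using the rounded totals cited above
apprehensions_2017, agents_2017 = 360_000, 19_437
per_agent_2017 = apprehensions_2017 / agents_2017  # ~18.5 per agent-year
print(365 / per_agent_2017)                        # ~20 days per apprehension

# Ratio of per-agent workloads: ~51 with these rounded inputs, close to
# the "53 times" figure cited in the text.
print(per_agent_1954 / per_agent_2017)
```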

It is also unclear what the troops will actually accomplish on the border. Since the members of the caravan intend to surrender to Border Patrol or Customs Officers and ask for asylum, the troops serve no purpose. They will not deter asylum seekers. Border Patrol agents are not overwhelmed by entries even though they constantly plead poverty in an effort to capture more taxpayer resources. The most likely explanation for the proposed deployment is politics, just like the previous deployments.

Other Border Deployments

Since 1982, most U.S. military deployments and operations along the Mexican border were intended to counter the import of illegal drugs.  The regular deployment of troops for that purpose ended in 1997 after a U.S. Marine shot and killed Esequiel Hernandez Jr., an American citizen, as he was out herding goats.  By July of that year, Secretary of Defense William Cohen suspended the use of armed soldiers on the border for anti-drug missions. 

On May 15, 2006, President Bush ordered 6,000 National Guard troops to the border as part of Operation Jump Start to provide a surge of border enforcement while the government was hiring more Border Patrol agents.  In 2006, there were about 59 apprehensions per Border Patrol agent or one per agent every four days.  Operation Jump Start ended on July 15, 2008.  In that year, there was an average of one apprehension every nine days per agent during the entire year.  President Obama also deployed 1,200 troops to the border in 2010 to assist Border Patrol during a time of falling apprehensions.  They left in 2012.  In that year, Border Patrol agents individually apprehended an average of one illegal immigrant every 16 days. 

The two recent deployments to assist in enforcing immigration law along the border occurred when there were fewer apprehensions, represented by more days between each apprehension for each agent (Figure 1).  The higher the number for the blue line in Figure 1, the fewer people Border Patrol agents individually apprehend.  From about 1970 through 2006, the Border Patrol faced an annual inflow of illegal immigrants far in excess of anything in recent years yet President Trump has decided that this is the time to put troops on the border.

Figure 1

The Average Number of Days Between Each Border Patrol Apprehension Per Year

Sources: Customs and Border Protection and Immigration and Naturalization Service.  

Legal Issues

Whether President Trump’s proposed deployment of troops along the border is legal is a difficult question to answer. The use of the military in domestic law enforcement has to be authorized by Congress, but Congress has authorized it several times for drug enforcement along the border. Furthermore, border enforcement might be distinct from domestic law enforcement, as even the rights of American citizens are legally curtailed in border zones. Additionally, Congress arguably granted funds and a blanket authority to deploy troops along the border in the Defense Authorization Act for 2005 to defend against a “threat” or “aggression” against the territory or domestic population of the United States. As with most powers, Congress has ceded most of its authority in this area to the President.

Regardless of the legalities, the proposed deployment of American troops to the border without a clear mission at a time of low and falling illegal immigrant entries is an unnecessary waste of time and resources that could put Americans in harm’s way for no gain.  

I am a fan of Kimberley Strassel’s columns about federal politics in the Wall Street Journal. But her recent column about the omnibus spending bill—which increased spending 13 percent in one year—was off the mark.

Strassel suggested that Trump and the Republicans did not want to increase spending that much, but the Democrats forced them into it. Trump “felt pressured to sign it,” while the “Democrats used the bill to hold the military hostage to their own domestic boondoggles.”

Watching Congress in recent years, I have concluded something different. The real problem is that most Republicans support higher spending on nearly all programs. The problem is not that Democrats push them into accepting higher spending. Most Republicans want it, and that is why majorities of them in the House and Senate voted for the omnibus.

President Trump proposed an array of spending cuts in his 2019 budget. He proposed cutting subsidies for agriculture, community development, economic development, education, energy, foreign aid, housing, urban transit, and many other things. How many congressional Republicans—let alone GOP leaders—have you seen actively pushing those cuts? Very few, I would guess, with the exception of some members pushing to cut subsidies for Planned Parenthood.

Recent congressional hearings on Trump’s budget reveal broad GOP support for spending increases, and virtually no support for his proposed cuts. Cabinet secretaries have been testifying to House appropriations subcommittees on the president’s budget, and each committee member is generally given five minutes to make comments.

My intern, John Postiglione, watched seven of these recent hearings and took notes on what each Republican member said. (Hearing details are below).

Here is what John found:

  • Not a single Republican made a supportive comment about a specific Trump spending cut during the seven hearings. These hearings included 47 speaking time slots by 26 different Republican members (members can be on multiple subcommittees).
  • Numerous Republicans objected to Trump’s proposed cuts. In the Commerce hearing, Hal Rogers (R-KY) and Evan Jenkins (R-WV) opposed cuts to the Economic Development Administration (EDA). In the Education hearing, Tom Cole (R-OK) opposed cuts to impact aid, academic enrichment grants, and other subsidies. In the Energy hearing, Jaime Herrera Beutler (R-WA) opposed privatizing the power marketing administrations, while Dan Newhouse (R-WA) opposed cuts to energy subsidies. In the HUD hearing, David Valadao (R-CA) opposed cuts to community development. In the Labor hearing, Cole and Chuck Fleischmann (R-TN) opposed cuts to Job Corps. In the Health and Human Services hearing, Rodney Frelinghuysen (R-NJ) opposed cuts to numerous programs.
  • Many Republicans made comments supportive of various federal spending activities, particularly on programs they viewed as important to their districts.

The comments opposing Trump’s cuts were sometimes subtle, but it was clear what side the member came down on. Some comments were not subtle. Here is Hal Rogers in the Commerce hearing objecting to Trump’s proposed cut to the EDA:

We can’t afford to leave behind Americans in certain sections of the country like mine. I want to ask you about the economic development administration. … This dire need is exactly why over these 50 years, this EDA administration has been so helpful to us in recruiting jobs. To keep our people at home and prevent starvation. Mr. Secretary, I am deeply concerned about this proposal. 

I have a few questions for Rep. Rogers:

  • If the government has been subsidizing sections of Kentucky for 50 years, and they are still poor, doesn’t it suggest that subsidies are not the answer?
  • Would state and local governments in Kentucky, and the Kentucky people, let Kentuckians starve if federal subsidies were cut?
  • Isn’t Kentucky’s EDA funding of about $8 million a year too small to make a difference in Kentucky’s $197 billion GDP, let alone the state’s level of starvation?

President Trump set the stage for spending reforms by proposing perhaps the largest cuts to liberal, big-government programs since President Reagan. That provided congressional Republicans a great opportunity to push hard for cuts—an opportunity that they have completely blown.

My intern, John, looked at appropriations committee hearings, but a similar pro-spending tilt is evident with Republicans on the authorizing committees, such as the agriculture and transportation committees. Some members, such as those in the House Freedom Caucus, do push for spending cuts, but they are heavily outnumbered even in their own party.

Here are the hearings that John reviewed, with the date, names of cabinet secretaries, and the number of Republican members who made comments:

Commerce, March 20, witness Wilbur Ross, 6 GOP members.

Education, March 20, witness Betsy DeVos, 7 GOP members.

Energy, March 15, witness Rick Perry, 7 GOP members.

Health and Human Services, March 15, witness Alex Azar, 7 GOP members.

Housing and Urban Development, March 20, witness Ben Carson, 6 GOP members.

Labor, March 6, witness Alexander Acosta, 7 GOP members.

Treasury, March 6, witness Steven Mnuchin, 7 GOP members.

I think we have interpreted the comments of the members fairly, but my apologies if we misinterpreted, or if we missed any members who expressed support for cuts.

In sum, on reviewing seven recent budget hearings, we did not find any supportive statements for any of President Trump’s specific cuts by members of his own party. A number of Republicans made comments generally supportive of fiscal restraint, but that does not move the ball forward if we actually want to downsize particular programs.

Economist Steven Horwitz writes in USA Today about President Trump’s proposal to reduce legal opioid prescriptions by one-third. Such a drastic reduction would inevitably harm people like Horwitz, who relates his experience with excruciating back pain and how opioids were essential to relieving his agony and helping his body heal:

People who wish to drastically limit access to opioids need to know the reality of this kind of pain. Getting out of bed took 10 minutes or more because even one small wrong movement while getting to a sitting position would cause severe back spasms, making me shudder with pain. Walking around my house required balancing myself on walls and door frames.

The pain from sitting down and standing up from the toilet required that I use a chair to hold my weight like one would use a walker. I had visions of being found in the bathroom, stuck on the toilet or even unable to get up off of the floor. Every little twist and turn of my body risked those spasms and shuddering.

Eventually I realized my mistake and got a prescription for opioids. The quality of my life quickly and dramatically improved, as within two or three days, the pain was reduced substantially and my mobility and mood were significantly better. I could walk comfortably and hug my kids again.

It’s important to understand that this kind of debilitating pain not only causes unnecessary suffering, it prevents patients from healing. It takes every bit of energy you have to fight it, and your body has little to nothing left to use to heal. Some medical professionals call pain “the fifth vital sign” because of the way in which it matters for a patient’s health. Opioids enabled me to relax, to sleep and to heal.

I too am one of the people Trump’s policy might harm.

I suffer from episodic back pain. Everything Horwitz describes I have experienced. If anything, I would say he understates the agony. In my experience, the pain can be more like torture—as if someone were deliberately trying to inflict as much pain as possible, for the purpose of breaking me emotionally and leaving me trembling in fear of its return.

Like Horwitz, I did not want to treat my back pain with opioids. I had previously used them to recover from knee surgery and I disliked the experience so much that after my second knee surgery, I refused them. Like Horwitz, I feared addiction. So I tried stretching. I tried physical therapy. I tried non-prescription analgesics.

Nothing worked until I broke down—until the pain broke me—and I tried opioids. They worked. They eliminated my pain and, as Horwitz says, that allowed me to heal. My pain could come back at any time, and so I too could be one of the people Trump’s policy would leave to suffer in excruciating pain. 

People who have never experienced back pain have no business making opioid policy.

Qualified immunity is a doctrine that can shield police officers and other public officials from civil suits when they violate individual rights in the course of their official duties. Under the doctrine, courts are supposed to first determine whether an individual’s right was violated and then determine whether that right was “clearly established” in the jurisdiction; that is, whether closely similar circumstances had arisen before. This can lead to perverse outcomes in which a court finds that an officer violated someone’s rights, but because the officer did so in a completely novel way, the officer cannot be held liable for the violation. In other cases, courts can find that similar-sounding circumstances are not the same, and officers may prevail because those differences render the right not “clearly established.”

This morning, the Supreme Court ordered a summary reversal of a Ninth Circuit Court of Appeals opinion that had denied qualified immunity to an officer for shooting and injuring a woman. The woman, Ms. Amy Hughes, had a knife at her side and she posed no immediate threat to the officers or the person she was speaking to at the time she was shot. Other officers on the scene held their fire and were trying to gain Hughes’ cooperation before Officer Andrew Kisela shot at her four times. Unfortunately, such decisions have become all too familiar at SCOTUS.

Justice Sonia Sotomayor, joined by Justice Ruth Bader Ginsburg, wrote a scathing dissent from the per curiam order:

If this account of Kisela’s conduct sounds unreasonable, that is because it was. And yet, the Court today insulates that conduct from liability under the doctrine of qualified immunity, holding that Kisela violated no “clearly established” law. I disagree. Viewing the facts in the light most favorable to Hughes, as the Court must at summary judgment, a jury could find that Kisela violated Hughes’ clearly established Fourth Amendment rights by needlessly resorting to lethal force. In holding otherwise, the Court misapprehends the facts and misapplies the law, effectively treating qualified immunity as an absolute shield.

This Court’s precedents make clear that a police officer may only deploy deadly force against an individual if the officer “has probable cause to believe that the [person] poses a threat of serious physical harm, either to the officer or to others.” It is equally well established that any use of lethal force must be justified by some legitimate governmental interest. Consistent with those clearly established principles, and contrary to the majority’s conclusion, Ninth Circuit precedent predating these events further confirms that Kisela’s conduct was clearly unreasonable. Because Kisela plainly lacked any legitimate interest justifying the use of deadly force against a woman who posed no objective threat of harm to officers or others, had committed no crime, and appeared calm and collected during the police encounter, he was not entitled to qualified immunity.

In sum, precedent existing at the time of the shooting clearly established the unconstitutionality of Kisela’s conduct. The majority’s decision, no matter how much it says otherwise, ultimately rests on a faulty premise: that those cases are not identical to this one. But that is not the law, for our cases have never required a factually identical case to satisfy the “clearly established” standard. It is enough that governing law places “the constitutionality of the officer’s conduct beyond debate.” Because, taking the facts in the light most favorable to Hughes, it is “beyond debate” that Kisela’s use of deadly force was objectively unreasonable, he was not entitled to summary judgment on the basis of qualified immunity.

[The majority’s] decision is not just wrong on the law; it also sends an alarming signal to law enforcement officers and the public. It tells officers that they can shoot first and think later, and it tells the public that palpably unreasonable conduct will go unpunished. Because there is nothing right or just under the law about this, I respectfully dissent. (Citations omitted)

Today’s order was disappointing, but not surprising. Regular readers know that Cato’s Project on Criminal Justice is now dedicating resources to fighting the doctrine of qualified immunity, and it’s clear that most of the sitting justices support the doctrine. But the fight is worth it because qualified immunity effectively guts the best civil rights protection in federal law and, more broadly, police officers must be held accountable for their unconstitutional actions.

If you’re interested in learning more, you can view the launch event of our qualified immunity effort here. You can read our first amicus brief in the effort here. You can also read Will Baude’s excellent law review article that Justice Sotomayor cited in her dissent, “Is Qualified Immunity Unlawful?” here, (spoiler alert: Yes it is!). And more here, here, and here.

The New York Times recently reported on a proposed policy change at the Environmental Protection Agency that would require the agency to rely only on scientific research with publicly available data when setting pollution exposure standards. Proponents of the rule argue that the practice would allow other researchers to examine and replicate findings, an essential characteristic of the scientific method. Opponents argue the rule would exclude large amounts of research that relies on confidential health information that cannot be made public. The Times quotes opponents who view the policy change as an attempt by the Trump administration to attack regulations it opposes by undermining the scientific results on which they are based.

Increased transparency in data used in empirical research and the facilitation of replication of studies are like mom and apple pie. In an ideal world, such practices are the very essence of the scientific method. In practice, academic journals in many disciplines already require that data used in empirical and experimental work be available for replication. The International Committee of Medical Journal Editors, no strangers to the limitations of data that include personal information, recently affirmed their commitment to responsible data sharing.

While opposition to transparency certainly has bad optics, the opponents of this rule change do have a point. The struggle over transparency isn’t really about transparency. Instead, it is simply the latest chapter in the scrum over two studies whose results are the bases of EPA decisions about appropriate clean air exposure standards.

The Harvard Six Cities Study (SCS) and the American Cancer Society Study (ACS), both published in the 1990s, provide the data on which the EPA bases all of its estimates of mortality risk from air pollution. Both studies looked at the health damage caused by particulate matter (PM), reductions in which account for 90 percent of the estimated health benefits of emissions regulation, and found that higher levels of PM exposure were associated with increased mortality. And both studies rely on individual health data given under conditions of confidentiality.

So the opponents are probably correct that the transparency rule is a clever attempt to undermine the current basis for EPA regulation of PM. And reduced PM exposure is the exclusive basis on which current conventional pollution regulation is justified because the benefits of additional emissions controls on other conventional pollutants are low. In this environmentalists’ nightmare, a transparency mandate ends additional regulation of air quality by the EPA.

But contrary to the Times’ assertion, this is not simply a tale of good science versus evil polluters. The SCS has been the subject of intense scientific scrutiny and much criticism because of results that are biologically puzzling. The increased mortality was found in men but not women, in those with less than a high school education but not those with more, and in those who were moderately active but not the sedentary or very active. Among those who migrated away from the six cities, the PM effect disappeared. The cities that lost population in the 1980s were rust belt cities with higher PM levels, and those who migrated away were younger and better educated. Thus, had the migrants stayed in place, it is possible that the observed PM effect would have been attenuated.

Furthermore, a survey of 12 experts (including three authors of the ACS and SCS) asked whether the concentration-response functions between PM and mortality were causal. Four of the 12 experts attached nontrivial probabilities, ranging from 65 percent down to 10 percent, to the relationship between PM concentration and mortality not being causal. Three experts said there is a 5 percent probability of noncausality. Five said a 0 to 2 percent probability. Thus 7 of the 12 experts would not reject the hypothesis that there is no causal link between PM levels and mortality. Based on these findings, a 95 percent confidence interval would include a zero mortality effect for any reduction in PM concentration below 16 micrograms per cubic meter.

So the scientific fragility of the two studies has been known for some time. Despite that fragility, in December 2012, EPA set a much lower fine-PM standard of 12 micrograms per cubic meter of air to be met by 2020.

If the EPA is forced to use other studies, such as two recent papers by Michael L. Anderson and by Tatyana Deryugina, Garth Heutel, Nolan H. Miller, David Molitor, and Julian Reif, the estimated benefits from PM exposure reduction shrink considerably. Anderson’s estimate is 60 percent smaller. Deryugina et al. conclude that declining PM concentrations from 1999 to 2011 resulted in an additional 150,000 life-years per year, which, if valued at $100,000 per life-year, would equal $15 billion in annual benefits. The EPA estimates that the annual compliance costs of the 1990 Clean Air Act standards were $44 billion in 2010. With costs and benefits so lopsided, eliminating the ACS and SCS studies from consideration and forcing reliance on other studies would certainly result in less stringent regulation, except in areas with bad geography, such as Los Angeles, that prevents pollution dispersion.
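To make the comparison concrete, here is a minimal sketch of the arithmetic using only the figures quoted above; the $100,000-per-life-year valuation is the one applied in the text, not an official EPA parameter:

```python
# Benefit and cost figures as cited in the text.
life_years_gained_per_year = 150_000   # Deryugina et al. estimate
value_per_life_year = 100_000          # dollars, valuation used in the text
annual_benefits = life_years_gained_per_year * value_per_life_year

annual_compliance_costs = 44e9         # EPA estimate, 1990 Clean Air Act standards, 2010

print(f"annual benefits: ${annual_benefits / 1e9:.0f} billion")          # $15 billion
print(f"annual costs:    ${annual_compliance_costs / 1e9:.0f} billion")  # $44 billion
print(f"benefit-cost ratio: {annual_benefits / annual_compliance_costs:.2f}")  # ~0.34
```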

The Times article notes that Congress has had the opportunity to enact language achieving the same transparency goal but has not done so. Libertarians have long criticized the growth of the discretionary administrative state. Thus, because Congress has explicitly considered but failed to enact a research transparency requirement, libertarians should be cautious about using administrative discretion to achieve their preferred outcome.

Written with research assistance from David Kemp.

Former U.S. Secretary of Education Arne Duncan has taken to the pages of the Washington Post to let you know that you shouldn’t listen to people who tell you that “education reform” hasn’t worked well. At least, that is, reforms that he likes—he ignores the evidence that private school choice works because, as far as can be gathered from the op-ed, he thinks such choice lacks “accountability.” Apparently, parents able to take their kids, and money to educate them, from schools they don’t like to ones they do is not accountability.

Anyway, I don’t actually want to re-litigate whether reforms since the early 1970s have worked because as time has gone on I’ve increasingly concluded that we do not agree on what “success” means and the measures we have of what we think might be “success” often don’t tell us what we believe they do. These are, by the way, major concerns that I’ll be tackling with Dr. Patrick Wolf in a special Facebook live event on Wednesday. Join us!

Rather than assessing the impacts of specific reforms on what are often fuzzy and moving targets, I want to examine one crucial assertion that Duncan says needs to be “noted”: students today are “relatively poorer than in 1971.”

To back this, Duncan links to a Post article from 2015 that said, “For the first time in at least 50 years, a majority of U.S. public school students come from low-income families.” The article is based on a report from the Southern Education Foundation, which only mentions low-income rates as far back as 1989. More important, it is based on the share of students eligible for free and reduced-price lunches (FRPL), a flawed indicator of child poverty.

As the National Center for Education Statistics (NCES) has pointed out, families earning up to 185 percent of the poverty level are eligible for reduced-price lunches, and many students now get free lunches regardless of income if their schools use the Community Eligibility option. As the NCES summarizes:

[T]he percentage of students receiving free or reduced price lunch includes all students at or below 185 percent of the poverty threshold, plus some additional non-poor children who meet other eligibility criteria, plus other students in schools and districts that have exercised the Community Eligibility option, which results in a percentage that is more than double the official poverty rate [italics added].

What is the poverty rate for families with children? In 1971, according to the Census Bureau, 12 percent of families with children under the age of 18 had incomes at or below the poverty level. By 2016 the rate was 15 percent. Up, but not hugely. And you have to know what “poverty” means: it is measured by cash income and excludes major benefits such as food stamps, housing subsidies, and tax credits. Include those, according to the Center on Budget and Policy Priorities, and incomes for the poorest fifth of Americans rose from about $20,000 in 1973 to over $22,000 in 2011. And with technological change, what that money can buy affords a much higher standard of living. Smartphones versus stuck-to-your-wall phones, anyone?

Finally, national test scores don’t gauge the performance of just the poor, but of all Americans. And while the poor are almost certainly better off today than in 1971, the nation as a whole is definitely better off. Indeed, as the figure above shows, inflation-adjusted, per-capita income nearly doubled from $18,603 in 1971 to $33,205 in 2016. And, returning to the telephone theme, 13 percent of Americans lacked regular access to a telephone in 1971, versus about 2 percent today without a phone in their “housing unit.”

It is misleading, at best, to say that the “student population is relatively poorer” than in 1971. Burrow into the evidence and it is clear that American students are appreciably better off today than they were in 1971. It’s a basic reality we at least need to acknowledge before crediting broad “reform” for supposedly better outcomes.

In 2014 the government of Ecuador, under then-President Rafael Correa, announced with great fanfare that the Ecuadorian Central Bank (BCE) would soon begin issuing an electronic money (dinero electrónico, or DE). Users would keep account balances on the central bank’s own balance sheet and transfer them using a mobile phone app. Enabling legislation was passed in September, qualified users could open accounts beginning in December, and the accounts became spendable in February 2015. A headline on CNBC’s website declared: “Ecuador becomes the first country to roll out its own digital cash.”

The subsequent fate of the electronic money project has received less attention in the American press. Less than three years after opening, the system is now shutting down. In December 2017 Ecuador’s National Assembly, at the urging of President Lenin Moreno, Correa’s hand-picked successor who took office earlier in the year, passed legislation to decommission the central bank electronic money system. The legislation simultaneously opens the market to mobile payment alternatives from the country’s private commercial banks and savings institutions. As described below, the state system had failed to attract a significant number of users or volume of payments. Account holders now have until the end of March 2018 to withdraw their funds. Complete deactivation is scheduled for mid-April.

The substitution of open competition for state monopoly in mobile money is an important victory for the people of Ecuador. The entire episode is important internationally for the lesson it teaches us about the limits to a central bank’s ability to launch a new form of money when the public prefers established forms. The lesson provides an instructive contrast to “the case for central bank electronic money” recently made by Aleksander Berentsen and Fabian Schär in the pages of the Federal Reserve Bank of St. Louis Review.

The Birth of the Project

There is an important backstory to the episode: Ecuador had suffered a hyperinflation of its domestic currency, the sucre, in 1999, prompting residents to dollarize their own payments and finances. In January 2000 the government, bowing to the popular verdict, announced that it would officially dollarize, fixing a parity of the sucre to the US dollar and retiring all sucres from circulation by September.  (Concerning Ecuador’s dollarized system see my earlier post here.)

The electronic money project was born in 2014 legislation that gave the state a monopoly in electronic money. Only the central bank could issue electronic dollars, and only the state-owned mobile phone company CNT could provide mobile payment services. The law barred the private mobile phone companies and private financial institutions from providing competing systems. The legislature also banned cryptocurrencies.

Because President Correa (in office 2007-2017) had often complained about the discipline that dollarization imposed on his government, observers wondered whether the electronic money system was intended merely as a way for the government to gain some monopoly profits, or was a first step toward de-dollarization. To calm fears that the electronic money would become a forced currency to be followed by de-dollarization, the law declared that use of the electronic money would be voluntary, and that even public employees and state contractors would not be obliged to accept it in payments from the state. (Everyone knew that the Assembly could later revise that provision of the law, of course.)

The government was quite optimistic that the system would rapidly prove popular. The leading newspaper El Comercio reported on Christmas Day of 2014: “Fausto Villavicencio, responsible for the new payment mechanism in Ecuador, said that the authorities expect that some 500,000 people will use e-money in 2015.”[1] The actual number of accounts opened in 2015 turned out to be fewer than 5,000. The economist Diego Grijalva of the Universidad de San Francisco de Quito, citing the Ecuadorian central bank’s balance sheet, noted in early 2016 that “the Ecuadorian Electronic Money System is already implemented, but it has an uncertain future. In particular, financial institutions are not obliged to use it and the use thereof (less than US $ 800,000 for the end of January 2016) corresponded to less than 0.003% of the monetary liabilities of the Ecuadorian financial system.”

Its Failure to Achieve Popularity

One had to be skeptical of the stated rationale for the central bank electronic money project, namely to benefit the unbanked. Invited to speak in Ecuador about the dollarization regime in November 2014 (working paper in English, later published in Spanish) at events organized by the USFQ and the think tank IEEP, I added some critical comments on the new project:

There is no reason to believe that a national government can run a mobile payment system more efficiently than private firms … If the government sincerely wishes to help the poor and unbanked, it should let private providers enter the competition, which will drive down the fees that the poor and unbanked will have to pay.

Private bankers in Ecuador made similar arguments during the life of the project. In its December 2017 legislation, the government conceded the case. According to one news account, “the Government hopes that with the transfer to the private financial system the means of payment can reach more unbanked population.”

I attributed a fiscal rationale to the project:

[W]hy does the government want to issue mobile payment credits as a monopolist? It seems likely that the project is meant as a fiscal measure. One million dollars held by the public in the form of government-issued credits is a million-dollar interest-free loan from the public to the government.

From the fact that the government is now closing its service, I infer that the central bank failed to make a profit even as a statutory monopolist. Float was smaller, and expenses higher, than had been hoped (see below). The new administration had no fiscal reason to keep it open.

Although I did not foresee the system’s failure to achieve sustainability, I did add one final dig at the system’s low trustworthiness:

Personally, I would find dollar-denominated account credits that are claims on [the leading private mobile phone companies] Movistar or Claro more credible than claims on the government of Ecuador. After all, unlike the government, neither company defaulted on its bonds in the past 12 years.

Trust, it turned out, was the crucial issue.

Unlike what is usually envisioned under the rubric “central bank electronic money,” the BCE was not creating nominally default-risk-free accounts denominated in its own domestic fiat money. It was issuing claims to US dollars that it might become unable or unwilling to repay. The government under Correa had in fact defaulted on sovereign dollar-denominated bonds in 2008. Although the sucre hyperinflation of 1999 had brought with it a banking crisis, since dollarization the commercial banks had by all indications become stable and prudently run.

Consequently it was reasonable for an informed citizen in 2014-17 to think that dollars on deposit at a private commercial bank in Ecuador were less risky than dollars on deposit at the central bank. The private banks had better incentives to behave prudently than the BCE had. A private bank could be taken to court if it failed to pay, but not so the government central bank with its sovereign immunity.[2] The enabling legislation specified no limit on the volume of electronic dollars the BCE could create, and no prudential requirement that the central bank hold adequate assets to redeem them.

The Ecuadorian public recognized a risk of default or devaluation with the central bank’s electronic money accounts and stayed away from them, defying the optimistic projections of the government officials promoting the system. In June 2016 President Correa acknowledged that the project had critics, but he dismissed them as merely members of the opposition party and certain private bankers annoyed that the business was not going to them. In fact, mistrust of the system was much more widespread.

At least one print commentator at the time pointed to the BCE’s lack of trustworthiness.  El Comercio’s economic columnist Gabriela Calderón de Burgos in a June 2016 column clearly predicted that because of public mistrust the DE system would not succeed. She noted that, unlike the private bankers with their own wealth at stake, the BCE could behave irresponsibly, and would be pressured to do so by the Treasury with its chronic financing problems. Thus the electronic claims on the BCE are “a currency that does not inspire confidence,” and as a result “the DE will not work because it will not enjoy widespread acceptance. It would only achieve this if the government declares it to be a curso forzoso [forced tender].” But the government knows that such a move “would lead to chaos.”

Later that month, in a column on the “antics” of the central bank, she observed: “The government has intensified its campaign for people to deposit their dollars in the BCE and use ‘electronic money.’ I suspect that the campaign will have little success because of the justified distrust that the government and the BCE have earned in terms of their ability to take care of others’ funds.”

In May 2017 Calderón de Burgos returned to the theme, in a column entitled “Only the Dollar is Trusted.” The project for the Ecuadorian government to issue its own digital currency, denominated in US dollars, she wrote, “faces an insurmountable inconvenience: people will not voluntarily accept the new currency. That’s why they’ve been trying to convince us to use electronic money for three years and still few use it.” The US dollar itself, because it is something that Ecuadorian politicians cannot devalue, “generates much more confidence than any alternative that can occur to our politicians, even and particularly in times of financial crisis.”

A news article in December 2017 reported the answers ordinary people gave when asked directly why they weren’t using the BCE’s electronic money. Their answers confirm that many found the system not creditworthy. For example: “Mistrust is among the reasons, says Frank Guijarro, owner of a tire network.” And: “I do not trust opening an account with the Central Bank, so I pay in cash and sometimes with a debit card when I cannot get out,” says Katherine Alcivar, 26. The president of the association of cooperative savings banks gave a similar answer in an interview: “The greatest confidence we can give is that your resources are in your financial institution and not in the BCE.” The BCE system was haunted by the “ghost” of the previous government’s default. In addition, the BCE did too little to promote acceptance by shopkeepers and other businesses: “Not enough strength was given to the reception channels.”

As a result of these shortcomings, the system peaked at only $11.3 million in account balances, less than 5 hundredths of 1% of the country’s narrow money stock M1 ($24.5 billion). According to the deputy general manager of the central bank electronic money system, before the announcement of the coming shutdown ironically raised the average level of activity due to withdrawals, the system averaged only about 1,100 transactions per day.  The total value transacted over the entire life of the system was only about $65 million. Only 7,067 businesses ever conducted transactions with the electronic money. While a total of 402,515 accounts were eventually opened, the BCE found in retrospect that only 41,966 were ever used to acquire goods and services or to make payments. Another 76,105 were used only to upload and download money. The remaining 286,207 accounts (71%) that were opened were never used. (I do not know why the three reported component figures do not sum exactly to the reported total.)
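The unexplained gap in the account figures is easy to compute from the numbers reported above; a minimal sketch:

```python
# Reported account categories versus the reported total (figures as cited above).
used_for_goods_or_payments = 41_966
upload_download_only = 76_105
never_used = 286_207
reported_total = 402_515

component_sum = used_for_goods_or_payments + upload_download_only + never_used
print(component_sum)                   # 404,278
print(component_sum - reported_total)  # 1,763 accounts unexplained
```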

Lessons from the Failure of the Project

We can make a back-of-the-envelope calculation of the Ecuadorian government’s profit from its monopoly electronic money project. Between 2014 and the present, the Ecuadorian government has been paying roughly 8% interest on the bonds it sells in international markets. Replacing $11.3 million of 8% bonds with zero-interest liabilities of the central bank provides an annual debt-service savings of less than $1 million, specifically $904,000. From the BCE’s 2014 income statement (the most recent that seems to be available), its “administrative expenses” (presumably payroll) were roughly $38 million. Thus the project would have turned a loss if it enlarged the BCE payroll by as little as 2.4%, even leaving aside non-salary expenditures on promoting and operating the project.
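For transparency, here is the same back-of-the-envelope calculation spelled out, using only the figures cited in this paragraph:

```python
# Back-of-the-envelope fiscal arithmetic for the DE project, as in the text.
peak_balances = 11.3e6      # peak electronic money balances, dollars
bond_interest_rate = 0.08   # approximate rate Ecuador paid on international bonds
bce_admin_expenses = 38e6   # BCE administrative expenses, 2014 income statement

debt_service_savings = peak_balances * bond_interest_rate
breakeven_payroll_growth = debt_service_savings / bce_admin_expenses

print(f"annual debt-service savings: ${debt_service_savings:,.0f}")    # $904,000
print(f"break-even payroll increase: {breakeven_payroll_growth:.1%}")  # ~2.4%
```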

An accounting report on the DE project issued by GPR, an Ecuadorian government accounting office, puts the government’s expenditures on the project at $7,967,553.78. Comparing that figure to the estimated debt service savings of only $904,000, the fiscal loss is clear. My thanks to Luis Espinosa Goded and Santiago Gangotena of the USFQ Department of Economics for pointing me toward the GPR report and helping me read it.

It is instructive to contrast the outcome in Ecuador with the optimistic picture of central bank electronic money drawn by Berentsen and Schär, who write:

We believe that there is a strong case for central bank money in electronic form, and it would be easy to implement. Central banks would only need to allow households and firms to open accounts with them, which would allow them to make payments with central bank electronic money instead of commercial bank deposits. As explained earlier, the main benefit is that central bank electronic money satisfies the population’s need for virtual money without facing counterparty risk.

The BCE deposits, by contrast to the scenario they have in mind, were not free of counterparty risk. More generally, in a sound banking system a commercial bank’s counterparty risk for depositors can be negligible, very close to zero, so that the central bank’s zero default risk need not be a big draw. In episodes where the central bank and the commercial banks simultaneously circulate banknote liabilities (e.g. today’s Scotland or Northern Ireland), no public concern about a risk difference is evident.

The Ecuadorian case also shows that implementation of a central bank electronic money system isn’t so easy. It requires more than merely setting up a website (the US federal government has sometimes proven not even competent at that) and letting households and firms open deposits. A convenient point-of-sale deposit-transfer mechanism, requiring both hardware and software, must be provided to many thousands of merchants. Consumer service and marketing are part of the business of providing retail payments. There is no reason to think that central banks are or would be good at a commercial business operation. In short, it is far from clear that asking bureaucrats to build a “public option” electronic money system would have benefits in excess of its cost.

[1] All quoted statements from Ecuadorean sources are my own translations, assisted by Google Translate.

[2] George Selgin and I raised this point some years ago as part of a general case for preferring private competition to sovereign monopoly in currency.

[Cross-posted from Alt-M.org]

On March 30, Sally Satel, a psychiatrist specializing in substance abuse at the Yale University School of Medicine, co-authored an article with addiction medicine specialist Stefan Kertesz of the University of Alabama at Birmingham School of Medicine condemning the plans of the Centers for Medicare and Medicaid Services (CMS) to place limits on the amount of opioids Medicare patients can receive. The agency will decide in April whether to cap the opioid doses it will cover at 90 morphine milligram equivalents (MME) per day; any opioids beyond that amount will not be paid for by Medicare. One year earlier, Dr. Kertesz made similar condemnations in a column for The Hill. While 90 MME is considered a high dose, they point out that many patients with chronic severe pain have required such doses or higher for prolonged periods to control their pain. Forcing rapid reductions of opioid doses in such patients will return many to a life of anguish and desperation.
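For readers unfamiliar with the unit, a patient’s daily MME is simply the daily dose of each opioid multiplied by a standard conversion factor. Here is a minimal sketch using commonly cited CDC conversion values; the factors are illustrative only, and nothing here is clinical guidance:

```python
# Converting a daily opioid regimen to morphine milligram equivalents (MME).
# Conversion factors below are commonly cited CDC values (illustrative only).
CONVERSION_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
}

def daily_mme(drug: str, mg_per_dose: float, doses_per_day: int) -> float:
    """Total daily dose expressed in morphine milligram equivalents."""
    return mg_per_dose * doses_per_day * CONVERSION_FACTORS[drug]

# Example: oxycodone 15 mg taken four times a day works out to exactly the
# 90 MME/day threshold of the proposed CMS limit.
print(daily_mme("oxycodone", 15, 4))  # 90.0
```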

CMS’s plan to limit opioid prescriptions mimics similar limitations put into effect in more than half of the states and is not evidence-based. These restrictions are rooted in the false narrative that the opioid overdose problem is mostly the result of doctors over-prescribing opioids to patients in pain, even though it is primarily the result of non-medical opioid users accessing drugs in the illicit market. Policymakers are implementing these restrictions based upon a flawed interpretation of opioid prescribing guidelines published by the Centers for Disease Control and Prevention in 2016.

Drs. Satel and Kertesz point out that research has yet to show a distinct correlation between the overdose rate and the dosages on which patients are maintained, and that the data show a majority of overdoses involve multiple drugs. (2016 data from New York City show 97 percent involved multiple drugs, and 46 percent of the time one of them was cocaine.)

Not only are the Medicare opioid reduction proposals without scientific foundation, but they also run counter to the recommendations of the CDC in its 2016 guidelines. As Dr. Kertesz stated in 2017:

“In its 7th recommendation, the CDC urged that care of patients already receiving opioids be based not on the number of milligrams, but on the balance of risks and benefits for that patient. That two major agencies have chosen to defy the CDC ignores lessons we should have learned from prior episodes in American medicine, where the appeal of management by easy numbers overwhelmed patient-centered considerations.”

In an effort to dissuade the agency, Dr. Kertesz sent a letter to CMS in early March signed by 220 health professionals, including eight who had official roles in formulating the 2016 CDC guidelines. The letter called attention to the flaws in the proposal and to its great potential to cause unintentional harm. CMS will render its verdict as early as today.

Until policymakers cast off their misguided notions about the forces behind the overdose crisis, patients will suffer needlessly and overdose rates will continue to climb. 
