Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Waste, fraud, and abuse are a common target on the campaign trail. Politicians from both parties promise that eliminating this problem is a cure-all for whatever mathematical problems their tax and spending proposals might face. Eliminating waste, fraud, and abuse is not controversial, and it allows them to avoid naming any actual programs they would phase out or reduce. As my Cato colleagues have pointed out, even completely eliminating all improper payments (which are somewhat related but not quite the same thing) won’t magically make next year’s budget deficit disappear and would do nothing to address the country’s more serious long-term fiscal issues. Even with that caveat, improper payments are a pervasive and persistent problem, reaching $137 billion in 2015, a new record. Given the persistently high error rates and the outsized problems in government health care programs, it’s very likely that there will be another record high next year.

Total Improper Payments by Program, 2011-2015

Source: OMB via

Improper payments are somewhat related to the oft-cited triumvirate of waste, fraud, and abuse. These payments can stem from fraud and abuse, but also from misidentification, insufficient documentation, and clerical errors. The vast majority of improper payments are overpayments, 92 percent in 2015, but a portion of the total comes from underpayments, or payments that are too low according to program rules. These figures measure amounts and error rates relative to program rules only, so they say nothing about the effectiveness or propriety of the programs themselves.

If the $137 billion in combined improper payments were one program, it would be almost as large as the Social Security Disability Insurance program. As the figure above illustrates, most of these improper payments are concentrated in a few “high-error” programs. Three of the programs with the biggest improper payment amounts are health care programs, and projected growth in health care spending overall and recent government expansions in this sphere will likely lead to even higher amounts over the next decade. If Medicaid’s improper payment rate were to remain constant over the next decade, improper payments in this program alone would be more than $60 billion in 2026.
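That Medicaid sentence is a simple compound-growth projection: hold the error rate fixed and let outlays grow. A back-of-the-envelope sketch of the arithmetic, using purely illustrative inputs (the outlay figure, error rate, and growth rate below are assumptions for demonstration, not official OMB or CBO numbers):

```python
# Back-of-the-envelope projection of improper payments, assuming the
# improper-payment RATE stays constant while program outlays compound.
# All inputs are illustrative assumptions, not official figures.

def project_improper_payments(outlays_now, error_rate, growth, years):
    """Improper payments after `years` of compound outlay growth."""
    return outlays_now * (1 + growth) ** years * error_rate

# Hypothetical inputs: $350B in annual outlays, a 9.8% improper-payment
# rate, 5.5% annual spending growth, projected 11 years out (2015-2026).
projected = project_improper_payments(350e9, 0.098, 0.055, 11)
print(f"${projected / 1e9:.0f} billion")  # prints "$62 billion"
```

With those assumed inputs, the constant-rate projection lands in the same “more than $60 billion” range the text cites, which is the point: even modest spending growth mechanically inflates the dollar amount of improper payments.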

While the magnitude of these improper payments is a serious concern, it is almost more troubling that they are so persistent, despite repeated efforts to address them. Since 2011, cumulative improper payments totaled almost $600 billion, which is more than total domestic discretionary federal spending last year.

While a few “high-error” programs have seen some progress in reducing improper payment rates, in most cases the rate has been relatively steady over the past five years, and in some instances has even increased. Medicare Fee-for-Service, one of the largest components with $358 billion in total outlays last year, had an improper payment rate over 12 percent the last two years, the highest on record.

Improper Payment Rates by Program, 2011-2015


Source: OMB via

Note: Some programs lack data for specific years.

Cutting down the scope of the federal government’s improper payments won’t solve all of our other fiscal problems, but these outlays do represent a significant misallocation and waste of taxpayer money. In many cases, error rates remain stubbornly high even after a program has been identified as one needing additional oversight. It is one thing for a politician to promise they’ll address the persistent improper payment problem, and quite another to actually do it.

Over the course of the last decade, as the United States got bogged down in quagmires in Iraq and Afghanistan, many Americans anticipated that war-weariness or an Iraq/Afghanistan syndrome would diminish the United States’ propensity to use military force. That expectation is proving to be somewhat unjustified, however. President Obama, who staunchly opposed the war in Iraq, has overseen new wars in Libya and Iraq/Syria, as well as an ongoing drone strike campaign throughout the greater Middle East. Although the lessons of Iraq and Afghanistan have not dissuaded President Obama from employing military force, they have influenced the manner in which he has done so. To avoid becoming embroiled in another ground war in the Middle East, the Obama administration has adopted a “light footprint” approach. Rather than deploying large contingents of ground troops, the administration has employed standoff strike capabilities and small contingents of Special Operations Forces, often in support of indigenous ground troops.

In a new Cato Policy Analysis, I present a thorough analysis of that approach, arguing that the light footprint essentially constitutes a tactical shift. The Obama administration has sought to achieve the same strategic objectives—the eradication of terrorism and the promotion of democracy in the greater Middle East—with less obtrusive military instruments. Unfortunately, although the light footprint has yielded tactical dividends, it appears unlikely to advance those strategic objectives. As the Obama administration draws to a close, it is the perfect time to stop tinkering with military tactics and begin a serious discussion about reorienting U.S. national security strategy—particularly with regard to the Middle East. 

I have been saving bits of misreported statistical string about Venezuela’s inflation over the past couple of months, and it has become a giant ball. The bits all come from the International Monetary Fund (IMF).

The IMF’s World Economic Outlook (April 2016) forecasts Venezuela’s inflation to rise to 720 percent by the end of 2016. This number, which is nothing more than a guesstimate, is now carved in stone. The media, from Bloomberg, the New York Times, the Washington Post, and the Wall Street Journal to countless other ostensibly credible sources, repeats that guesstimate ad nauseam.

Instead of reporting pie-in-the-sky estimates for future inflation rates in Venezuela, the press should stop worshiping at the IMF’s altar and, instead, stick to reporting current inflation rates. These are updated regularly and are available from the Johns Hopkins-Cato Institute Troubled Currencies Project. The current implied annual inflation rate is 140 percent; while it is the world’s highest, it is well below the IMF’s oft-reported forecast of 720 percent.
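The “implied” rate is derived from black-market exchange-rate movements via purchasing power parity rather than from official price statistics. A minimal sketch of that style of calculation, with made-up exchange-rate figures and an assumed 2 percent U.S. inflation rate (these numbers and this simplified formula are illustrative, not the project’s actual data or exact methodology):

```python
# Sketch of a PPP-based implied annual inflation estimate, in the
# spirit of the Troubled Currencies Project approach. Exchange-rate
# figures and the 2% US inflation assumption are hypothetical.

def implied_annual_inflation(rate_year_ago, rate_now, us_inflation=0.02):
    """Annual inflation implied by black-market exchange-rate
    depreciation (local units per USD), adjusted for inflation
    in the anchor currency."""
    depreciation = rate_now / rate_year_ago
    return depreciation * (1 + us_inflation) - 1

# Hypothetical: the bolivar falls from 420 to 990 per dollar in a year.
print(f"{implied_annual_inflation(420, 990):.0%}")  # prints "140%"
```

The virtue of this approach is that it relies on an observable market price that updates daily, which is why such estimates can be “updated regularly” while official statistics and IMF forecasts lag.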

Not all government takings of private property proceed by condemnation or regulation (or taxation).  In February the U.S. Supreme Court denied certiorari in Taylor et al. v. Yee, a case challenging California’s practice of seizing unclaimed property after only three years of idleness with relatively minimal efforts to contact owners. Unclaimed property can consist of such things as “forgotten security deposits, uncashed money orders, unused insurance benefits, idle shares of stock, and even the undisturbed contents of safe-deposit boxes,” for starters, to quote the Court. In a concurrence, Justice Samuel Alito joined by Justice Clarence Thomas agreed with the majority in denying review, saying the “convoluted history” of the California dispute made it a poor candidate for a clean review under constitutional principles. But the trend among self-interested states in unclaimed-property, or escheat, law – such as truncating dormancy periods to a mere three years, from as long as 15, while “doing less and less to meet their constitutional obligation to provide adequate notice” to owners – inevitably raises constitutional questions, because the Due Process Clause “undoubtedly requires that, before seizing private property, the government must give ‘notice and opportunity for hearing appropriate to the nature of the case.’” In revamping escheat practices in ways that grab more money for their budgets, states might well be overstepping this bound.

Another part of the picture, while not mentioned in Alito’s brief opinion, adds practical importance: states have creatively expanded their definitions of what they consider abandoned property, to include such things as unused minutes on calling cards (for which they seek a cash equivalent from the phone company) and gift certificates (make the retailer pay). Three years ago I wrote a post at Overlawyered titled “Delaware: Your Escheating Heart.” Excerpt: 

…The revenue [from these laws] looms peculiarly large for the state of Delaware, because it is the state of incorporation for so many businesses. In recent years friction has been growing between the state and its corporate citizens as the state government has taken an increasingly aggressive stance in auditing corporations for unreported escheatable property. [WSJ] So far, perhaps, so routine (except for the parties to the dispute), but some accounts omit one of the most salient angles, summed up by one critic [Douglas Lindholm, IBD via Volokh] as follows:

“Last year alone, Delaware seized $319.5 million from liquidated property while returning only $18.9 million of unclaimed property to its rightful owners.

“Delaware does this through an unfair, onerous and expensive audit system that ‘looks back’ to 1981, and contrives unclaimed property if the company doesn’t have records for all those years. This process often costs companies millions of dollars, mires them in years of audits, and forces them to deal with third-party auditors who are motivated by contingent fees to invent unclaimed property where none exists….”

Again and again – whether in forfeiture laws entitling law enforcers to a share of the booty seized, or percentage awards for informants under whistleblower laws, or traffic camera systems in which the operators of the cameras get a share of ticket revenue, contingency fees for participants in law enforcement prove deeply problematic. … Delaware seems to have gotten its image in trouble through a variant on tax farming.

As for the argument that if you didn’t want to have your pockets rifled by a given state, you shouldn’t have done business there, it’s not really any stronger than the argument that if you didn’t want to have your property seized for private use at a big knock-off from fair value, you shouldn’t have done business in a state with poor eminent domain laws.

In the closing paragraph of my last entry I offered two hypotheses about the post-2008 US economy. The first is that “real GDP has shifted to a lower path because of a shrinkage in the economy’s productive capital stock — a problem that better monetary policy (not feeding the boom) could have helped to avoid, but cannot now fix.” It is reasonable to suppose that the capital stock has shrunk, I argued, because the housing boom diverted investible resources from more productive capital formation into housing construction. The second is that potential output, as estimated by the Congressional Budget Office’s method, “is currently overestimated because capital wastage has not been fully recognized.”

Here again is the chart that frames the common account of our recent macroeconomic history, showing the paths of actual real GDP and of the CBO’s estimate of potential real GDP, this time in natural logs so that a constant growth rate corresponds to a straight line with constant slope:

This picture of the estimated “output gap” suggests no unsustainable boom in the US economy before 2007. There was no bubble. There was merely a return to full employment after the previous “dot-com” recession of 2001 pulled output below potential. The Great Recession of 2007-09 then appears not as a reaction to an unsustainable path, but as a bolt from the blue, an exogenous shock. The initial drop in real GDP has to be explained by going off chart, e.g., by reference to the bursting of the housing bubble. But the housing bubble is itself unexplained by macro data, not part of any general malinvestment-and-overconsumption boom.

Next, as Market Monetarists have emphasized, households responded to the start of the recession by hoarding money, reducing aggregate demand. As I showed last time, there was indeed a jump in hoarding (as measured by the ratio of M2 balances to GDP) during 2009. The Fed failed to increase the quantity of M2 in response, so aggregate demand did fall, which in a sticky-price world brought down real output. (In 2009 the CPI and PCE price indexes also fell, but in this view not enough to clear the markets.) This nominal shock helps to explain some part of the severity of the recession, but it can’t be the whole story. It cannot explain why recovery to potential output has continued to fall short for more than six years. That remains a puzzle. The “output gap” has shrunk only because the potential output path has been revised downward, a revision explained by shrinking labor force participation.

The account of macroeconomic events that I prefer can be framed by taking the same path for actual real GDP, but instead contrasting it with a simple constant growth-rate path that extrapolates from the 2000-2003 trend, as follows:

This picture suggests that, between 2003 and 2008, real GDP rose unsustainably above its old trend. The recession brought a return to reality, and then some. Consistent with the view that the unsustainable boom was fueled by Federal Reserve credit expansion, here is the bulge in real M2 before and during the period:

Since 2009 the economy has followed a lower real GDP path, with no tendency to return to the old dashed path, let alone to the higher path of potential output as estimated by the CBO. To explain that, I suggest, we need to recognize a drop in the stock of productive capital goods due to the misallocation of investment to housing construction during the housing boom.

Consistent with capital shrinkage, the Bureau of Economic Analysis shows gross private domestic investment making a negative contribution to real GDP for nine consecutive quarters, 2007Q3 to 2009Q3 inclusive. The CBO method of estimating potential output does not recognize any capital wastage during the period, however. The CBO’s data website reports a continuously rising value for its capital services index, an input to its estimate of potential real GDP, during 2000-2014. This is of course consistent with its continuously rising estimate for potential output.

I don’t know the literature on econometric estimation of the size of the capital stock well enough to criticize the CBO’s method in any detail, or to propose an alternative method that would give us a better way to estimate whether the path of capital accumulation has been shifted downward. I would be grateful for pointers to any sites that use a method distinct from the CBO’s to provide explicitly derived estimates of the path of productive capital.

[Cross-posted from]

As Republicans fall in line behind Donald Trump, despite their misgivings, many of them are urging him to “change his tone” as he moves toward the general election. But is a change in tone sufficient or even honest?

Last Thursday, announcing his endorsement, Speaker Paul Ryan said, “It is my hope the campaign improves its tone as we go forward and it’s all a campaign we can be proud of.” Former Republican nominee Bob Dole says, “I can already see sort of a shift with Trump. He needs to start talking (like) he is about to be president.” Asked about Trump’s repeated comments that offend Hispanic voters, Senate majority leader Mitch McConnell says, “I hope he’ll change his direction on that.” Republican chair Reince Priebus says, “I think there’s work to do, and I think that there’s work on tone to do. I’ve been clear about that…. I think he gets it…I think you’re going to see the change in tone.”

But what does “change his tone” mean? These pleas don’t ask him to change his policies. He has proposed, among other things, building a wall on our southern border, deporting 11 million Mexican-Americans, banning Muslims from entering the United States, blowing up U.S.-China trade, forcing American companies to stop manufacturing products overseas, torturing suspected terrorists and killing their families, not touching entitlement benefits, ending our 200-year-old policy of birthright citizenship, “loosen[ing] up” libel laws to make it easier to sue newspapers, and much more. He has also supported, in the recent past, single-payer health care and the largest tax increase in world history. Are Republicans OK with those policies as long as Trump changes his tone?

He remains, as George Will puts it, an “impetuous, vicious, ignorant and anti-constitutional man.” He insults Mexicans, women, disabled Americans, Muslim Americans, and so on. Are Republicans comfortable with that man having the nuclear codes, as long as he tones it down?

For the past 11 months Donald Trump has been making his character, temperament, and egotism very clear. I wrote in January that “not since George Wallace has there been a presidential candidate who made racial and religious scapegoating so central to his campaign,” and that “he’s effectively vowing to be an American Mussolini, concentrating power in the Trump White House and governing by fiat,” and I have seen no reason to change that assessment. Indeed, I don’t think Trump’s endorsers disagree with it. They just seem to value party above the future of the republic and their own complicity.

There’s a folk tale that goes something like this: A scorpion asks a frog to carry him across the river. The frog is reluctant because he’s afraid the scorpion will sting him. The scorpion assures the frog that he would do no such thing, pointing out that then they would both drown. The frog agrees. As they are crossing the river, the frog feels a searing pain in his side. “What did you do that for?” the frog demands. “Now we’re both going down!” The scorpion replies, “You knew what I was when you picked me up.” 

When Republicans say that Trump must change his tone, they are saying that they want him to conceal his character for the duration of the election. But he’s a scorpion, and they knew that when they picked him up.

Footnote: If anyone reads this as an endorsement for Donald Trump’s principal opponent, they should check out my references to her in The Libertarian Mind.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

We’ve put together an interesting collection of articles this week for your consideration.

First up is a shout out to lukewarming from Bloomberg View columnist Megan McArdle. In her piece “Global Warming Alarmists, You’re Doing It Wrong,” McArdle suggests that lukewarmers have a lot to bring to the climate change table, but are turned away by the entrenched establishment and tarred with labels like climate “denier”—a label which couldn’t be further from the truth. McArdle writes:

Naturally, proponents of climate-change models have welcomed the lukewarmists’ constructive input by carefully considering their points and by advancing counterarguments firmly couched in the scientific method.

No, of course I’m just kidding. The reaction to these mild assertions is often to brand the lukewarmists “deniers” and treat them as if what they were saying was morally and logically equivalent to suggesting that the Holocaust never happened.

In her article, McArdle calls for less name calling and less heel digging and more open, constructive discussion:

There is a huge range of possible beliefs that go into assessing the various complicated theories about how the climate works, and the global-warming predictions generated by those theories range from “could well be catastrophic” to “probably not a big deal.” I know very smart, well-informed, decent people who fall at either end of the spectrum, and others who are somewhere in between. Then there are folks like me who aren’t sure enough to make a prediction, but are very sure we wouldn’t like to find out, too late, that the answer is “oops, catastrophic.”

These are not differences that can be resolved by name calling. Nor has the presumed object of this name calling – to delegitimize thoughtful opposition, and thereby increase the consensus in favor of desired policy proposals – been a notable political success, at least in the U.S. It has certainly rallied the tribe, and produced a lot of patronizing talk about science by people who aren’t actually all that familiar with the underlying scientific questions. Other than that, we remain pretty much where we were 25 years ago: holding summits, followed by the dismayed realization that we haven’t, you know, really done all that much except burn a lot of hydrocarbons flying people to summits. Maybe last year’s Paris talks will turn out to be the actual moment when things started to change – but having spent the last 15 years as a reporter listening to people tell me that no, really, we’re about to turn the corner, I retain a bit of skepticism.

How was this bit of advice from McArdle received by some of the loudest name-callers? Not well, as she describes in this follow-up:

In response, climate scientist Michael Mann tweeted this:

Then he blocked me. You will correctly infer that I was also inundated with other interlocutors on social media and e-mail. Many of them were respectful. Others were … less so. At worst, they suggested, I was a paid shill for fossil fuel interests. (Not so. I accept no pay from anyone other than Bloomberg.) At best, they said, I was a fool who was giving aid and comfort to the enemy. My editor was thusly chided for the column: “shame on you for publishing it, especially if you have children.”

This should come as a big surprise to no one.

Next, we point you to an excellent blog post from Judith Curry (herself no stranger to treatment like McArdle’s, and worse) in which she provides a 21st-century update to Michael Polanyi’s 1962 essay “The Republic of Science: Its Political and Economic Theory.” Curry delivers an introduction to his work (“Polanyi provides an interesting perspective from the mid 20th century, as the U.S. and Europe were contemplating massive public investments in science.  Polanyi’s perspective was colored by his early years in Hungary, which led him to oppose central planning in the sciences.”) along with excerpts from Polanyi’s essay, and then follows with comments on how Polanyi’s perspective stands up some half a century later. For instance [embedded links in original]:

Polanyi’s analogy of the scientific process with markets captures the pure incentives that drive scientists – search of truth, intellectual satisfaction and individual ego. What happens when the externalities of the Republic of Science produce perverse incentives, and careerism becomes a dominant incentive that requires publishing a lot of papers rapidly and producing headline-worthy results (who even cares if these papers don’t survive scrutiny beyond their press release)? (see What is the measure of scientific success?) What happens is that you get increasing incidence of scientific fraud (see Science: in the doghouse?), cherry picking and meaningless papers on headline grabbing topics that don’t stand up to the test of time (see Trust and don’t bother to verify).

And what happens when the ‘hand’ guiding science isn’t ‘invisible’, i.e. science is driven by politics, such as a political imperative to move away from fossil fuels and towards renewable energy?  Federal funding can bias science, particularly in terms of selecting which scientific problems receive attention (link).

And what of Polanyi’s statement:  “Such self-coordination of independent initiatives leads to a joint result which is unpremeditated by any of those who bring it about.”  The ‘result’ of dangerous anthropogenic climate change and the harms of dietary fat were hardly unpremeditated.

We also have our own humble opinion on Polanyi, and how he influenced Thomas Kuhn, who, as a result of Polanyi’s view, noted that the intellectual market may not be all that fluid. From “Lukewarming: The New Climate Science that Changes Everything”:

…Polanyi…recognized the horrors of government intervention in science and the pernicious influence of central planning.  He argued that science should be considered a free market with spontaneous order, a perspective akin to [a list of libertarian economist luminaries]. Thomas Kuhn, a physicist and philosopher who attended several of Polanyi’s lectures, went him one better and argued in his classic The Structure of Scientific Revolutions that order created paradigms, or encompassing philosophical structures, that lie at the core of science.

We went on to demonstrate that paradigms must become even more entrenched when the government becomes the monopoly provider of funding for science with political and policy consequences.

Curry offers up these suggestions as to how to improve on the current sad state of scientific affairs [again, links in original]:

So, what should the Republic of Science look like in the 21st century?  The overwhelming issue for the health of science is to reassert the importance of intellectual and political diversity in science, and to respect and even nurture scientific mavericks.  The tension between pure (curiosity driven) science and use-inspired and applied science [see Pasteur’s quadrant] needs to be resolved in a way that supports all three, with appropriate roles for universities, government and the private sector. And finally, the reward structure for university scientists need to change to reward more meaningful science that stands the test of time, versus counting papers and press releases, which may not survive even superficial scrutiny even after being published in prestigious journals that are more interested in impact than in rigorous methods and appropriate conclusions.

Failure to give serious thought to these issues risks losing the public trust and support for elite university science (at least in certain fields).  Scientists are becoming their own worst enemy when they play into the hands of politicians and others seeking to politicize their science.

We urge you to read the whole thing. As always, Curry is insightful, interesting, informative, and right on target.

And finally, we suggest that you ought to have a look at Julie Kelly’s “The EPA vs. Science” in National Review. In this article, Kelly looks at recent developments in the long-running controversy surrounding the use of the herbicide glyphosate (i.e., Roundup) and EPA’s recent released and then withdrawn report on glyphosate’s health effects. Here’s a teaser:

On April 29, the EPA posted a report concluding that glyphosate (the active ingredient in Roundup herbicide and other products) is “not likely to be carcinogenic.” The committee found no relationship between glyphosate exposure and a number of cancers, including leukemia, multiple myeloma, and Hodgkin lymphoma. The 86-page assessment was signed by the EPA’s cancer review committee back in October 2015 and marked “final.”

But the EPA took it down on May 2, claiming the documents were “inadvertently” posted and only a preliminary report. “EPA has not completed our cancer review. We will look at the work of other governments… . our assessment will be peer reviewed and completed by end of 2016,” said an EPA spokeswoman.

Kelly notes “GMO foes are now targeting glyphosate in their ongoing campaign against genetically engineered crops” and “[a]ctivists are also using the court system to punish companies that use glyphosate” and adds “[i]t seems that the EPA may be taking some cues from these anti-GMO activists.”

House Science Committee chairman Lamar Smith (R-TX) is looking into what prompted the rather unusual move by the EPA. According to Kelly:

Chairman Smith also senses that EPA foot-dragging might be based more on politics than on science: “That the EPA would remove a report, which was marked as a ‘Final Report’ and signed by thirteen scientists, appears to be yet another example of this agency’s attempt to allow politics rather than science [to] drive its decision making. Sound, transparent science should always be the basis for EPA’s decisions.”

Kelly smartly concludes:

If the science indeed shows (again) that glyphosate does not cause cancer, the anti-pesticide Center for Biological Diversity says it will be a “major roadblock” for the anti-GMO movement, which wants to ban genetically engineered crops worldwide. It will be a blow the anti-GMO movement richly deserves.

You can check out her full story here.

“It is not rational, never mind ‘appropriate,’ to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits,” the Supreme Court held last year in Michigan v. EPA. It seems that the U.S. Fish and Wildlife Service (USFWS) did not get the message, with its willy-nilly imposition of significant economic costs when designating “critical habitat” for endangered species.

A California builders’ association is now asking the Court to establish that judicial review is available for individuals and businesses affected by these agency actions that purport to enforce the Endangered Species Act (ESA). The ESA specifically requires federal agencies to take economic impacts into consideration, but the USFWS routinely ignores the costs of designating land as a critical habitat. The San Francisco-based U.S. Court of Appeals for the Ninth Circuit held that the designation of critical habitat is an action fully committed to agency discretion, and that it may ignore any cost implications at its leisure, but this would seem to contradict Michigan v. EPA and other precedent.

The USFWS employs a cost-benefit accounting method called “baseline analysis,” which separates the impacts that would occur absent designation (baseline impacts) from the impacts attributable to designation (incremental impacts). It then only considers the incremental impacts, despite enormous disparities between baseline and incremental costs—one order of magnitude or two—and fanciful estimates that the economic impact of critical habitat designation is often $0.
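With made-up numbers, the accounting works like this (the figures below are hypothetical, chosen only to show how baseline analysis can shrink a large total impact to a near-zero reported cost):

```python
# Illustration of "baseline analysis" accounting with hypothetical
# numbers. The agency attributes to the designation only the
# INCREMENTAL slice, even when baseline costs dwarf it.

total_impact = 100_000_000     # hypothetical total economic impact
baseline_impact = 99_000_000   # impact said to occur absent designation
incremental_impact = total_impact - baseline_impact

# Only the incremental slice is reported as the designation's cost:
print(f"reported cost: ${incremental_impact:,}")
# prints "reported cost: $1,000,000"
```

Under this convention, the larger the share of costs the agency assigns to the baseline, the smaller the reported cost of designation, which is how estimates of $0 can coexist with billions in actual economic impact.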

Cato, joined by the Reason Foundation and National Federation of Independent Business, filed an amicus brief urging the Supreme Court to take up this important question of whether courts can even review the government’s Enron-style of cost-benefit analysis. Independent research by Reason’s Brian Seasholes found that in examining 159 of the 793 species that have critical habitat designation, there are at least $10.7 billion in economic impacts, hundreds of jobs lost per species designated, and regulatory burdens affecting 60,169,546 acres of land (11,261,054 privately owned) spanning 37 states and two territories.

And what is the purported conservation benefit to these billions in costs? Nothing. As the USFWS itself has stated, “[i]n 30 years of implementing the Act, the Service has found that the designation of statutory critical habitat provides little additional protection to most listed species, while consuming significant amounts of available conservation resources.”

Moreover, critical habitat designation is counterproductive for conservation. Again, the federal government is the source for the best material on this: “Mounting evidence suggests that some regulatory actions by the Federal government, while well-intentioned and required by law, can (under certain circumstances) have unintended negative consequences for the conservation of species on private lands.” These negative consequences are caused by the ESA’s regulatory reach and severe penalties—up to $50,000 and 1 year in jail for misdemeanor harm to an endangered fish, bird, or habitat, whether the habitat is occupied or not—coupled with the ability to regulate vast amounts of land, water and natural resources.

As Australian environmental-law expert David Farrier has described, “disgruntled landowners make poor conservationists”—and foisting enormous costs and regulatory burdens onto homeowners with criminal penalties for non-compliance certainly makes them disgruntled. 

The Supreme Court will consider whether to take up Building Industry Association of the Bay Area v. U.S. Dept. of Commerce either right before it goes on summer recess or right when it gets back in September.

Over at Cato’s Police Misconduct web site, we have selected the worst case for the month of May.  It was the case of one Shane Mauger.  Over a period of about 10 years, this former police officer told lies to obtain search warrants and would then falsify police reports by under-reporting any cash that he seized during those raids.

Now, because of his corruption, officials cannot tell how many of his previous cases were based on valid police work and how many were based upon dishonest work.  Many cases are being reviewed and thrown out.

Federal investigators discovered other corrupt officers in the same Reynoldsburg, Ohio police department.  Former officer Tye Downard was arrested in February for dealing in narcotics.  Shortly after his arrest, Downard committed suicide in his jail cell.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Methane is all the rage. Why? Because 1) it is a powerful greenhouse gas that, molecule for molecule, is some 25 times as potent as carbon dioxide (when it comes to warming the lower atmosphere), 2) it plays a feature role in a climate scare story in which climate change warms the Arctic, releasing methane stored there in the (once) frozen ground, which leads to more warming and more methane release, ad apocalypse, and 3) methane emissions are also linked to fossil fuel extraction (especially fracking operations). An alarmist trifecta!

Turns out, though, that these favored horses aren’t running as advertised.

While methane is a more powerful greenhouse gas in our atmosphere than carbon dioxide, its lifetime there is much shorter, even as the UN’s Intergovernmental Panel on Climate Change can’t quite say how long the CO2 residence time actually is. This means that it is harder for methane to build up in the atmosphere and that methane releases are more a short-term issue than a long-term one. If methane releases are addressed, their climate influence is quickly reduced.
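To see why a short atmospheric lifetime matters, here is a minimal sketch. It assumes a one-off methane pulse decays roughly exponentially with a perturbation lifetime of about 12 years—a commonly cited round number that is our illustrative assumption, not a figure from the studies discussed here.

```python
import math

def remaining_fraction(years, lifetime=12.0):
    """Fraction of a one-off methane pulse still airborne after `years`,
    assuming simple exponential decay with a ~12-year perturbation
    lifetime (an illustrative assumption)."""
    return math.exp(-years / lifetime)

# Within a few decades almost nothing of the pulse remains, which is why
# fixing a leak quickly erases its climate influence.
frac_50 = remaining_fraction(50)   # roughly 1.5% left after 50 years
```

By contrast, a substantial fraction of a CO2 pulse persists for centuries, so CO2 emissions accumulate in a way that methane emissions do not.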

This is why methane emissions from fracking operations—mainly through leaks in the wells or in the natural gas delivery systems—really aren’t that big of a deal. If they can be identified, they can be fixed and the climate impact ends. Further, identifying such leaks is in the fracking industry’s best interest because, in many cases, they represent lost profits. And while the industry says it has good control of the situation, the EPA isn’t so sure and has proposed regulations aimed at reducing methane emissions from new and existing fossil fuel enterprises. The recent scientific literature is somewhat split on who is right. A major paper recently published in Science magazine seemed to finger Asian agriculture as the primary suspect for recent increases in global methane emissions, while a couple of other recent studies seemed to suggest U.S. fracking operations as the cause (we reviewed those findings here).

And as to the runaway positive feedback loop in the Arctic, a new paper basically scratches that pony.

A research team led by University of Colorado’s Colm Sweeney set out to investigate the strength of the positive feedback between methane releases from Arctic soil and temperature (as permafrost thaws, it releases methane). To do this, they examined data on methane concentrations collected from a sampling station in Barrow, Alaska over the period 1986 through 2014. In addition to methane concentration, the dataset also included temperature and wind measurements. They found that when the wind was blowing in from over the ocean, the methane concentration of the air was relatively low, but when the wind blew from the land, methane concentration rose–at least during the summer/fall months, when the ground is free from snow and temperature is above freezing. When the researchers plotted the methane concentration (from winds blowing over land) with daily temperatures, they found a strong relationship. For every 1°C of temperature increase, the methane concentration increased by 5 ± 3.6 ppb (parts per billion)—indicating that higher daily temperatures promoted more soil methane release. However (and here is where things get really interesting), when the researchers plotted the change in methane concentration over the entire 29-yr period of record, despite an overall temperature increase in Barrow of 3.5°C, the average methane concentration increased by only about 4 ppb—yielding a statistically insignificant change of 1.1 ± 1.8 ppb/°C. Sweeney and colleagues wrote:

The small temperature response suggests that there are other processes at play in regulating the long-term [methane] emissions in the North Slope besides those observed in the short term.

As for what this means for the methane/temperature feedback loop during a warming climate, the authors summarize [references omitted]:

The short- and long-term surface air temperature sensitivity based on the 29 years of observed enhancements of CH4 [methane] in air masses coming from the North Slope provides an important basis for estimating the CH4 emission response to changing air temperatures in Arctic tundra. By 2080, autumn (and winter) temperatures in the Arctic are expected to change by an additional 3 to 6°C. Based on the long-term temperature sensitivity estimate made in this study, increases in the average enhancements on the North Slope will be only between -2 and 17 ppb (3 to 6°C x 1.1 ± 1.8 ppb of CH4/°C). Based on the short-term relationship calculated, the enhancements may be as large as 30 ppb. These two estimates translate to a -3 – 45% change in the mean (~65 ppb) CH4 enhancement observed at [Barrow] from July through December. Applying this enhancement to an Arctic-wide natural emissions rate estimate of 19 Tg/yr estimated during the 1990s and implies that tundra-based emissions might increase to as much as 28 Tg/yr by 2080. This amount represents a small increase (1.5%) relative to the global CH4 emissions of 553 Tg/yr that have been estimated based on atmospheric inversions.

In other words, even if the poorly understood long-term processes aren’t sustained, the short-term methane/temperature relationship itself doesn’t lead to climate catastrophe.
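The back-of-the-envelope arithmetic in the quoted passage can be reproduced directly; every number below comes from the excerpt above.

```python
# Long-term sensitivity and its uncertainty, from Sweeney et al. (2016)
sens, sens_err = 1.1, 1.8             # ppb CH4 per deg C

# Projected additional Arctic autumn/winter warming by 2080
warming_low, warming_high = 3.0, 6.0  # deg C

# Implied range of long-term CH4 enhancements
enh_low = warming_low * (sens - sens_err)      # about -2 ppb
enh_high = warming_high * (sens + sens_err)    # about 17 ppb

# The short-term relationship gives a larger upper bound
short_term_enh = 30.0                 # ppb

mean_enhancement = 65.0               # ppb, observed Jul-Dec at Barrow
pct_low = 100 * enh_low / mean_enhancement          # about -3%
pct_high = 100 * short_term_enh / mean_enhancement  # ~46%, rounded to 45% in the paper

# Scale the 1990s Arctic-wide tundra emissions estimate...
tundra = 19.0                         # Tg/yr
tundra_2080 = tundra * (1 + pct_high / 100)         # about 28 Tg/yr

# ...and compare the increase against global emissions
global_emissions = 553.0              # Tg/yr
share = 100 * (tundra_2080 - tundra) / global_emissions  # about 1.5%
```

Even taking the larger short-term sensitivity at face value, the implied increase in tundra emissions is a rounding error against the global total.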

The favorite thoroughbreds of the methane scare are proving to be little more than a bunch of claimers.



Sweeney, C., et al., 2016.  No significant increase in long-term CH4 emissions on North Slope of Alaska despite significant increase in air temperature. Geophysical Research Letters, doi: 10.1002/GRL.54541.


A new issue of the Cato Journal, which collects the proceedings of last year’s Annual Monetary Conference, was released last week.  Those proceedings include a paper by Claudio Borio, head of the Bank for International Settlement’s monetary and economic department, which Alt-M readers may find particularly interesting.

According to Borio, conventional thinking on monetary policy rests on three faulty assumptions:

First, that natural interest rates are those consistent with output at potential and low, stable inflation.

This assumption is important because monetary authorities are supposed to track natural interest rates when they set policy.  Unfortunately, says Borio, the mainstream view of natural interest rates is imprecise, since we know that dangerous financial build ups can occur even when growth is strong and inflation is on target.  Crucially, such build ups—excessive credit, inflated asset prices, and too much risk-taking — may be caused by interest rates that are too low.  Could it be that “natural” rates are themselves sometimes inconsistent with financial stability?  Borio thinks not, and suggests that we need instead to define natural rates more carefully, as rates “consistent with sustainable financial and macroeconomic stability.”  In practice, such a definition would lead monetary policymakers to “lean against” booms when times are good, and also to worry more about the long-term consequences of expansionary monetary policy (which Borio suggests may sow the seeds of future crises) during busts.

Second, that monetary policy is neutral over the medium- to long-term.

By contrast, Borio believes that monetary policy may in fact have significant long-term effects on the real economy.  It is hard to argue, for example, that low interest rates are not a factor in fueling financial booms and busts, given that monetary policy generally operates through its impact on credit expansion, asset prices, and risk-taking.  And when such booms and busts lead to financial crises, the effects can be very long-lasting, if not permanent: growth rates may recover, but output might never catch up with its pre-crisis, long-term trend.  Borio points out that financial busts weaken demand, since falling asset prices and over-indebtedness often combine to wreak havoc on balance sheets.  Financial booms, meanwhile, affect supply: BIS research suggests they “undermine productivity growth as they occur” by attracting resources towards lower productivity growth sectors.  Taken together, these points have important implications: on the one hand, monetary policymakers ought to be more careful about supporting booms; on the other, apart from resisting the temptation to encourage booms, there may not be much that monetary policy can do about busts, since “agents wish to deleverage” and “easy monetary policy cannot undo the resource misallocations.”

Third, that deflation is everywhere and always a bad thing.  

Not so, says Borio (and many here at Alt-M would agree with him).  In fact, BIS research has found that there is only a weak association between deflation and output.  When you control for falling asset prices, moreover, that association disappears altogether — even in the case of the Great Depression.  The key here is to distinguish between supply-driven deflations, which Borio suggests depress prices while also boosting output, and demand-driven deflations, which tend to be bad news all around.  By failing to draw this distinction, monetary authorities have introduced an easy-money bias into their policy decisions: in the boom years, when global disinflationary forces should have led to falling consumer prices, loose monetary policy instead kept inflation “on target”; then, in the bust years, central banks eased aggressively — and persistently — to stave off the mere possibility of a demand-driven deflation.  (Or did they?)

This leads neatly to the broader theory that Borio outlines in his Cato Journal article: that the long-term decline in real interest rates we have witnessed since the 1990s is not, as proponents of the “savings glut” and “secular stagnation” hypotheses suggest, an equilibrium phenomenon, driven by deep, exogenous forces; rather, it is a disequilibrium phenomenon driven by asymmetrical monetary policy, and may be inconsistent with lasting financial and macroeconomic stability.

In a nutshell, Borio believes that the three fundamental misconceptions outlined above have inclined central banks towards monetary policy that is expansionary when times are good, and then even more expansionary when times are bad.  Over the course of successive financial and business cycles, this skewed approach to monetary policy imparts a downward bias to interest rates and an upward bias to debt, which in turn leads to “a progressive loss of policy room for maneuver” as central banks cannot push interest rates any lower, but also cannot raise rates “owing to large debts and the distortions generated in the real economy.”  The result is entrenched instability and “chronic weakness in the global economy,” as well as what Borio calls an “insidious form of ‘time inconsistency,’” in which policy decisions that seem reasonable — even unavoidable — in the short term, nevertheless lead us ever-further astray as time goes by.  This will, undoubtedly, strike many readers as an apt description of the current state of play in monetary policy.

Here again is Borio’s complete article.  I encourage you to read the whole thing.  The entire monetary issue of the Cato Journal, titled “Rethinking Monetary Policy,” can be found here, and features articles from Stanford economist John Taylor, Richmond Fed president Jeffrey Lacker, and St. Louis Fed president James Bullard, as well as from Alt-M’s own George Selgin, Larry White, and Kevin Dowd, among others.  Happy reading!


Most press reports about Zimbabwe’s fantastic hyperinflation are off the mark – way off the mark. Even our most trusted news sources fail to get the facts right. This confirms the “95 Percent Rule”: 95 percent of what you read in the financial press is either wrong or irrelevant.

When it comes to the reportage about hyperinflation, there are no excuses. All 56 of the world’s hyperinflations have been carefully documented in “World Hyperinflations”. This record is available in the Routledge Handbook of Major Economic Events in Economic History (2013) and has been available online since 2012 at the Cato Institute.

The International Monetary Fund (IMF) is the main culprit, a prominent source of the faulty data. Even The Economist magazine has fallen into the trap of uncritically accepting figures pumped out by the IMF and further propagating them. It’s no wonder that there is a massive gap between the public’s perception and economic reality, a gap that, ironically, The Economist reports on this week.

The Economist’s most recent infraction on Zimbabwe’s hyperinflation appeared in the May 2016 issue. The magazine claimed that the hyperinflation peaked at an annual rate of 500 billion percent. Where did this figure originate? You guessed it. That figure is buried in the IMF’s 2009 Article IV Consultation Staff Report on Zimbabwe.

In reality, Zimbabwe’s annual inflation rate in September 2008 was 471 billion percent, not 500 billion percent. More importantly, Zimbabwe’s hyperinflation peaked in November, not September. It was then that Zimbabwe recorded the second-highest hyperinflation in history: a whopping 89.7 sextillion percent. This is 179 billion times greater than the IMF’s figure.
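The size of that discrepancy is easy to check, using the numbers as given above:

```python
# Both rates are annual inflation in percent, as cited in the text.
imf_figure = 500e9       # the figure the IMF reported and The Economist repeats
actual_peak = 89.7e21    # the documented November 2008 peak: 89.7 sextillion

ratio = actual_peak / imf_figure   # ~1.79e11, i.e. roughly 179 billion times larger
```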

That said, the IMF did attempt to cover its backside from questions about its hyperinflation guesstimate. The 2009 Article IV Staff Report on Zimbabwe states clearly that “data have serious shortcomings that significantly hamper surveillance due to capacity constraints.” Despite the red flag, The Economist continues to blindly propagate a figure that is neither reliable nor replicable. I stress the word “continues”.

It turns out that The Economist is a serial propagator of inaccurate IMF figures. The magazine has cited the IMF’s incorrect figure of 500 billion percent before, in June 2009 and October 2015.

For accurate estimates of Zimbabwe’s fantastic hyperinflation that are used in the professional literature – estimates that are reliable and replicable – the IMF and the financial press corps should take a look at the following table from “On the Measurement of Zimbabwe’s Hyperinflation”, which was published in the Cato Journal (2009):

In a speech this week, President Obama called for an expansion of Social Security, saying “it’s time we finally made Social Security more generous, and increased its benefits.” Obama was undoubtedly influenced to some degree by the developments in the Democratic primary, where both Bernie Sanders and Hillary Clinton have expressed support for some form of expansion. This represents a reversal in part for Obama. While he had always supported increasing payroll taxes on higher-earning Americans, he had also previously supported a change in the way benefits are adjusted each year that would have reduced the growth rate of benefits over a long timeframe in the interest of improving the program’s fiscal trajectory. Social Security’s long-term outlook has only gotten worse in the intervening years, but in his speech he signaled that he no longer believed “all options were on the table” to address solvency concerns and instead supports further expansion. This reversal is misguided. If his favored reforms are implemented, they will increase the economic distortions introduced by Social Security and do nothing to address its serious fiscal problems. The more likely result is that, with this retrenchment, policymakers will continue to make promises but fail to actually do anything. Younger workers will bear the brunt of the cost resulting from failures to put forward constructive reform.

Inexorable demographic changes and the program’s structure mean that today’s younger workers were already going to get a worse deal from Social Security than previous generations. As work from C. Eugene Steuerle and Caleb Quakenbush has shown, a married couple both earning the average wage and retiring in 1960 received more than seven dollars in benefits for each dollar paid in taxes over their lifetimes. A similar couple reaching age 65 in 1980 received roughly $2.60 in benefits for each dollar contributed, and a couple retiring in 2030 will receive about $1.12 for each dollar paid into the program. As they note, this ratio probably overstates how good a deal future retirees will get, as it does not incorporate the reforms needed to pay scheduled benefits; a couple currently in their 50s could end up having to pay more in taxes or take a substantial benefit cut.
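To illustrate what a present-value lifetime benefit/tax ratio measures, here is a toy calculation. The cash flows, ages, and discount rate below are our own illustrative assumptions, not Steuerle and Quakenbush’s actual model or data.

```python
def present_value(cashflows, rate):
    """Discount (years_from_age_65, amount) pairs back to age 65.
    Negative years (working-age taxes) are compounded forward."""
    return sum(amount / (1 + rate) ** t for t, amount in cashflows)

rate = 0.02  # real discount rate (assumption)

# 40 working years of payroll taxes, ages 25-64, $5,000/yr (toy numbers)
taxes = [(age - 65, 5_000) for age in range(25, 65)]

# 20 retirement years of benefits, ages 65-84, $20,000/yr (toy numbers)
benefits = [(age - 65, 20_000) for age in range(65, 85)]

# A ratio near 1 means the couple roughly breaks even in present-value
# terms, in the neighborhood of the ~$1.12 per dollar cited for 2030.
ratio = present_value(benefits, rate) / present_value(taxes, rate)
```

Raising the assumed taxes or trimming the assumed benefits pushes the ratio down, which is the direction any solvency-restoring reform would move the deal for future retirees.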

Present Value Lifetime Benefits and Taxes in Social Security, Married Couple at Average Wage

Source: Steuerle and Quakenbush (2015)

Neither Hillary Clinton nor Donald Trump is likely to make any substantive reforms to improve the program’s fiscal trajectory: she has expressed support for some form of expansion, and he has promised to protect old-age entitlements from any kind of cuts. It is therefore likely that policymakers will continue to kick the can further down the road, closer to the trust fund exhaustion date in 2034. The longer these reforms are delayed, the larger they must become. In order to make the program solvent through the 75-year projection period, scheduled benefits would have to be cut by 16.4 percent for all current and future beneficiaries. If policymakers delay until 2034, scheduled benefits would have to be cut by 21 percent, with these reductions increasing in later decades.

Changes Needed to Reach 75-Year Solvency

Source: Social Security Administration, The 2015 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds, July 2015, p. 25.

Recent experience has shown that even would-be reformers have expressed a reluctance to make any changes that would affect current retirees, and if this continues it would make addressing the program’s significant unfunded obligations more difficult. Even completely eliminating benefits for those newly eligible in 2034 would not be enough to enable the program to pay out all scheduled benefits in that year. Perhaps it is not surprising that almost two-thirds of people 18 to 29 think Social Security will be unable to pay them benefits when they retire.

President Obama’s reversal is misguided, and will make it harder to enact Social Security reforms that would actually begin to address the program’s issues. Younger workers will bear the burden of policymakers’ reticence to put forward constructive reforms, and they have shown that they are skeptical of Social Security promises made by politicians.

Imagine that you run a daycare business out of your home. Some of your clients are poor families whom your state has decided to help with daycare. The state program allows such families to choose any daycare they want and then reimburses the provider up to a certain amount. Now the state has declared that because of this program, you—and even people who provide at-home daycare for family members’ children—will be considered a state employee for the sole purpose of giving a union exclusive representation rights.

You don’t get state medical or dental insurance. You don’t get state retirement benefits. You don’t get paid vacation on national holidays. The only thing you get is a union you didn’t choose and refuse to join that now represents your “interests” before the state, which isn’t even your employer. Does this sound far-fetched? Yet it’s what’s happened to Kathleen D’Agostino and seven other women in Massachusetts, who are asking the Supreme Court to take their case after the lower courts dismissed their lawsuit.

The plaintiffs argue that the state’s imposition of an exclusive representative on them violates their First Amendment freedom of association. In the 2014 case Harris v. Quinn, the Supreme Court ruled that states that unionize healthcare aides and other home-based workers who are “not full-fledged public employees” cannot require those who do not wish to join the union to pay fees to support it. This new case asks the question Harris left unanswered: May a state even mandate an exclusive representative for those who are “not full-fledged public employees”?

The U.S. Court of Appeals for the First Circuit said that the case is easily resolved under Abood v. Detroit Board of Education (1977)—which allowed the imposition of “agency fees” on union nonmembers—and does not require further First Amendment scrutiny. Abood, however, is like a house built on the sand: it treated the First Amendment concerns public unions (should) raise as already resolved by earlier cases when in fact those earlier cases merely resolved the question of whether the Commerce Clause gave Congress the power to regulate those public unions (the old cases having arisen at a time when the Commerce Clause was only starting to be read expansively).

Abood’s reliance on the notion of “labor peace”—which was significant in those old cases but shouldn’t be a valid First Amendment interest—conflicts with the constitutional ban on compelled speech and association absent a substantial government interest. Although the First Circuit treated this case as automatically resolved under Abood, it would actually be a vast expansion of precedent to say that “labor peace” justifies forcibly unionizing at-home workers who are independent except for the sole fact that some of their clients pay them through a government-subsidy program.

States are already doing this in a number of fields, but expanding Abood would enable the states to go as far as mandating exclusive representation for private-school teachers whose schools receive funding through state voucher or tax-credit programs. Or apartment-building owners who lease to people in rental-assistance programs. Or for the federal government to impose exclusive representation on bank tellers who work at FDIC-backed institutions.

Where does it stop? Cato has filed a brief asking the Supreme Court to answer that question in the case of D’Agostino v. Baker.

The Trans-Pacific Partnership is the economic centerpiece of the Obama administration’s much ballyhooed “strategic pivot” to Asia, which – in 2009 – heralded U.S. intentions to extricate itself from the messes in Iraq and Afghanistan and to reassert its interests in the world’s fastest-growing region. After six years of negotiations, the comprehensive trade deal was completed last year and signed by its 12 charter members earlier this year. But the TPP must be ratified before it can take effect – and prospects for that happening in 2016 grow dimmer with each passing day.

One would assume TPP ratification to be a policy priority of President Obama. After all, he took office promising to restore some of the U.S. foreign policy credibility that had been notoriously squandered by his predecessor. If Congress fails to ratify the agreement before Christmas, Obama will leave office with American commercial and strategic positions weakened in the Asia-Pacific, and U.S. credibility further diminished globally.  The specter of that outcome would keep most presidents awake at night.

In Newsweek today, I put most of the blame for this precarious situation on a president who, throughout his tenure, has remained unwilling to challenge the guardians of his party’s anti-trade orthodoxy by making the case for trade liberalization generally, or the TPP specifically:

Superficially, one could blame election-year politics and a metastasizing popular antipathy toward trade agreements for the situation, but the original sin is the president’s lackluster effort to sell the TPP to his trade-skeptical party and the American public. In the administration’s division of labor, those tasked with negotiating the TPP kept their noses to the grindstone and brought back an agreement that reduces taxes and other protectionist impediments to trade…

Meanwhile, those responsible for explaining the deal’s merits domestically spent too much time on the golf course. With scarcely greater frequency than a couple of sentences in his past two States of the Union addresses has President Obama attempted to articulate the importance of trade and the TPP to the American public. Even then, his “advocacy” has been grudging and couched in enough skepticism to create and reinforce fears about trade and globalization.

When Hillary Clinton – the president’s former secretary of state, co-architect of the Asian pivot, and champion of the TPP – announced her opposition to the negotiated deal because it became a political liability for her, President Obama remained silent. If the president really believes in the trade agenda his administration has pursued for eight years, his decision not to challenge Clinton was a significant tactical error – and a profoundly lamentable display of cowardice. Foregone was a prime opportunity to inject an affirmative case for the trade deal into the fact-deprived election debate. And how could Obama let Clinton’s political ambitions take priority over his policy agenda?  How could the President of the United States be so cavalier about actions and inactions that amount to kneecapping the U.S. foreign policy agenda and subverting American commercial interests?

The president’s near total absence of promotion of the TPP explains why, in the waning months of his tenure, ratification of the economic centerpiece of the vaunted Asian pivot is unlikely. In this absence emerged a fallacious, hysterical narrative about the allegedly deleterious effects of the TPP on jobs, the environment, public health, and even cancer rates, which became the dry tinder that fueled the fiery antitrade rhetoric of this year’s demagogic presidential campaigns.

Lately, it has become convenient for the president’s apologists to point to Donald Trump’s bluster and the dampening enthusiasm for the TPP among Republicans in Congress as the major obstacles to ratification. But that is all a consequence of President Obama’s failure to rebut the trade fallacies and tall tales concocted by groups on the left, like the Sierra Club, the AFL-CIO, and Public Citizen, and his abject deference to Harry Reid, Nancy Pelosi, and then candidate Hillary Clinton in helming his party’s platform on trade and the TPP. I warned of this problem over the years, and unless things change in a New York minute, we will soon be reaping the whirlwind.

Late in the 11th hour, selling the TPP for its immediate economic benefits will not be enough.  The president must make a compelling, comprehensive case to Congress and the public about what is really at stake. This is what I suggested in Newsweek:

There is a strategic rationale for trade agreements and those kinds of arguments can be more politically persuasive than the economic ones. Indeed, the post-WWII liberal global trading order reflects the lessons of history: commerce and economic interdependence are the best guarantors of peace. Or, as the French economics writer Frédéric Bastiat is alleged to have quipped a century earlier: “When goods don’t cross borders, armies will.”

It was with those lessons in mind that President George W. Bush paid a visit to the Senate in June 2005, with legislation to implement his Central American Free Trade Agreement facing uncertain prospects. Citing the rise of Hugo Chavez in Venezuela and the return of Daniel Ortega in Nicaragua, Bush urged his colleagues to consider CAFTA an agreement that would serve long-standing U.S. strategic interests – with a sprinkling of economic benefits to boot. The following day the agreement was ratified.

President Obama is down to his last chance to fulfill his obligation to posterity.  Success requires that he put the TPP into its broader geopolitical and geoeconomic contexts and describe a world with and without its ratification. The president attempted as much in a Washington Post opinion piece last month, describing the TPP as an opportunity for the United States to write the new rules of global trade before China does. It was a laudable start, if too reliant on the myth of trade as a competition between the United States and China.

With so many Americans leery of China’s rise and U.S.-China relations growing more contentious over economic and security issues, the president may be tempted to describe the TPP as an agreement that “excludes,” “contains,” or “isolates” China.  That characterization would certainly resonate with members of Congress looking for cover to support the TPP. But portraying the TPP as a weapon of economic warfare essential to “beating” or “defeating” China is short-sighted and fraught with the perils of self-fulfilling prophesy. Our economic relationship with China is more collaborative than competitive and the costs of estrangement would be felt deeply in the United States.

Instead, President Obama should argue that U.S. leadership, and immersion in the process of crafting the 21st century rules that will govern the trade of China’s most important partners, will leave Beijing with no better alternatives than to embrace those rules. Accession to the TPP is open to all newcomers that can meet the deal’s relatively high standards, and that, importantly, includes China.

The president’s comprehensive case for the TPP must go well beyond the static benefits estimated by the ITC.  It must include the benefits associated with the liberalizing policy reactions of other countries in the region, as they aspire to become parties to the agreement.  It must include the benefits associated with TPP expansion to include South Korea, the Philippines, Indonesia, Thailand, Colombia, Taiwan, and China. It must include the benefits of TPP as a catalyst for an eventual Free Trade Area of the Asia Pacific to include countries like India and Russia. And it must include the benefits of the TPP as an inducement to Europe, Brazil, South Africa, and the rest of the world to resuscitate the process of multilateral trade liberalization, which has been mostly defunct for over 20 years.

These enormous potential benefits of TPP ratification this year are also the costs of failure to ratify this year. If the United States fails to ratify the agreement this year, TPP members that are also party to the China-centric Regional Comprehensive Economic Partnership negotiations will be drawn more deeply into China’s ambit. While that doesn’t mean that U.S. entities will be excluded from engaging in commerce with entities in those countries, it does mean that existing China-centric investment and supply chain relationships will be reinforced, new ones will emerge and become established, and the costs of reorienting those relationships in the event of some future TPP implementation will increase with each passing year.

But at a deeper, institutional level, failure to ratify would impair U.S. commercial and diplomatic interests in the region. Foreign governments that incurred political costs to push the TPP in their countries with expectations of U.S. participation wouldn’t soon forget that the United States proved to be an unreliable partner. Expectations that the United States is still capable of leading the world to the economic liberalization it so desperately needs would erode, and with that diminished credibility, U.S. policy objectives would become more difficult or, in some cases, impossible to meet.

Those would be the costs of a U.S. failure to ratify the TPP this year. Avoiding that outcome is President Obama’s obligation to posterity.

The Police Commission in San Francisco recently voted 5-2 to approve a body worn camera (BWC) plan. The plan, which one commissioner described as a “travesty,” prohibits supervisors from viewing BWC videos in order to find policy violations. It also requires officers involved in a shooting or in-custody death to submit an “initial statement” before they review BWC footage. Whether officers should be allowed to view BWC footage before making a statement is one of the most pressing issues in body camera debates. Unfortunately, the San Francisco BWC plan does not adequately address this issue.

Your memory isn’t always reliable. While many of us are confident that we’re pretty good at remembering specific incidents, it turns out that even our memories of notable and historic events, such as 9/11, are hardly as well-formed and clear as we might hope.

The legality of an officer’s use of deadly force depends in large part on the reasonableness of what the officer believed at the time of the incident. For instance, whether an officer who shot someone reasonably feared for his life, or the lives of innocent bystanders, will be an important factor in determining whether the shooting was legal.

BWCs, like other cameras, don’t have fuzzy memories. What’s filmed by BWCs is stored and, absent tampering, won’t change. The same can’t be said of police officers’ memories. This is one of the factors that has prompted debate about whether police officers should be allowed to view BWC footage of a deadly use-of-force incident before they file a report.

I and others have argued that police should not view BWC footage related to a deadly use-of-force incident before filing a report. A policy that allows officers to view BWC footage before filing a report would give officers an unfair chance to exculpate themselves of wrongdoing. Officers could search for justifications for the use of force that didn’t occur to them while the incident in question was happening.

Others could argue that police officers, like all human beings, don’t have perfect memories and might not accurately remember important facts concerning a stressful incident under investigation. Rather than being seen as an honest lapse of memory, the omission of crucial facts in a report could be portrayed as an officer trying to avoid the consequences of poor behavior.

San Francisco’s body camera plan requires officers involved in a shooting or in-custody death to submit an “initial statement” before they review body camera footage.

At first glance, this policy seems like a decent compromise between the two positions I outlined above. Such a policy ensures that officers can view BWC footage, but only after providing a statement outlining what they remember about the incident under investigation.

However, the “initial statement” required by the recently approved San Francisco plan is explicitly required to be brief and resembles a collection of basic facts rather than an explanatory report:

The initial statement by the subject officer shall briefly summarize the actions that the officer was engaged in, the actions that required the use of force, and the officer’s response.

These initial statement requirements are too narrow. As Alan Schlosser, legal director for the American Civil Liberties Union of Northern California, said, officers should fill out a full report before viewing body camera footage:

“When we said there should be an initial report, we didn’t mean there should be a brief report,” he said. “When we support an initial report, we meant there would be a full report and then the officer would see the video and then there would be a supplemental report, with the understanding that recollections change.”

Police in San Francisco will be wearing BWCs in the not-too-distant future. With the current plan in place, there is still room for improvement when it comes to using BWCs as tools for increased law enforcement accountability. If San Francisco’s police commissioners ever want to revisit their body camera plan, they could do worse than taking inspiration from their neighbors across San Francisco Bay. In Oakland, officers involved in shootings cannot view body camera footage without first being interviewed and submitting a report.


Two months ago, the Supreme Court ruled that states have leeway in determining how to draw their legislative districts; more specifically, they don’t have to equalize the number of voters per district to satisfy the constitutional principle of “one person, one vote.” The decision was really a “punt,” not resolving the tensions between “representational equality” and “voter equality”; it’ll take some future case after the next census to force the justices to face the issues left unresolved.

Former Cato intern (and future legal associate) Tommy Berry and I have now published an essay in the Federalist Society Review explaining how the Court “shanked” that punt by misreading constitutional structure and application. Here’s a sample (footnotes omitted):

In Evenwel, the Court decided that it is acceptable for a state to ignore the distinction between voters and nonvoters when drawing legislative district lines. According to the Court, a state may declare that equality is simply providing representatives to equal groups of people, without distinction as to how many of those people will actually choose the representative. A state may use this constituent-focused view of equality because “[b]y ensuring that each representative is subject to requests and suggestions from the same number of constituents, total-population apportionment promotes equitable and effective representation.”

But ignoring the distinction between voters and nonvoters achieves a false picture of equality at the expense of producing far more serious inequalities. Rather than placing nonvoters and voters on anything approaching an equal political footing, it instead gives greater power to those voters who happen to live near more nonvoters, and less power to those who do not.

As we argued before the decision came down, the framers of the Fourteenth Amendment recognized that granting such extra voting power runs the risk of harming the very nonvoters to whom it ostensibly grants representation. This recognition manifested itself in the enactment of the Fourteenth Amendment’s Penalty Clause. In both ignoring that clause and oversimplifying the debates over the Fourteenth Amendment, the Court’s opinion paints an incomplete picture of constitutional history.

Read the whole thing. For more, see Tommy’s blogpost on our article, as well as our earlier criticism of Justice Ginsburg’s majority opinion for misreading the Federalist Papers.

Hillary Clinton clearly believes that she enjoys a decided advantage over Donald Trump when it comes to foreign policy. Her speech today in San Diego launched what will clearly be a sustained attack on Trump’s qualifications as commander-in-chief. Citing his support for torturing the families of terrorists, his loose talk about using nuclear weapons on ISIS, and his calls for walking away from NATO and other allies, Clinton argued that Trump’s ideas about foreign policy are “dangerously incoherent.” His main tools of global statecraft, she said, would include bragging, mocking, and composing nasty tweets. In short, Clinton’s central theme is that Trump is simply “not up to the job” of president and if elected, Trump would lead America down a “truly dark path.”

Though most of Clinton’s attacks by this point have already been well rehearsed, the case against Trump is nonetheless devastating. Or at least the attack would be devastating to some other candidate in some other election year. This year, however, things look very different.

The most recent Washington Post/ABC News survey found Americans almost evenly divided over whether Hillary Clinton or Donald Trump would do a better job keeping the country safe, dealing with terrorism, and dealing with international trade. Can these numbers be real? Can almost half of the American public honestly prefer a man who clearly has given so little thought to international affairs over a woman who has traveled the world, served as a United States senator, and spent four years as Secretary of State? The surprising answer is yes.

There are three things keeping Clinton from winning the foreign policy debate.

The first dynamic fueling this situation is partisan polarization. As research has begun to make clear, the United States now suffers from an extreme case of “partyism.” Republicans and Democrats now dislike each other so much that they oppose each other instinctively regardless of the facts – witness how many Republicans still think President Obama is a Muslim. On the question of keeping the country safe, the Post/ABC survey found that 84% of Democrats think Clinton will do a better job but 83% of Republicans think Trump will do a better job. The fact that Trump commands such partisan loyalty despite his clear lack of knowledge and experience illustrates just how powerful a force partisan polarization has become in the United States. This alone will make it very difficult for Clinton’s (or anyone else’s) substantive arguments to gain any traction.

The second force at work is the appeal of Trump’s foreign policy views. Whatever his deficits on paper, on the campaign trail Trump’s “America First” rhetoric aligns more closely with public preferences than Clinton’s liberal interventionism does. Clinton denounces Trump for unrealistic and dangerous talk about allies, trade deals, and refugees, arguments that resonate with pundits and party leaders inside the Beltway. Trump, meanwhile, scores points with the public for understanding that for most Americans the best foreign policies are those that improve things at home. A recent Pew survey, for example, found that 57% of the public thinks the United States should deal with its own problems and let other countries deal with theirs as best they can. That same survey found that more Americans now believe American involvement in the global economy is a bad thing than a good thing. And a whopping 70% of the public wants the next president to focus on domestic policy; just 17% want him or her to focus on foreign policy. In treating foreign policy as an extension of domestic policy, Trump has plugged into a deep reservoir of public concern that the White House has allowed foreign policy to distract the United States from more pressing matters.

Finally, Clinton’s own weaknesses on foreign policy are helping buoy Trump’s case. Her foreign policy record includes a long list of decisions that challenge her narrative of superior judgment and temperament. Bernie Sanders has paved the way for Trump on this score, pressing Clinton repeatedly on her decision to vote in support of the 2003 invasion of Iraq when she was in the Senate and criticizing her for the mishandling of the Libyan intervention. Nor has Trump been shy about following Sanders’ lead. At a rally earlier this month Trump called Clinton “trigger happy” and said, “Her decisions in Iraq, Syria, Egypt, Libya have cost trillions of dollars, thousands of lives and have totally unleashed ISIS.” Nor is that just campaign rhetoric. From Afghanistan to the Libyan intervention to the Syrian civil war, Clinton has repeatedly staked out aggressive interventionist positions that go beyond what most of the public supports, leaving her wide open to Trump’s counterattacks.

In the end, Clinton is correct: Trump clearly does not possess the qualifications or the temperament to lead the United States. Unfortunately, Clinton’s critique leaves voters with only a “less bad” alternative to Trump rather than with a compelling vision of America’s role in the world. And with the approval ratings of both candidates at historic lows, it is unlikely that either will manage to score a knockout blow on foreign policy in the general election. Indeed, it would not be surprising if large numbers of disaffected Democrats and Republicans leaned toward a third-party ticket that eschews the aggressive interventionism of Clinton and the belligerent nationalism of Trump.

Setting the stage for their study, Roy et al. (2015) write that rice is “one of the most important C3 species of cereal crops,” adding that it “generally responds favorably to elevated CO2.” However, they note that the actual response of rice crops to elevated CO2 and warming “is uncertain.” The team of five Indian scientists set out “to determine the effect of elevated CO2 and night time temperature on (1) biomass production, (2) grain yield and quality and (3) C [carbon], N [nitrogen] allocations in different parts of the rice crop in tropical dry season.”

The experiment they designed to achieve these objectives was carried out at the ICAR-Central Rice Research Institute in Cuttack, Odisha, India, using open-top-chambers in which rice (cv. Naveen) was grown in either control (ambient CO2 and ambient temperature), elevated CO2 (550 ppm, ambient temperature) or elevated CO2 and raised temperature (550 ppm and +2°C above ambient) conditions for three separate growing seasons.

In discussing their findings, Roy et al. write that the aboveground plant biomass, root biomass, grain yield, leaf area index and net C assimilation rates of the plants growing under elevated CO2 conditions all showed significant increases (32, 26, 22, 21, and 37 percent, respectively) over their ambient counterparts. Each of these variables was also enhanced under elevated CO2 and increased temperature conditions over ambient CO2 and temperature, though to a slightly lesser degree than under elevated CO2 conditions alone.

With respect to grain quality, the authors report there was no difference among the parameters they measured in any of the treatments, with the exception of starch and amylose content, which were both significantly higher in the elevated CO2 and elevated CO2 plus elevated temperature treatments. The C and N grain yields were also both significantly increased in both of these treatments compared with control conditions.

The results of this study thus bode well for the future of rice production in India during the dry season. As the CO2 concentration of the air rises, yields will increase.  And if the temperature rises as models project, yields will still increase, though by not quite as much. These findings, coupled with the fact that the grain nutritional quality (as defined by an increase in amylose content) was enhanced by elevated CO2, suggest there is a bright future in store for rice in a carbon dioxide-enhanced atmosphere.



Roy, K.S., Bhattacharyya, P., Nayak, A.K., Sharma, S.G. and Uprety, D.C. 2015. Growth and nitrogen allocation of dry season tropical rice as a result of carbon dioxide fertilization and elevated night time temperature. Nutrient Cycling in Agroecosystems 103: 293-309.

Sen. Jeff Flake (R-AZ), Rep. Dave Brat (R-VA), and other members of Congress have introduced legislation based on the “Large HSAs” concept I first proposed here and developed here, here, here, here, and here.

The “Health Savings Account Expansion Act” (H.R. 5324/S. 2980) would expand the availability and benefits of tax-free health savings accounts (HSAs) in several ways. It would nearly triple existing HSA contribution limits from $3,400 for individuals and $6,750 for families to $9,000 and $18,000, respectively. It would allow tax-free HSA funds to purchase health insurance, over-the-counter medications, and direct primary care. It would eliminate the mandate that HSA holders purchase a government-designed high-deductible health plan. And it would repeal ObamaCare’s increase of the penalty on non-medical withdrawals. Americans for Tax Reform and FreedomWorks have endorsed the bill.
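As a quick back-of-the-envelope check on the “nearly triple” characterization, the proposed limits work out to roughly 2.6–2.7 times the current ones. A minimal sketch using only the dollar figures quoted above:

```python
# Current and proposed annual HSA contribution limits
# (dollar amounts taken from the bill summary above)
current = {"individual": 3_400, "family": 6_750}
proposed = {"individual": 9_000, "family": 18_000}

for coverage in current:
    multiple = proposed[coverage] / current[coverage]
    print(f"{coverage}: ${current[coverage]:,} -> ${proposed[coverage]:,} "
          f"({multiple:.2f}x)")
```

Both multiples land just under 3x (about 2.65x for individuals and 2.67x for families), consistent with the “nearly triple” description.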

I’m sure I will have lots to say about Flake-Brat, but here are a few initial impressions.

  1. Flake-Brat would free workers from the government program we call employer-sponsored insurance—but only if that’s what workers want. The federal tax code currently tells the average worker with family coverage she can either surrender $13,000 of income to her employer and let her employer choose her health plan, or surrender a huge chunk of that money to the government by paying income and payroll taxes on it. The Flake-Brat bill would allow her to keep that money and either save it, use it to stay on her employer’s health plan, or use it to purchase better coverage somewhere else, all tax-free. The choice would belong to her, not to Congress or the IRS.
  2. Flake-Brat is a bigger tax cut than you’ve ever seen.  Large HSAs would be the largest-ever scaling back of the federal government’s role in health care. The Flake-Brat bill is effectively a $9 trillion tax cut. That’s how much money the current tax exclusion for employer-sponsored insurance will divert from workers to their employers over the next decade. Flake-Brat would return that money to the workers who earned it. Flake-Brat is thus an effective tax cut equal to all of the Reagan and Bush tax cuts combined. It is nine times the size of the tax cut associated with repealing ObamaCare.  Unlike health-insurance tax credits, Large HSAs involve no government spending and would not mandate that taxpayers purchase health insurance, as existing HSAs and health-insurance tax credits do. (The bill and its sponsors describe that requirement as a “mandate.”)
  3. Flake-Brat would make health care better, more affordable, and more secure. It would do so by dramatically reducing government’s influence over the health care sector. By shifting from employers to consumers nearly a quarter of the $3 trillion Americans spend annually on health care, Large HSAs would begin to make the health care sector and health policy respond to the needs of patients. Large HSAs are also less restrictive than existing HSA law or health-insurance tax credits. As a replacement for ObamaCare, Large HSAs would encourage innovative products like pre-existing conditions insurance that make coverage more affordable and secure.
  4. Flake-Brat shows Congress could create Large HSAs with or without repealing ObamaCare. Large HSAs are the most promising ObamaCare replacement plan to date, but Congress can create them before it repeals ObamaCare. The Flake-Brat bill would create Large HSAs even with ObamaCare still on the books. In fact, Flake-Brat would build support for repealing ObamaCare by exposing consumers to the full cost of its hidden taxes.
  5. Flake-Brat is a marker. The Flake-Brat bill defers consideration of a number of issues. All else equal, expanding tax breaks for HSA contributions would reduce federal revenues and increase federal deficits and debt. Like any proposal to level the playing field between employer-sponsored coverage and other coverage, the bill creates the potential for employer plans to unravel as (healthy) people choose better options. Were Congress to enact Flake-Brat with ObamaCare still on the books, there could be even more complicated interactions. The bill doesn’t totally level the playing field, either. Everyone would get an income-tax break, but only those with an employer who facilitates HSA contributions would get the payroll tax break. (Large HSAs can completely level the playing field with a simple tax credit that mimics that exclusion for such workers.) The authors don’t address these issues in the bill or their supplemental materials. They will have to address them at some point. Fortunately, there are solutions. (For more on those solutions, see the “developed” links in the second paragraph.)

All in all, the Flake-Brat bill is a much-needed addition to the debate over the future of American health care.