Feed aggregator

Education reporters frequently make the claim that government ought to fund and operate educational institutions because schooling is a public good. However, since schooling fails both conditions required for a public good to exist, schools should not be publicly operated.

Schooling is Not a Public Good

According to the economic definition, a public good is non-rival in consumption and non-excludable. The first condition means that one individual’s consumption of the good does not hinder others’ abilities to use the product. Schooling fails this condition since students take up seats when receiving an education. The second condition means that the producer of the good is unable to exclude non-payers. Schooling fails this part of the definition since school leaders can prevent students from attending their institutions, if necessary.
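The two-condition test described above can be summarized in a short sketch (my illustration, not the author's; the example goods in the comments are standard textbook cases):

```python
# Illustrative sketch: classifying goods by the two conditions in the
# economic definition of a public good (rivalry and excludability).
def classify(rival: bool, excludable: bool) -> str:
    """Return the standard economic category for a good."""
    if not rival and not excludable:
        return "public good"           # e.g., national defense
    if rival and excludable:
        return "private good"          # e.g., a seat in a classroom
    if not rival and excludable:
        return "club good"             # e.g., cable television
    return "common-pool resource"      # e.g., an open fishery

# Schooling: seats are scarce (rival) and schools can turn students away
# (excludable), so it fails both conditions for a public good.
print(classify(rival=True, excludable=True))  # private good
```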

Since schooling is not a true public good, the basic free-rider problem does not exist. This is important because it means that government does not need to operate schools or force residents to pay for them.

A Merit Good?

When journalists claim that schooling is a public good, I believe they actually mean to say that education is a merit good since it produces positive externalities. When an educational product is purchased, both the consumer and the provider benefit, as in all other voluntary transactions. However, the rest of society may also benefit if schooling actually creates citizens that are more educated. This argument leads many scholars to support government subsidization of schooling.

The Problem

Obviously, schooling is only one channel through which people can achieve an education. Since children can learn in various settings, the current system of schooling may actually harm their overall educational levels. In other words, schooling may impose negative externalities on society through providing a less than optimal level of education to all children.

Similarly, schooling likely creates a more obedient citizenry. This generates positive externalities, through less criminal activity, and negative externalities, through less creativity and technological innovation.

Since there are large positive and negative externalities that result from schooling, in theory, it is impossible to determine the overall sign of the net spillover. Consequently, it is unclear whether we ought to tax or subsidize schooling; and even if we could somehow figure out the overall sign, we would not be able to determine the optimal magnitude of the intervention.
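The indeterminacy argument can be made concrete with a toy calculation (the magnitudes below are invented placeholders; the article's point is precisely that we cannot measure them):

```python
# Hypothetical sketch: why the sign of the net spillover is indeterminate.
def net_externality(positive_spillovers, negative_spillovers):
    """Net spillover = sum of positive externalities minus sum of negative ones."""
    return sum(positive_spillovers) - sum(negative_spillovers)

# With unknown magnitudes, equally plausible guesses flip the sign:
print(net_externality([5, 3], [4, 2]))  # +2 -> would argue for a subsidy
print(net_externality([5, 3], [6, 4]))  # -2 -> would argue for a tax
```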

We should recognize that externalities do exist in education. However, as the economist who pioneered the concept of externalities conceded, we must also realize that attempting to reach the socially optimal level of schooling through government intervention may ironically result in much harm. Instead, we ought to limit this probable detriment by allowing individual families to seek ideal educational experiences for their own children.

Megyn Kelly is probably kicking herself for not delaying her interview of Vladimir Putin. Had she waited just a few days, she could’ve brought a leaked copy of the latest NSA estimate of the timeline, motivations, and targets of alleged Russian hackers during the 2016 election cycle to her chat with Putin and asked a lot of pointed questions about it. Even though that opportunity never materialized, she and other journalists still have the chance to ask some equally important questions of American officials about this rather interesting document and the young woman responsible for sharing it with the world. What follows are some of my suggested lines of inquiry for our friends in the Fourth Estate.

The Leaker: Reality Leigh Winner

As I read The Intercept’s story, I kept asking myself one question, over and over: did this young woman learn nothing from Ed Snowden? 

This extract from the arrest warrant affidavit contains details that, if accurate, speak to a total lack of awareness of or concern for the kind of “insider threat” detection measures that now exist in most, if not all, Intelligence Community components:

Why did Winner not use a truly secure means of contacting The Intercept? Why did she select this particular document? Why did she not contact a whistleblower advocacy organization for legal advice before even contemplating such a rash act?

The Media Outlet: The Intercept

In a statement published a short time ago, The Intercept claimed that

On June 5 The Intercept published a story about a top-secret NSA document that was provided to us completely anonymously. Shortly after the article was posted, the Justice Department announced the arrest of Reality Leigh Winner, a 25-year-old government contractor in Augusta, Georgia, for transmitting defense information under the Espionage Act. Although we have no knowledge of the identity of the person who provided us with the document, the U.S. government has told news organizations that Winner was that individual.

That statement is at odds with the search warrant affidavit quoted above, which claims that Winner was in “email contact” with the “News Outlet” (The Intercept).

Who’s telling the truth here vis-à-vis Winner’s alleged email contact with The Intercept–the Department of Justice or the paper? Could Winner have emailed the wrong reporter at The Intercept, leaving the actual story authors in the dark that she’d contacted the paper? Did Winner’s email bounce? And why did Intercept staff share an exact copy of the purloined document with NSA officials in the first place? Why didn’t they simply read key passages of the document over the phone, or include extracts in an email to NSA officials?

Given that Winner printed the document and thus left investigators a digital trace of her actions, perhaps The Intercept’s decision to share a scanned version of the document wouldn’t have mattered–but maybe it would have, and why endanger a source (anonymous or otherwise) by behaving in such an irresponsible way with the document?

The Document: Some Answers, More Questions

The NSA report that Winner leaked contained a number of new details about the alleged Russian hacking campaign, including a flow chart that lays out in greater detail the precise mechanism the attackers used not only in the spearphishing campaign but also in their attempts to gain access to voter-related data and possibly voting machines themselves. Here’s the key paragraph from the story, which quotes Alex Halderman, director of the University of Michigan Center for Computer Security and Society, at length:

“Usually at the county level there’s going to be some company that does the pre-election programming of the voting machines,” Halderman told The Intercept. “I would worry about whether an attacker who could compromise the poll book vendor might be able to use software updates that the vendor distributes to also infect the election management system that programs the voting machines themselves,” he added. “Once you do that, you can cause the voting machine to create fraudulent counts.”

How long has the Intelligence Community known that Putin ignored Obama’s warnings not to interfere in our election? How much of the vulnerability-related information has been shared with municipalities that employ voting technology susceptible to the kinds of attacks described by NSA? Are states and localities currently assessing whether their electronic voting machine and voter roll infrastructure is vulnerable to these kinds of attacks?

When I worked for then-Rep. Rush Holt (D-NJ), one of his major concerns was the security–or lack thereof–of electronic voting machines and the infrastructure that supports them. It’s been nearly 10 years since Holt had Princeton professor and computer scientist Ed Felten conduct a live hack of a Diebold voting machine for the Committee on House Administration, an event that should have served as a wake-up call about the potential for digital election fraud by one or more hostile actors. The leaked NSA assessment underscores that such cyber vulnerabilities in our election process remain. Whether one accepts or rejects the Intelligence Community assessment that Vladimir Putin ordered his intelligence services to interfere in our election is almost beside the point now. What’s clear is that this digital vulnerability is real and we ignore the implications at our peril.


Any serious effort to improve the tax system inevitably comes up against dubious assertions that such changes won’t improve economic growth or reduce tax avoidance, and will therefore not be “revenue neutral” but will simply increase deficits and debt for no reason.

The easiest way to block growth-oriented tax reforms is to insist that any such changes must be “revenue neutral” even in the short run.  However, that goal typically relies on uncritical acceptance of dubious estimates of (1) how much “baseline” revenue the existing system will bring in over 10-20 years, and (2) how much revenue a better tax system would bring in under the conventional and official assumption that higher or lower marginal tax rates on added income have no significant effect on anything.

As Harvard economist Greg Mankiw importantly notes, “A key question is how revenue neutrality is to be judged.” 

Before Congress could even attempt to be “revenue neutral” they must first have credible estimates of future revenue under the current tax regime.  Unfortunately, the Congressional Budget Office and Joint Committee on Taxation have so far provided only incredible projections.  

Here are links to my critiques of official revenue projections for corporate and individual income taxes. 

For Congress to judge “revenue neutrality” on the basis of these extremely flawed hyper-static CBO/JCT estimates would be economically and fiscally irresponsible.

In the latest issue of Survival, Hal Brands and Peter Feaver address an important debate in American foreign policy circles. Was the rise of ISIS inevitable, or was it the result of misguided U.S. policies? Most agree it is the latter, but the dispute gets fraught on the question of whether it was U.S. military interventionism or inaction that deserves the blame. Some say it was the invasion of Iraq that led to the rise of ISIS. Others insist it was Obama’s decision to withdraw from Iraq in 2011.

Brands and Feaver use counterfactual analysis to assess whether different U.S. policy decisions at four “inflection points” could have nipped the rise of ISIS in the bud. The first of these points was the Bush administration’s decision to invade Iraq in 2003. The other three occurred during the Obama administration and include the decision not to press Iraq to allow the United States to leave behind a significant number of U.S. troops, the decision not to intervene aggressively early on in the Syrian civil war, and the decision not to intervene more forcefully to help the government of Iraq defeat ISIS before it took the city of Mosul.

The authors take a middle road, arguing that, “the rise of ISIS was indeed an avertable tragedy,” but that both restraint and activism share the blame. Had U.S. policymakers not invaded Iraq in 2003, or been more aggressive in Iraq and Syria from 2011-2014, they argue, “ISIS might not have emerged at all.”

With suitable analytic humility, however, the authors warn against overconfidence that any of the alternatives would have made a decisive difference to the eventual outcome:

We find, for instance, that limited intervention in Syria in 2011-13 might have had benefits, but it probably would not have shifted the course of the conflict so fundamentally as to head off ISIS’s rise. Likewise, not invading Iraq in 2003 would have left the United States saddled with the costs of continuing to contain that country, whereas striking ISIS militarily in late 2013 or early 2014 might have weakened that organization militarily while exacerbating the political conditions that were fueling its rise. Intervening more heavily in Iraqi politics in 2010 in order to bring about a less sectarian government than that which ultimately emerged, and leaving a stay-behind force in Iraq after 2011, represent a fairly compelling counterfactual in the sense that such policies could have had numerous constructive effects. But even here, choosing a different path from the one actually taken would have meant courting non-trivial costs, liabilities, uncertainties and limitations (p. 10).

We applaud Brands and Feaver, who served in the Obama and George W. Bush administrations, respectively, for their attempt to “move away from polemical and polarized assessments focused on assigning blame, and toward more granular, balanced analysis based on a fairer-minded view of what went wrong (p. 10).” At the same time, there is plenty of room for disagreement over their interpretation of the “what ifs” of such a complex historical question.

The most problematic issue is their treatment of the invasion of Iraq. By bundling the invasion of Iraq with the other three inflection points, the authors introduce a false sense of equality among them, making it seem as if they were all the same sort of decision, and of equal magnitude. In so doing, they obscure the most critical lesson from not only the invasion of Iraq but from the entire war on terror: the fact that American military intervention creates more problems than it solves, leading to destabilization and the amplification of civil conflicts.

To their credit, Brands and Feaver do acknowledge, in the conclusion, that “the most fateful choice was also the oldest one: the decision to invade Iraq in 2003, followed by mismanagement of the occupation” (p. 41) but they then temper that note by arguing that “it is not correct to claim that the invasion of Iraq set in motion forces that led ineluctably to the problems that the United States has faced since mid-2014.”

In a strict sense, of course, this is true. Other things could, in theory, have happened to blunt the rise of ISIS. But only a decision not to invade Iraq in 2003 would clearly and unequivocally have averted the rise of ISIS. The reason is simple: the single clearest cause of the rise of ISIS was the invasion of Iraq. As President Obama explained in 2015, “ISIL is a direct outgrowth of Al-Qaeda in Iraq, that came out of our invasion, which is an example of unintended consequences, which is why we should generally aim before we shoot.” David Kilcullen, who worked on counterterrorism at the State Department in 2005-06 and was senior adviser to General David Petraeus at the height of the Iraq surge in 2007-08, put it even more bluntly: “There would be no ISIS if we had not invaded Iraq.”

It is also true that mismanagement during the Iraq war made things worse. Most notably, the decision to dismantle the Iraqi army and “de-Ba’athify” the post-Saddam government made enemies out of former Ba’athists, many of whom would later join the insurgency. But the invasion and occupation itself was the main ingredient that made Iraq a magnet for Muslim militants from throughout the Middle East and a hotbed of insurgency and terrorism. By 2006, the U.S. National Intelligence Estimate on Trends in Global Terrorism found that the Iraq war was “shaping a new generation of terrorist leaders and operatives.” The war had “become the ‘cause celebre’ for jihadists, breeding a deep resentment of U.S. involvement in the Muslim world and cultivating supporters for the global jihadist movement.”

By contrast, though Obama could have made greater efforts in 2010 to arrange a leave-behind force, there was no way in 2010 that the administration could have predicted that its failure to do so would lead to the emergence of ISIS. Moreover, even had 20,000 American troops remained in Iraq, blocking some of the immediate avenues for ISIS to emerge, their presence would have done nothing to alleviate the motivations behind its rise. In fact, as Brands and Feaver acknowledge, the continued high-visibility presence of U.S. troops would potentially have exacerbated many of the grievances that gave the group its energy and raison d’être. As the endurance of the Taliban in Afghanistan has shown, the United States might have stayed in Iraq indefinitely without “defeating terrorism” and thus without resolving the problem of how to leave without risk.

Given that it was in the interest of the United States to leave Iraq at some point, 2010 looked about as good a time as one can imagine for doing so. As Brands and Feaver point out, Al Qaeda in Iraq – the predecessor to ISIS – had been seriously degraded over the previous three years. Certainly the administration must have expected some level of increased instability after the withdrawal, but no one was arguing that a withdrawal would result in the stunning rise of ISIS. On the other hand, many people – including experts in the Bush State Department – predicted much chaos, violence, and civil conflict resulting from toppling Saddam Hussein.

The bottom line is that all three Obama-era counterfactuals that Brands and Feaver explore involve battling ISIS (in either its current or previous iterations) more forcefully and earlier in risky military interventions that themselves would undoubtedly have wrought future negative unintended consequences and blowback. The lesson to draw about how to avoid future monsters like ISIS is not that sometimes America should be more eager to use force, but that military action, especially in the Middle East, inevitably delivers negative unintended consequences, and so should remain an absolute last resort.

If libertarians were in charge of legalizing marijuana, their first instinct would be to reach for an eraser.

That is, libertarians would simply eliminate existing laws that outlaw marijuana, rather than “design” the marijuana market by establishing a licensing board, capping the number of legal marijuana retailers, and the like.

Actual state marijuana legalizations, however, have generally capped the number of retail establishments and put a government board in charge of doling out the lucrative licenses to run them.

Predictably, this means that well-connected, white entrepreneurs benefit at the expense of African-Americans: 

Darryl Hill, hailed for integrating college football in his youth half a century ago, was a successful entrepreneur with no criminal record and plenty of capital when he applied for a license to grow marijuana in Maryland — a perfect candidate, or so he thought, to enter a wide-open industry that was supposed to take racial diversity into account.

To his dismay, Hill was shut out on his first attempt. So were at least a dozen other African American applicants for Maryland licenses. They were not told why.

The good news is that, in this instance, Hill seems to have circumvented the apparent bias:

… the 73-year-old great-grandfather who was the first black football player at the University of Maryland sought an ally in his quest to help other minorities — and himself — break into the closed ranks of cannabis cultivation and sales.

Hill’s new business partner, Rhett Jordan, happens to be a groundbreaker in his own right. The 33-year-old Colorado industry pioneer, who is white, founded one of the largest legal marijuana operations in the nation.

But Hill’s success should not obscure licensing’s harm: restricted supply, higher prices, and crony capitalism.

This week, the Supreme Court issued a unanimous opinion finding what should have been obvious from the start: when a government agency requires someone to turn over money to the U.S. Treasury as a result of that person being found guilty of wrongdoing, that constitutes a penalty.

In Kokesh v. SEC, the Securities and Exchange Commission argued that disgorgement is not a penalty or forfeiture and therefore, due to a particular law’s limitations, the SEC is entitled to bring cases that are even decades old.  Disgorgement is a remedy that requires the defendant to pay back something that was obtained through unlawful means.  Under 28 U.S.C. § 2462, the federal government has five years in which to bring any “action, suit, or proceeding for the enforcement of any civil fine, penalty, or forfeiture, pecuniary or otherwise.” The SEC has held that disgorgement falls into none of these categories and therefore there is no limit on how long the agency has to bring a case in which it is seeking disgorgement.
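The five-year window in § 2462 is a simple timeliness rule, which can be sketched as follows (the dates are hypothetical, and real accrual doctrine is more nuanced than this illustration):

```python
# Sketch of the timeliness rule in 28 U.S.C. § 2462: the government has
# five years from when a claim accrues to bring an enforcement action.
from datetime import date

LIMITATION_YEARS = 5

def action_is_timely(claim_accrued: date, action_filed: date) -> bool:
    """True if the enforcement action was filed within the five-year window."""
    deadline = claim_accrued.replace(year=claim_accrued.year + LIMITATION_YEARS)
    return action_filed <= deadline

# Hypothetical dates: conduct from 2009 is reachable in a 2014 filing,
# but conduct from 2005 is time-barred.
print(action_is_timely(date(2009, 10, 27), date(2014, 1, 15)))  # True
print(action_is_timely(date(2005, 3, 1), date(2014, 1, 15)))    # False
```

Under the SEC's pre-Kokesh position, disgorgement claims simply bypassed this check; the Court's holding brings them inside it.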

As we argued in our amicus brief filed on behalf of Charles Kokesh, it is a well-established principle in law that cases must be timely to be just.  When actions are fresh, access to evidence will likely be robust.  Witnesses’ memories are more likely to be clear.  Both sides will likely have the relevant documents at hand.  The court will have the best chance of getting to the truth.  But when a case is stale, memories will have faded, documents will have been lost or innocently destroyed, and it will be uncertain whether the existing evidence will present the most accurate picture of what really happened.

In ruling in favor of Mr. Kokesh a unanimous court found that disgorgement is indeed a penalty.  In writing for the Court, Justice Sotomayor announced two factors that determine whether a payment is a penalty.  First, it must be determined whether the payment has been imposed to redress a wrong to the public, or a wrong to an individual.  A penalty is imposed to redress the former, not the latter.  “This is because penal laws, strictly and properly, are those imposing punishment for an offense committed against the State.”  (internal quotations omitted.)  The second question is whether the payment was imposed to deter future wrongdoing.

What is remarkable about disgorgement in SEC cases is the fact that often the defendant is required to pay back more than was even received as a result of wrongdoing.  In the case of insider trading, for example, an insider can be liable for the full amount of profit made by others who traded on the inside information.  This is money that the insider never actually saw.  This means disgorgement does not simply undo the effects of the wrongdoing, making the insider pay back money illicitly earned, it puts the insider in a materially worse position.

Disgorgement in SEC cases, the Court noted, “bears all the hallmarks of a penalty: It is imposed as a consequence of violating a public law and it is intended to deter, not to compensate.”  Because of this, it falls within the confines of 28 U.S.C. § 2462 and therefore the SEC may not bring an action seeking disgorgement more than five years after the events have occurred.  This is the right result, and it is gratifying that the full Court recognized the fact that a penalty by any other name still involves the government punishing a lawbreaker.  Calling it a new name does not change its essential nature.  

Donald Trump fired off several tweets this morning about his executive order barring for at least 90 days all immigration or travel to the United States for six Middle Eastern and African nationalities, stating that he thinks it should actually be much broader. I have previously explained why President Trump’s national security justification for the order is completely devoid of evidence. But another fact that we highlighted in our amicus brief deserves attention here: that the order’s supposed “security” purpose is based on an entirely false legal premise.

The executive order claims that it is suspending entries to give the Secretary of Homeland Security time to study “whether, and if so what, additional information will be needed from each foreign country to adjudicate an application by a national of that country for a visa, admission, or other benefit under the INA (adjudications) in order to determine that the individual is not a security or public-safety threat.” It justified the specific countries by stating that their governments have shown less “willingness or ability to share or validate important information about individuals seeking to travel to the United States.”

Even if his claim about all six countries were true, this justification is entirely without merit because the applicant, not the government, has the burden to prove their eligibility under the law. In other words, the government has no obligation whatsoever to identify or gather information on the behalf of the applicant simply to “adjudicate” an application. 8 U.S.C. 1361 could not be clearer on this point:

Whenever any person makes application for a visa or any other document required for entry, or makes application for admission, or otherwise attempts to enter the United States, the burden of proof shall be upon such person to establish that he is eligible to receive such visa or such document, or is not inadmissible under any provision of this chapter, and, if an alien, that he is entitled to the nonimmigrant, immigrant, special immigrant, immediate relative, or refugee status claimed, as the case may be. If such person fails to establish to the satisfaction of the consular officer that he is eligible to receive a visa or other document required for entry, no visa or other document required for entry shall be issued to such person, nor shall such person be admitted to the United States unless he establishes to the satisfaction of the Attorney General that he is not inadmissible under any provision of this chapter.

Thus, if someone fails to obtain identity documents or criminal history certified by the relevant foreign authorities—as the law requires—then the consular officer can still adjudicate the application by issuing a denial. The U.S. government need not affirmatively determine anything about the applicant. Indeed, even if officers conclude that they know nothing about the applicants, this lack of knowledge still wouldn’t prevent them from denying the visa. Applicants must gather the relevant proof to establish their identity and eligibility on their own. If their foreign governments are uncooperative or unreliable, that redounds to the detriment of the visa applicant, not the U.S. government. It would certainly not prevent an individualized adjudication of their application.
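The logic of § 1361 is worth spelling out because the executive order's premise inverts it. A minimal sketch (my illustration, not statutory text):

```python
# Hypothetical sketch of the burden-of-proof logic in 8 U.S.C. § 1361:
# the officer never has to gather evidence. An unmet burden simply
# yields a denial, which is itself a completed adjudication.
def adjudicate(applicant_established_eligibility: bool) -> str:
    if applicant_established_eligibility:
        return "visa issued"
    # Missing or unverifiable information is not a barrier to adjudication:
    return "visa denied"

print(adjudicate(False))  # visa denied -- the application was still adjudicated
```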

But if a foreign government won’t cooperate, doesn’t that by definition mean that their nationals can’t ever meet their burden of proof? Not at all. A person may be a national of a certain country, yet have lived for many years apart from it. Indeed, in the case of Syria and Iran, a national may have literally never lived in their country at all yet possess such “nationality” as a legal matter. According to the United Nations, 11.2 million nationals of the six banned countries lived outside their country of origin in 2015. (Literally no applicant can actually travel directly from five of these countries because the U.S. has closed its embassies and consulates.)

Many of these people could easily fulfill their burden of proof through the country of residence. Other individuals could meet their burden because they had previously obtained evidence from their home governments before the outbreak of civil war or because the U.S. government knows their identities for other reasons, such as scientific achievements, past U.S. travel, or cooperation with counterterrorism efforts.

Nor is there any evidence for the view that consular officers have failed to enforce the burden of proof or that they are not reacting to changes in the availability of evidence. As the table below shows, the visa refusal rate for the most common visa class is much higher for each of the nationalities impacted by the ban (plus Iraqis, whom the first order barred) than for other nationalities. This shows that officers have taken into account these nationals’ unique circumstances. Moreover, the visa refusal rates increased, as you would predict based on the law, after the outbreak of civil wars in Libya, Syria, and Iraq, when many documents would have been lost and reliable government records became more difficult to obtain.

Table: B-1 Visa Refusal Rate (% of Applicants) by Nationality

Source: Department of State, Adjusted Visa Refusal Rate

In sum, as courts decide whether the executive order was completely forthcoming about its purpose, they should consider the fact that its purported premise is, in fact, false. The government does not need to gather information to adjudicate visas because it has no legal obligation to gather anything; applicants bear the burden of proof. Nor can the government claim that no national of these countries could ever meet that burden, or that consular officers are not enforcing it. Thus, despite his tweets to the contrary, the president is left without a true justification for this order.

The Trump administration will highlight its infrastructure agenda this week. As outlined in its recent budget, the administration plans to reduce regulations on construction projects and attract private investment to traditionally government activities, such as air traffic control (ATC).

Trump will “deliver remarks in the White House Rose Garden about his vision for separating air traffic control from the federal government,” and Transportation Secretary Elaine Chao will testify to Congress on the issue. The administration has just released principles for ATC reform.

The Hill says that ATC reform has run into a “buzz saw of opposition on Capitol Hill,” but that is not a fair characterization. There is always opposition to any legislation that reduces the government’s role in anything. That’s Washington. But ATC reform has momentum, and a bill has been passed out of the House transportation committee to move ATC operations from the bureaucratic Federal Aviation Administration (FAA) to a private, nonprofit corporation.

The airlines are for it, the key labor union is for it, aviation experts are for it, and the second-largest nation on earth did it. Canada privatized its system in 1996, and today the nonprofit Nav Canada is on the leading edge of ATC efficiency and innovation. The image below shows Iridium satellites that will form the basis of an advanced navigation system for aircraft called Aireon. Nav Canada leads the revolutionary project in an international partnership—a partnership that does not include the FAA. The system will generate “more efficient use of airspace, substantial fuel savings, fewer delays and significantly enhance safety over large parts of the world.”

What is the opposition The Hill refers to? The corporate jet lobby—the National Business Aviation Association (NBAA)—is against reform, and it raises the spectre of higher fees under a privatized system. But aircraft charges under the privatized Canadian system have fallen, not risen. The latest data show that “Nav Canada has seen its inflation-adjusted user fees fall 45 percent lower than the aviation taxes they replaced,” notes Marc Scribner of CEI.

The opposition of NBAA’s leadership to reform is short-sighted. Over the long term, NBAA members will be best served by an advanced and dynamic private ATC system, not one mired in bureaucracy and unstable government funding. NBAA members should research the successful Canadian reforms themselves because the record is clear.

Kudos to President Trump and Secretary Chao for rebuffing the special interests on this issue, and pursuing reforms to the overall benefit of the aviation industry and flying public.

For an overview of ATC reform, see here. For Reason’s resources on the issue, see here.

I recently questioned two connected remarks by Wall Street Journal reporter Richard Rubin that (1) “Each percentage-point reduction in the 35% corporate tax rate cuts federal revenue by about $100 billion over a decade” and that (2) “independent analyses show economic growth can’t cover all the costs of rate cuts.”

That first remark–about each percentage-point reduction in the rate losing $100 billion over a decade–is an interpretation of pages 178-79 from a Congressional Budget Office (CBO) report on “Options for Reducing the Deficit.”

But the CBO was just talking about raising the corporate rate by one point, not cutting it 10-20 points. That can’t be converted into a rule of thumb because each percentage point reduction in the top corporate tax rate can’t lose the exact same amount of dollars. A percentage point reduction in a 35% rate loses more static revenue than a percentage point reduction in a 30% rate, which loses more than a percentage point reduction in a 25% rate, and so on. 

Yet even for a single percentage point, I called the $100 billion 10-year projection a “bad estimate” because it assumes zero change in the economy and zero change in tax avoidance (“elasticity”).

The Table compares the CBO/JCT static estimates of what might happen with a percentage-point increase in the corporate tax rate to their baseline “projections” of what corporate revenues might look like under current tax law, assuming 1.9% GDP growth. The line below the baseline adds static estimates (“from the staff of the Joint Committee on Taxation” or JCT) of the revenue gain from raising four graduated corporate tax rates from 15-35% to 16-36%. 

The average tax rate is below the top marginal rate because of reduced 15-25% rates on small profits, credits for foreign taxes, deferral of taxes on unrepatriated foreign profits, and deductions for interest and business expenses. Goldman Sachs estimates the average tax as 28% under current law and 24% (not 20%) under the Ryan-Brady tax.

If the average tax rate is 28%, then a 1 percentage-point increase in all four marginal rates might be expected to raise static revenue by about 2.8%. Sure enough, the JCT claims a 1 percentage-point increase in corporate rates would eventually raise revenues by roughly 2.8%, suggesting those estimates are entirely static. That is, they assume zero impact on GDP and zero elasticity of taxable income.
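As a back-of-the-envelope check (my own sketch, not CBO or JCT arithmetic, with an illustrative tax base), the static logic works like this: if the tax base is fixed and the average rate scales proportionally with the marginal rates, a 1-point rise on a 35% top rate lifts revenue by 1/35, or about 2.9 percent — essentially the ~2.8 percent figure:

```python
# Back-of-the-envelope static arithmetic (illustrative numbers, not CBO figures).
# Under a purely static assumption the taxable-income base is fixed, so
# revenue scales proportionally with the tax rate.
base = 1_000.0   # hypothetical fixed corporate tax base ($billions)
avg_rate = 0.28  # average effective rate cited in the text

revenue_now = avg_rate * base
# A 1-point rise in all marginal rates (top rate 35% -> 36%) scales the
# average rate proportionally under the static assumption: 0.28 * 36/35.
revenue_after = avg_rate * (36 / 35) * base

pct_gain = (revenue_after - revenue_now) / revenue_now * 100
print(f"Static revenue gain: {pct_gain:.1f}%")  # about 2.9%, close to JCT's ~2.8%
```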

Despite publishing these static revenue estimates, the CBO analysis does a good job of explaining why they are seriously flawed. Bad bookkeeping is no substitute for good economics.

What follows is the CBO analysis of the economics of a higher corporate tax rate, with emphasis added in bold:

Increasing corporate income tax rates would make it even more advantageous for firms to organize in a manner that allows them to be treated as an S corporation or partnership…. Raising corporate tax rates would also encourage companies to increase their reliance on debt financing because interest payments… can be deducted…. Moreover, the option [of raising the tax rate] would discourage businesses from investing, hindering the growth of the economy.

Higher rates in the United States influence businesses’ choices about how and where to invest; to the extent that firms respond by shifting investment to countries with low taxes as a way to reduce their tax liability at home, economic efficiency declines…. The current U.S. system also creates incentives to shift reported income to low-tax countries without changing actual investment decisions. Such profit shifting erodes the corporate tax base and requires tax planning that wastes resources. Increasing the top corporate rate to 36 percent (40 percent when combined with state and local corporate taxes) would further accentuate those incentives to shift investment and reported income abroad.

How could all of those changes possibly fail to affect the amount of revenue collected?

Hindering growth of the economy by discouraging business investment reduces revenue. Shifting reported income into other countries and into pass-through entities erodes the tax base and reduces revenue. Increasing debt and other deductible expenses (fancier offices and lunches) reduces revenue. Yet the static revenue estimates in the Table obviously take none of this into account–ignoring both macroeconomic effects of higher tax rates on investment and GDP growth and microeconomic “elasticity” effects on tax avoidance.

Since the CBO explains how and why a higher corporate tax rate has numerous adverse effects on revenues, it follows that a lower corporate tax rate has numerous beneficial effects on revenues. In fact, the CBO analysis explains quite well why CBO/JCT estimates of the effects of a lower corporate tax rate on revenues are worthless.

Richard Rubin wrote that “independent analyses show economic growth can’t cover all the costs of rate cuts.” But the estimates in the Table, which he cites, pretend economic growth can’t cover a single dollar of those badly estimated costs. Besides, as the CBO explains, the effect of tax rates on revenues involves much more than just economic growth.

Tax Foundation economist Alan Sloan figures that “for a corporate income tax cut to 15 percent to be self-financing [over 10 years], it would have to raise the level of growth to 2.8 percent on average,” or 0.9% faster than the 1.9% the CBO projects. A 2.8% growth rate doesn’t seem ambitious compared to the 1947-2006 average of 3.6%. Yet the Tax Foundation “model predicts something more like 0.4 percent over the budget window: a sustained period of 2.3 percent growth instead of 1.9 percent growth.”

This is an example of what Mr. Rubin meant by independent analysts predicting that “economic growth can’t cover all the costs.” Yet faster economic growth would cover nearly half the cost, in Sloan’s estimation. CBO/JCT static revenue estimates, by contrast, always assume no effect at all. Whether tax rates are doubled or cut in half, JCT revenue estimates will pretend GDP growth remains unchanged.

Tax Foundation estimates of the revenue feedback from faster GDP growth are a huge improvement over static JCT estimates, yet they too remain incomplete. They do not account for microeconomic “elasticity of taxable income” of the sort the CBO wrote about–such as shifting income and/or investment abroad, setting up pass-through entities, and maximizing deductions for interest and office expenses. 

My previous blog noted that Treasury Department economists find the elasticity of corporate taxable income is 0.5 for smaller corporations, so when the tax rate goes down reported taxable income goes up. A paper for the Center for European Economic Research finds a higher 0.8 elasticity for multinationals: “Hence, reported profits decrease by about 0.8% if the international tax differential [e.g., between U.S. and foreign rates] increases by 1 percentage point.”
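The elasticities above translate into simple semi-elasticity arithmetic. A minimal sketch (the function name and the 10-point scenario are my own illustration, not from either paper):

```python
# Semi-elasticity arithmetic for reported taxable income (illustrative).
def reported_profit_change(elasticity, rate_diff_change_pp):
    """Approximate % change in reported profits when the international tax
    differential changes by rate_diff_change_pp percentage points."""
    return -elasticity * rate_diff_change_pp

# The quoted result: with elasticity 0.8, a 1pp wider differential cuts
# reported profits by about 0.8%.
print(reported_profit_change(0.8, 1))    # -0.8

# Hypothetical scenario: a 10-point U.S. rate cut narrows the differential
# by 10pp, so reported U.S. profits would rise roughly 8%.
print(reported_profit_change(0.8, -10))  # 8.0
```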

Lowering the super-high U.S. corporate tax rate will not reduce revenues from corporate and other taxes by nearly as much as crude rules of thumb may suggest, if revenues decline at all. And the reason is not entirely the result of greater investment, entrepreneurship, and economic growth, but also a reduction in myriad wasteful ways of avoiding this country’s uniquely dispiriting business tax.

Whether people in a society think that most others can be trusted seems to predict many positive social and economic outcomes. A common criticism of liberalized immigration is that the newcomers come from societies with low trust, so they might bring their low-trust attitudes with them, pass them on to their descendants, and leave our society with less trust, potentially reducing future economic growth.

Economist Bryan Caplan ran a recent exercise showing that immigrants and their descendants make substantial gains in trust, virtually assimilating by the second generation. In a similar vein, my research shows that trust levels among the second generation are basically the same as those of Americans whose ancestors have been here for at least four generations, according to survey responses on three related questions.

Caplan’s post provides a possible answer to the oddest question raised in my post: Why do third-generation Americans have the highest trust scores? Based on cliometric research, Caplan argues that the descendants of slaves in the United States have far lower levels of trust, similar to how African societies that were most afflicted by the slave trade have enduringly low trust rates today. All of the descendants of slaves in the United States have ancestors who arrived on our shores more than four generations ago, as legal slave importation ended in 1808.

Excluding black respondents from the General Social Survey (GSS) on the question “Generally speaking, would you say that most people can be trusted or that you can’t be too careful in life?” does improve trust for Americans who can trace their lineage in the United States back at least four generations (all of their grandparents were born in the United States), but the biggest trust improvement is for the immigrants themselves. I limited my sample to the years 2004 to 2014 to focus on more recent immigrants. Figure 1 presents my original findings that include respondents of all races. Figure 2 excludes all black respondents. 

Figure 1

Trust, Respondents of All Races Included, Years 2004-2014

Source: General Social Survey.

Figure 2

Trust, Black Respondents Excluded, Years 2004-2014


Source: General Social Survey.

When black respondents are excluded, “can trust” for Americans whose ancestors have been here at least four generations goes from 32.7 percent to 37 percent while “cannot trust” drops from 62.7 percent to 58.1 percent. The answer “depends” shrinks the most, from 8.5 percent to 1.2 percent. For immigrants, the “can trust” response shoots up from 22.6 percent to 36.1 percent while “cannot trust” drops from 68.9 percent to 62.7 percent. The GSS survey question shows that non-black immigrants have trust scores about the same as Americans whose grandparents were all born in the United States.

As part of its 2018 budget proposal, the Trump administration has introduced a plan to improve the nation’s infrastructure. The administration intends to reduce regulatory barriers on infrastructure projects and encourage greater private investment. It has also proposed increasing federal spending on infrastructure by $200 billion over 10 years.

A new Cato study provides input to the debate by examining infrastructure ownership and funding. Some people assume that the federal government plays the main role in infrastructure, but the states and private sector own 97 percent of U.S. nondefense infrastructure, and they fund 94 percent of it.

However, the federal government is the tail that wags the dog—its regulations, taxes, and subsidies affect the level and efficiency of state, local, and private infrastructure investment. The study argues that reforms to these federal interventions and privatization are the paths to higher-performance infrastructure.

President Trump’s decision to withdraw from the Paris Climate Accord was the latest in a steadily expanding list of actions that highlight his contempt for multilateral diplomacy in U.S. foreign policy. This does not mean that Trump is an isolationist. He clearly favors bilateral engagement with other countries and doesn’t mind using American military power to wage war in the Middle East and apply pressure to North Korea. The question is, what does Trump’s withdrawal from the Paris agreement mean for other areas of multilateral engagement?

A preference for bilateral over multilateral diplomacy may be appropriate in some cases, but the bilateral approach is not ideal for combating, for example, nuclear proliferation. Trump’s disdain for multilateral diplomacy is especially worrisome when combined with the deepening militarization of U.S. foreign policy. These two emerging trends simultaneously endanger the Iran nuclear deal, a major success for multilateral diplomacy and nuclear nonproliferation, while increasing the probability of armed conflict should the deal fail.

The Iran deal is a triumph of multilateral diplomacy, involving the United Nations’ Permanent Five (United States, United Kingdom, France, Russia, and China), Germany, and the European Union. This level of international involvement enhances both the legitimacy and strength of the agreement, which Iran has complied with since implementation began in January 2016. If the Trump administration wants to successfully renegotiate the deal, it would need the buy-in of the partner countries, a condition that becomes harder to achieve as Trump alienates many of our Iran deal partners with actions such as withdrawing from the Paris Climate Accord.

If Trump truly wants to renegotiate the Iran deal (and not just unilaterally withdraw from it), then he will need the support of the very countries that he is repeatedly frustrating with his characteristically undiplomatic actions on the world stage.

The other major nuclear challenge facing the Trump administration is North Korea. So far, the administration has tried to rein in the North’s nuclear weapons and ballistic missile programs through shows of military force and sanctions. Trump also wants China to do more to pressure North Korea. Pyongyang does not seem deterred by this approach. While there has not been a nuclear weapons test since Trump took office, there has been a steady march of successful ballistic missile tests and Kim Jong Un continues to place great value in his nuclear arsenal.

A multilateral diplomatic approach failed to bring North Korea to heel in the 2000s, so it makes sense that Trump would not place much confidence in a similar approach today. The administration has made no serious overt attempt at multilateral diplomacy besides introducing new sanctions via the United Nations. If the current approach of pressure fails to halt North Korea’s progress, the administration could choose to double down on its approach or try a different strategy that makes greater use of multilateral diplomacy. Trump’s aversion to multilateral diplomacy suggests that the administration is primed to keep ratcheting up pressure rather than change course.

While this latest withdrawal from a multilateral initiative is not the end of the world, it arguably has worrisome implications for nuclear nonproliferation. Multilateral cooperation is not necessary to solve every foreign policy problem, but it is incredibly valuable for preventing the spread of nuclear weapons. The sooner Trump and his advisors realize this, the better. 

In response to the U.S. withdrawing from the Paris climate treaty, I’ve issued the following statement:

The Paris climate treaty is climatically insignificant. EPA’s own models show it would only lower global warming by an inconsequential two-tenths of a degree Celsius by 2100. The cost to the U.S. – in the form of required payments of $100 billion per year to the developing world – is too great for the inconsequential results. These very real expenses will consume money that could be used by the private sector to fund innovative new technologies that are economically sound and can power our society with little pollution.

Because of our private investments in technological innovation, America leads the world in reducing carbon dioxide emissions from power plants. We did that without Paris, and we will continue our exemplary leadership without it.

While Paris will be with us for the near future as the process of withdrawing transpires, this is a step in the right direction. If you’d like to read more on the science behind Paris, take a look at this recent piece I wrote for The Hill, called “The Scientific Argument against the Paris Climate Agreement.”

The Supreme Court issued a ruling this week in the case of County of Los Angeles v. Mendez.  The case involved a police shooting and the ruling involves some technical legal analysis regarding the proper application of prior Supreme Court precedents.  In this post I want to take a step back from the technical legal discussion and highlight the facts of the case, which are quite sad.

In October 2010, Angel Mendez and his then-pregnant girlfriend, Jennifer Garcia, were dirt poor.  They lived in a one-room shack, made of plywood, in the backyard of a home owned by Paula Hughes in Lancaster, California.  On the awful day in question, the couple were not bothering anyone.  They were actually napping in their tiny shack when their world was suddenly shattered.

Without any announcement at all, a police officer entered the shack.  Startled, Angel got up and grabbed a BB gun that he kept in the shack to kill rodents and other pests.  The deputy then yelled “Gun!” to alert his fellow officers of potential danger.  In a moment, several police officers entered and opened fire, discharging a total of 15 rounds.  Both Angel Mendez and Jennifer Garcia were shot “multiple times and suffered severe injuries.”  Mr. Mendez’s right leg had to be amputated below the knee.

Two people were minding their own business, and in just a few moments the police were shooting at them.  The police did not accuse them of violating any law.  They were totally innocent.

Since Angel and Jennifer Mendez (they were subsequently married) knew they had done nothing wrong, they filed a lawsuit against the police officers and the police department. The government’s response was that it was just a tragic accident and no one was really to blame.  Since the BB gun resembled a real rifle, the deputies acted reasonably under the circumstances.

The Supreme Court, as noted above, addressed certain Fourth Amendment precedents that had been in place in the lower federal court, and remanded the case for further proceedings.  It remains to be seen whether the couple will receive the $4 million in damages that the district judge awarded, or whether that legal win will be reversed.  

It is worth noting here that the case does not have to run its course through additional legal proceedings.  A just government would never have waited for a lawsuit to be filed.  An apology and a lavish settlement offer would have been quickly forthcoming.  The County of Los Angeles can still do the right thing.  Rein in the county lawyers and agree to the $4 million in money damages that the federal district court previously awarded to Angel and Jennifer.

That would be a decent outcome for the Mendez family, to the extent that money can address such a horrific episode.

In our recent American Banker opinion piece, Heritage’s Norbert Michel and I argue that, if the Fed is really serious about shrinking its balance sheet, it had better quit paying interest on banks’ excess reserves (IOER) as well. How come? Because the current, relatively high IOER rate  is contributing to a strong overall demand for excess reserves, while a shrunken Fed balance sheet will mean a reduced supply of reserves. Reducing the supply of reserves while doing nothing to reduce banks’ demand for them is a recipe for demand-driven deflation, which is a monetary policy no-no.

Predictably (because it has happened every time I write on this topic) our article generated several comments to the effect that we didn’t know what we were talking about, because banks couldn’t possibly prefer the meager 100 basis points they can earn by holding reserves (or something less than that, if they are obliged to pay FDIC premiums) to the far greater amount they can earn by making loans.

The remarkable thing about these criticisms is that they all appear to deny that banks (or some banks, in any event) are in fact sitting on large amounts of excess reserves, and that they are, to that extent, settling for a return on those reserves of 100 basis points or less, instead of swapping reserves for other assets.

“For a bunch of ‘smart guys,’” our first commentator writes,

these fellows don’t understand how banking works. 10 times out of 10 a bank would rather make a loan than have the funds parked at the Fed. Of course the loans have to be of the quality that the bank would expect the borrower to be able to repay the loan.

So far as the evidence up to October 2008 is concerned, our commentator is on solid ground, for until then banks did in fact prefer making loans to holding reserves 10 times out of 10. But that has manifestly not been the case since October 2008, which happens to be when the Fed started paying IOER. Since then, as the figure below, comparing commercial banks’ loans and leases to their total deposits and Fed reserve balances, shows, the odds that a bank would rather make a loan than park funds at the Fed have been closer to 8 times out of 10:

Although our friend Chris (“r.c.”) Whalen, a highly-regarded bank consultant, is at least aware that reserves now make up a substantial share of commercial banks’ assets, he denies that this has anything to do with the fact that those reserves now yield a positive (if seemingly modest) return.

Whalen’s brief comment consists of two parts. The first, declaring that “The Fed is not paying banks not to lend. It prices the rate for excess reserves and Fed funds at a margin designed to preserve balance,” strikes me as nothing more than an exercise in empty semantics. Whatever “balance” the Fed may be trying to strike, the fact remains that it involves a substantial increase in banks’ overall demand for excess reserves. And if paying 100 basis points instead of zero doesn’t make reserves more desirable, all the more so when rates are generally low, then it is time for us economists to toss away everything we thought we knew about the workings of supply and demand.

The rest of Whalen’s comment is more substantial. “Even if you ended paying interest on excess reserves,” he observes, “the totals would not move because they are ultimately tied to a purchase of securities by the FOMC.” More substantial, but still wrong. As I tried to explain in a previous Alt-M article, although the Fed’s security purchases largely determine the total outstanding quantity of bank reserves (and currency), those purchases don’t determine banks’ quantity of excess reserves, which depends on what banks choose to do with reserves that come their way.

The point is perhaps best illustrated by looking at statistics from before 2008. Back then, banks hardly held any excess reserves; yet ongoing Fed security purchases (and sales) caused the total quantity of reserves to vary considerably, at least by pre-2008 standards:

Again, for emphasis: the size of the Fed’s balance sheet determines the quantity of total, but not excess, reserves. If excess reserves increase along with total reserves, as they have tended to do since the fall of 2008, that’s because banks have found it worthwhile to accumulate excess reserves, and not because they could not possibly get rid of them.

The last comment on our piece is by Wayne Abernathy, the ABA’s Executive VP for Financial Institutions Policy and Regulatory Affairs. In full it reads,

A major problem with the authors’ theory is the assumption that banks prefer to place money at the Fed rather than lend it out. In fact, banks would rather receive the 3.19% margin that they get on loans than the net 60 or 70 basis points that they get from the Fed. Loan demand, while growing, is not yet vigorous enough to absorb the flood of deposits that banks are still receiving. The banks’ choice is place the excess deposits with the Fed or tell their depositors “no thank you.”

In referring to “60 or 70” rather than 100 basis points as the net return on reserves, Abernathy evidently has domestic U.S. banks in mind, since U.S. branches and agencies of foreign banks, being exempt from FDIC charges, earn their 100 basis points free and clear. Pointing this out isn’t nit-picking, because  foreign banks have been holding a very large share of all outstanding excess reserves, in part precisely because reserves yield more to them than to their domestic counterparts. But the more important point is that, so far as both these foreign banks and the (mostly very large) U.S. banks holding large amounts of excess reserves are concerned, holding Fed balances is in fact more profitable, at the margin, than lending the funds those balances represent.

Evidently, so far as these banks are concerned, the relevant net margin isn’t 3.19%. So what is it? First of all, margins for the largest U.S. banks and foreign bank branches and agencies, which are the ones holding most of the reserves, are much lower than that for U.S. banks as a whole. Although the FRED database doesn’t supply separate net interest margin data for the very biggest U.S. banks (instead it gives the margin for banks with over $15 billion in assets, which is not a high enough threshold for the purpose), it does report the margin for New York banks, which is a better though still rough proxy. Here’s a chart comparing that measure to net interest margins for U.S. banks as a whole, and also to margins for banks in the Euro area, which are available in FRED only until 2014:

Evidently, if you are a New York bank, or a branch of a European bank, your idea of a decent net lending margin is, not 3.19%, as it might be for a “typical” U.S. bank, but something closer to 2% or (for the foreign banks) 1.5%.

In fact, many foreign banks found it profitable to acquire and retain excess dollar reserves for the sake of earning the modest spread between the risk-free IOER rate and lower effective Fed Funds and private repo rates. We know that, because they’ve been arbitraging that difference for some time. Foreign central banks, in the meantime, have been parking money at the Fed through its reverse-repo facility, which allows them to arbitrage the spread between what the facility pays and rates on short-term  T-bills. Before the Fed began paying banks to keep balances with it, these arbitrage opportunities simply didn’t exist.
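The arbitrage described above is simple to quantify. A minimal sketch with illustrative rates (the 9-basis-point spread and $1 billion position are my own assumptions, not figures from the post):

```python
# Sketch of the IOER arbitrage described above (illustrative rates only).
# A foreign bank branch borrows in the fed funds market below the IOER rate
# and parks the proceeds at the Fed, pocketing the spread essentially risk-free.
ioer = 0.0100        # 100 bps paid on reserve balances
fed_funds = 0.0091   # assumed effective fed funds rate, below IOER
borrowed = 1_000_000_000  # $1 billion borrowed overnight, rolled for a year

profit = borrowed * (ioer - fed_funds)
print(f"Annual arbitrage profit on $1B at a 9bp spread: ${profit:,.0f}")
```

Small spreads on large balances explain why foreign branches, exempt from FDIC charges, have held so large a share of excess reserves.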

The 3.19% margin to which Mr. Abernathy refers would, in any event, be irrelevant, since he himself allows that the demand for loans is not “vigorous enough” to actually support it! Here it’s worth keeping in mind that, whereas the demand schedule for bank reserves is, in effect, a horizontal line at whatever rate the Fed is paying, the demand schedule for loans slopes downward. Assuming a state of equilibrium, banks have already expanded their loan portfolios to the point where the net loan margin, whatever its value may be for banks’ entire loan portfolio, is no higher at the margin than the IOER rate. Beyond that, reserves dominate loans. In equilibrium, in other words, parking another dollar at the Fed pays more than lending it does. Were IOER reduced to zero again, on the other hand, banks would once again find lending more profitable than reserve-hoarding, and they would continue to make loans until the net margin on them (the marginal net margin, that is!) itself approached zero.

In case it helps, here is a picture of what I just said:

In the picture, the blue line is the (downward-sloping) demand schedule for bank loans, while the orange and grey lines are the Net Interest Margin for all bank loans and the IOER rate, respectively. The picture assumes a given level of total bank deposits, here set equal to $10 trillion. The vertical red line shows equilibrium quantity of bank loans with IOER=1, while the vertical green line shows the equilibrium quantity with IOER=0.  The numbers are, of course, only meant to be suggestive.  Since banks can’t dispose of reserves (though they can dispose of excess reserves by creating more deposits), a reduced IOER rate would in practice lead, other things equal, to growth in the level of both loans and deposits.
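The figure’s logic can also be sketched numerically. In the toy model below, all numbers are, like those in the picture, only meant to be suggestive: I assume a linear marginal net margin on loans, and banks lend until that margin falls to the IOER rate, holding the rest of their deposits as reserves:

```python
# Toy version of the figure described above; all numbers are illustrative.
DEPOSITS = 10.0  # total bank deposits, $trillions (as in the figure)

def marginal_net_margin(loans):
    """Assumed downward-sloping marginal net margin (%) on bank loans:
    4% on the first loans made, falling linearly as lending expands."""
    return 4.0 - 0.4 * loans

def equilibrium_loans(ioer):
    """Banks lend until the marginal net margin equals the IOER rate,
    capped by total deposits; remaining deposits sit as reserves."""
    loans = (4.0 - ioer) / 0.4
    return min(loans, DEPOSITS)

for ioer in (1.0, 0.0):
    loans = equilibrium_loans(ioer)
    print(f"IOER={ioer:.0f}%: loans ${loans:.1f}T, "
          f"reserves ${DEPOSITS - loans:.1f}T")
```

With IOER at 1%, lending stops short of total deposits and the gap sits at the Fed; with IOER at zero, lending expands until the marginal margin is exhausted, as the green line in the figure suggests.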

For those who continue nonetheless to doubt that the IOER rate has much bearing on banks’ demand for excess reserves, I offer, as a final exhibit, and without commentary, one last chart, this time comparing the difference between the IOER rate and the LIBOR rate, which I treat as a measure of the relative yield on reserves, to the overall ratio of reserves to commercial bank deposits:

[Cross-posted from Alt-M.org]

Stream Energy is a retail gas and electrical energy provider whose business model allows prospective salesmen to purchase the right to sell its products and to recruit new salesmen. In 2014, some former salesmen brought a class-action lawsuit against Stream for fraud, alleging that the company’s business model constituted an illegal pyramid scheme.

But unusually for a fraud claim, the plaintiffs argued that they didn’t need to identify any specific misrepresentations made by Stream that might have convinced particular class members to become salesmen. Instead, the plaintiffs claimed that simply offering membership in an illegally structured business would be fraud in and of itself, even if people joined with full knowledge of all risks and benefits.

A federal district court in Texas certified the class, so Stream appealed that decision to the U.S. Court of Appeals for the Fifth Circuit. A three-judge panel reversed the district court, holding that a class could not be certified because each plaintiff must individually prove that he was subject to a misrepresentation. But the entire Fifth Circuit then reheard the appeal and ruled for the plaintiffs. The court didn’t rule on whether Stream was in fact engaged in an illegal pyramid scheme, but did affirm the class certification, accepting the plaintiff’s theory that a single proof of illegal structuring would prove a fraud against every one of Stream’s salespeople.

Stream has asked the Supreme Court to review this last question, and Cato has filed an amicus brief supporting that petition. In our brief, we explain why it is dangerous to hold that someone can be liable for fraud without ever having made a misrepresentation. Reasonable judicial limitations on liability are essential to protecting the personal autonomy of all parties in a case.

In the fraud context, the key inquiry has always been whether the alleged fraudster made a specific misrepresentation on which someone actually relied to her detriment. To be liable for someone else’s losses, not only must a particular misrepresentation have been made, but it must have been the direct or “proximate” cause of those losses. By abandoning this proximate-cause rule and holding that misrepresentation isn’t necessary for potential fraud liability, the Fifth Circuit removed an important check on liability.

If individual reliance on a misrepresentation need not be proven, savvy investors may search out multi-level marketing programs, knowingly put their money in such risky ventures, and then sue for fraud if their investment doesn’t yield a profit. This significantly increases the likelihood of improper class-action lawsuits—potentially subjecting undeserving defendants to crushing liability.

Instead of that uncertainty, businesses should be secure in the simple legal rule that has worked for centuries: if you don’t want to be liable for fraud, don’t lie about what you’re selling.

The Supreme Court should take the case of SGE Management v. Torres and ultimately reverse the Fifth Circuit.

What some doctors say about regulating the treatment of sepsis has much broader application. Sepsis, an often lethal reaction to infection sometimes called blood poisoning, is the leading cause of death in hospitals, Richard Harris reports for NPR. Understandably, then, some doctors and regulators have a typical reaction: “A 4-year-old regulation in New York state compels doctors and hospitals to follow a certain protocol, involving a big dose of antibiotics and intravenous fluids.”

Other doctors aren’t so sure about the rush to regulation. 

Dr. Jeremy Kahn at the University of Pittsburgh believes that regulations can prod doctors to follow the latest protocol. But “The downside is that a regulatory approach lacks flexibility. It essentially is saying we can take a one-size-fits-all approach to treating a complex disease like sepsis.” Harris continues:

That’s problematic, because doctors haven’t found the best way to treat this condition. The scientific evidence is evolving rapidly, Kahn says. “Almost every day another study is released that shows what we thought to be best practice might not be best practice.”

Kahn wrote a commentary about the rapid changes earlier this month for the New England Journal of Medicine.

For a while, medical practice guidelines distributed to doctors called on them to use one particular drug to treat sepsis. It turned out that drug did more harm than good. Another heavily promoted strategy, called goal-directed therapy, also turned out to be ineffective.

These are concerns that economists often raise about regulation: that government mandates may be rigid, inflexible, and frozen in time. They don’t change easily in response to new information. They may require a specific protocol that may turn out not to be the best practice:

And a study presented last week at the American Thoracic Society and published electronically in the New England Journal of Medicine finds that one of the steps required in New York may not be beneficial, either.

The regulations call for a rapid and substantial infusion of intravenous fluids, but that didn’t improve survival in New York state hospitals….

In fact, some doctors believe that most patients are better off without this aggressive fluid treatment. There’s a study getting underway to answer that question. Dr. Nathan Shapiro at Harvard’s Beth Israel Deaconess Medical Center hopes to enlist more than 2,000 patients at about 50 hospitals to answer this life-or-death question.

But that study will take years, and in the meantime doctors have to make a judgment call.

“It is possible that at present they are requiring hospitals to adopt protocols for fluid resuscitation that might not be entirely appropriate,” Kahn says.

Somehow this reminds me of the phenomenon noted in the 1980s when Canada banned cyclamates and the United States banned saccharin. Presumably one country had banned the less dangerous sugar substitute.

Economists Gerald P. O’Driscoll Jr. and Lee Hoskins wrote about the problems with regulatory mandates in 2006:

Coercion may bring uniformity of product or conduct, but only at the expense of innovation and flexibility. Merchant law suffered when the hand of the state took it over: “Many of the desirable characteristics of the Law Merchant in England had been lost by the nineteenth century, including its universal character, its flexibility and dynamic ability to grow, its informality and speed, and its reliance on commercial custom and practice” (Benson 1989: 178).

Markets excel in adapting to changing circumstances, while legislation and government regulation are notoriously rigid. That is perhaps the strongest case for market self-regulation over government-mandated regulation.

Regulation seems to substitute the judgment of a small group of fallible politicians or bureaucrats for the results of a market process that coordinates the needs and preferences of millions of people. It sets up static, backward-looking rules that can never deal with changing circumstances as well as voluntary decisions by people on the ground, whether entrepreneurs, customers, scientists, or doctors.

Greater reliance on user fees, federal loans rather than grants, and corporatization are three keys to the Trump administration’s infrastructure initiative released as a part of its 2018 budget. The plan will “seek long-term reforms on how infrastructure projects are regulated, funded, delivered, and maintained,” says the six-page document. More federal funding “is not the solution,” the document says; instead, it is to “fix underlying incentives, procedures, and policies.”

In building the Interstate Highway System, the fact sheet observes, “the Federal Government played a key role” in collecting and distributing monies to “fund a project with a Federal purpose.” Since then, however, those user fees, mainly gas tax receipts, have been “inefficiently invested” in “non-federal infrastructure.”

As a result, the federal government today “acts as a complicated, costly middleman between the collection of revenue and the expenditure of those funds by States and localities.” To fix this, the administration will “explore” whether transferring “responsibilities to the States is appropriate.”

The document contains a number of specific proposals:

  • Allow states to toll interstate and other federally funded highways;
  • Encourage states to fix congestion using “congestion pricing, enhanced transit services, increased telecommuting and flex scheduling, and deployment of advanced technology”;
  • Corporatize air traffic control, as many other developed countries have done;
  • Streamline the environmental review process by having a one-stop federal permit process and “curtailing needless litigation”;
  • Expand the TIFIA loan program and lift the existing cap on private activity bonds, both of which will make more money available for infrastructure without increasing federal deficits.

The paper also includes proposals for reforming inland waterways, the Power Marketing Administration, and water infrastructure finance. Like the transportation proposals, these call for increased reliance on user fees, corporatization, privatization, or loans rather than grants.

“Corporatization” means creating a non-profit or for-profit corporation that may be government owned but doesn’t necessarily rely on taxpayer subsidies. Comsat is a classic example, and the air traffic control systems of Canada and other countries work this way.

Except for air traffic control reform, Trump’s plan isn’t fleshed out in detail. But these ideas have all been tossed around enough that everyone pretty much knows what they mean. Most importantly, they mean a significant change in the way Washington deals with infrastructure.

Because it doesn’t contain a list of projects that members of Congress could take credit for, the plan has received relatively little notice in the media. Democrats, of course, are unhappy with it, but they would be unhappy no matter what Trump proposed.

One of the more controversial proposals is to allow the states to toll interstate highways. “I don’t like paying for a road twice,” Representative Sam Graves (R-MO), who chairs the Highways and Transit Subcommittee of the House Transportation and Infrastructure Committee, told The Hill. But, given that Congress has had to inject tens of billions of dollars of general funds into the highway trust fund in recent years, what makes Graves think existing user fees are paying for the roads now? All roads need maintenance and occasional rehabilitation, so the fact that user fees paid for construction 50 years ago doesn’t mean the costs have stopped.

The most important point is that Trump wants user fees to pay a greater share of infrastructure costs. Naturally, the transit lobby, which represents the most heavily subsidized form of transportation, per unit of output, is upset about this. But Trump’s agenda sounds good to anyone who wants an efficient, user-fee-driven infrastructure program.

Politicians seem increasingly likely to (falsely) assert that “hate speech is not protected by the First Amendment.” The mayor of Portland, Oregon, just did so following anti-Muslim violence in his community. Former governor and Democratic Party official Howard Dean said the same last month.

The Washington Post does a good job of showing why the claim is false. Courts have not recognized a “hate speech” exception to the First Amendment. Allowing such a prohibition would permit the government to discriminate among viewpoints, a power the First Amendment precludes. As the FIRE Guide to Free Speech on Campus says, “Laws that ban only certain viewpoints are not only clearly unconstitutional, but are also completely incompatible with the needs, spirit, and nature of a democracy founded upon individual rights.”

Part of the problem here is the term “hate speech” itself. People generally do not like expressions of hatred of individuals or groups. The term “hate speech” in and of itself makes censorship more likely, especially when compared with the more neutral term “extreme speech” often used by legal scholars.

Extreme speech can indeed be odious. But we should also recall the general libertarian principle that allowing the liberty to do or say something does not constitute endorsing what is done or said. You can criticize extreme speech while still arguing against prohibiting it. In other words, we need to defend the rights of a speaker but not what he or she says. That distinction is likely to be lost in the extreme events that sometimes evoke extreme speech. Indeed, that appears to have happened in Portland.

A communications group at Yale University has put out a video that seems to be a rebuttal to a Dilbert cartoon by Scott Adams poking fun at climate scientists and their misplaced confidence in models. The video is full of impressive-looking scientists talking about charts and data and whatnot. It probably cost a lot to make and certainly involved a lot of time and effort. The most amazing thing, however, is that it actually proves the points being made in the Dilbert cartoon. Rather than debunking the cartoon, the scientists acted it out in slow motion.

The Dilbert cartoon begins with a climate scientist saying “human activity is warming the earth and will lead to a global catastrophe.” When challenged to explain how he knows that, he says they start with basic physical principles plus observations about the climate, which they then feed into models, pick and choose some of the outputs, then feed those into economic models, and voila. When asked, what if I don’t trust the economic models, the scientist retreats to an accusation of denialism.

The Yale video ends in exactly the same way. After a few minutes of what I will, for the moment, call “scientific information,” we see climatologist Andrew Dessler appear at the 4:28 mark to say “It’s inarguable, although some people still argue it – heh, heh.” As in, ah those science deniers.

What exactly is “inarguable”? By selective editing we are led to believe that everything said in the video is based on multiple independent lines of evidence carrying such overwhelming force that no rational observer could dispute it. Fine, let’s go to the 2:38 mark and watch someone named Sarah Myhre tell us what this inarguable science says.

“It’s irrefutable evidence that there are major consequences that come with climate warming, and that we take these Earth systems to be very stable, we take them for granted, and they’re not stable, they’re deeply unstable when you perturb the carbon system in the atmosphere.”

How does she know this? From models, of course. These claims are not rooted in observations but in examining the entrails of model projections. But she has to pick and choose her models because they don’t all say what she claims they say. Some models show very little sensitivity to greenhouse gases. If we put the low-sensitivity results into economic models, the results show that the economic impacts of warming are very low, possibly even negative (i.e., a net benefit). And the section of the IPCC report that talks about the consequences of warming says:

For most economic sectors, the impact of climate change will be small relative to the impacts of other drivers (medium evidence, high agreement). Changes in population, age, income, technology, relative prices, lifestyle, regulation, governance, and many other aspects of socioeconomic development will have an impact on the supply and demand of economic goods and services that is large relative to the impact of climate change.

It goes on to show (Figure 10-1) that at low levels of warming the net economic effects are zero or positive. As to the climate being “deeply unstable,” there’s hardly any point trying to debate that, since these are not well-defined scientific terms; but simple reflection on human experience will tell you that the climate system is pretty stable, at least on decadal and century time scales. The main thing to note is that she is claiming that changes to atmospheric CO2 levels have big warming effects on the climate and will cause a global catastrophe. And the only way she knows this is from looking at the outputs of models and ignoring the ones that look wrong to her. Granted, she isn’t bald and doesn’t have a little beard, but otherwise she is almost verbatim the scientist in the cartoon.

Much of what she says in the video is unsubstantiated and sloppy. For instance, she talks (2:14) about paleoclimatic indicators like tree rings, ice cores, and sediment cores as if they were handy records of past climate conditions, without acknowledging any of the known problems of extracting climate information from such noisy sources.

Her most telling comment was the Freudian slip at 1:06 when she says “There is incredible agreement about the drivers of climate science.” What she meant (and quickly corrected herself to say) was “climate change.” But her comment is revealing as regards the incredible agreement, i.e., the groupthink, that drives climate science, and the individuals who do the driving. Myhre’s Freudian slip comes right after a clip in which Michael Mann emphatically declares that there are dozens of lines of evidence that all come together, “telling us the same thing,” adding “that’s how science works.” Really? The lines of evidence regarding climate do not all lead to one uniform point of view, nor is that how science works. If that were how science worked, there would be no need for research. But that’s how activists see it, and that’s the view they impose to drive climate science along in service of the activist agenda. As Dr. Myhre herself wrote in a recent op-ed:

Our job is not to objectively document the decline of Earth’s biodiversity and humanity, so what does scientific leadership look like in this hot, dangerous world? We don’t need to all agree with each other – dissent is a healthy component of the scientific community. But, we do need to summon our voices and start shouting from rooftops: “We have options”, “We don’t have to settle for cataclysm”.

Got that? The job of scientists is not objectively to gather and present evidence, but to impose an alarmist view and yell it from the rooftops. At least according to Sarah Myhre, Ph.D.

The video opens with a straw man argument: climate science is all just made up in computer models about the future, and it’s all just based on simulations. This is then refuted, rather easily, with clips of scientists listing some of the many observational data sets that exist. Whoopee. That wasn’t even the point of the Dilbert cartoon; it was just a straw man made up by the interviewer. Then, in the process of presenting responses, the video flits back and forth between lists of observational evidence and statements that are based on the outputs of models, as if the former prove the latter. For instance, when Myhre says (2:45—2:55) that the climate system is “deeply unstable” to perturbations in the carbon “system” (I assume she meant cycle), the video then cuts to Andrew Dessler (2:55) talking about satellite measurements, back to Myhre on paleo indicators, then to Carl Mears and Dessler (3:11) talking about sea ice trends. None of those citations support Myhre’s claims about instability, but the selective editing creates the impression that they do.

Another example is a sequence starting at 1:14 and going to about 2:06, in which various speakers list different data sets, glossing over different spatial and time scales, measurement systems, etc. Then an assertion is slipped in at 2:07 by Ben Santer to the effect that the observed warming can’t be explained by natural causes. Then back to Myhre listing paleoclimate indicators and Mann describing boreholes. The impression created is that all these data types prove the attribution claim made by Santer. But they do no such thing. The data sets only record changes: claims about the mechanism behind them are based on modeling work, namely the finding that climate models can’t simulate 20th century warming without incorporating greenhouse gas forcing.

So in a sense, the video doesn’t even refute the straw man it set up. It’s not that climate science consists only of models: obviously there are observations too. But all the attribution claims about the climatic effects of greenhouse gases are based on models. If the scientists being interviewed had any evidence otherwise, they didn’t present it.

Now suppose that they are correct in their assertion that all the lines of evidence agree. All the data sets, in Mann’s words, are telling us the same thing. In that case, looking at one is as good as looking at any of the others.

Ignore for a moment the selective focus on declining Arctic sea ice data while ignoring the expansion of Antarctic sea ice. And ignore the strange quotation from Henry Pollock (3:23—3:41) about how ice doesn’t ask any questions or read the newspaper: it just melts. Overlaid on his words is a satellite video showing the summer 2016 Arctic sea ice melt. Needless to say, had the filmmaker kept the video running a few seconds more, into the fall, we’d have seen it re-freeze. Presumably the ice doesn’t read or ask questions in the fall either, it just freezes. This proves what exactly?

Anyway, back to our assumption that all the data sets agree and say the same thing. And what is it they tell us? Many key data sets indicate that climate models are wrong, and in particular that they overstate the rate of warming (see here, here, here, here, here, here, here, here, etc.). So according to the uniformity principle enunciated so strongly in the video, all the evidence points in the same direction: the models aren’t very good. And by implication, statements made based on the models aren’t very reliable.

There’s another irony in the video’s assertions of uniformity in climate science. At the 3:55 mark Michael Mann announces that there’s a consensus because independent teams of scientists all come at the problem from different angles and come up with the same answers. He’s clearly referring to the model-based inferences about the drivers of climate change. And the models are, indeed, converging to become more and more similar. The problem is that in the process they are becoming less like the actual climate. Oops.

So how did the video do refuting Scott Adams’ cartoon? He joked that scientists warning of catastrophe invoke the authority of observational data when they are really making claims based on models. Check. He joked that they ignore on a post hoc basis the models that don’t look right to them. Check. He joked that their views presuppose the validity of models that reasonable people could doubt. Check. And he joked that to question any of this will lead to derision and the accusation of being a science denier. Check. In other words, the Yale video sought to rebut Adams’ cartoon and ended up being a documentary version of it.