Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Democrats are making waves in tax policy by promising to reverse some of the 2017 Republican reforms. Rep. Alexandria Ocasio-Cortez called for raising the top federal individual income tax rate to 70 percent, which was the rate before Ronald Reagan came to office. I noted that the global economy has dramatically changed in recent decades, and such a high rate would be even more damaging today.

Democrats are also calling for a higher federal corporate tax rate, partly reversing the GOP’s cut from 35 percent to 21 percent. Democratic House Budget chair John Yarmuth, for example, is proposing to raise the rate to 28 percent. The problem, again, is that the global economy has changed and U.S. businesses face a more intense competitive climate than ever.

The chart shows the average federal-state corporate tax rate in the OECD industrial countries since 1980. The United States led a global wave of corporate tax rate cuts in the 1980s, but then federal policymakers sat on their hands for three decades as other countries continued cutting.

President Trump pushed hard and convinced Congress to reduce the federal corporate rate to 21 percent. But state taxes are piled on top of that for a combined U.S. federal-state rate of 27 percent. That is still higher than the 24 percent average of the OECD countries in 2018, according to KPMG. The global average rate per KPMG is also 24 percent.
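The combined-rate arithmetic can be sketched numerically. Because state corporate taxes are deductible against federal taxable income, the two rates do not simply add; the average state rate used below is an illustrative assumption chosen to reproduce the roughly 27 percent combined figure, not a number from the post.

```python
# Sketch of the combined federal-state corporate tax rate, assuming
# state corporate taxes are deductible against federal taxable income.
# The average state rate is an illustrative assumption.
federal_rate = 0.21
avg_state_rate = 0.076  # assumed effective average across states

# Deductibility means each dollar of state tax reduces federal tax by
# federal_rate, so the state layer adds avg_state_rate * (1 - federal_rate).
combined_rate = federal_rate + avg_state_rate * (1 - federal_rate)
print(f"combined rate = {combined_rate:.1%}")  # prints "combined rate = 27.0%"
```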

On corporate taxes, America is still a high-rate country.

The data are sourced from the OECD and KPMG.

Last Friday, President Trump threatened to declare a national emergency and build his border wall using “the military version of eminent domain.” By Tuesday, Trump seemed to have climbed down somewhat, declining to repeat the threat in his televised Oval Office address. But the week’s end found the president declaring it would be “very surprising” if he didn’t pull the trigger.

So is the emergency-powers gambit a live option or—like the executive order revoking birthright citizenship Trump floated before the midterms—another pump-fake designed to thrill the base and rile the media? Either way, it’s a noxious, thuggish proposal. Using the army to do an end-run around Congress is not how constitutional government is supposed to work. Imagine believing that Latin American immigration so threatens our free institutions that only banana republic tactics can protect us. 

About the best one can say for the idea is that it has the accidental virtue of concentrating the mind wonderfully about the powers we’ve concentrated in the executive branch. 

Our Constitution cedes vanishingly few emergency powers to the president. He commands “the Militia of the several States, when called into the actual Service of the United States,” and has the power, via Article II, section 3, to convene Congress on “extraordinary Occasions,” such as a national emergency. “That is about as far as his crisis authorities go,” notes the University of Virginia’s Saikrishna Prakash: “the convening authority would have been unnecessary if the chief executive could take all actions necessary to manage ‘extraordinary occasions.’” 

In Youngstown, the 1952 “steel seizure” case, the Supreme Court rebuffed the Truman administration’s claim of a general presidential emergency power divorced from specific statutory or constitutional authority. Justice Jackson, in his influential concurrence, suggested that the Framers neglected to provide such authority for fear “that emergency powers would tend to kindle emergencies.” 

Surely, then, the president can’t just gin up a bogus crisis and use the military to get what he wants when Congress won’t give it to him—can he? It would be nice to be able to answer that question with a confident “no.” Unfortunately, in this case, at least two provisions of the U.S. Code passed during the 1980s, 33 USC § 2293 and 10 USC § 2808, give Trump a non-frivolous rationale for his claim that “I can do it if I want.” 

Overbroad delegations of emergency authority to the executive are a longstanding problem. During the Watergate-era congressional resurgence, a 1974 Senate special committee investigation (co-chaired by Frank Church of Church Committee fame) identified 470 provisions of federal law delegating emergency powers to the president and four proclamations of national emergency, dating as far back as 1933, then still in effect. That investigation led to the National Emergencies Act of 1976, which repealed existing emergency declarations, required the president to formally declare any claimed national emergency and specify the statutory authority invoked, and subjected new declarations to a one-year sunset unless renewed.  

Despite those efforts, the U.S. Code today remains honeycombed with overbroad delegations of emergency power to the executive branch. A Brennan Center report released last month identifies 136 statutory powers the president can invoke in a declared national emergency. Few of these provisions require anything more than the president’s signature on the emergency declaration to trigger his new powers—“stroke of the pen, law of the land—kinda cool,” in the Clinton-era phrase. 

Most of these emergency powers have never been invoked, many of them are innocuous, and some—like the provision that allows suspension of the Davis-Bacon Act in a natural disaster—are even sensible. But other long-dormant powers are extraordinarily dangerous.

Writing in the Atlantic, the Brennan Center’s Elizabeth Goitein highlights a WWII-era amendment to the Communications Act of 1934 empowering the president to close or take over “‘any facility or station for wire communication’ upon his proclamation ‘that there exists a state or threat of war involving the United States.’” She sketches a nightmare scenario in which Trump puts the country on a war footing with Iran; invokes § 706 of the Communications Act to assume control of U.S. internet traffic; deploys federal troops to put down the resulting unrest; and scares people away from the polling stations with a menacing Presidential Alert text message. Goitein grants that “this scenario might sound extreme,” and I admit I found it a bit overcooked. Even if the administration wanted to do something like this, I’m confident it would go bust, thanks to the sort of spectacular ineptitude that botched the initial rollout of the Travel Ban. 

However, she’s absolutely right to call on Congress to “shore up the guardrails of liberal democracy” with comprehensive reform of emergency powers. “Committees in the House could begin this process now,” she writes, “by undertaking a thorough review of existing emergency powers and declarations,” laying out a roadmap for repealing unnecessary delegations, and providing “stronger protections for abuse.” The sooner, the better: you never know when a competent authoritarian is going to come along. 

According to Paul Krugman, the government shutdown amounts to a potentially big libertarian experiment.

With nine departments and multiple agencies closed, maybe for months, the New York Times columnist and Nobel laureate envisages a coming test of whether the country can live without the Food and Drug Administration, the Small Business Administration and farm subsidies.

So are those of us at Cato who believe in the abolition of these programs celebrating? Not quite.

As the vast majority of Americans go about their daily lives, barely noticing that 25 percent of federal discretionary spending has been paused, it’s certainly possible many will wonder why debt is being racked up for programs that have no noticeable effect on their well-being. Who knows, many employees, businesses, and farms may also reconsider the wisdom of placing their livelihoods at the whims of the political process.

Better still, the shutdown may bring attention to these otherwise rarely-scrutinized programs. If major columnists continue identifying Cato as proponents of scrapping things such as farm subsidies and small business cronyism, linking to our research on the damaging economic, political, and social consequences of existing provisions, the shutdown could serve a useful public education role too!

But, the truth is, most libertarians aren’t cheering current events, because shutdowns appear not to change much in regard to the size and scope of government in the long term, yet bring chaos, ill feeling, and uncertainty in the short term.

Markets are powerful precisely because they allow people to interact in voluntary ways to fulfill wants and needs. Necessity, as they say, is the mother of invention.

Libertarians are indeed confident that, as in countries such as New Zealand, scrapping agricultural subsidies would deliver a more efficient industry, taxpayer savings, and a bigger economy.

But it’s obvious, as Krugman acknowledges, that temporary suspension of promised support is not an environment conducive to farmers making long-term crop or farm ownership decisions, private companies banding together to form market-based food safety certification agencies, or small businesses sourcing new finance.

Yes, economic actors will take steps to mitigate the effects of disruption. But knowing government will eventually reopen, there is little to no incentive for new institutions to develop, or for businesses and farms to undertake the structural change we would see if government withdrew from these roles. Instead, businesses and individuals are temporarily crippled in their forward planning and paralyzed by the uncertainty of promises made to them being broken.

The natural priority for those farms, businesses and federal employees right now is to lobby successfully for the government to reopen and their payments to start flowing again. Hence the newspaper stories we see already about their difficulties, indicating precisely the diffuse costs yet concentrated benefits associated with much government spending.

That doesn’t mean libertarians are any less supportive of removing government from these activities. In fact, as Chris Edwards shows, a host of other areas likely to be noticeably affected by a sustained shutdown – security screening at airports, air traffic control, and the management of national parks – are better managed in other countries with more private sector involvement. If the shutdown brings attention to this, then great.

Overall though, libertarians are fully aware that for the real policy experiments we desire, the public and/or politicians must be convinced of the necessity or desirability of permanent policy change in a market-based direction. The best chance for success with that is in an environment where those affected can adjust in an orderly manner, and replacement private-sector institutions have time to develop.

Krugman knows it is disingenuous to suggest that the current chaos is some libertarian policy experiment. But as some Republicans do make the case that the programs above are vital for the health of the economy, and libertarians continue to make the case for their abolition, perhaps he will finally cease lumping Republicans and libertarians together in his columns.

President Trump’s proposed border wall would cut across nearly a thousand miles of privately owned land, so to build this project, the administration would need to use eminent domain to seize the land—something that the president is eager to do. Aside from the unpleasantness of taking people’s property without their consent, federal eminent domain use comes with a particularly obnoxious component: the government can take the land but not provide just compensation until years later. New legislation would stop this practice.

As I wrote in 2017:

Right now, when Border Patrol wants to take someone’s land, they send them a letter offering them a nominal low sum of money for their land and threatening to file condemnation proceedings against them if they don’t accept it… . [But] under the eminent domain statute, the federal government can seize property almost as soon as it files a condemnation proceeding—as soon as the legal authority for the taking is established—then they can haggle over just compensation later.

It’s called “quick take.” Quick-take eminent domain creates multiple perverse incentives for the government: (1) it can quickly take land, even when it doesn’t really need it, and (2) it has no real incentive to compromise or work with the landowner on compensation. The owner’s bargaining power is significantly diminished because the federal government already possesses the property. This means that for years, people who are subject to a border wall taking go without just compensation.

An NPR analysis of fence cases found that even the cases that have been resolved took more than three years to conclude. In many other cases, the process took more than a decade for a court to determine just compensation, and some cases are still pending more than 12 years later. Unfortunately, the Supreme Court has determined that this “quick take” eminent domain does not violate the Fifth Amendment requirement that no “private property be taken for public use, without just compensation.” The reasoning is that as long as the person will eventually get compensation, the taking is constitutional.

The awful component of this process is that, in order to challenge the taking, the property owner must not accept the offered payment. But the border wall will go up on their land just the same. Meanwhile, they have to fight in court without getting the compensation that they deserve. Many people cannot even afford to challenge the taking for this reason alone.

Today, Rep. Justin Amash (R-MI) introduced the Eminent Domain Just Compensation Act to deal with just this issue. “It is unjust for the government to seize someone’s property with a lowball offer and then put the burden on them to fight for what they’re still owed,” Rep. Amash said in a statement. “My bill will stop this practice by requiring that a property’s fair value be finalized before DHS takes ownership.”

The bill makes this reform by amending Section 103 of the Immigration and Nationality Act (8 U.S.C. § 1103), which details the powers of the Secretary of Homeland Security. Current law provides that:

The [Secretary of Homeland Security] may contract for or buy any interest in land, including temporary use rights, adjacent to or in the vicinity of an international land border when the [Secretary] deems the land essential to control and guard the boundaries and borders of the United States against any violation… When the [Secretary] and the lawful owner of an interest identified pursuant to paragraph (1) are unable to agree upon a reasonable price, the [Secretary] may commence condemnation proceedings pursuant to section 3113 of title 40.

The Eminent Domain Just Compensation Act would amend this provision by adding that “the Government may not take any land prior to the issuance of a final judgment pursuant to the proceedings under section 3113 of such title.” This language forecloses the opportunity for the Trump administration to seize land quickly for the president’s unnecessary, ineffective, and costly border wall without first fully compensating the owners. 

The Wall Street Journal reports that Facebook has consulted with conservative individuals and groups about its content moderation. Recently I suggested that social media managers would be inclined to give stakeholders a voice (though not a veto) on content moderation policies. Some on the left were well ahead in this game, proposing that the tech companies essentially turn over content moderation of “hate speech” to them. Giving voice to the right represents a kind of rebalancing of the play of political forces. 

I argued earlier that looking to stakeholders has a flaw. These groups would be highly organized representatives of their members but not of most users of a platform. The infamous “special interests” of regular politics would thus come to dominate social media content moderation, which in turn would have trouble generating legitimacy with users and the larger world outside the internet.  

But another possibility exists which might be called “pluralism.” Both left and right are organized and thus are stakeholders. Social media managers recognize and seek advice from both sides about content moderation. But the managers retain the right of deciding the “content” part of content moderation. The groups are not happy, but we settle into a stable equilibrium that over time becomes a de facto speech regime for social media.  

A successful pluralism is possible. A lot will depend on the managers rapidly developing the political skills necessary to the task. They may be honing such skills. Facebook’s efforts with conservatives are far from simply hiring the usual suspects to get out of a jam. Twitter apparently followed conservative advice and verified a pro-gun Parkland survivor, an issue of considerable importance to conservative web pundits, given the extent of institutional support for the March for Our Lives movement. Note I am not saying the Right will win out, but rather that the companies may be able to manage a balanced system of oversight.  

But there will be challenges for this model.  

Spending decisions by Congress are often seen as a case of pluralist bargaining. Better organized or more skillful groups get more from the appropriations process; those who lose out can be placated with “side payments” to make legislation possible. Overall you get spending bills that no one completely likes, but everyone can live with until the next appropriations cycle. (I know that libertarians reject this sort of pluralism, but I am not discussing what should be, rather what is, as a way of understanding private content moderation.) 

Here’s the challenge. The groups trying to affect social media content moderation are not bargaining over money. The left believes much of the rhetoric of the right has no place on any platform. The right notes that most social media employees lean left and wonders if their effort to cleanse the platforms begins with Alex Jones and ends with Charles Murray (i.e., everyone on the right). The right is thus tempted to call in a fourth player in the pluralist game of content moderation: the federal government. Managing pluralist competition and bargaining is a lot harder in a time of culture wars, as Facebook and Google have discovered.  

Transparency will not help matters. The Journal article mentioned earlier states: 

For users frustrated by the lack of clarity around how these companies make decisions, the added voices have made matters even murkier. Meetings between companies and their unofficial advisers are rarely publicized, and some outside groups and individuals have to sign nondisclosure agreements. 

Murkiness has its value! In this case, it allows candid discussions between the tech companies and various representatives of the left and the right. Those conversations might build trust between the companies and the groups from the left and the right, and maybe even among the groups themselves. The left might stop thinking democracy is threatened online, and the right might conclude they are not eventually going to be pushed off the platforms. We might end up with rules for online speech that no one completely likes and yet are better than all realistic alternatives.  

Now imagine that everything about private content moderation is made public. For some, allowing speech on a platform will become compromising with “hate.” (Even if a group’s leaders don’t actually believe that, they would be required to say it for political reasons). Suppressing harassment or threats will frighten others and foster calls for government intervention to protect speech online. Our culture wars will endlessly inform the politics of content moderation. That outcome is unlikely to be the best we can hope for in an era when most speech will be online. 


Welcome to the Defense Download! This new round-up is intended to highlight what we at the Cato Institute are keeping tabs on in the world of defense politics every week. The three-to-five trending stories will vary depending on the news cycle, what policymakers are talking about, and will pull from all sides of the political spectrum. If you would like to receive more frequent updates on what I’m reading, writing, and listening to—you can follow me on Twitter via @CDDorminey.  

  1. “Trump, Heading to the Border, Suggests He Will Declare an Emergency to Fund Wall,” Michael Tackett. The most pressing story of this week is undoubtedly the continued government shutdown, and President Trump’s threat to declare a state of emergency. This would allow the president to bypass Congress and the process of authorization and appropriation to instead use military funds to begin construction on a southern border wall. The money would draw from accounts that have already been earmarked for other urgent needs, like military construction. 
  2. “A Shut Down Government Actually Costs More Than an Open One,” Jim Tankersley. Every day that this government shutdown continues, it costs taxpayers more money in the long run. A government shutdown is not like a household going on a self-imposed temporary spending ban. The government still needs to pay contractors and furloughed workers once they return to work—in some instances with the accrual of interest or fees on outstanding payments.
  3. “Shutdown’s economic damage: $1.2 billion a week,” Victoria Guida. The government shutdown is also a drag on the economy: because 800,000 federal workers aren’t getting paid, they restrict spending that would otherwise be contributing to the economy. President Trump’s chief economist estimates the cost at as much as $1.2 billion for every week that the government remains closed and workers remain furloughed. Private contractors that won’t receive payment on contract work, and other lost business, also contribute to this figure. 
  4. Depending on how long it lasts, this shutdown could also affect those on food stamps, leave new parents in the lurch, and have an outsized impact on veterans, who make up as much as 25 percent of the workforce in some government agencies. 

Each year thousands of small and large businesses, non-profits, and organizations are hit with drive-by ADA claims, typically batch-produced affairs in which a complainant out of the blue claims to have found something not fully accessible to disabled users about the target’s operations and goes on to negotiate a settlement that includes ample attorneys’ fees. Because ADA requirements are both obscure and voluminous and even compliance experts do not agree among themselves how much accommodation counts as enough, potential violations can be found at most businesses. While the ADA is a national law, much of the mass filing of accessibility complaints goes on under state laws that piggyback or expand on the federal version, often with added features enhancing damages or attorney’s fee entitlements. 

It has been hard to get state-level relief from the depredations of the filing mills, since lawyers and disabled-rights activists can make for a formidable lobbying combination. But a piece of legislation just signed by Gov. John Kasich in Ohio, and an unrelated ruling in the California state courts, at least offer tiny rays of hope. 

Ohio’s HB 271 provides that in order to collect automatic attorneys’ fees under state accessibility law, a complainant must notify the business concerned, which then has 15 business days to respond and 60 days to remedy the violation. The law, which goes into effect in March, is itself a bit of a compromise: it excludes housing discrimination claims, and provides that even a complaint filed without notice or opportunity to correct can still collect fees if a judge finds such payment appropriate. A similar bill on a national scale passed the U.S. House of Representatives last February but went nowhere in the U.S. Senate, and is likely to muster less support in the new House. 

In California, meanwhile, a state court has ruled that the distinctively harsh Unruh Act, which awards automatic damages in the thousands of dollars each to prevailing civil rights complainants whether or not they can prove any injury to themselves, does not apply as a matter of law to complaints against websites. Because of ongoing uncertainty about whether the ADA applies to websites, defendants across the country have been deluged with web accessibility lawsuits in recent years; if the ruling sticks, they will at least be spared the extra-high damages of the California version.  

Earlier today, Cato sued the Securities and Exchange Commission in federal court challenging the SEC’s policy of imposing perpetual gag orders on settling defendants in civil enforcement actions. The clear point of that policy is to prevent people with the best understanding of how the SEC uses its vast enforcement powers from sharing that knowledge with others. But silencing potential critics is not an appropriate use of government power and, as explained in Cato’s complaint, it plainly violates the First Amendment’s protections of free speech and a free press.

The case began when a well-known law professor introduced us to a former businessman who wanted to publish a memoir he had written about his experience being sued by the SEC and prosecuted by DOJ in connection with a business he created and ran for several years before the 2008 financial crisis. The memoir explains in compelling detail how both agencies fundamentally misconceived the author’s business model—absurdly accusing him of operating a Ponzi scheme and sticking with that theory even after it fell to pieces as the investigation unfolded—and ultimately coerced him into settling the SEC’s meritless civil suit and pleading guilty in DOJ’s baseless criminal prosecution after being threatened with life in prison if he refused.

The author now wants to tell his side of the story, and Cato wants to publish it as a book—but both are prevented from doing so by a provision in the SEC settlement agreement that forbids the author from “mak[ing] any public statement denying, directly or indirectly, any allegation in the [SEC’s] complaint or creating the impression that the complaint is without factual basis.” This provision appears to be standard not only in SEC settlements, but with the CFTC, the CFPB, and possibly other regulatory agencies as well. Thus, when the federal government unleashes its immense financial regulatory power in a civil enforcement action, the price of settling—as the vast majority of cases do—is a perpetual gag order that prohibits the defendant from ever telling his or her side of the story.

This is a wildly inappropriate use of government power, and it is directly contrary to the spirit of accountability and transparency that permeates our founding documents. Indeed, the Sixth Amendment guarantees the right to a speedy and public trial precisely to ensure that when the government accuses people of wrongdoing it must place its cards faceup on the table for all to see. Today, however, 97 percent of federal criminal convictions are obtained through plea bargains, and a similar percentage of SEC civil enforcement actions are settled instead of adjudicated. This means that, contrary to the constitutional prescription for a public airing of the government’s case, most enforcement actions—both civil and criminal—unfold behind closed doors and under the radar. And it is increasingly clear that the process by which the government extracts confessions, plea deals, and settlement agreements from defendants in those cases can be incredibly (and even unconstitutionally) coercive. It is at this coercive dynamic that a significant portion of Cato’s criminal justice work takes aim, in order to restore the system envisioned by the founders and enshrined in the Constitution.

Thus, the more adamant the government is about preventing us from knowing what tools and techniques are being brought to bear against those whom it accuses of misconduct, the more important it is for us to find out. Perpetual gag orders like the ones routinely imposed by the SEC, CFTC, and CFPB as a condition of settlement are utterly antithetical to principles of good government and, not coincidentally, to the First Amendment’s protections of free speech and a free press as well.

Accordingly, we at Cato have teamed up with our friends at the Institute for Justice, which represents Cato in its challenge to the SEC’s unconstitutional policy of demanding perpetual gag orders as a condition of settlement in civil enforcement actions. Together, we aim to strike down not only the specific gag order at issue in this case, but all perpetual gag orders in all existing civil settlements with federal agencies—and to terminate the government’s policy of silencing those whom it accuses of wrongdoing.

It is often said that sunlight is the best disinfectant. The SEC and its cohorts are about to get a healthy dose of each.

Some prominent economists have begun to analyze formally the market for a privately issued outside money that they associate with Bitcoin. Rodney Garratt and Neil Wallace (2018) (ungated version here) model the relative values of (exchange rates between) “Bitcoin 1” and other hypothetical cryptoassets (“Bitcoin 2,” etc.). Linda Schilling and Harald Uhlig (2018) take a related approach to the exchange rate between “Bitcoin” and “the US dollar.” I use quotation marks here to indicate that the authors’ subjects are modeling entities, named after but not the real things. Their correspondence to the real things should not be taken for granted.

Both pairs of authors draw on a well-known theoretical result by Kareken and Wallace (1981): when two fiat currencies are perfect substitutes, the equilibrium exchange rate between them is indeterminate. To get the intuition, imagine that any payment made in US dollars can equally be made in Canadian dollars valued according to the going exchange rate. Nobody then has a reason to swap one currency for an equivalent amount of the other. No matter the level of the exchange rate, there is no pressure for it to change. Only the combined real money supply and demand matter, and any exchange rate is consistent with combined real money supply equaling combined real money demand. The combined real money stock could be 99% US dollars (at an exchange rate making one USD worth many CAD) or 99% Canadian dollars. Either rate is compatible with monetary equilibrium.
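The intuition can be illustrated with a toy calculation (my own sketch, not the authors' formal model): fix total real money demand, and for any exchange rate between the two perfectly substitutable currencies there is a price level that clears the money market.

```python
# Toy illustration of Kareken-Wallace exchange-rate indeterminacy.
# Two fiat monies are perfect substitutes, so only the combined real
# money stock matters; the price level clears the market at ANY rate e.
M1 = 50.0            # nominal stock of currency 1 (say, USD)
M2 = 200.0           # nominal stock of currency 2 (say, CAD)
REAL_DEMAND = 100.0  # fixed real demand for combined money balances

def clearing_price_level(e):
    """Price level (in currency-1 units) that equates combined real
    money supply to real demand, where e is the currency-1 price of
    one unit of currency 2."""
    return (M1 + e * M2) / REAL_DEMAND

# Every positive exchange rate supports a monetary equilibrium:
for e in (0.25, 1.0, 4.0):
    P = clearing_price_level(e)
    real_supply = (M1 + e * M2) / P
    assert abs(real_supply - REAL_DEMAND) < 1e-9
    print(f"e = {e}: P = {P}, combined real supply = {real_supply}")
```

Nothing in the calculation pins down e itself, which is the indeterminacy result; adding legal tender or tax constraints (or, for cryptoassets, network effects) is what breaks the perfect-substitutes assumption.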

It is easy to see, of course, that in the real world fiat currencies are not perfect substitutes in all uses. Most obviously, legal tender laws and tax-payment restrictions imposed by nation-states make any two fiat currencies non-interchangeable. Canadians cannot pay their Canadian-dollar-denominated debts or taxes with equivalent amounts of US dollars; they need Canadian dollars. Garratt and Wallace (p. 1887) are aware of this objection to the relevance of the Kareken-Wallace (hereafter KW) result. But, they assert, no such objection can be made in the case of Bitcoin and coins that are identical to Bitcoin in all respects but brand name:

Bitcoin and its actual and potential rivals — in the title intentionally mislabeled bitcoin 1, bitcoin 2, … in order to indicate that there could be many of them — do seem to satisfy all the assumptions that Kareken and Wallace made to get exchange-rate indeterminacy.

Consequently, they write (p. 1896):

Much of the uncertainty in the value of bitcoin comes from the ease of creating perfect substitutes. It is easy to clone bitcoin and the creation of very close substitutes makes the value of bitcoin rest on beliefs that may be hard to pin down.

Without questioning the Garratt-Wallace analysis given their assumptions, I want to point out that a key assumption – that perfect substitutes are easy to create – is factually false with respect to real-world cryptoassets. Bitcoin and its actual and potential rivals are in fact not perfect substitutes and thus do not satisfy the KW assumptions. The reason does not come from legal tender or tax laws, but primarily from network economies in monetary systems that apply to cryptocurrency systems. The size of the Bitcoin network matters, and is not easy or cheap to replicate.

The concern that newer cryptoassets will be close to perfect substitutes for Bitcoin, and that, because they are very cheap to produce, they will drive Bitcoin’s value toward zero, has also been expressed less formally in blog posts by John Cochrane and much earlier by Brad DeLong. DeLong wisely included the caveat that Bitcoin could retain its value if it could remain differentiated from its rivals, but he doubted that it could do so, wrongly suggesting that its differentiation is tenuous because it rests on nothing but its being the oldest cryptoasset.

To grasp the relevance of network effects, it is important to note the basic fact that each coin has its own blockchain. New cryptocurrencies do not provide access to the established Bitcoin system, only to their own much smaller systems. They have far fewer validation nodes than Bitcoin’s thousands. Newer coins are not as widely accepted in payments. They do not offer an equivalent ecosystem of wallets, retail payment processors like Bitpay, and other ancillary service providers. They are not traded on as many exchanges, or in similar volumes, and consequently have larger bid-ask spreads. Given the network advantages of a larger system with wider acceptance and established robustness, new coins that merely clone the Bitcoin code are not in fact close substitutes for Bitcoin.

Economic theory predicts that the marginal me-too coin, with a near-zero cost of production, will have a market cap of close to zero. Such me-too coins will not bring down the market cap of Bitcoin (or other established cryptoassets) when potential investors and users have no reason to prefer them. In general, to attract users and thereby attain a positive market cap, a new coin has to offer improvements over the Bitcoin system. The cost of making significant technical improvements is of course not negligible, so new cryptoassets providing them will never be easy to create or superabundant.

The cryptoassets that have gained positive value competing against Bitcoin have done so not by cloning it but by offering new-and-better features.  The most prominent improvements have been in four areas: greater speed in payment validation (e.g. Ripple, Stellar, Bitcoin Cash, Litecoin), greater privacy (Monero, Dash, Zcash), greater security against 51 percent attacks (NEO, Peercoin, Decred), and better infrastructure for smart contracts and applications (Ethereum, EOS).  Stability of value may be considered a fifth area, or so-called stablecoin projects may alternatively be considered an entirely different proposition.

New-and-improved coin projects continue to be launched. The publicity for new coin projects characteristically emphasizes how their technology differs from and improves on the Bitcoin protocol. It never says: we are merely a clone of Bitcoin. For a recent example, on January 3, 2019, the tenth anniversary of Bitcoin’s launch, a new coin called Beam went live.[1] Beam implements a next-generation blockchain technology (“Mimblewimble”) that enables greater privacy (less information about transactions is publicly revealed on its blockchain) and faster validation.

To say that a particular cryptoasset is distinct, and not a perfect substitute for Bitcoin, is not to claim that its market value in Bitcoins or in dollars is stable or readily predictable. Nor is it to deny that its value can go to zero, if the market completely abandons it as no longer a plausible contender. But it is to claim that news that raises or lowers the odds of the coin’s wider future adoption will drive changes in its relative value. Thus measured price correlations between other cryptoassets and Bitcoin are significantly less than one, and as low as 0.2 in the case of Ripple.[2] If two coins with predetermined quantity paths were perfect substitutes, by contrast, it would be hard to explain why any news should affect their relative values. That news does affect the relative values of real-world cryptoassets is evidence that they do not fit the assumptions of the KW indeterminacy result.
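As a mechanical illustration of the correlation claim, here is a minimal Python sketch; the price series below are invented for illustration, and a real test would use daily exchange data:

```python
import math

def log_returns(prices):
    """Convert a price series into daily log returns."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical daily closes.  Perfect substitutes would move in lockstep
# (return correlation near 1); distinct assets need not.
btc = [3800.0, 3850.0, 3790.0, 3900.0, 3950.0, 3880.0, 4000.0]
xrp = [0.360, 0.355, 0.372, 0.361, 0.380, 0.368, 0.359]

corr = pearson(log_returns(btc), log_returns(xrp))
```

Working with log returns rather than raw prices avoids spurious correlation from shared trends.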

Philosophers distinguish a merely valid argument – the conclusion follows from the premises – from a sound argument, which is both valid and has true premises. Arguments that assume perfect substitutability among cryptoassets may be valid, but they are not sound.

[1] I am an advisor to Beam.

[2] Charles Bovaird writes of Bitcoin and Ripple (XRP) in an article for Coindesk: “The two currencies are quite distinct, a situation noted by cryptocurrency fund manager Jacob Eliosoff, and this situation might help explain their weak price relationship. Bitcoin and litecoin have differing value propositions and target separate audiences from XRP.”


Alexandria Ocasio-Cortez hit headlines last week for advocating marginal income tax rates “as high as 60% or 70%” on those earning $10 million plus per year. Under her plan, revenues from such a policy would be put towards funding a “Green New Deal.”

Matt Yglesias, Paul Krugman and Noah Smith were quick out of the blocks to defend the idea of massive marginal tax hikes on high earners as simply sensible, mainstream economics. They appealed to the work of economists Peter Diamond, Emmanuel Saez, Thomas Piketty and others, who have set out the case for very high marginal tax rates on top incomes in academic journals over the last two decades.

These economists have indeed recommended that the optimal marginal tax rate for the top 1% of income earners in the U.S. should be a combined (federal, state and local taxes) rate of 73 percent or higher – designed with the aim of maximizing revenue from top taxpayers.

But their recommendation is not analogous to jacking up marginal federal income tax rates on very high earners in our current code. Furthermore, their result depends on highly contentious philosophical positions and economic assumptions.

A question of philosophy, not economics

Where does their type of result come from? The most important driver is your view on the role of government and redistribution.

Diamond, Saez and others think government can assess the utility that different income groups obtain from keeping more of their own money. They think the government can then aggregate the usefulness of income for different groups to develop an overall “social welfare function,” aiming to allocate our incomes to maximize the welfare of society.

Assuming the very rich do not find additional income very useful, they argue we should attach zero weight to the welfare of the rich when setting tax policy. The only thing that matters is setting a rate to maximize revenues from the wealthy to redistribute to those further down the income scale.

Now, one could challenge this on practical grounds (would a Green New Deal really transfer resources to the poor? Are the rich only useful for their tax dollars?) But more on that later.

The key point is that the redistributive tastes of the economists overwhelmingly drive their result. Given the actual current tax system is nothing like their ideal, such tastes do not seem to reflect the preferences of the public.

Don’t take my word for it. Emmanuel Saez himself, in an older paper on this issue co-authored with Jonathan Gruber, set out different redistributive tastes governments might hold. These included:

  1. A Rawlsian view – where the government cares only about the poorest members of society (a policy of maximizing total revenue to redistribute)
  2. A Progressive Liberal view – where the government assumes the social weight we put on individuals declines as income rises, right down to zero for those at the top (a policy of maximizing revenue from the very rich)
  3. A Conservative utilitarian view – treating the rich and middle-classes with equal social weight, but those with very low incomes as in need of extra assistance (a policy of more limited redistribution)
  4. No redistribution at all (a policy designed to raise revenue to maximize efficiency with no concern for equity).

Gruber and Saez calculated the optimal marginal tax rates for each of these agendas presuming government could design a new tax code to raise the average level of revenue collected in the 1980s, given their own calculations about the responsiveness to taxes of different income groups.

The result for the optimal marginal tax rate on the rich (those earning $100k and above) was indeed 73 percent for the Rawlsian and Progressive Liberal outlook. But for a Conservative utilitarian approach, it was just 30 percent. And for a government that did not want to redistribute at all, the optimal rate would be just 3 percent.

What does the 73 percent optimal tax rate result really mean?

Thinking a top 73 percent marginal tax rate optimal, then, is overwhelmingly driven by your philosophical priors. As Greg Mankiw has previously intimated, it’s not clear that ordinary people share the view of the rich held by progressive liberals.

But, importantly, it is also a bait and switch for Krugman and Smith to use this result as implicit support for 73 percent marginal income tax rates being added to today’s tax code as proposed by Ocasio-Cortez.

Jonathan Gruber and Emmanuel Saez’s paper used data from the 1980s tax reform to estimate the responsiveness of different broad income groups to changes in tax rates. These calculated elasticity figures (which showed the rich more responsive than others to changes in tax rates) were then plugged into the social welfare functions described above to estimate what optimal tax rates should be according to a government’s redistributive objective.

But the results represent the optimal marginal tax rate if we had just a single tax on all income to replace existing taxes. This is very different from adding a new top rate of 73 percent to the federal income tax, as Ocasio-Cortez appeared to endorse. The 73 percent result assumes that in the new tax system “the social planner is free to reshape the tax system and remove all the deductions and exemptions embodied in the current law.” This would make it more difficult for people to tax plan or avoid high rates by changing the timing of charitable donations and realized capital gains, for example.

Helpfully, Gruber and Saez set out what the optimal total tax rate would be if all existing deductions and exemptions were assumed sacrosanct because they were politically difficult to abolish. In that case, the revenue maximizing, optimal marginal total tax rate (even under a progressive worldview) would be just 49 percent. This is only slightly above the 45 percent combined top marginal rate they observed the US tax system actually delivered at that time.

In fact, Saez and Gruber’s calculations, finding that the rich (and particularly high-income itemizers) are much more responsive to tax changes than the middle-class or the poor, imply that marginal tax rates on the highest income groups should be lower than those faced further down the income scale. The main policy implication of the Saez-Gruber work is that tax rates on all groups earning gross income below $100,000 should be jacked up to fund more redistribution to the poorest, if you’re a good progressive. Good luck to Miss Ocasio-Cortez making that argument!

What about the more recent paper by Peter Diamond and Emmanuel Saez cited by Krugman? Here, the 73 percent optimal marginal tax figure for top earners (those earning over $300,000) comes as part of a recommended package where marginal tax rates rise with income and peak for highest earners.

This result is again driven by a progressive social welfare function, but the optimal rising marginal rates result comes from the economists assuming a much weaker responsiveness of taxable income to changes in the tax rate than Saez’s earlier work. AEI economists have previously shown that the assumption used by Diamond and Saez of an elasticity of just 0.25 is far too low relative to other literature. But Diamond and Saez wave this concern away by saying that, ideally, governments can reform the tax code to minimize tax avoidance.

Making this heroic assumption, again, makes a big difference to the results. Diamond and Saez acknowledge that if they took the current tax system as given, with all its deductions and exemptions and assuming that state, local and payroll taxes were fixed, then the revenue-maximizing total marginal tax rate would be 54 percent (were top taxpayers as responsive as Saez previously believed).
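The sensitivity to the elasticity assumption is easy to see from the standard revenue-maximizing formula used in this literature, tau* = 1/(1 + a·e), where a is the Pareto parameter of the top income distribution (roughly 1.5 for the U.S.) and e the elasticity of taxable income with respect to the net-of-tax rate. A quick sketch in Python, using the parameter values cited in this debate:

```python
def revenue_maximizing_rate(e, a=1.5):
    """Diamond-Saez revenue-maximizing top rate: tau* = 1 / (1 + a * e).

    e -- elasticity of taxable income w.r.t. the net-of-tax rate
    a -- Pareto parameter of the top income distribution (~1.5 for the U.S.)
    """
    return 1.0 / (1.0 + a * e)

# Diamond-Saez's preferred low elasticity yields the famous 73 percent...
low_e = revenue_maximizing_rate(0.25)   # ~0.73
# ...while the higher elasticity from Saez's earlier work, appropriate
# when deductions and avoidance opportunities remain, gives far less.
high_e = revenue_maximizing_rate(0.57)  # ~0.54
```

The entire gap between the headline 73 percent and the 54 percent figure is driven by that one parameter.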

This would mean something like a 48 percent top federal marginal income tax rate – certainly higher than the 37 percent top income tax rate we see for 2019, but way, way lower than the idea of tacking on a 73 percent rate for earners of $10 million plus proposed by Ocasio-Cortez.

It’s true that some other work – particularly that of Piketty, Saez and Stantcheva – has similarly recommended top marginal tax rates of between 71 percent and 83 percent. But their results use elasticities of tax responsiveness for the whole population, not just higher earners. Their “optimal” results depend on the U.S. government essentially eliminating the ability to tax plan through fundamental reform, including simultaneous huge hikes to capital gains and corporate taxes. And they postulate that top pay for the very rich essentially just arises through socially-wasteful rent-seeking, meaning it doesn’t matter if the activity is discouraged.

Strangely, none of Krugman, Yglesias or Smith highlights that these economists’ calculated optimal rates assume that fundamental tax reform would eliminate almost all deductions and exemptions. This is one of the reasons why using Diamond-Saez’s work to back up Ocasio-Cortez’s idea while also comparing her proposed rate to tax rates in the 1950s is so misleading (deductions were numerous back then, making the gap between statutory and effective rates huge).

The U.K.: a case study

As an aside on the elasticity point, the U.K. has in recent years undertaken an experiment on first hiking its top rate of income tax from 40 percent to 50 percent, and then lowering it from 50 percent to 45 percent. The government’s rationale for the latter move, in the face of strong pressure, was that the elasticity of taxable income to the net-of-tax rate was somewhere between 0.4 and 0.7. This is substantially higher than Diamond and Saez’s 0.25. As such, the U.K. government believed that cutting the rate would barely lose revenue, but would be good for the rich themselves, and for broader economic health.

What happened? As I wrote in January 2014:

Cutting the 50p rate to 45p, as implemented by George Osborne, was only estimated to reduce the exchequer revenues by around £100 million after behavioral effects, including steps to avoid the tax, had been considered. Early indications after the tax was implemented suggested that the behavioral effects might be more significant still. In pure revenue terms, HMRC figures show that in 2011/12 and 2012/13 the amount collected from top income taxpayers was £41.3 billion and £41.6 billion respectively under the 50p rate before jumping to £49.4 billion in 2013/14, when the top rate was cut to 45p.

Of course, much of this may have been due to forestalling of income and other activities based on knowledge of the planned rate changes – but at the very least the ease with which those top rate taxpayers were able to rearrange their tax affairs should put significant doubt in the minds of those who believe that a permanent rate would lead to significant extra revenues.

What are the rich good for?

Perhaps the biggest problem with the analysis of Krugman and others, though, is that it views the responses of the rich to tax rates in a very static sense. Results are largely driven by how useful we consider current, existing income to different groups. Little thought is put into the long-term incentives to earn income in the first place. Yet tax rates could, on the margin, affect people’s decisions to invest in human capital or undertake the development of new ideas.

After all, income later in life is one “reward” or payoff for hard work or taking risks through entrepreneurial activity. It stands to reason that hiking top tax rates reduces the financial payoff to such activity and so may deter it. We know that superstar inventors are very responsive to tax rates in terms of their location decisions, as are star scientists. Diamond and Saez themselves acknowledge too that the long-term elasticity of income to net-of-tax rates could well be higher than they envisage because of deterring human capital accumulation.

Yet in Paul Krugman’s column, he implies that the usefulness of the rich to the poor is purely the tax revenue the former provide to be redistributed through government. As John Cochrane notes, this completely ignores the question of how people get rich in a market economy: by providing goods and services people want and need, and hence generating consumer surplus. If on the margin high tax rates deter a potential entrepreneur from deciding to set up the next Amazon, the loss to social welfare would be huge.

Krugman and Piketty seek to diminish this potential effect by looking at broad economy-wide growth rates historically under different tax rate regimes. Growth was good in the 1950s, they say, so high tax rates are evidently not that damaging. Sure, taxes are not the be-all and end-all. But, as noted by Magness, there was a huge difference between statutory tax rates and effective tax rates in the 1950s. Again, one cannot on the one hand claim that the 1950s shows high tax rates were fine for growth while also appealing to Diamond-Saez’s work which recommends eliminating the deductions which existed in the 50s.

Besides, there is another body of work – not least papers by Karel Mertens – that finds top marginal income tax rates *do* matter for GDP growth. And as Charles Jones has noted, if we accept new ideas drive economic growth and acknowledge after-tax income is a financial reward for innovation, then the optimal tax rate would be much, much lower than Saez suggests (Jones estimates 28 percent), precisely because we all benefit from the better products and higher GDP that result.   


As I hope this piece has demonstrated, then, the results of Diamond-Saez’s work:

a)   Are dependent on a progressive worldview that is seemingly rejected through the revealed preference of voters

b)   Are predicated on a wholesale tax reform including the elimination of deductions, exemptions and opportunities for avoidance (unlike when the U.S. previously had high marginal rates)

c)   Are dependent on the assumption of less responsiveness of high-income individuals to tax rates than found in most studies

d)   Ignore the potential impact that high tax rates might have on future human capital accumulation or entrepreneurial activity

Krugman, Yglesias and Smith could use the Diamond-Saez work to say “there’s a progressive case for major tax reform, including high tax rates across the distribution, eliminating all deductions and very high rates on top earners.” Alternatively, they could say “there’s a progressive case for modestly higher top tax rates within the current code.” But they cannot simultaneously claim that Ocasio-Cortez’s ideas merely echo the 1950s *and* reflect the work of Diamond-Saez.

Perhaps more importantly, they cannot claim those of us with different philosophical views, and hence different preferences for redistribution, are somehow ignorant of economics.

Crime along the border and national security will be major themes in President Trump’s upcoming address, where he will likely make the case for declaring a national emergency to build his wall.  Shocking images and anecdotes of crime along the border fuel this narrative, but rarely are facts deployed to make the case.  We’ve addressed the terrorism and crime arguments frequently, but have only rarely touched on border crime.  Border counties have far less crime per capita than American counties that are not along the border. 

If the entire United States in 2017 had crime rates identical to those in counties along the U.S.-Mexico border, there would have been 5,720 fewer homicides, 159,036 fewer property crimes, and 99,205 fewer violent crimes across the entire country.  If the entire United States had crime rates as low as those along the border in 2017, then the number of homicides would have been 33.8 percent lower, property crimes would have been 2.1 percent lower, and violent crimes would have dropped 8 percent. 


Table 1

Crime Rates by Counties in 2017, per 100,000

                        Violent Crime Rate   Property Crime Rate   Homicide Rate
  Border counties              347.8               2,207.1               3.4
  Non-border counties          378.6               2,256.4               5.2
  United States                377.8               2,255.2               5.1

Source: FBI Uniform Crime Reports 2017.

The numbers in Table 1 come from the FBI’s Uniform Crime Reports for 2017 that we obtained via a special request from the FBI.  The crime rates are organized by county, with all crimes reported to sub-county agencies added up using county codes from the FBI’s 2012 Law Enforcement Agency Identifiers Crosswalk.  The population figures also come from the FBI and are based on the intercensal reports obtained by the FBI from the Census Bureau.  The 23 border counties are lumped together as one and compared to the non-border counties. The numbers for the entire United States are in the last row. 
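Using the rounded rates in Table 1, the percentage reductions quoted earlier can be roughly reproduced (small discrepancies, such as 33.3 versus 33.8 percent for homicides, come from rounding in the published rates):

```python
def pct_reduction(us_rate, border_rate):
    """Percent drop if the whole country had the border-county rate."""
    return 100.0 * (us_rate - border_rate) / us_rate

# Rates per 100,000 from Table 1 (United States vs. border counties).
homicide = pct_reduction(5.1, 3.4)        # ~33 percent lower
violent  = pct_reduction(377.8, 347.8)    # ~8 percent lower
prop     = pct_reduction(2255.2, 2207.1)  # ~2.1 percent lower
```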

Sheriff Ronny Dodson of Brewster County, Texas, said, “A lot of politicians are running on securing the border.  One’s got a six point plan, one’s got a nine point plan. They’re throwing tons of money at this border. I wish they’d just shut up about it.”  Dodson went on to say, “I think they’re [politicians] just throwing money at the border for nothing. I think people on the interior see all these shows about the border where there’s violence.” 

Although Dodson’s comment is just rhetoric, there is a lot more empirical support for his claims than there is for those who claim that there is a border crime crisis. 


On January 7 a paper by Veronika Eyring and 28 coauthors, titled “Taking Climate Model Evaluation to the Next Level” appeared in Nature Climate Change, Nature’s  journal devoted exclusively to this one obviously under-researched subject.

For years, you, dear readers, have been subjected to our railing about the unscientific way in which we forecast this century’s climate: we take 29 groups of models and average them. Anyone, we repeatedly point out, who knows weather forecasting realizes that such an activity is foolhardy. Some models are better than others in certain situations, and others may perform better under different conditions. Consequently, the daily forecast is usually a blend of a subset of available models, or perhaps (as can be the case for winter storms) only one might be relied upon.

Finally the modelling community (as represented by the football team of authors) gets it. The second sentence of the paper’s abstract says “there is now evidence that giving equal weight to each available model projection is suboptimal.”
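To see why equal weighting can mislead, consider a toy sketch of skill-weighted averaging in Python. Every number below is invented purely for illustration; nothing here reproduces Eyring et al.’s actual weighting method:

```python
# Hypothetical warming projections (deg C) from four made-up models,
# and hypothetical historical RMS errors for each against observations.
forecasts = [1.8, 2.4, 3.1, 4.5]
errors    = [0.2, 0.3, 0.9, 1.6]

# Equal weighting: the naive multi-model mean.
equal_mean = sum(forecasts) / len(forecasts)

# Skill weighting: inverse-error weights, so models that have tracked
# observations well count for more in the ensemble.
weights = [1.0 / e for e in errors]
weighted_mean = sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)
```

In this made-up case the poorly performing, high-sensitivity models drag the equal-weight mean above the skill-weighted one, which is exactly the concern about the current ensemble.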

A map of sea-surface temperature errors calculated when all the models are averaged up shows the problem writ large:

Annual sea-surface temperature error (modelled minus observed) averaged over the current family of climate models. From Eyring et al.

First, the integrated “redness” of the map appears to be a bit larger than the integrated “blueness,” which would be consistent with the oft-repeated (here) observation that the models are predicting more warming than is being observed. But, more important, the biggest errors are over some of the most climatically critical places on earth.

Start with the Southern Ocean. The models have almost the entire circumpolar sea too warm, much of it off by more than 1.5°C. Down around 60°S (the bottom of the map) water temperatures get down to near 0°C (because of its salinity, sea water freezes at around -2.0°C). Making errors in this range means making errors in ice formation. Further, all the moisture that falls upon Antarctica originates in this ocean, and simulating an ocean 1.5° too warm is going to inject an enormous amount of nonexistent moisture into the atmosphere, which will be precipitated over the continent as nonexistent snow.

The problem is that, down there, the models are making errors over massive zones of whiteness, which by their nature absorb very little solar radiation. Where it’s not white, the surface warms up more quickly.

(To appreciate that, sit outside on a sunny but calm winter’s day and change your khakis from light to dark; the latter is much warmer.)

There are two other error fields that merit special attention: the hot blobs off the coasts of western South America and Africa. These are regions where relatively cool water upwells to the surface, driven in large part by the trade winds that blow into the earth’s thermal equator. For reasons not completely known, these winds sometimes slow or even reverse; upwelling is suppressed, and the warm anomaly known as El Niño emerges (there is a similar, but much more muted, version that sometimes appears off Africa).

There’s a current theory that El Niños are one mechanism that contributes to atmospheric warming, which holds that the temperature tends to jump in steps that occur after each big one. It’s not hard to see that systematically creating these conditions more persistently than they occur could put more nonexistent warming into the forecast.

Finally, to beat ever more manfully on the dead horse—averaging up all the models and making a forecast—we again note that of all the models, one, the Russian INM-CM4, has actually tracked the observed climate quite well. It is by far the best of the lot. Eyring et al. also examined the models’ independence from each other—a measure of which models are and are not making the same systematic errors. And amongst the most independent, not surprisingly, is INM-CM4.

(Its update, INM-CM5, is slowly being leaked into the literature, but we don’t have the all-important climate sensitivity figures in print yet.)

The Eyring et al. study is a step forward. It brings climate model application into the 20th century.

The deaths of two children in custody in recent weeks have led to a justifiable focus on the numbers of children who enter Border Patrol custody every year. The Department of Homeland Security told Congress this week that “more children and families are being apprehended between the ports of entry than ever before.” While the large numbers of children are certainly alarming, it is incorrect that it is the largest number ever. President Bush’s administration apprehended more children with far fewer resources.

Figure 1 shows the number of children whom Border Patrol apprehended from 2001 to 2018. In Fiscal Year (FY) 2005, Border Patrol brought into its custody 114,222 people under the age of 18. The number of minors proceeded to nosedive, bottoming out at 23,089 in FY 2011, before rising again to a peak of 107,613 in FY 2014. Under President Trump, the agency arrested 82,769 in FY 2017 and about 85,000 in FY 2018, both significantly below the prior peaks.

Figure 1: Apprehensions of Juveniles By Border Patrol, FY 2001-2018

Sources: CRS, DHS, CBP, Border Patrol (2018 was estimated based on CBP’s number of unaccompanied children for 2018)

Moreover, as Figure 1 also shows, Border Patrol made those apprehensions in the early 2000s with a far smaller force than under Presidents Obama or Trump. Three times under President Bush, the average Border Patrol agent apprehended 10 children per year. Under President Trump, the average Border Patrol agent has brought in about half as many children per year as under President Bush (Figure 2). This is greater than the average for President Obama’s presidency, but still far from the most ever.

Figure 2: Apprehensions of Juveniles Per Border Patrol Agent, FY 2001-2018

Sources: CRS, DHS, CBP, Border Patrol (Bush, FY 2001-08; Obama FY 2009-16; Trump, FY 2017-18)
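The per-agent figures follow from simple division. The agent counts below are approximate CBP staffing figures (about 11,264 agents in FY 2005 and 19,437 in FY 2017), used here only to illustrate the arithmetic:

```python
# Apprehensions of minors divided by approximate Border Patrol agent counts.
bush_2005  = 114_222 / 11_264   # ~10 children per agent at the Bush-era peak
trump_2017 = 82_769 / 19_437    # ~4 children per agent, roughly half that
```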

What has changed in recent years is the share of apprehensions who were juveniles. As Figure 3 shows, the number of children largely moved in tandem with the number of adults prior to FY 2014, but after FY 2014, the number of children returned to the early 2000s norm, while the number of adults remained low. From FY 2001 to 2013, 9 percent of apprehensions were minors; since FY 2014, they have been 23 percent of apprehensions. 

Figure 3: Apprehensions – Juvenile and Adult – and Juvenile Share of Apprehensions, FY 2001-2018

Sources: CRS, DHS, CBP

The shift to a more child-heavy flow with lower overall numbers coincided with a substantial decline in apprehensions of Mexicans, and an increase in arrivals from non-Mexican countries, 90 percent of whom were from Central America’s Northern Triangle—Guatemala, Honduras, and El Salvador.

Figure 4: All Apprehensions by Border Patrol by Nationality and Juvenile Share, FY 2001-2017

Source: Border Patrol

From the standpoint of the Trump administration, Central American children are much more difficult to deal with for several reasons. If deported, they need to be flown back to Central America, which takes more time and resources than simply putting children back in Mexico. Unaccompanied children from Mexico are afforded fewer protections and can be deported without a process to protect them, while Central American unaccompanied minors cannot be: they are subject to procedures intended to make sure that, if they were trafficked, fled persecution, or were abandoned, they receive protection in America.

From a security perspective, however, the shift to a majority child-family flow is a blessing because children and families overwhelmingly turn themselves in to Border Patrol rather than attempt to sneak into the country. According to a DHS-commissioned report last year, nearly all families and children turn themselves in rather than seek to evade detection. This allows the U.S. government the opportunity to check them for diseases and conduct background checks.  

In the period between 2005 and 2014, the government constructed hundreds of miles of fences near urban areas, which has led to an influx of migrants in remote areas of the border. This increased the risks to child migrants trying to cross and turn themselves in for asylum, and it contributed to the deaths of the two children who ended up in Border Patrol custody. The Trump administration has aggravated this problem by institutionalizing a practice of turning away asylum seekers at ports of entry and firing tear gas at people who try to scale the fence to let themselves be caught. 

The point is not that there is not a real humanitarian crisis at the border, but it is one primarily driven by government policies, not the unprecedented number of children.

Social scientist Bent Flyvbjerg described the selection of government-funded infrastructure projects as “survival of the unfittest” because proponents of those projects systematically exaggerate the benefits and underestimate the costs.  President Trump’s proposed border wall with Mexico provides a striking example of this: A wall along the border with Mexico will likely cost about $59.8 billion to construct.

The Office of Management and Budget (OMB) recently sent a letter to Congress where it argued that $5.7 billion would pay for approximately 234 miles of a new physical steel barrier along the border.  That new estimate comes to about $24.4 million per mile.  This new OMB estimate is 41 percent more costly than the approximately $17.3 million per mile construction cost that the Department of Homeland Security (DHS) estimated just a few years ago, 2.7 times as expensive as Mitch McConnell and Paul Ryan estimated, and 5 times as expensive as Trump’s lowest estimate.

Even worse, the $24.4 million per mile estimate does not include the large cost overruns for government construction projects.  Applying a conservative 50 percent cost overrun estimate to building the border fence brings the total price tag to approximately $36.6 million per mile.  Building a steel fence along the remaining 1,637 miles of Mexican border not covered by pedestrian fencing would cost approximately $59.8 billion, excluding any maintenance costs. 
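The arithmetic behind the $59.8 billion figure can be laid out explicitly:

```python
omb_request   = 5.7e9    # OMB's requested appropriation
new_miles     = 234      # miles of new barrier it would cover
per_mile      = omb_request / new_miles   # ~$24.4 million per mile

overrun       = 1.5      # conservative 50 percent cost-overrun factor
per_mile_real = per_mile * overrun        # ~$36.5 million per mile

remaining     = 1_637    # border miles without pedestrian fencing
total_cost    = per_mile_real * remaining # ~$59.8 billion, before maintenance
```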

There are a few caveats about the above estimate.

First, the 50 percent cost overrun estimate is conservative.  A small sample of large construction projects selected by my colleague Chris Edwards shows that cost overruns boosted total project costs by an average of 3.3-fold.  The cost of the border fence is thus very likely to be more than double what I estimate above.

Second, this estimate is for the steel bollard barrier and not a concrete wall.  In other words, the currently proposed steel border fence is far cheaper than the concrete and steel wall originally proposed by President Trump.  Making it out of concrete could more than double the price.

Third, our cost estimate does not include maintenance.  The low-ball $864,353 annual per-mile cost of maintaining the current border fence is likely far below what maintaining the larger barrier Trump has proposed would cost.

Fourth, the OMB’s per-mile cost estimate is more in line with previous Trump administration requests than with estimates made by organizations that are ideologically committed to building a wall regardless of the cost to taxpayers.

Since 2017, administration officials at the OMB have been relatively consistent in estimating that the government’s cost of building a border wall is around $24 million per mile.  However, the incentives for and history of government agencies systematically underestimating the costs of construction projects make this the lowest possible estimate.  If the wall is built for about $24 million per mile, then it would be the first time in a very long while that a large government construction project came in at or below its estimated cost.

The estimated cost of the border wall keeps rising, the wall keeps becoming less of a wall, and the administration keeps promising that it will cover less and less of the border.  At this rate, President Trump might end his administration with less fencing than he began with.

Rep. Alexandria Ocasio-Cortez is making headlines by calling for raising the top individual income tax rate to 70 percent to fund a Green New Deal. Sympathetic commentators are saying that such a high rate on the wealthy is no big deal because the top tax rate used to be 70 percent and above. Noah Smith at Bloomberg says the congresswoman’s plan would be “a return to the 20th century norm.”

The problem is that globalization has dramatically changed the economy over recent decades. High tax rates were not a good idea back then, but they would be disastrous now.

Before the 1980s, capital controls under fixed currency exchange rate regimes kept money bottled up within countries, keeping taxpayers in national economic prisons. That regime broke down, and today trillions of dollars flow over borders under flexible exchange rate systems, while industries and entrepreneurs have become highly mobile. Tax bases are far too movable these days for governments to sustain yesteryear’s excessive tax rates, as I discuss in Global Tax Revolution.

Every industrial country has figured that out, and their policy decisions refute the soak-the-rich tax dreaming of Rep. Ocasio-Cortez. Top income rates have plunged around the world since 1980 under governments of both the political left and right.

The chart shows the average top federal-state income tax rate for 26 core OECD countries that have good data back to 1980. The average top rate among these high-income nations fell from 68 percent in 1980 to 47 percent today. The average rate for all 35 OECD countries today is 43 percent. The top U.S. federal-state tax rate, at 46 percent in 2017, was above the OECD average. The recent GOP tax cut dropped the top federal rate a few points, but raised the effective state rate by capping deductibility. On individual income taxes, America is not a low-rate country.


The 26 countries are Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Greece, Iceland, Ireland, Italy, Japan, Korea, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Portugal, Spain, Sweden, Switzerland, Turkey, United Kingdom, and United States. Data for 2000-2017 from the OECD. Data for 1980-1995 from Global Tax Revolution.

Upon taking office in 2017, President Trump accused trade partners of underhandedness, demonized U.S. companies with foreign supply chains, and perpetuated the false narrative that trade is a zero-sum game requiring an “America First” agenda. He withdrew the United States from the Trans-Pacific Partnership, threatened to pull out of the North American Free Trade Agreement and the Korea-U.S. Free Trade Agreement, and initiated a war of attrition against the World Trade Organization by refusing to endorse any new Appellate Body judges until his unspecified demands were met. Yet, those were still the halcyon days of trade.

In 2018, straining all credulity, the Trump administration dusted off a seldom-used law (Section 232 of the Trade Expansion Act of 1962) to impose tariffs on imported steel and aluminum from most countries on the basis that national security is threatened by U.S. dependence on foreign sources of these widely available commodities.

Later in the year, invoking another controversial U.S. trade statute (Section 301 of the Trade Act of 1974), which is widely considered an act of vigilantism under WTO rules, the administration announced tariffs on $50 billion worth of imports from China for alleged unfair practices, such as forced technology transfer and intellectual property theft. When Beijing retaliated with tariffs on U.S. agricultural products, Trump announced that he would hit another $200 billion of imports from China with tariffs. Once again, Beijing responded by broadening its list of targeted U.S. products and the president subsequently threatened to apply U.S. levies to all imports from China (over $500 billion in 2017).

To be fair, U.S. trade policy in 2018 wasn’t only rancor, hostage-taking, and trade war. Juxtaposed against this contentious, grievance-based, enforcement-oriented U.S. posture was some “trade liberalization.” Instead of withdrawing from NAFTA and KORUS, the Trump administration renegotiated both. Both included some liberalizing provisions, but also some lamentable, protectionist retrogression, which wasn’t totally unexpected given that, in both cases, U.S. insistence on renegotiation was motivated less by an interest in updating, expanding, and modernizing the agreements than by a desire to revise provisions that would—at least nominally—tilt the playing field in favor of U.S. workers and certain manufacturers.

As 2019 begins, five major issues cast long shadows over the trade policy landscape. First is whether and how the U.S.-China trade war will be contained, scaled back, and ultimately ended. Second is the looming possibility that the Trump administration will invoke national security to impose sweeping new tariffs on automobile imports. Third is the question of whether and when Congress will pass the implementing legislation for the new NAFTA (the United States-Mexico-Canada Agreement or USMCA). Fourth is whether, when, and how the crisis at the WTO will be resolved. And fifth concerns whether the Trump administration has the wherewithal to make good on its stated intentions of negotiating new trade agreements with Japan, the European Union, the Philippines, possibly the United Kingdom, and other countries. With much of the rest of the world moving forward with a slew of new trade agreements and the United States stuck on revamping old deals, the real and opportunity costs to U.S. businesses, consumers, and taxpayers continue to mount.

Throughout the year ahead, these major issues will be the predominant focus of the research and writing of the Cato Institute’s Herbert A. Stiefel Center for Trade Policy Studies.

Over the last two years, Cato has published three Immigration Research and Policy Briefs on illegal immigrant criminality.  In each one, we found that illegal immigrants have lower criminal conviction rates in the state of Texas and lower nationwide incarceration rates than native-born Americans.  Although nobody has criticized our methods or data, other criticisms have arisen that we address here.

The best recent criticism is that illegal immigrant conviction rates are low because illegal immigrants are deported after serving their sentences, which reduces their recidivism relative to native-born Americans, who cannot be deported after release from prison.  Thus, illegal immigrant incarceration or conviction rates are lower than those of native-born Americans partly because it is more difficult for them to recidivate: they would have to enter the country illegally again to do so.  This has been a difficult criticism to address, as data limitations are severe, but we attempted to do so after making some assumptions.  We focused on comparing first-time criminal conviction rates.

We estimate that native-born Texans had a first-time criminal conviction rate of 683 per 100,000 natives in 2016.  In the same year, we estimate that illegal immigrants had a first-time criminal conviction rate of 462 per 100,000 illegal immigrants – 32 percent below that of native-born Americans.  Thus, about 36 percent of the gap that we observed in criminal conviction rates between illegal immigrants and native-born Americans can be explained by lower illegal immigrant recidivism that is likely due to their deportation. 

This question could have been easily resolved by comparing the immigration statuses of first-time offenders, but such data do not exist.  Regardless, this is still an important question, even though ours is only a back-of-the-envelope estimate.  You can judge the method for yourself; here is how we did it.

First, we used Arizona state prison data for those admitted to state prison in 2016.  Of the U.S. citizens sent to prison that year, 58 percent had previously been to prison at some point since 1984.  The subpopulation of deportable non-citizens, which includes illegal immigrants but is not limited to them, had a recidivism rate of 47 percent – below that of U.S. citizens, but not by much.

Second, we assumed that U.S. citizens are analogous to native-born Americans.  This isn’t accurate, of course, but native-born Americans are 94 percent of Arizona’s citizen population so it is a reasonable back-of-the-envelope assumption. 

Third, we assumed that the Arizona prison recidivism rates, broken down by these roughly approximated immigration statuses, translate to the Texas criminal conviction rates.  This is our weakest assumption, as not every criminal conviction results in incarceration, and the Texas data on illegal immigrant and native-born convictions are more granular than the Arizona incarceration data.

Fourth, we subtracted the recidivism rates from 100 percent to estimate the first-time offender rate.  Then we multiplied those numbers by the Texas criminal conviction rates by immigration status in 2016. 
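The four steps above boil down to scaling each group’s conviction rate by its first-time share (100 percent minus its recidivism rate). A minimal sketch, using the figures stated in this post; the implied total Texas conviction rates below are back-solved for illustration, not numbers quoted in the brief:

```python
# Sketch of the four-step back-of-the-envelope estimate described above.
az_recidivism = {"native": 0.58, "illegal": 0.47}   # Arizona 2016 prison admissions
first_time_rate = {"native": 683, "illegal": 462}   # per 100,000, Texas 2016 (stated above)

# Step 4 in reverse: first-time rate = total rate * (1 - recidivism rate),
# so the implied total Texas conviction rates are (back-solved, illustrative only):
implied_total = {g: first_time_rate[g] / (1 - az_recidivism[g]) for g in first_time_rate}

gap_first_time = 1 - first_time_rate["illegal"] / first_time_rate["native"]
print(f"First-time conviction gap: {gap_first_time:.0%}")  # ~32 percent below natives
```

The same two-line pattern lets a reader test how sensitive the result is to the Arizona recidivism assumptions.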

Our back-of-the-envelope estimate should not be the final word on this issue, but we cannot do better at this time given the lack of specific data.  Improved criminal justice and immigration data could easily answer these questions if such data are ever created or made available.  Regardless, our estimate confirms the pattern of lower illegal immigrant criminality found elsewhere, though the gap is narrower.

Presidents have the power to set the agenda and drive policy debates, and President Trump has put trade policy front and center. While President Obama moved slowly on trade policy during his first term (he picked up the pace during his second term), and candidate Hillary Clinton called for a “pause” on trade policy during her campaign, Trump has made an activist (and protectionist) trade policy one of his signature issues. Among other things, he has imposed tariffs under a number of trade statutes, accused many other countries of cheating on trade, renegotiated some existing trade agreements, and challenged the functioning of World Trade Organization by blocking the appointment of appeals court judges.

This flurry of trade policy activity has brought a wide-ranging debate over the foundations of trade economics, trade law, and trade politics. Ultimately, this debate might be productive, and it has provided an opportunity to explain these issues to the broader public. In the short-term, however, it has led to a chaotic and economically harmful U.S. approach to trade policy.

With trade policy making headlines, the current group of actual and aspiring Democratic leaders may be forced to make some tough choices on trade. It is not so much whether they are “for it or against it,” but rather, what exactly are they for?

In the short-term, this question is for the House Democrats, who have a great deal of power over Trump’s trade agenda. The administration has negotiated a new NAFTA (called the U.S.-Mexico-Canada Agreement, or USMCA), and Congressional ratification will require support from at least some House Democrats. But do they like the agreement Trump negotiated? And do they want to give Trump a political win? So far, many of them seem skeptical about supporting it in its current form.

But the more interesting question is the longer-term trade agenda of the Democratic party. We just hit 2019, but 2020 Democratic presidential candidates are coming forward already, and with trade policy making daily headlines they will almost certainly be offering their views on trade. Senator Elizabeth Warren has already announced that she would form an “exploratory committee,” and has said a few things about trade. Her statements so far have hints of traditional economic nationalism, but leave room for maneuver, and many questions remain. Here are a few questions I would like to see reporters ask her:

- Does she think the tariffs on Chinese imports are working, and would she keep them in place or remove them? What alternative strategies would she consider to address China’s trade practices?

- Would she maintain the steel and aluminum tariffs imposed for what are said to be “national security” purposes?

- Would she support Congressional ratification of the USMCA (subject to certain changes), or would she wait and try to renegotiate NAFTA herself if elected? How exactly would her vision of trade agreements differ from the existing model?

- With which countries, if any, would she negotiate new trade agreements?

- Would she end the Trump administration’s tactic of blocking appointments to the World Trade Organization’s appeals court?

Prior to 2016, it was difficult to imagine so much focus on the details of trade policy. Many politicians were content to proclaim vaguely that they were for “free and fair trade” and leave it at that. But now we have a number of specific actions on the table, and it is worth asking presidential candidates what they think of each one.

Senator Bernie Sanders has been an outspoken trade critic for years, but it is still worth pinning him down on all this. There are nuances to being a trade critic, and we do not know what he thinks of all the issues noted above.

Less is known about other potential Democratic candidates on these issues, however. Senator Kamala Harris, Senator Cory Booker, Senator Amy Klobuchar, Beto O’Rourke and others should all be asked to weigh in on the questions set out above. In the past, when trade policy was not as prominent, candidates could get away with saying very little, and sticking to vague platitudes, but now there is an opportunity to ask them very specific policy questions about actual U.S. tariffs and other trade actions that will be hard for them to evade.

The Democrats have a big choice to make here on how they want to approach trade. Some of Senator Warren’s comments make it sound a bit like she favors a less chaotic version of Trump’s protectionist, nationalist trade policy, with all the same favors for influential companies, just done more efficiently. But she has left things open enough that she could go in a more pro-trade direction if she wanted. Polls suggest that the Democratic base is more open to trade than ever before, which means that an anti-tariff/pro-trade position might be more acceptable in a Democratic primary than in the past (and could also be an advantage in the general election against Trump).

People hoping that Democratic politicians will become enthusiastic proponents of trade liberalization may end up being disappointed. But there is a chance that they will move away from the confrontational, unilateralist, protectionist approach of the Trump administration. To find out how likely that is, let’s start pressing them on their views right now.

On March 6, 2017, President Trump issued Executive Order 13780.  The order was mostly concerned with reducing the number of immigrants and travelers from certain countries that his administration thought could pose a terror risk.  One portion of that Executive Order called for the Department of Justice (DOJ) and the Department of Homeland Security (DHS) to investigate the number of terrorist threats and, little noticed at the time, “information regarding the number and types of acts of gender-based violence against women, including so-called ‘honor killings,’ in the United States by foreign nationals.” 

The DOJ-DHS released their report in January 2018 and almost everybody focused on the terrorism portion – including myself and my colleagues here at Cato.  However, thanks to a brilliant lawsuit that uncovered how shoddy the report was, it is now clear that it made an absolutely false statement about the number of foreign-born people arrested for sex offenses.  The DOJ-DHS report says:

Regarding sex offenses, the Government Accountability Office (GAO) in 2011 produced an estimate regarding the population of criminal aliens incarcerated in state prisons and local jails from fiscal years 2003 through 2009. In that report, GAO estimated that over that period, aliens were convicted for 69,929 sex offenses—which, although not explicitly stated in the report, in most instances constitutes gender-based violence against women.

The DOJ-DHS authors of the report made two errors that others, many of whom I’ve criticized, have also made in interpreting that exact GAO report:

First, 69,929 is the number of arrests for sex offenses where the arrestees were criminal aliens, not the number of sex offenses for which criminal aliens were convicted, as the DOJ-DHS claimed.

Second, those arrests occurred from 1955 through 2010, not from 2003 through 2009.

At least the DOJ-DHS have admitted they misinterpreted the GAO report, which further confirms that Peter Kirsanow made numerous errors when he was given three full minutes to monologue about it last August on Tucker Carlson’s show.  Kirsanow wouldn’t appear with me on the show after that segment to debate me.  I’ll let you guess why.

The biggest problem here isn’t that the DOJ-DHS authors of the report didn’t read the fine print, although that is worrying, or that they likely let political bias cloud their findings.  The biggest problem is that the GAO report misleads more than it illuminates, providing a legitimate-looking citation for erroneous claims that are difficult to check.  The GAO is a more professional and less political agency than the DOJ or DHS, at least when it comes to conducting and publishing empirical research.  The GAO should retract the report and the later 2018 version, both of which have been so misinterpreted; rewrite them so that they are crystal clear; re-release them with a list of corrections from the previous editions; and include an FAQ section with answers.  If current government bureaucrats at the DOJ and DHS, as well as former bureaucrats like Peter Kirsanow, have trouble understanding the GAO report, then clearly the GAO needs to fix the problem and prevent it from recurring.  Otherwise, what is the point of the GAO?


Scott Sumner had a wonderful post on Econlog last week, responding to an Atlantic article lamenting that behavioral economics does not play a prominent role in introductory economics courses.

Scott’s key point was that many insights in behavioral economics are intuitive, while important economic concepts are not. In a world in which there is so much misunderstanding about trade, migration, the price mechanism and much else, the real value added of introductory economics comes in giving students the toolkit to “think like an economist.” Hence, it makes sense to spend more time teaching standard micro over human heuristics and biases.

I couldn’t agree more. But there is perhaps another point Scott could have made.

Though behavioral economics is interesting and can have beneficial applications in our own lives and in policy areas where clear defaults must be set, leaning so heavily on human irrationality in introductory courses risks turning behavioral economics into a kind of “Market Failure version 2.”

What I mean is that, absent a thorough treatment of trade-offs, unintended consequences, and case studies, throwing out basic economics so early in favor of declaring “humans are irrational” risks weighting policy debates even more heavily toward unthinking intervention to “correct” for our supposed biases.

As with market failure, the undercurrent of many behavioral economic contributions – the throwaway implication – is that government intervention is needed to fix the biases of behavioral consumers. Intervention is often implicitly assumed to be Pareto-improving over non-intervention (helping behavioral consumers without harming others). But there are at least six reasons why this may not be the case (even if we see what we consider evidence of behavioralism):

1) Behavioral consumers (BCs) might themselves respond “behaviorally” to interventions or nudges designed to help them, potentially leaving them worse off (e.g. drug prohibition, payday loan restrictions, some smart disclosures on credit costs).

2) Seemingly behavioral decision-makers may, in fact, be acting rationally, especially given the costs associated with accessing information or switching (e.g. in credit card markets and in relation to fuel economy).

3) Interventions to correct for irrational decision-making by BCs may impose substantial costs on others, maybe even failing a reasonable overall welfare evaluation (e.g. auto-enrollment often comes with lower default savings rates, and caps on payday loan interest rates can reduce services for non-BCs too).

4) Developing policies to correct the biases of BCs may distract attention from policy approaches that are welfare-improving for all groups (e.g. opt-out organ donation vs. organ markets, environmental behavioral approaches vs. more direct tax incentives).

5) Interventions can increase the complexity of economic decision-making or worsen inaccurate perceptions of risk (e.g. disclosure laws, overdraft protection).

6) Interventions can undermine the “ecological rationality” of the market, dampening incentives to learn from mistakes or for entrepreneurs to deliver new protections for BCs.

Yes, behavioral economics is an important body of economic knowledge. But putting irrationality front and center in introductory economics courses would both take time away from teaching more difficult economic concepts and worsen economic policy debates, absent teaching the difficulties of correcting perceived biases through interventions or nudges.