Feed aggregator

Tomorrow, September 17, is Constitution Day in America, celebrating the day 229 years ago when the Framers of the Constitution finished their work in Philadelphia over that long hot summer and then sent the document they’d just completed out to the states for ratification. Reflecting a vision of liberty through limited government that the Founders had first set forth 11 years earlier in the Declaration of Independence, “We the People,” through that compact, sought “to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity.”

The political and legal history that has followed has been far from perfect, of course. Has any nation’s been otherwise? But in the great sweep of human affairs, we’ve done pretty well living under this, the world’s oldest written constitution still in effect. Much that has happened to and under the Constitution during the ensuing years has been good, such as the ratification of the Civil War Amendments, which incorporated at last the grand principles of the Declaration of Independence; much has not been so good, such as the major reinterpretations of the document, without amendment, that took place during the New Deal, giving us the modern executive state that undermines the Founders’ idea of liberty through limited government.

Here at the Cato Institute we’ve long celebrated this day, as we did yesterday with our 15th annual Constitution Day symposium. Those who missed the event, which concluded with Arizona Supreme Court Justice Clint Bolick’s B. Kenneth Simon Lecture in Constitutional Thought, will be able to see it in a couple of days at Cato’s events archives. At the symposium, as we do every year, we released our 15th annual Cato Supreme Court Review, which is already online, along with the volumes from previous years. As the Founders and Framers understood, liberty under constitutionally limited government will endure only as long as it remains alive in the hearts and minds of the people. That’s why we mark this day each year.

Michael Lind, a co-founder of left-leaning New America, is urging the federal government to create universal mobility accounts that would give everyone an income tax credit, or, if they owe no taxes, a direct subsidy to cover the costs of driving. He argues that social mobility depends on personal mobility, and personal mobility depends on access to a car, so everyone should have one.

This is an interesting departure from the usual progressive argument that cars are evil and we should help the poor by spending more on transit. Lind responds to this view by saying that transit and transit-oriented developments “can help only at the margins.” He applauds programs that help low-income people acquire inexpensive used automobiles but, again, thinks they are not enough.

Lind is virtually arguing that automobile ownership is a human right that should be denied to no one because of poverty. While I agree that auto ownership can do a lot more to help people out of poverty than more transit subsidies, claiming that cars are a human right goes a little too far.

Lind may not realize that there just aren’t that many poor people who don’t have cars anymore. According to the 2015 American Community Survey, 91.1 percent of American households have at least one vehicle and 95.5 percent of American workers live in a household with at least one vehicle. A lot of the households with no vehicles could afford to own one but choose not to, so the number of poor people without vehicles is very small. Of course, some two-worker families may have only one vehicle, but older used cars are affordable enough that the cost hardly seems a barrier to anyone with a job.

One implication is that adding a mobility credit to an already complicated tax system means that the vast majority of the credit will go to high-income people and people who already have cars. If only 2 to 3 percent (or even more) of people are both in poverty and lack access to cars, then why should everyone else also get a tax credit? Lind argues against means testing, saying that middle- and upper-income voters resent such programs, but non-means-tested programs always end up favoring the rich, if only because many poor people won’t file tax returns and so won’t know to ask for their credit or subsidy.

Lind makes no attempt to estimate how much the program will cost, but to make a difference to low-income people I suspect it would have to provide a credit of at least $500 per person per year. With 150 million workers (there were 146 million in 2015, but this program is supposed to increase employment), that’s $75 billion a year, or about 5 percent of personal income tax revenues.
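The back-of-envelope cost estimate above can be checked in a few lines. This is only a sketch of the arithmetic in the text; the roughly $1.5 trillion figure for personal income tax receipts is an assumption implied by the “about 5 percent” comparison, not a number given in the original.

```python
# Rough cost check for the hypothetical mobility credit discussed above.
credit_per_worker = 500          # assumed minimum credit, dollars per year
workers = 150_000_000            # projected number of workers (text's figure)

annual_cost = credit_per_worker * workers
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")  # prints "$75 billion"

# Assumed personal income tax receipts (~$1.5 trillion), implied by the
# "about 5 percent" comparison in the text.
income_tax_revenue = 1.5e12
print(f"Share of income tax revenue: {annual_cost / income_tax_revenue:.0%}")
```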

Which raises the question of how to pay for the program. Giving people tax breaks or direct subsidies means either borrowing more money or raising other taxes. Whether the money is borrowed or taxed, someone is eventually going to have to pay for it, and that means middle- to high-income people. This pretty much destroys the argument that they will support this policy so long as they get some of the benefits.

More important questions are whether this program would even make a real difference and whether other programs could do better at less cost. Lind offers no evidence on the former, and I can think of a lot of ways to help low-income workers that would both cost less and not imply that mobility is some kind of a human right.

Most important would be to remove the barriers to mobility that have been erected by other left-leaning groups, namely the smart-growth crowd and other urban planners. Too much city planning for the past several decades has been based on increasing traffic congestion to try to force people out of their cars. Yet congestion is most likely to harm low-income people, because their jobs are less likely to allow flex time, working at home, or other escapes from traffic available to knowledge workers. Low-income people also have less choice about home locations, especially in regions that have made housing expensive, meaning their commutes can be longer than those of higher-income workers.

Relieving traffic congestion in every major urban area in the country would cost a lot less than $75 billion a year. This doesn’t necessarily mean building new roads. Instead, start by removing the so-called traffic calming devices that actually take more lives by delaying emergency service vehicles than they save by reducing auto-pedestrian accidents. Then add dynamic traffic signal controls that have been proven to cost-effectively save people’s time, energy, and clean air. Converting HOV lanes to HOT lanes greatly reduces congestion and actually produces revenue. At relatively little cost, steps like these would remove many of the barriers to automobility for low-income families.

I’m not going to do more than mention the effects of increased auto ownership on urban sprawl, energy consumption, pollution, and greenhouse gas emissions other than to say that low-income people tend to drive cars that are least safe, least energy efficient, and most polluting. I think these problems are exaggerated by the left, but it is ironic that Lind wants to expand auto ownership when most other progressives want to restrict automobile use.

Ultimately, however, I have to reject any implication that mobility is some kind of a human right. Human rights include the rights not to be bothered by government for things that you want to do or believe that don’t harm other people–things like free speech, freedom of the press, and freedom of religion. Human rights do not include the right to expect other people to pay for things that you would like but can’t or don’t want to work for.

Rather than invent new human rights, people who are concerned about poverty should first ask what kind of barriers government creates that prevent social mobility. Those barriers should all be removed before any thought is given to taxing some people in order to give money or resources to others.

A major Wall Street Journal article claims, “A group of economists that includes Messrs. Hanson and Autor estimates that Chinese competition was responsible for 2.4 million jobs lost in the U.S. between 1999 and 2011.”  In a recent interview with the Minneapolis Fed, however, David Autor said, “That 2 million number is something of an upper bound, as we stress.” The central estimate was a 10% job loss, which works out to 1.2 million jobs in 2011, rather than 2.4 million.  Since 2011, however, the U.S. added 600,000 manufacturing jobs – while imports from China rose by 21% – so both the job-loss estimate and its alleged link to trade (rather than recession) need a second look.

“The China Shock,” by David Autor, David Dorn, and Gordon Hanson, examined the effect of manufactured imports from one country (China) on local U.S. labor markets. That is interesting and useful as far as it goes.  But a microeconomic model designed for local “commuting zones” cannot properly be extended to the entire national economy without employing a macroeconomic model.

For one thing, the authors look only at one side of trade – imports – and only between two countries.  They ignore rising U.S. exports to China, including soaring U.S. service exports.  They are at best discussing one side of bilateral trade. And they fail to consider the spillover effects of China’s soaring imports from other countries (such as Australia, Hong Kong, and Canada), which were then able to use the extra income to buy more U.S. exports.

Autor, Dorn and Hanson offer a seemingly rough estimate that “had import competition not grown after 1999” then there would have been 10% more U.S. manufacturing jobs in 2011.  In that hypothetical “if-then” sense, they suggest that “direct import competition [could] amount to 10 percent of the realized job loss” from 1999 to 2011. 

Any comparison of jobs in 1999 and 2011 should have raised more alarm bells than it has. After all, 1999 was the peak of a long tech boom, while manufacturing jobs in 2011 were still at the worst level of the Great Recession.  

From 1999 to 2011, manufacturing jobs fell by 5.6 million, but between recessions (December 2001 to December 2007) manufacturing jobs fell by just 2 million.  The alleged job loss would look much smaller if it had been measured between 1999 and 2007 or extended beyond 2011 to 2015–16.  In the Hickory, NC, area the Wall Street Journal focused on, for example, unemployment was down to 4.8% in July (lower than in 2007), although manufacturing accounted for only a fourth of all jobs.

We can easily erase half of the alleged job loss from China imports by simply updating the figures to 2015.

There were 11.7 million manufacturing jobs in 2011, so a 10% increase would have raised that to 12.9 million – suggesting a hypothetical job loss of 1.2 million, rather than 2.4 million. 

From 2011 to 2015, U.S. imports from China rose 21%, but U.S. manufacturing jobs were up to 12.3 million. Half the job loss Autor, Dorn and Hanson attributed to imports from China vanished from 2011 to 2015, and that certainly was not because we imported less from China.
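The recomputation in the last few paragraphs can be sketched directly from the figures given in the text (11.7 million manufacturing jobs in 2011, 12.3 million in 2015, and the authors’ 10% counterfactual):

```python
# Re-running the arithmetic above: the "China Shock" counterfactual of 10%
# more manufacturing jobs in 2011, and how much of that hypothetical loss
# was recovered by 2015. All figures come from the surrounding text.
jobs_2011 = 11.7e6               # U.S. manufacturing jobs, 2011
jobs_2015 = 12.3e6               # U.S. manufacturing jobs, 2015

counterfactual_2011 = jobs_2011 * 1.10   # "had import competition not grown"
implied_loss = counterfactual_2011 - jobs_2011

print(f"Counterfactual 2011 jobs: {counterfactual_2011 / 1e6:.1f} million")  # 12.9
print(f"Implied job loss: {implied_loss / 1e6:.1f} million")                 # 1.2

recovered = jobs_2015 - jobs_2011
print(f"Recovered by 2015: {recovered / 1e6:.1f} million, "
      f"{recovered / implied_loss:.0%} of the implied loss")
```

The 0.6 million jobs added between 2011 and 2015 amount to roughly half of the 1.2 million implied loss, which is the “half the job loss vanished” point made above.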

Written with Christopher E. Whyte of George Mason University

What would it mean if a country couldn’t keep any secrets? 

The question may not be as outlandish as it seems. The hacks of the National Security Agency and the Democratic National Committee represent only the most recent signposts in our evolution toward a post-secrecy society. The ability of governments, companies, and individuals to keep information secret has been evaporating in lock step with the evolution of digital technologies. For individuals, of course, this development raises serious questions about government surveillance and people’s right to privacy. But more broadly, the inability for governments to keep secrets foreshadows a potential sea change in international politics.

To be sure, the U.S. government still maintains many secrets, but today it seems accurate to describe them as “fragile secrets.” The NSA hack is not the first breach of American computer networks, of course, but the nature of the hack reveals just how illusory is our ability to keep secrets. The Snowden affair made clear that the best defense isn’t proof against insider threats. The Shadow Brokers hack – against the NSA’s own top hacker group – has now shown that the best defense isn’t proof against outsider threats either. Even if the Shadow Brokers hack is a fabrication and the information was taken from the NSA in other ways – a traditional human intelligence operation, for instance, where a man with a USB drive managed to download some files – it seems clear that we’re in an era of informational vulnerability.

And what is true for the federal government is even more clearly true for private organizations like the Democratic National Committee. The theft and release of the DNC’s email traffic – likely carried out by Russian government hackers – illustrates that it’s not just official government information at risk. Past years have made it clear that civil society organizations – both venerable (political parties, interest groups, etc.) and questionable (the Church of Scientology, for instance, was the target of a range of disruptive attacks in 2008-‘09) – are as often the targets of digital intrusion as are government institutions.

At this point, it seems fair to think that there is no government or politically-relevant information that couldn’t, at some point, find its way into the hands of a hacker. From there, it is just a short hop into the public domain.

This is not to say, of course, that everything will wind up being made public. There is clearly too much information to wade through, most of it of little interest or value to anyone. And a great deal of information is so time sensitive that its value will evaporate long before anyone can steal it. But even there, as we know from the exposure of the NSA’s global surveillance programs, under the right circumstances hackers can monitor all manner of communications at the highest levels of government in real time. 

Nonetheless, as the frequency of broken secrets has risen, it has become clear that we have entered a new phase of “revelation warfare,” one in which revealed truths join disinformation and propaganda in the foreign policy toolkit. Disinformation, of course, remains an important weapon in the struggle for hearts and minds – witness Russia’s recent attempts to obfuscate the debate over NATO membership in Sweden, for example. But in many cases the truth may turn out to be even more powerful. Nations using disinformation campaigns always risk getting caught, thereby tarnishing their messages. Once a secret truth is revealed, however, it doesn’t matter whether the thief gets caught: the truth is out there.

The implications of this new era are profound. 

Leaks have traditionally been an insider’s threat of last resort, but the ability to steal information from a distance clearly provides exponentially greater opportunity for foreign nations to influence the political process here at home. Whether they come in the form of secrets about candidates’ personal lives, the shady dealings of political organizations, or deception by government officials, strategic revelations have the potential to destroy candidates, change the course of elections, and obstruct the making and implementation of public policy.

More fundamentally, too much transparency could seriously threaten the ability of governments to do things that require a certain amount of secrecy and discretion. Though most government operations involve no special need for classified information or secrets, the conduct of foreign policy certainly does. Intelligence gathering, diplomacy, and the conduct of war all promise to become much more problematic if secrets become too difficult to keep.

Over time, we may be in store for an extended “Watergate effect,” in which the steady stream of hacked secrets leads to one ugly revelation after another, further eroding public confidence in our political institutions. On the other hand, a “sunlight effect” is also possible. The explosive potential of having one’s secrets revealed might just convince bureaucrats and politicians that the wisest course of action is to have nothing to hide.

However things turn out, it is clear that we need to get used to living in an era of increasingly fragile secrets.

Donald Trump recently unveiled a new child care plan whereby the government would force employers to give time off to new mothers in exchange for some shuffling of the tax code.  Mothers do tend to benefit from such schemes, but they also end up paying for their time off in other, indirect ways, such as lower wages.  Forcing employers to pay female employees to take time off decreases the demand for the labor of child-bearing-age women and increases its supply, thus lowering their wages.  Economist Larry Summers, former Director of the National Economic Council during President Obama’s first term, wrote a fantastic paper explaining this effect.

Many firms have maternity leave policies that balance an implicit decrease in wages or compensation for working-age mothers with time off to care for a newborn.  The important point about these firm-specific policies is that they are flexible.  Some women want a lot of time off and aren’t as sensitive to the impact on their careers, while others want to return to work immediately.  A one-size-fits-all government policy would remove this flexibility.

Regardless of the merits or demerits of Trump’s plan, economists Patricia Cortes and Jose Tessada discovered an easier and cheaper way to help women transition from being workers to being mothers who work: allow more low-skilled immigration.  In a 2008 paper, they found:

Exploiting cross-city variation in immigrant concentration, we find that low-skilled immigration increases average hours of market work and the probability of working long hours of women at the top quartile of the wage distribution.  Consistently, we find that women in this group decrease the time they spend in household work and increase expenditures on housekeeping services.

The effect wasn’t huge but skilled women did spend less time on housework and more time working at their job.    

Younger women with more education and young children would be the biggest beneficiaries of an expansion of childcare services provided by low-skilled immigrants.  There are about 5.4 million working-age women with a college degree or higher who also have at least one child under the age of 8 (Table 1).  Almost 78 percent of them are employed, 2 percent are unemployed and looking for work, and 21 percent are not in the labor force.

Table 1

Age of Youngest Child by Mother’s Employment Status, College and Above Educated, Native-Born Mothers, Ages 18-64

Source: March CPS, 2014.

As the youngest child ages, the percentage of women not in the labor force also shrinks, probably because less labor at home is required to take care of them (Figure 1).  Cheaper childcare provided by low-skilled immigrants can increase the rate at which these skilled female American workers return to the job after having a child. 

Figure 1

Age of Youngest Child by Percent of Mothers Not in Labor Force, College and Above Educated, Native-Born Mothers, Ages 18-64

Source: March CPS, 2014.

Trump’s immigration policy doesn’t allow him to propose helping mothers in this cheaper way.  Allowing more lower-skilled immigrants to help with childcare conflicts with Trump’s position on cutting LEGAL immigration.  Liberalizing low-skilled immigration would give more options to working mothers, boost opportunity for immigrants, increase the take-home pay of working mothers (complementarities), allow more Americans with higher skills to enter the workforce, and slightly liberalize international labor markets.  It’s a win-win for everybody except the government-protected, highly regulated, heavily licensed, and very expensive day care industry.

It’s hard being a new mother.  My wife’s ability to deliver, feed, and care for our three-month-old son with only a short break from her job is impressive.  Instead of lowering my wife’s wages indirectly with an ill-conceived family leave policy, it would be better to expand her ability to hire immigrants to help out at home.  More choices, not mandated leave, will lead to better outcomes.  If only Trump’s immigration platform didn’t make this solution impossible.    

Last week’s nuclear test by North Korea generated a wealth of commentary and analysis about the future of security on the Korean peninsula. The United States and South Korea quickly responded to the test. On September 13, the United States flew two B-1 bombers over South Korea in a show of force, reminiscent of bomber flights conducted after North Korea’s third nuclear test in March 2013. South Korea’s response didn’t feature any displays of military force, but in many respects it was more dangerous due to its implications for crisis stability.

Two days after the nuclear test, South Korea’s Yonhap News Agency reported on the Korea Massive Punishment & Retaliation (KMPR) operational concept. According to the news report, “[KMPR] is intended to launch pre-emptive bombing attacks on North Korean leader Kim Jong-un and the country’s military leadership if signs of their impending use of nuclear weapons are detected or in the event of a war.” The strikes would likely be conducted using conventionally armed ballistic and cruise missiles. Jeffrey Lewis of the Middlebury Institute of International Studies summed up the concept: “[South Korea’s] goal is to kill [Kim] so that he can’t shove his fat little finger on the proverbial button.”

A preemptive decapitation strike against North Korean political leadership is a bad idea for a host of reasons.

First, it will be very difficult for South Korea to know with certainty that a nuclear attack is imminent. Based on the press release accompanying the latest nuclear test, North Korea says it will mount nuclear warheads on Hwasong missile units. All the missiles in the Hwasong family are liquid fueled, meaning they take time to fuel before they can launch, but recent test footage indicates that they can be fired from stretches of highway. This presents a very complicated detection problem for South Korea. Moreover, even if Seoul can detect missiles moving into position, those missiles can be armed with either conventional or nuclear warheads, further contributing to detection difficulties.

Second, decapitation strikes are very difficult to pull off. In the opening hours of the 2003 invasion of Iraq, the United States tried to kill Saddam Hussein with cruise missiles and bombs from aircraft to no avail. Furthermore, even if the KMPR plan was implemented successfully and Kim Jong-un was killed before he could order a strike, there is no telling what happens next. Kim could simply devolve launch authority to missile units already in the field, telling them to go through with an attack even if he is killed. Or, without its dictator, the North could descend into chaos as various groups and individuals vie for control. Such a scenario is rife with uncertainty, and if armed conflict with South Korea broke out in the wake of a preemptive decapitation it could be more difficult to bring the conflict to a close.

Finally, the United States faces great danger under the KMPR if Washington cannot rein in its alliance partner. The KMPR is based on preemption, acting before North Korea has a chance to use its nuclear weapons. This means that Seoul would go on the offensive, and would likely do so with little warning to preserve the element of surprise. Admittedly, we do not know all the details of the KMPR, but if the United States is not consulted before South Korea launches its preemptive strike, then Washington risks being drawn into a war not of its choosing. Such a risk exists with any alliance relationship, and while the language of the U.S.-South Korea defense treaty is explicitly defensive, if war broke out in the aftermath of a South Korean preemptive attack there would be strong political pressure for the United States to help its ally.

North Korea must be deterred from ever using its nuclear weapons, but South Korea’s preemptive decapitation plan is fraught with problems that contribute to instability. The idea of killing Kim Jong-un before he can use nuclear weapons may be politically attractive, but operationally the gambit is unlikely to succeed. The United States should make it clear to Seoul that going through with the KMPR would be disastrous and not in Seoul’s best interests. 

According to the Washington Post,

The Canadian government has quietly approved new drug regulations that will permit doctors to prescribe pharmaceutical-grade heroin to treat severe addicts who have not responded to more conventional approaches.

In an ideal world, heroin and other opioids would be fully legal, but medicalization at least helps users avoid street heroin that is adulterated or much stronger than advertised (e.g., because it’s laced with fentanyl, an even more potent opioid). Adulterated, stronger-than-advertised samples are the main cause of adverse health effects, not heroin per se.


In April, the Trudeau government announced plans to legalize the sale of marijuana by next year, and it has appointed a task force to determine how marijuana will be regulated, sold and taxed.

Maybe the U.S. can learn something about drug policy from its neighbor to the north.

Global economic freedom improved slightly to 6.85 on a scale of 0 to 10 according to the new Economic Freedom of the World report, co-published in the United States today by the Fraser Institute and Cato. The United States still ranks relatively low on economic freedom, and is below Chile (13), the United Kingdom (10) and Canada (5). The top four countries, in order, are Hong Kong, Singapore, New Zealand and Switzerland.

The long-term U.S. decline beginning in 2000 is the most pronounced among major advanced economies. It mirrors a pattern among OECD (Organization for Economic Cooperation and Development) nations of steady increases in economic freedom in the decades leading into the 2000s, a fall in freedom especially in response to the global economic crisis, and a subsequent slight increase from the low. (See the graph below.) Neither the United States nor the average OECD country has recovered the high levels of economic freedom that they reached last decade.

Given that economic freedom is strongly related to prosperity and progress in the whole range of human well-being, the relatively low levels of the advanced countries are worrisome. Economic freedom in the top countries affects not only those nations, but exerts great impact on the global economy and human well-being around the world. A chapter by Dean Stansel and Meg Tuszynski documents the decline in the U.S. level of economic freedom and how it tracks the decline in the health of the U.S. economy since 2000. From 1980 to 2000 U.S. GDP grew 3.4% per year on average, compared to 1.8% thereafter. Stansel and Tuszynski describe how specific policies have weakened property rights and every major category of economic freedom studied in the index.

The rise in global economic freedom in the past 35 years has brought huge improvements to humanity. Authors James Gwartney, Robert Lawson and Josh Hall document the convergence in levels of economic freedom between developing and developed economies. Poor countries are catching up to rich countries in terms of economic freedom. This helps explain the unprecedented gains in global poverty reduction during this period (see the graph below).

Economic freedom is unambiguously good for the poor. It reduces poverty and increases the income of the poorest. As Professor Gwartney and his co-authors point out: “the average income of the poorest 10% in the most economically free nations is twice the average per-capita income in the least free nations.”

The report also includes chapters on the long-term decline of freedom in Venezuela, the increase of economic freedom in Ireland, and the effect of gender disparity on economic freedom. Read the whole report here.

My Cato trade policy colleagues and I recently released a Working Paper analyzing the Trans-Pacific Partnership (TPP). We find that the agreement is “net liberalizing,” and that despite its various flaws, the agreement will improve people’s lives and should be ratified. Some aspects of the agreement were obviously good (like lower tariffs) and others were easy to condemn (like labor regulations); but for many of the TPP’s 30 chapters, our opinion is more ambivalent. 

The TPP’s chapter on “state-owned enterprises” (SOEs) is one of those. The TPP’s SOE rules are good rules, but they’re not nearly as ambitious as we wish they were. We gave the chapter a minimally positive grade of 6 out of 10. Here’s some of what we had to say in our report:

Concerns about the role of SOEs have grown in recent years because SOEs that had previously operated almost exclusively within their own territories are increasingly engaged in international trade of goods and services or acting as investors in foreign markets. The chapter represents the first ever attempt to discipline SOEs as a distinct category through trade rules.

The provisions include three broad obligations meant to reduce the discretion of governments to use state-ownership as a tool for trade protectionism. These obligations are that SOEs and designated monopolies must operate according to commercial considerations only, must not give or receive subsidies in a way that harms foreign trade, and must not discriminate against foreign suppliers.

While the privatization of public assets is an important part of economic liberalization, the State-Owned Enterprises chapter does not attempt to eliminate or prevent government ownership. Instead, it is intended to reduce the economic distortions caused by direct government ownership of prominent firms. The chapter’s three main obligations make an excellent contribution to international economic law. The rules are simple and well-tailored to address the problem of protectionism conducted through management of SOEs and the granting of special privileges to them….

On the down side, the chapter provides for numerous exceptions to the basic rules, which will have the effect of diminishing the impact of these otherwise market-oriented disciplines. Each party maintains detailed lists of exemptions, allowing the bulk of existing SOEs to continue operating without paying much heed to the SOE disciplines. Most members have carved out specific SOEs from the chapter’s disciplines, particularly in the energy and finance sectors. The extent of these exemptions significantly limits the chapter’s practical impact. Moreover, by requiring a controlling interest, the definition of an SOE leaves out a great number of enterprises that receive special treatment by the state due to inappropriate government involvement through ownership.

It’s good that the TPP has rules for SOEs and that the rules in the TPP are good ones, but the agreement isn’t going to do much, if anything, to reform existing SOEs in the 12 member countries. 

Even so, the TPP’s SOE rules are not a total wash. They have real value in the long run, because they will provide a blueprint for future negotiations involving China.

Any trade negotiations between the United States and China, whether it’s China’s entry into the TPP or the formation of a larger agreement including both economies, will surely be contentious. The U.S. business lobby has a lot of complaints about various Chinese policies that disadvantage foreign companies, and SOEs are near the top of the list. At the same time, the Chinese government continues to rely on state-ownership in key sectors, especially banking and finance, not simply as a tool for protectionism, but also as a way to manage its economy more broadly. They will be reluctant to give up that control in a trade agreement. 

Ultimately, the greatest benefit from SOE liberalization will be enjoyed by the Chinese people, who currently suffer from rampant cronyism, mismanagement, and misallocation of investment in SOE-dominated industries.

It’s impossible to know at this point precisely how future U.S.–China trade negotiations will go or how much influence the TPP’s SOE rules will have on them. But those rules do provide an encouraging step in the right direction. Despite its practical weaknesses, the SOE chapter is a positive component of the TPP and is one of many reasons why advocates of free markets should support the agreement.

Yesterday, Cato published my policy analysis, “Terrorism and Immigration: A Risk Analysis,” in which I, among other things, attempt to quantify the terrorist threat from immigrants by visa category.

One of the best questions I received about it came from Daniel Griswold, Senior Research Fellow and Co-Director of the Program on the American Economy and Globalization at the Mercatus Center. Full disclosure: Dan used to run Cato’s immigration and trade department and has been a mentor to me. He asked how many of the ten illegal immigrant terrorists I identified had crossed the Mexican border.

I didn’t have a good answer for Dan yesterday but now I do. 

Of the ten terrorists who entered the country illegally, three did so across the border with Mexico. Shain Duka, Britan Duka, and Eljvir Duka are ethnic Albanians from Macedonia who illegally crossed the border with Mexico as children with their parents in 1984. They were three conspirators in the incompetently planned Fort Dix plot that was foiled by the FBI in 2007, long after they became adults. They became terrorists at some point after immigrating here illegally. Nobody was killed in their failed attack.

Gazi Ibrahim Abu Mezer, Ahmed Ressam, and Ahmed Ajaj entered illegally or tried to do so along the Canadian border. Ajaj participated in the 1993 World Trade Center bombing, so I counted him as responsible for one murder in a terrorist attack. Abdel Hakim Tizegha and Abdelghani Meskini both entered illegally as stowaways on a ship from Algeria. Shahawar Matin Siraj and Patrick Abraham entered as illegal immigrants but it’s unclear where or how they did so.

Based on this history, it’s fair to say that the risk of terrorists crossing the Southwest border illegally is minuscule.

Beginning in 2009, developers in Seattle became leaders in micro-housing. As the name suggests, micro-housing consists of tiny studio apartments or small rooms in dorm-like living quarters. These diminutive homes come in at around 150–220 sq. ft. each and usually aren’t accompanied by a lot of frills. Precisely because of their size and modesty, this option provides a cost-effective alternative to the conventional, expensive, downtown Seattle apartment model.

Unfortunately, in the years following its creation, micro-housing development has all but disappeared. It isn’t that Seattle prohibited micro-housing outright. Instead, micro-housing’s gradual demise was death by a thousand cuts, with layer upon layer of incremental zoning regulation finally doing it in for good. Design review requirements, floor space requirements, amenity requirements, and location prohibitions are just a few of the Seattle Planning Commission’s weapons of choice.

As a result of the exacting new regulations placed on tiny homes, Seattle lost an estimated 800 units of low-cost housing per year. While this free-market (and taxpayer-free) solution faltered, Seattle poured millions of taxpayer dollars into initiatives that subsidize housing supply or housing demand.

Sadly, Seattle’s story is anything but unusual. Over the past almost one hundred years, the unintended consequences of well-meaning zoning regulations have played out in counterproductive ways time and time again. Curiously, in government circles zoning’s myriad failures are met with calls for more regulations and more restrictions—no doubt with more unintended consequences—to patch over the failures of past regulations gone wrong.

In pursuit of the next great fix, cities try desperately to mend the damage that they’ve already done. Euphemistically-titled initiatives like “inclusionary zoning” (because who doesn’t want to be included?) force housing developers to produce low-cost apartments in luxury apartment buildings, thereby increasing the price of rent for everyone else. Meanwhile, “housing stabilization policies” (because who doesn’t want housing stabilized?) prohibit landlords from evicting tenants that don’t pay their rent, thereby increasing the difficulty low-income individuals face in getting approved for an apartment in the first place.

The thought seems to be that even though zoning regulations of the past have systematically jacked up housing prices, intentionally and unintentionally produced racial and class segregation, and simultaneously reduced economic opportunities and limited private property rights, what else could go wrong?

Perhaps government planners could also determine how to restrict children’s access to good schools or safe neighborhoods. Actually, zoning regulations already do that, too.

Given the recent failures of zoning policies, it seems prudent for government planners to begin exercising a bit of humility, rather than simply proposing the same old shtick with a contemporary twist.

After all, they say that the definition of insanity is doing the same thing over and over and expecting different results.

Terrorism, I have argued previously, has hijacked much of the American foreign policy debate. Regardless of whether we are discussing Iraq, Iran, Libya, Russia, or nuclear weapons, it seems we are really talking about terrorism. But although it feels like we talk about terrorism nonstop these days, we actually talk about it a lot less than we did right after 9/11.

As Figure One shows, the news media’s attention to terrorism declined steadily through 2012. The Syrian civil war and then the emergence of the Islamic State reversed the trend. But even so there were almost 40% fewer news stories mentioning the words “terror” or “terrorism” in 2015 compared to the peak in 2002. 


Of course, the one time every year when we can guarantee seeing plenty of news about terrorism is around the anniversary of the 9/11 attacks. Figure Two compares the daily average coverage of terrorism each year to the number of stories published on September 11. We can think of the difference between the coverage on 9/11 and the daily average as the “anniversary attention effect.” The biggest anniversary effect came in 2011 on the 10th anniversary of the attacks, when the major U.S. newspapers printed almost six times more articles mentioning terrorism than the daily average. The smallest anniversary effect came in 2015 when the effect only boosted coverage of terrorism by about a third. Though the trend is a bit noisy, over time it is clear that the anniversary effect is shrinking. From 2002 through 2006, anniversary coverage was an average of 2.37 times higher than the daily average, but over the last five years from 2012–2016 anniversary coverage has averaged just 1.88 times higher.
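
The anniversary attention effect described above is just a ratio of two story counts; a minimal sketch, where the 2011 counts are illustrative placeholders rather than the actual article tallies from the figure:

```python
# The "anniversary attention effect" is the ratio of 9/11-day coverage
# to that year's daily average. The story counts below are illustrative
# placeholders, not the actual article tallies behind Figure Two.
def anniversary_effect(stories_on_anniversary, daily_average):
    """Ratio of anniversary-day story count to the daily average."""
    return stories_on_anniversary / daily_average

effect_2011 = anniversary_effect(600, 100)  # ~6x, like the 10th-anniversary peak

# The multi-year averages quoted above
avg_2002_2006 = 2.37
avg_2012_2016 = 1.88
decline = avg_2002_2006 - avg_2012_2016  # the effect shrank by about half a multiple
```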

The shrinking anniversary attention effect suggests that the resonance of 9/11 may be waning as the attacks recede into history. Of course, we should not be too hasty to conclude that public fears of terrorism are also fading. As John Mueller has written here (and in greater detail here), Americans have harbored a healthy level of fear of future terrorist attacks ever since 9/11. But given how hyperbolic and utterly divorced from reality much of the terrorism rhetoric has been this election cycle, we can only hope that 9/11 is beginning to lose some of its symbolic power. Though it is important to honor friends and family we have lost to terrorism, we cannot let emotion dictate foreign policy. 

James Madison presciently warned “it will be of little avail to the people that the laws are made by men of their own choice if the laws be so voluminous that they cannot be read, or so incoherent that they cannot be understood.” Sadly, however, Madison’s admonishment has fallen on deaf ears when it comes to modern statutes and regulations—which in some cases are so numerous and complex that they cannot be deciphered by trained attorneys, much less the general public.

What’s worse, federal prosecutors have seized on these vague statutes, and they now have the ability to prosecute almost anyone for anything. One protection against incoherent laws and regulations, however, is that in most criminal cases the prosecution must show that a defendant acted with a certain degree of criminal intent (mens rea) to establish a violation. But for this protection to be effective, the courts must properly instruct the jury on the level of intent the statute requires.

In United States v. Clay, the district court—as well as a panel of the Eleventh Circuit Court of Appeals—failed in this respect. In 2002, the Florida legislature enacted the “80/20 Statute,” which requires certain medical providers receiving state Medicaid funds to spend 80 percent of those funds on “the provision of behavioral health care services” or refund the difference to Florida’s Agency for Health Care Administration (AHCA). The statute, however, was ambiguous as to how the expenditures were to be calculated and set out no clear guidelines. Despite this ambiguity, in 2011 federal prosecutors indicted Mr. Clay and others for healthcare fraud and for making false statements in connection with improperly calculating and reporting their expenditures to the AHCA. The defendants were prosecuted under a federal fraud statute that requires the government to prove the defendants “knew” the reports were false. The judge, however, instructed the jury that it could convict if the defendants knew the submissions were “untrue” or if they acted “with deliberate indifference as to the truth,” which is certainly not the same as the “knowledge” the statute requires. The district court allowed this jury instruction despite a 2011 Supreme Court case holding that “deliberate indifference” cannot substitute for a statutory knowledge requirement, and a three-judge panel of the Eleventh Circuit upheld the district court’s instruction.

The Cato Institute has joined the National Association of Criminal Defense Lawyers, the Washington Legal Foundation, the Reason Foundation, and twelve criminal and business law professors in asking the full Eleventh Circuit to rehear the case and vacate the panel’s opinion. The district court’s jury instruction was a clear departure from Supreme Court precedent and, if upheld, would weaken one of the fundamental checks on vague statutes and overzealous prosecutors: the requirement that the government prove a defendant knew he was committing a crime.

Arlen and Cindy Foster are farmers in Miner County, South Dakota. Arlen’s grandfather bought the land over a century ago, and the family has been working it ever since. In 1936, Arlen’s father planted a tree belt on the south end of the farm as a conservation measure. As the weather warms, the snow around the tree belt melts and the water flows into a circular depression called a “prairie pothole” (circled in blue in the lower right-hand part of the picture). Unfortunately for the Fosters, the federal government has declared that the shallow depression is a protected wetland, denying them the productive use of that portion of their land.

Department of Agriculture regulations define what qualifies as a wetland but remain vague on some of the details. The regulations say that if a parcel’s wetland status can’t be determined due to alteration of the vegetation (such as through filling or tilling the land), a similar parcel from the “local area” will be chosen to act as a proxy. “Local area” is never defined, but a 2010 internal field circular refers agency officials to an Army Corps of Engineers manual that uses the parallel language “adjacent vegetation.” Here, the agency interpreted “local area” to refer to an area of almost 11,000 square miles and then selected a proxy site some 33 miles from the Fosters’ farm. That proxy site supports wetland vegetation, so the Fosters’ land was also declared a protected wetland.


The Fosters appealed that determination all the way to the Eighth Circuit Court of Appeals, which blindly deferred to the agency’s strained and unnatural interpretation of “local area,” upholding the determination as a reasonable reading of the regulations’ requirements. Now the Fosters are seeking Supreme Court review, and the Cato Institute has filed an amicus brief in support of their petition for certiorari. The Eighth Circuit relied on the 1997 Supreme Court case Auer v. Robbins, which held that courts should give broad deference to an agency’s interpretation of its own regulations. While Auer’s holding has repeatedly been called into question by both the Supreme Court and various lower courts, the Eighth Circuit’s decision goes beyond Auer’s already shaky foundations. The decision actually afforded the agency “second-level” Auer deference, deferring to an interpretation of a vaguely written agency circular that interprets a vague regulation that in turn interprets a vague statute, all to get to a definition of “local area” that is nothing close to a natural and reasonable interpretation of that term.

We argue that Auer should not be extended in this way for several reasons. Since major policy decisions are being made in internal documents with no notice to the public, ordinary people are denied fair warning of what the law requires of them. And since these interpretive decisions are not binding on the agency itself, it is free to change them at any time, again without any notice to the public. Second-level Auer deference also undermines the rule of lenity—a traditional rule of interpretation stating that ambiguity in criminal statutes must be resolved in favor of the defendant—even more than first-level Auer deference already does. It effectively allows agencies to create new crimes (again without notice to the public) by doing as little as reinterpreting a footnote in a memo. Cato urges the Supreme Court to take the case so that it may rein in the expansion of Auer deference and make it clear to administrative agencies that they cannot avoid judicial review by refusing to promulgate clear, unambiguous regulations.

Cato published a paper of mine today entitled “Terrorism and Immigration: A Risk Analysis.”  I began this paper shortly after the San Bernardino terrorist attack in December last year when it became clear that few had attempted a terrorism risk analysis of immigration in general, let alone focusing on individual visa categories.  There were few studies on the immigration status of terrorists and the vast majority of them were qualitative rather than quantitative.  Inspired by the brilliant work of John Mueller and Mark Stewart, I decided to make my own.  

From 1975 through the end of 2015, 154 foreign-born terrorists murdered 3024 people on U.S. soil.  During that same time period, over 1.14 billion foreigners entered the United States legally or illegally.  About 7.4 million foreigners entered the United States for each one who ended up being a terrorist.  Startlingly, 98.6 percent of those 3024 victims were murdered on 9/11 (I did not count the terrorists as victims, obviously).  However, not every terrorist is successful.  Only 40 of those 154 foreign-born terrorists actually ended up killing anyone on U.S. soil.    

Immigrants frequently enter the United States on one visa and adjust their status to another. Tourists and other non-immigrants often enter legally, then fall out of status and become illegal immigrants. I focused on the visa each foreigner used to enter the United States because the application for that visa is when security screenings are initially performed.

Table 1, copied from my paper, shows the chance of being killed in a terrorist attack on U.S. soil committed by a foreigner, by visa category. Only three people have been killed on U.S. soil in terrorist attacks committed by refugees – a one in 3.64 billion chance per year of dying in such an attack. If future refugees are 100 times as likely to kill Americans as past ones were, all else being equal, the chance of being killed in an attack committed by a refugee would still be one in 36 million a year. That’s a level of risk we can live with.
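
To check the scaling in that last step, note that making the risk 100 times larger divides the odds denominator by 100; a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the scaling above: if the annual risk of
# being killed by a refugee terrorist is 1 in 3.64 billion, making future
# refugees 100 times as deadly (all else equal) divides the odds
# denominator by 100.
baseline_one_in = 3.64e9            # 1-in-3.64-billion annual chance
scaled_one_in = baseline_one_in / 100

# 3.64e9 / 100 = 3.64e7, i.e. roughly "one in 36 million" a year
```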

Table 1


Source: “Terrorism and Immigration: A Risk Analysis.”

I chose to begin my analysis in 1975 for three main reasons.  First, I wanted to make sure to include many refugees because of the current public fear over Syrians.  The waves of Cubans and Vietnamese refugees during the 1970s provided a large pool of people in that category.  Second, I had to go back to the late 1970s to find refugees who actually killed people on U.S. soil in terrorist attacks.  Although some refugees since then have attempted terrorist attacks, none has successfully killed anyone.  Third, I wanted to see if there was a different result before and after the modern refugee screening system was created in 1980.  The timing of that immigration reform coincides with the end of successful refugee terrorist attacks but the small sample of three victims prior to 1980 and none afterwards speaks volumes.     

In any project of this size, many findings and facts end up on the cutting room floor.  Here are some:

  • The chance of being murdered by a non-terrorist is one in 14,275 a year compared to one in 3,609,709 a year for all foreign-born terrorist attacks.
  • The chance of being murdered on U.S. soil by any terrorist, native or foreigner, was one in 3.2 million a year.
  • The chance of being murdered in a terrorist attack on U.S. soil committed by a foreigner after 9/11 was one in 177.1 million a year.
  • For every successful foreign-born terrorist who actually killed somebody on U.S. soil in an attack, over 28 million foreigners entered the United States.
  • 9/11 is a tremendous outlier in terms of deadliness – about an order of magnitude deadlier than the second-deadliest terror attack in world history.  Excluding 9/11 from this analysis helps us understand what most terrorist attacks in the past and the future are going to be like.  Doing that reveals that 91 percent of the deaths caused by all terrorists on U.S. soil, native or foreign-born, were committed by natives or those with unknown nationalities (usually because their identities were never uncovered) while 9 percent were committed by foreigners.  
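
The per-year odds above all come from the same simple formula: person-years of exposure divided by deaths. A minimal sketch, where the ~266 million average U.S. population over 1975–2015 is my own assumption chosen to roughly reproduce the published odds, not a figure stated in the post:

```python
# Annualized "one in N" risk: average population times years, divided by
# deaths. The ~266 million average U.S. population over 1975-2015 is an
# assumption for illustration; it is not stated in the post.
def one_in_n_per_year(deaths, avg_population, years):
    """Return N such that the annual risk of death is 'one in N'."""
    return avg_population * years / deaths

AVG_POP, YEARS = 266e6, 41
foreign_born = one_in_n_per_year(3024, AVG_POP, YEARS)  # roughly 1 in 3.6 million
refugee = one_in_n_per_year(3, AVG_POP, YEARS)          # roughly 1 in 3.6 billion
```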

And it came to pass in those days, that there went out a decree that all the world should be taxed.

And lo, the ubiquity of taxation made it possible for the Treasury Department to identify all the same-sex marriages in the land by zip code and present the data in tables and a map.

And in all the land only a few paranoids worried about the implications for privacy and freedom, of gay people and others, of a government that knows everything about you.

The massive decline in the U.S. poverty rate reported today by the Census (it fell from 14.8% of families below the poverty line to just 13.5%, the largest drop since the 1960s) may have come as a surprise to many economists and political commentators, but it should not have. The one thing we have learned from the last three business cycles is that the poor benefit greatly from sustained economic growth.

When a recession occurs the unemployment rate can rise quickly, but it usually takes a long time to return to its pre-recession level, no matter how aggressive our infrastructure spending may be. What eventually helps low-income workers is an economy where labor – skilled and unskilled – becomes difficult to find. When that happens, companies bid against one another for workers or else get creative, perhaps by investing in labor-saving equipment or taking a chance on workers who haven’t been in the labor market for a while and don’t have the most sterling resumes.

When the unemployment rate approached 4.5% in the late 1990s, poverty rates also declined significantly, as wages all across the income distribution grew steadily. Productivity grew smartly as well during this time. While the facile explanation was that businesses finally managed to take advantage of IT innovations, the companies using IT to boost productivity were also the ones that hire lots of low-income workers (e.g., big-box stores like Wal-Mart and Target), and they had every incentive to figure out how to do more with fewer workers, who were becoming more expensive. In Chicago the grocery chain Dominick’s sought out people living in government housing projects and spent significant resources training them, with surprising success. In Peoria another grocery chain, Kroger, worked with a local social service organization to train and employ young adults with Down syndrome in its stores, also with a great deal of success. With luck, more firms will soon need to get creative on employment.

Today’s numbers reflect the fact that strong and sustained economic growth, not redistribution, is the best way to help low-income Americans. There’s a lot the next president and Congress can do on that front. In the last year alone the Department of Labor has imposed regulations that will cost businesses tens of billions of dollars a year to implement, and the FCC is set to throttle investment in high-speed internet for the now-inviolate right of a Netflix customer not to have to wait three minutes for his movie to load.

The lesson macroeconomists painfully learned in the 1970s was that they’re no good at forecasting the ebbs and flows of the business cycle, and that we’re better off concentrating our efforts on the things that can boost productivity and long-run growth. Today, however, that lesson has been all but ignored as we debate whether society could survive a quarter-point rise in the discount rate and how much of a free lunch new infrastructure spending would be.

A better lesson for politicians would be that 3% growth is 50% more than 2% growth and that it’s worth contemplating how to reach that copasetic rate once again. It should be a lesson for the rest of us as well.

The Trans-Pacific Partnership trade agreement between the United States and 11 other countries was reached late last year, signed by the parties earlier this year, and now awaits ratification by the various governments. In terms of the value of trade and share of global output accounted for by the 12 member countries, the TPP is the largest U.S. trade agreement to date.

In the United States, the TPP has been controversial from the outset, drawing criticism from the usual suspects – labor unions, environmental groups, and sundry groups of anti-globalization crusaders – but also from free traders concerned that the deal may be laden with corporate welfare and other illiberal provisions that might lead to the circumvention or subversion of domestic sovereignty and democratic accountability.

As free traders who recognize that these kinds of agreements tend to deliver managed trade liberalization (which usually includes some baked-in protectionism), rather than free trade, my colleagues and I at the Herbert A. Stiefel Center for Trade Policy Studies set out to perform a comprehensive assessment of the TPP’s 30 chapters with the goal of answering this question: Should Free Traders Support the Trans-Pacific Partnership?

Yesterday, Cato released our findings in this paper, which presents a chapter-by-chapter analysis of the TPP, including summaries, assessments, scores on a scale of 0 (protectionist) to 10 (free trade), and scoring rationales. Of the 22 chapters analyzed, we found 15 to be liberalizing (scores above 5), 5 to be protectionist (scores below 5), and 2 to be neutral (scores of 5). Considered as a whole, the terms of the TPP are net liberalizing – it would, on balance, increase our economic freedom.

Accordingly, my trade colleagues and I hope it will be ratified and implemented as soon as possible.

Drug policy watchers learned earlier this month that the latest substance to earn Schedule I status is the obscure plant called kratom. So what’s Schedule I? By the letter of the law, Schedule I of the Controlled Substances Act contains “drugs, substances, or chemicals” that meet the following criteria:

The drug or other substance has a high potential for abuse.
The drug or other substance has no currently accepted medical use in treatment in the United States.
There is a lack of accepted safety for use of the drug or other substance under medical supervision.

In this post, I’m not going to consider the penalties that apply to the use, possession, or sale of Schedule I substances. I’m just going to look at the criteria for inclusion. While they may appear plausible, these criteria are preposterous and completely indefensible as applied.

The most important unwritten fact about Schedule I is that all three of its criteria are terms of political art. Neither science nor the plain meanings of the words have much to do with what Schedule I really includes.

We can see this first in how Schedule I fails to include many substances that clearly belong there. These substances easily meet all three criteria. Yet they are in no danger whatsoever of being scheduled. It literally will never happen.

Solvent inhalants, such as toluene, have a high potential for abuse, have no accepted medical uses, and cannot be used safely even with close medical supervision. The same is true of obsolete anesthetics like diethyl ether and chloroform. Toluene, ether, and chloroform are all dangerous when used as drugs. Overdosing on each is relatively easy, they bring serious health risks at any level of use, and they have no valid medical uses today.

None, of course, will ever be scheduled, because each is also an essential industrial chemical. That they happen to be abusable as drugs is a fact that a crime-based drug policy can’t easily accommodate. And so that fact is simply ignored.

The substances included on Schedule I are an odd lot as well. Some clearly meet the criteria, but many do not.

Why, for example, is fenethylline in Schedule I, while amphetamine is in the less restrictive Schedule II? On ingestion, fenethylline breaks down into two other compounds: theophylline – a caffeine-like molecule found in chocolate – and amphetamine.

People commonly use amphetamine under medical supervision in the United States; the popular ADHD drug Adderall is simply a mixture of various forms of amphetamine. Theophylline has also seen use by physicians for care of various respiratory issues. And people still use fenethylline under medical supervision in other countries. In the published literature, fenethylline is described as having a “lower abuse potential and little actual abuse compared to amphetamine.” (Emphasis added.) To say that fenethylline has “no accepted medical use in the United States” is, quite literally, to suggest that medical science changes when you cross the border.

Fenethylline isn’t unique. Schedule I contains many drugs quite like it, molecules that bear a close but not exact resemblance to familiar and widely used medical drugs. Many of these are prodrugs – substances that break down in the body to become familiar, medically useful molecules like morphine or amphetamine. Others, like dimethylamphetamine, are held by the medical literature to be safer than their less strictly regulated chemical cousins.

This is not to say that fenethylline, dimethylamphetamine, or amphetamine itself is risk-free. No drug is. But one could hardly find a less rational set of classifications than this one, in which drugs are scheduled more severely if and when they are less risky.

Or consider psilocybin. Psilocybin flunks the first criterion for Schedule I because it is in fact fairly difficult to abuse. Psilocybin binges don’t generally happen because even a single dose creates a swift and strong tolerance response: a second dose, or an added dose of any other traditional psychedelic, usually does little or nothing, and doses after that will likely be inert until several days have elapsed.

A user may have a regrettable or upsetting psilocybin experience, and many do. But users can’t have a binge, and deaths and serious illnesses are exceedingly rare. Psilocybin isn’t an entirely risk-free drug – again, no drug is risk-free – but it’s clearly not in the same league as cocaine (Schedule II) or even ketamine (Schedule III). Going by the letter of the law, psilocybin’s place on Schedule I is inexplicable.

Still more inexplicable is cannabis, which has a relatively low potential for abuse, many important medical uses, and such a favorable safety profile that a life-threatening overdose is impossible. Too much cannabis can be deeply psychologically unpleasant, but it can’t be fatal.

As you all know, cannabis is Schedule I.

This has led Americans, long the world’s most inventive people, to invent and ingest dozens of substitutes. Each of these so-called cannabimimetics became a recreational drug almost solely because a safe, well-studied, and well-tolerated recreational drug – cannabis – happened to be illegal. Now there are dozens of cannabimimetics, all with somewhat different dosages, effects, and safety profiles. Much remains unknown about them, unlike the relatively well-studied compounds found in cannabis.

A similar process has taken place with the traditional psychedelics, generating a bewildering array of new psychoactive substances, each of which has a dosage, effect constellation, and risk profile that is relatively unknown when compared to, say, psilocybin or mescaline. It might even be said that Schedule I itself is the single largest cause of Schedule I drugs. In all, the mimetics are an area of comparative ignorance. Many of these new drugs may even deserve a bad reputation, if not a state-enforced ban. But, at least for a time, all of them were technically legal (at least, if we ignore the Federal Analogue Act, which is an entirely different mess of its own). If cannabis or psilocybin were legal instead, few would likely bother with the mimetics outside a laboratory setting.

Yet many of these mimetics could also be medically interesting, much like cannabis itself. We just don’t know yet, and we are a lot less likely ever to find out because it’s difficult to do research with Schedule I drugs.

To sum up, the list of drugs on Schedule I both over-includes and under-includes. I suspect that the list does not exist to fulfill the criteria. Rather, the criteria exist to make Congressional and DEA determinations look scientific, even when they clearly are not. They would appear to have no other function.

With that in mind, let’s take a closer look at kratom.

As Jacob Sullum notes, the DEA has simply defined all use of kratom as abuse. Of course, then, the potential for abuse is (nominally) high. But this forecloses a question that science could and should have answered: What exactly is kratom’s abuse potential? Thanks to kratom’s new Schedule I status, U.S. researchers are in no position to challenge the DEA anytime soon.

This is typical of how drug scheduling works; to some extent the law creates its own medical facts by foreclosing research avenues that might otherwise be explored. But it can only do this by stunting our knowledge and perhaps delaying the development of useful new medicines.

What’s true of abuse potential is also true of “accepted medical use.” It too is an obfuscation; the DEA, not doctors, determines what counts as accepted. But as Jeffrey Miron noted, kratom users report that it can relieve the symptoms of opiate addiction and help addicts kick the habit. Are they right? More clinical study might help, and we can be pretty sure that we’re not getting it now.

Finally, “lack of accepted safety for use” is – you guessed it – yet another determination made by a certain department in the executive branch. Not that it would change their minds, but Jacob Sullum correctly notes that kratom is relatively safe when compared to many other drugs, particularly recreational opiates like heroin. While overdose on kratom may be possible, no fatal overdose has ever been recorded. Compared to heroin – or many other drugs – “no recorded fatal overdoses” is a pretty good track record.

In short: Schedule I is not a set of scientific criteria, rationally applied to the world of drugs. Rather, it’s a science-y looking smokescreen, one that allows the DEA to do virtually whatever it feels like – which is often completely indefensible.

Image by Uomo vitruviano (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons.

“There is now a consensus that the United States should substantially raise its level of infrastructure investment,” writes former treasury secretary Lawrence Summers in the Washington Post. Correction: There is now a consensus among two presidential candidates that the United States should increase infrastructure spending. That’s far from a broad consensus.

“America’s infrastructure crisis is really a maintenance crisis,” says the left-leaning CityLab. The “infrastructure crisis is about socialism,” says the conservative Heritage Foundation. My colleague Chris Edwards says, “There is no widespread crisis of crumbling infrastructure.” “The infrastructure crisis … isn’t,” the Reason Foundation agrees.

As left-leaning Charles Marohn points out, the idea that there is an infrastructure crisis is promoted by an “infrastructure cult” led by the American Society of Civil Engineers. As John Oliver noted, relying on them to decide whether there is enough infrastructure spending is like asking a golden retriever if enough tennis balls are being thrown.

In general, most infrastructure funded out of user fees is in good shape. Highways and bridges, for example, are largely funded out of user fees, and the number of bridges that are structurally deficient has declined by more than 52 percent since 1992. The average roughness of highway pavements has also declined for every class of road.

Some infrastructure, such as rail transit, is crumbling. The infrastructure in the worst condition is infrastructure that is heavily subsidized, because politicians would rather build new projects than maintain old ones. That suggests the U.S. government should spend less, not more, on new infrastructure. It also suggests that we should stop building rail transit lines we can’t afford to maintain and maybe start thinking about scrapping some of the rail systems we have.

Aside from the question of whether our infrastructure is crumbling or not, the more important assumption underlying Summers’ article is that infrastructure spending always produces huge economic benefits. Based on a claim that infrastructure spending will produce a 20 percent rate of return, Summers says that financing it through debt is “entirely reasonable.” Yet such a rate of return is a pure fantasy, especially if it is government that decides where to spend the money. Few private investments produce such a high rate of return, and private investors are much more careful about where their money goes.

For every government project that succeeds, a dozen fail. Funded by the state of New York, the Erie Canal was a great success, but attempts to imitate that success by Ohio, Indiana, and Pennsylvania put those states into virtual bankruptcy.

The 1850 land grants to the Illinois Central Railroad paid off, at least for Illinois, but similar subsidies to the First Transcontinental Railroad turned into the biggest political corruption scandal of the nineteenth century. The Union Pacific was forced to reorganize within four years of its completion, and it went bankrupt again two decades later. The similarly subsidized Northern Pacific was forced to reorganize just a year after its completion in 1883 and, like the Union Pacific, would go bankrupt again in 1893.

The Interstate Highway System was a great success, but a lot of transportation projects built since then have been pure money pits. It’s hard to argue that any of the infrastructure spending that came out of the American Recovery and Reinvestment Act did anything to actually stimulate the economy.

Think the Atlanta streetcar, whose ridership dropped 48 percent as soon as it started charging a fare, generates economic development? Only in a fantasy world. Japan has used infrastructure spending to try to stimulate its way out of its economic doldrums since 1990. It hasn’t worked yet.

In the Baptists and bootleggers political model, Keynesians such as Summers are the Baptists who promise redemption from increased government spending while the civil engineers, and the companies that employ them, are the bootleggers who expect to profit from that spending. Neither should be trusted, especially considering how poorly stimulus spending has worked to date.

Making infrastructure spending a priority would simply lead to more grandiose projects, few of which will produce any economic or social returns. In all probability, these projects will not be accompanied by funding for maintenance of either existing or new infrastructure, with the result that more infrastructure spending will simply lead to more crumbling infrastructure.

Almost as an aside, Summers adds that “if there is a desire to generate revenue to finance infrastructure investments, the best approaches would involve user fees.” That’s stating the obvious, but the unobvious part is this: if we agree user fees are a good idea, why should the federal government get involved at all? The answer, of course, is that politicians would rather get credit for giving people infrastructure they don’t have to pay for than rely on user fees, and the controversies they create, to fund it.

Instead of an infrastructure crisis, what we really have is a crisis over who gets to decide where to spend money on infrastructure. If we leave infrastructure to the private market, we will get the infrastructure we need when we need it and it will tend to be well maintained as long as we need it. If we let government decide, we will get too much of some kinds of infrastructure we don’t need, not enough of other kinds of infrastructure we do need, and inadequate maintenance of both.