Cato Op-Eds

Individual Liberty, Free Markets, and Peace

In 2013, Defense Distributed uploaded computer-aided design (CAD) files and made them freely available to the public. With the proper equipment and knowledge, someone could use the CAD files to create a 3D-printed gun. The government quickly ordered the files removed (under threat of severe penalties) because it determined that the files ran afoul of the International Traffic in Arms Regulations (ITAR), which prevent people from communicating to foreign persons “technical data” about constructing certain arms. In other words, ITAR is one of the laws that make it illegal to tell foreign persons how to make things like an Apache helicopter. Not all arms are listed, and ITAR doesn’t restrict technical data that is merely “general scientific, mathematical, or engineering principles commonly taught in school.”

There are many manuals and documents out there that tell people how to make dangerous things. The Anarchist Cookbook is perhaps the most famous. Many people are surprised that the government lets The Anarchist Cookbook exist, but it is not the government that lets it exist—they’d probably rather it didn’t—it’s the First Amendment. The First Amendment protects communication about making dangerous things, from bombs to napalm, and it certainly protects communication on how to fix guns or even construct them from scratch. If the government is going to restrict such information it must do so narrowly and with good reason, while understanding that there is a difference between instructions for a plutonium trigger for a hydrogen bomb and CAD files for a plastic, one-shot pistol. And if the government goes too far, people should be allowed to challenge it.

ITAR’s regulation of communicating technical data is clearly a content-based prior restraint of speech—it restrains speech before it is published based on the content of the communication—which is one of the most egregious ways to violate the First Amendment. While it is certainly proper for the government to prohibit telling the North Koreans how to make a nuclear bomb, Defense Distributed believes that the government went too far in extending ITAR to cover CAD files for small, 3D-printed guns. Moreover, by uploading the files to the internet—which, yes, foreign persons can access—Defense Distributed believed it wasn’t communicating them to foreigners in the manner contemplated by the statute. In such situations, when a plaintiff believes a law has reached too far and is impinging on their freedom of speech, it is usually proper to seek a preliminary injunction that keeps the government from shutting the speech down until a court has determined the merits of the ultimate issue. But the district court improperly denied the request based on an incorrect approach to preliminary injunction analysis and a wholly inappropriate assessment of the relevant interests at stake. The Fifth Circuit upheld the lower court’s decision and denied a request for rehearing en banc, which is when all the judges on a circuit hear a case rather than the usual three-judge panel. Defense Distributed has now filed a petition for a writ of certiorari asking the Supreme Court to take up their case and protect their First Amendment rights. Cato has filed a supporting brief urging the Court to accept and summarily reverse the decision below. We argue that such disposition is required when a facially content-based prior restraint escapes review just because the government says “national security.”

In essence, the lower courts refused to look at one of the most important considerations in the preliminary injunction analysis—likelihood of success on the merits—because the government intoned the words “national security,” to which the judges said “okay, that clearly outweighs any interests Defense Distributed has.” Defense Distributed certainly has an interest—the rights protected by the First Amendment of the U.S. Constitution—and it was frankly ridiculous that the lower courts so casually let the government’s interests trump the First Amendment.

In a First Amendment case, in order to determine whether an injunction is in the public interest, the merits of a plaintiff’s claim must be evaluated before proceeding to weigh the equities. This is not like an injunction to prevent your neighbor from spilling pollutants on your lawn. In such a case the court would weigh the various interests involved in deciding whether to issue an injunction, but no one’s constitutional rights would be part of the consideration. Constitutional rights get special weight if it is likely they’re being violated. That’s why they’re in the Constitution. Nevertheless, the lower courts said that Defense Distributed failed to show how granting an injunction to enjoin an unlawful restriction of speech was in the public interest. But enforcing the Constitution is always in the public interest, and the government cannot be harmed if its own unconstitutional activity is enjoined. If it seems likely the government is violating the First Amendment, then the plaintiff’s equities almost certainly outweigh the government’s.

By concluding that the district court had not abused its discretion in failing to consider the merits of a First Amendment plaintiff’s claims, the Fifth Circuit fundamentally altered the preliminary injunction standard laid out by the Supreme Court, which should be applied to the most egregious abridgments of speech. Dissenting from denial of rehearing en banc, Judge Elrod put it succinctly: “The panel opinion’s flawed preliminary injunction analysis permits perhaps the most egregious deprivation of First Amendment rights possible: a content-based prior restraint.”

While some people are frightened by the prospect of 3D-printed guns—including, perhaps, some of the judges in the lower courts here—that is no reason to allow the government to shut down speech about such guns without ensuring that the restrictions comport with the strictures of the First Amendment. Even if you don’t like guns, this case should concern you because the government should not be allowed to say “national security” in order to shut down speech it doesn’t like—“first they came for the guns, and I didn’t speak up because I didn’t own guns; then they came for the…” The implications for free speech rights could be catastrophic if Defense Distributed fails to prevail in this case.

 

During the Hurricane Harvey disaster, many reporters and commentators seemed to assume that federal agencies had to take the lead in rescuing the city. And even before water levels had receded in Houston, federal politicians were promising billions of dollars in aid.

However, the large-scale federal intervention in natural disasters we saw during and after Katrina, Sandy, and Harvey is a relatively recent phenomenon. Prior to recent decades, the private sector handled much of the nation’s disaster response and rebuilding. The U.S. military and National Guard have long played important roles during natural disasters, but private charitable groups and businesses have been central to disaster response and rebuilding throughout U.S. history.

In this essay, I discuss the responses to various natural disasters in the past. The 1906 San Francisco earthquake and fire and the 1913 Great Easter Flood illustrate the impressive outpouring of private-sector support during past calamities.

1906 San Francisco Earthquake and Fire

San Francisco was struck by a massive earthquake and fire in 1906 that destroyed 80 percent of the city and killed about 3,000 people. At least 225,000 people out of about 400,000 in the city were left homeless, and 28,000 buildings were wrecked.

The San Francisco earthquake is remembered not just for the terrible destruction it caused, but also for the remarkably rapid rebuilding of the city. More than 200,000 residents initially left the city, but the population recovered to pre-quake levels within just three years, and residents quickly rebuilt about 20,000 buildings.

The private sector response to the disaster was extremely impressive. Voluntary aid poured in from around the country. John D. Rockefeller, Andrew Carnegie, and W.W. Astor, for example, each donated $100,000. Charitable groups, including the Salvation Army and the Red Cross, played a large role in relief efforts. The health care and home-products company Johnson & Johnson quickly loaded rail cars full of donated medical supplies and sent them to San Francisco.

The insurance industry was crucial to the rebuilding. About 90 percent of San Francisco residents had fire insurance from more than 100 different companies. The companies ended up paying out a massive $225 million in claims, which was equal to what the entire U.S. insurance industry had earned in profits in the prior four decades. Insurance payouts totaled about 90 percent of what was owed, as only a relatively small number of companies failed.

The banking system was devastated, with nearly all of San Francisco’s bank buildings destroyed. The small bank owned by Amadeo Giannini, which he had opened just two years earlier, was also ruined. But Giannini was able to rescue his gold and securities, and the next day he opened for business on a wharf on San Francisco Bay. His rapid response and willingness to provide loans to all types of people after the disaster helped him gain the respect of the city. His bank would eventually grow to be one of the largest in the nation, the Bank of America.

Another impressive story is that of the Southern Pacific Railroad, which immediately swung into action and provided free evacuation for more than 200,000 city residents to anywhere in the country. Within five days of the earthquake, the company had filled 5,783 rail cars with passengers leaving the city. Southern Pacific president Edward Harriman made disaster response the highest priority of his rail network. Only one day after the earthquake, the first of his rail cars full of emergency supplies left Omaha for San Francisco. Harriman personally donated $200,000 to relief efforts.

What about the government response to the San Francisco conflagration? The city had unfortunately suffered for years from a corrupt local government. The good news was that in the immediate aftermath of the earthquake, leading citizens formed essentially a new city government called the “Committee of 50,” which was credited with a very organized and effective disaster response. For its part, Congress appropriated just $2.5 million for relief to San Francisco, or about $50 million in today’s dollars.

The main federal organization that responded was the U.S. Army, which moved quickly to take control of the city and provide water, food, tents, and other relief items. Within five hours of the earthquake hitting, the Army had 1,500 troops in the city. Some of the actions of the Army were controversial, but the swift response by the commander of the nearby Presidio base is an example of how local resources and local decisionmaking are crucial in the aftermath of disasters.

1913 Great Easter Flood

The Great Easter Flood in 1913 ravaged a huge area in one of the most widespread and damaging disasters ever to strike the United States. High winds and massive flooding caused destruction and more than 1,000 deaths across 14 states from Vermont to Alabama. The U.S. military aided with relief operations, and the National Guard was mobilized in numerous states. Americans responded with huge contributions to the Red Cross and other charitable organizations aiding victims.

Ohio was the hardest hit state, and Dayton probably the hardest hit city. It was built on a flood plain, so when the city’s levee system collapsed the result was disastrous flooding. Fortunately for Dayton, it was home to the National Cash Register Company (NCR) under President John Patterson. Seeing the flood disaster that was about to happen, Patterson seized the initiative, and NCR became the central funder and organizer of relief in the city.

NCR built 300 boats to rescue flood victims, organized search teams, and provided meals and shelter for thousands of people. On its peak day, NCR’s kitchens provided meals for 83,000 flood victims. NCR headquarters also became the base of operations for the Red Cross and Ohio National Guard.

John Patterson was an interesting leader. He instituted innovative and enlightened management practices, such as providing a wide range of recreation and medical amenities for workers. But he was also an aggressive businessman, and he and other NCR executives were found guilty of violating federal antitrust laws just weeks before the flood, although the convictions were later reversed on appeal. NCR’s leaders apparently saw a chance to redeem themselves in the eyes of the community, and their remarkable efforts to save their city during the flood gained them national praise.

Historian Trudy Bell has written in detail about the 1913 disaster. One of her findings is that there were widespread refusals of aid by affected individuals and communities, apparently because of cultural norms at the time regarding personal pride and the belief in standing on one’s own feet. Some people and communities even gave back unused amounts of aid that they had received after the disaster. These days, sadly, the situation is the reverse: there is usually a large amount of fraud in relief programs in the wake of disasters.

For more on the proper federal role in natural disasters, see www.downsizinggovernment.org/dhs/fema.

 

Echoing the Genesis tales of the Great Flood and Sodom and Gomorrah, several news outlets are blaming Hurricane Harvey’s destruction on its victims’ moral failings. If not for Houstonians’ impious laissez-faire attitude toward zoning and building codes, the storm would have been less damaging—so say Newsweek and the Washington Post, anyway.

Vanessa Brown Calder debunks the zoning claim here. But what of building codes? At first blush, it seems reasonable to argue that government should have required new construction to be more resilient to severe weather. Problem is, the empirical evidence is unclear on whether supposedly storm-toughened building codes make much difference.

The late University of Georgia economist Carolyn Dehring spent much of her career examining the effects of coastal areas’ storm-protection regulations. In this Regulation article with the University of Wisconsin’s Martin Halek, she specifically looked at the effectiveness of federal requirements for building codes in hurricane zones. The authors found that houses built prior to the federally mandated codes were more resilient in hurricanes than houses built under the codes. The codes apparently encouraged a “race to the bottom” in which builders focused on meeting the government requirements rather than nature’s destructiveness.

On the other hand, a new working paper examines the effects of Florida’s 2001 statewide building code, which was drafted in response to the damage from 1992’s Hurricane Andrew. The authors find that wind damage to homes built under the code was much less than damage to homes built before it. More important, the savings from the reduced damage more than offset the increased construction costs under the code. Peter Van Doren summarizes this paper here.

So, for now, it seems uncertain whether government building codes provide effective protection against extreme weather—especially weather that drops 50+ inches of rain in a few days’ time.

“The sense of responsibility is always strongest, in proportion as it is undivided,” Hamilton argued in Federalist 74; for that reason, the broad constitutional power to pardon was best vested in “a single man,” the president, who could be expected to wield it with “scrupulousness and caution.” Things aren’t exactly working out according to plan so far in the Trump presidency.

Trump’s first presidential pardon, accomplished with an end-run around his own Justice Department, went to former Maricopa County, AZ sheriff Joe Arpaio, an unrepentant, serial abuser of power. If, as Hamilton suggested, “humanity and good policy” are the ends the pardon power is supposed to serve, its exercise in this case served neither. 

“All agree the U. S. President has the complete power to pardon,” Trump tweeted in July. Subject to a few caveats (has to be a federal offense, no pardons for future acts, doesn’t apply “in Cases of Impeachment”), “complete power” is pretty close to the truth. The legal scholar Sanford Levinson has called the pardon power “perhaps the most truly monarchical aspect of the presidency.”

The Framers were aware that so broad a prerogative might be abused, and delegates to the Constitutional Convention and Ratification debates repeatedly identified impeachment as the essential check. At the Philadelphia Convention, when Edmund Randolph moved to remove “cases of treason” from the power’s scope, James Wilson retorted that “Pardon is necessary for cases of treason, and is best placed in the hands of the Executive. If he be himself a party to the guilt he can be impeached and prosecuted.” At the Pennsylvania ratifying convention later that year, one delegate addressed the objection that the president could pardon treasonous coconspirators by noting that “the President of the United States may be impeached before the Senate, and punished for his crimes.”

In Virginia, another observed that because “the President himself is personally amenable for his mal-administration, the power of impeachment must be a sufficient check on the President’s power of pardoning before conviction.” And when George Mason warned that the president “may frequently pardon crimes which were advised by himself,” James Madison replied that

“There is one security in this case to which gentlemen may not have adverted: if the President be connected, in any suspicious manner, with any person, and there be grounds to believe he will shelter him, the House of Representatives can impeach him; [and] they can remove him if found guilty.”

Still, no president has ever been impeached for misusing the power. Only one ever came anywhere close. In the House of Representatives’ first, failed attempt to impeach President Andrew Johnson, in 1867, one of the charges specified that Johnson had “abused the pardoning power conferred on him by the Constitution, to the great detriment of the public, in releasing… the most active and formidable of the leaders of the rebellion.” The resolution failed 108-57—Johnson wouldn’t be impeached until the following year, after he defied the Tenure of Office Act by firing Secretary of War Edwin Stanton.

It’s not that the clemency power hasn’t been abused: in his book on the subject, American University’s Jeffrey Crouch notes an increasing trend toward self-interested pardons that shield presidents from legal trouble, or carry political and financial gain. But “each of these clemency decisions was made by a president protected from electoral consequences,” late in his second term.

Trump has already broken that pattern, consequences be damned. As Cato adjunct scholar Josh Blackman notes, the Arpaio pardon “came less than eight months into this presidency, and it went to a sheriff who consistently flouted court orders. This is the beginning, not the end.” Indeed, the Washington Post reported in July that, under pressure of the special counsel’s Russia investigation, Trump had “asked his advisers about his power to pardon aides, family members and even himself.” Trump denounced the report as “FAKE NEWS,” but special counsel Robert Mueller appears to believe it’s a live possibility, judging by his recent maneuvers.

Standing alone, the Arpaio pardon could likely never muster the majority necessary to sustain an impeachment. But it may not stand alone. And, as the Nixon-era House Judiciary Committee staff argued in its comprehensive report on the “Constitutional Grounds for Presidential Impeachment,” “the cause for the removal of a President may be based on his entire course of conduct in office.”

Roger Pielke makes good points about climate change and hurricanes in the Wall Street Journal today, but his ideas for federal policy action are off-base.

Pielke proposes that we “enhance federal capacity” for natural disasters and create a National Disaster Review Board. In my 2014 study on FEMA, I argue the opposite—that enlarging the federal role would be counterproductive.

Federalism is supposed to undergird America’s system of handling disasters, particularly natural disasters. State, local, and private organizations should play the dominant role. Throughout American history, disasters have generated large outpourings of aid from individuals, businesses, and charities, and we have seen a similarly wonderful response to Hurricane Harvey.

Pielke says that the federal government “plays a crucial role in supporting states and local communities to prepare for, respond to and recover from disasters.” But the federal role in preparation and recovery is not crucial, as it mainly involves handing out cash. The states have their own cash, and my study describes the disadvantages of pushing costs onto federal taxpayers.

As for disaster response, federal involvement is appropriate when agencies have unique capabilities to offer, such as the Coast Guard’s search and rescue capabilities. But it is mainly state, local, and private entities that own the needed resources and are on the scene to assist in emergencies. The states, for example, employ 1.3 million people in police and fire departments. As for the private sector, the 9/11 Commission report noted, “85 percent of our nation’s critical infrastructure is controlled not by governments but by the private sector, [so] private-sector civilians are likely to be the first responders in any future catastrophes.”

When the states need additional resources after a disaster, they can and do rely on help from other states under mutual aid agreements. Similarly, electric utilities have longstanding agreements with each other to share resources when disaster strikes. Such horizontal support makes more sense than top-down interventions from Washington.

Federal intervention can impede disaster response and rebuilding because of the extra paperwork involved and the added complexity of decisionmaking. A growing federal role may also induce states to neglect their own disaster preparedness because officials assume Uncle Sam will bail them out when disaster hits.

Growing federal intervention has sadly been crowding out state, local, and private roles in handling natural disasters. We should reverse course and task the federal government only with those roles that are unique and truly beyond the capabilities of other entities in society.

For further reading, see “The Federal Emergency Management Agency: Floods, Failures, and Federalism”.

“Devastating storm may ultimately boost US GDP” read the headline on CNBC’s Market Insider. Much like the debate around price gouging (addressed here), every storm or natural disaster seems to bring with it a discussion of whether physical destruction, or at least the aftermath and reconstruction arising from it, is somehow “good for the economy.”

Now, the particular CNBC headline above may prove to be right or it may prove to be wrong for a given period, as I’ll explain below. It’s purely an empirical matter. But examining the economic impact of hurricanes such as Harvey through assessing movements in short-term GDP alone is clearly a very partial account of the legacy of such a storm.

Firstly, the obvious point. Hurricane Harvey has destroyed property and hence destroyed wealth. This is unequivocally bad for the economy. Studies from Goldman Sachs and others estimate the economic destruction at between $30 billion and $40 billion.

Yet wealth is a “stock” concept, whereas measured GDP is a flow of activity in a given period. In principle, then, it is less obvious what the short-, medium-, and long-term impacts of a hurricane on measured GDP (a measure of economic value-added at market prices) would be.

The very short-term impact of such a storm on GDP is almost certain to be negative. Hurricanes destroy productive capacity, disabling factories and, in the case of Texas, curtailing the oil refining sector. This acts as a negative supply-shock to both the local and national economy, with higher gasoline prices (an input to travel and production processes) filtering through to raise input prices and hence production costs.

Of course, as the effects of the storm fade, rebuilding activity and construction will begin. This will count towards GDP, and since much of GDP relates to voluntary activity that has real value-added, so it should. People will genuinely want to replace destroyed or damaged homes, cars, fences, and the like, and this is valuable economic activity. In Houston this reconstruction and redevelopment is likely to be quicker than it would be in many other areas, due to the less restrictive zoning laws and hence lower transaction costs associated with new building.

But will all this increase GDP overall, relative to a counter-factual in which the storm had not taken place?

As my colleague David Boaz has previously written, to look at this activity alone would be to fall for what Frederic Bastiat described as the “broken window fallacy”. If a window is smashed, one observes the window being repaired and the associated spin-off ventures as economic activity. What is unseen is the economic activity that would have existed had people been able to save for a college degree, buy a new suit, or invest in a start-up with the resources they now have to put towards rebuilding their house or replacing their car.

Whether the counterfactual path of GDP would be exactly the same or even higher than the post-storm economy is difficult to say. If there are unused resources, then it is theoretically possible that observed GDP could be temporarily higher for a period than it would otherwise have been absent a hurricane whilst construction is taking place. But that does not appear to be the empirical record. In fact, the same CNBC article shows that the New York economy performed substantially worse in the 12 quarters after Hurricane Sandy relative to the national economy.

And in the longer term the consequences are certainly negative, whether represented in measured GDP or some broader conception of economic welfare (see this study, for example). Even if there were a case in which GDP did look higher than expected for a while, this would not mean we were “richer” or somehow better off in an economic sense.

We would have expended a bunch of resources to get back to where we were before the storm, and lost out on much other valuable activity that we would have preferred to engage in. Circumstances would have altered our choices, and to the extent this meant we adjusted our preferences to buy certain repair goods and services, this would show up as economic activity. But overall, whatever measured GDP shows in the next few months, we would still be economically poorer for the destruction wrought by the storm, due to the opportunities and decisions we were unable to make as a consequence of it.

In fact, in the very long run the only way a super-damaging storm such as this could improve the economic performance of an economy is if its effects led us to reassess damaging policies such as subsidizing flood insurance.

President Trump is reportedly considering pulling the plug on the Deferred Action for Childhood Arrivals (DACA) program, which allows about 800,000 immigrants who came to the U.S. as children to live and work here lawfully. If the president does decide to end the program, it will impose a massive cost on employers who currently employ these workers. Recruiting and hiring new employees is expensive. Here are the facts:

  • DACA rescission will cost employers $6.3 billion in employee turnover costs, including recruiting, hiring, and training 720,000 new employees.
  • Every week for the next two years, U.S. employers will have to terminate 6,914 employees who currently participate in DACA at a weekly cost of $61 million.
  • Ending DACA would be the equivalent of 31 “major” regulations.

DACA recipients receive employment authorization documents (EADs). It is not illegal to work without authorization, but it is illegal for employers to hire someone who lacks authorization. Thus, DACA EADs essentially grant permission to employers to hire DACA beneficiaries for a given period—in this case, two years—without fear of employer sanctions for hiring an unauthorized worker. Note that the law prohibits employers from discriminating against foreign-born applicants purely because they have temporary authorization. Thus, if President Trump rescinds DACA, employers are the ones who will have to actually implement the policy by policing their workforce and firing DACA recipients. DACA repeal’s regulatory compliance burden will fall directly on American employers.

To estimate these costs, I reviewed 11 studies of the cost of turnover to employers. These studies included a wide variety of occupations with radically different wage levels. The most important component of turnover cost is the leaving employee’s wage, which is the marginal value of the worker’s production. The Table below displays the cost as a percent of annual wages.

As the Table shows, the estimated turnover cost ranges from 12 percent to 37 percent of annual wages, with a median of 25 percent (the average is 26 percent). This estimate is slightly lower than a U.S. Department of Labor estimate that concluded that turnover costs an employer 30 percent of the leaving employee’s salary. It is slightly higher than the 21 percent median turnover cost found in a literature survey by Boushey and Glynn (2012).

Table: Costs of Turnover in Various Occupations

| # | Turnover Cost Studies | Industries | Percent of Annual Wages | Average Costs | Hourly Wage |
|---|-----------------------|------------|-------------------------|---------------|-------------|
| 1 | Seninger, et al. (2002) | Supported living | 24% | $3,631 | $7.56 |
| 2 | Larson, et al. (2004) | Direct support professionals | 17% | $4,333 | $12.45 |
| 3 | Patterson, et al. (2010) | Emergency medical | 25% | $7,926 | $15.71 |
| 4 | Hinkin & Tracey (2000) | Hotels | 29% | $13,104 | $15.95 |
| 5 | Frank (2000) | Grocery stores | 31% | $10,848 | $17.50 |
| 6 | Dube, et al. (2010) | Various | 12% | $4,563 | $18.55 |
| 7 | Jones (1990) | Nurses | 37% | $19,402 | $25.94 |
| 8 | Barnes, et al. (2007) | Teachers | 36% | $13,446 | $30.23 |
| 9 | Appelbaum & Milkman (2006) | Various | 25% | $16,461 | $32.92 |
| 10 | Wise (1990) | Nurses | 31% | $22,557 | $36.89 |
| 11 | Milanowski & Odden (2007) | Teachers | 17% | $13,969 | $41.44 |
|   | Median (all above) |   | 25% | $13,104 | $18.55 |

Sources: See links in table and table text
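
As a quick sanity check, the median and average cited above can be reproduced directly from the table’s percent-of-wages figures (a minimal sketch):

```python
import statistics

# Percent-of-annual-wages figures from the 11 studies in the table above.
pcts = [24, 17, 25, 29, 31, 12, 37, 36, 25, 31, 17]

print(statistics.median(pcts))       # 25 -> the 25% median
print(round(statistics.mean(pcts)))  # 26 -> the 26% average
```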

An August 2017 Center for American Progress survey of DACA recipients found that their wages had risen to $17.46 hourly (or $34,920 annually). It also found that 91 percent of DACA recipients have jobs. According to my projections based on U.S. Citizenship and Immigration Services data, 790,148 people have DACA or will have DACA by September 1, 2017. Thus, roughly 719,035 employed DACA recipients are earning $25.1 billion per year. If the federal government forces employers to fire all DACA recipients, it will cost employers $6.3 billion.

The fact that some employers will receive advance notice of the expiration of their employees’ work authorization could mitigate these costs, but according to these studies, the primary cost associated with turnover is the lower productivity of new hires. Additionally, because DACA recipients’ wages have grown 69 percent over the last five years, it is likely that those DACA participants whose cancellations occur in 2018 and 2019 will have higher wages than those today. Finally, DACA participants’ employment rate has also risen year after year—four percentage points since 2016—and older participants have a higher employment rate. This again indicates that the number of firings, and hence the costs, could be higher than this projection estimates.

The costs will likely not be imposed all at once as the program will slowly unwind over a two-year period. I previously estimated the quarterly rate of expirations, based on U.S. Citizenship and Immigration Services data, which can give us an estimate of how a DACA cancellation would distribute the costs over time. Every week U.S. employers will have to terminate 6,914 DACA employees at a weekly cost of $61 million.
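
These headline numbers follow from simple arithmetic. The Python sketch below is my own back-of-the-envelope reconstruction, not the author’s worksheet; it assumes the CAP wage and employment figures above, the 25 percent median turnover cost from the Table, and an even 104-week (two-year) wind-down:

```python
# Back-of-the-envelope reconstruction of the headline DACA figures.
# Assumptions (mine): the CAP survey wage of $34,920/year, a 91%
# employment rate, the 25% median turnover cost from the Table above,
# and an even 104-week (two-year) wind-down.

recipients = 790_148             # projected DACA holders by Sept. 1, 2017
workers = recipients * 0.91      # ~719,035 employed recipients
payroll = workers * 34_920       # ~$25.1 billion in annual wages

turnover_cost = 0.25 * payroll   # ~$6.3 billion in total turnover costs
weeks = 104                      # two-year unwind

print(f"weekly terminations: {workers / weeks:,.0f}")              # ~6,914
print(f"weekly cost: ${turnover_cost / weeks / 1e6:.0f} million")  # ~$60
print(f"annual cost: ${turnover_cost / 2 / 1e9:.2f} billion")      # ~$3.14
print(f"'major rule' equivalents: {turnover_cost / 2 / 1e8:.0f}")  # ~31
```

The small gaps relative to the text ($61 million vs. roughly $60 million per week; $3.2 billion vs. roughly $3.1 billion per year) are rounding artifacts; the final line anticipates the “major rule” comparison discussed below the Figure.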

Figure: DACA Employee Terminations and DACA Rescission Turnover Costs

 

Source: See Table 1 and Cato Institute. (Note that 886,000 people have received DACA at some point, but many have had their renewals rejected or have failed to renew for other reasons; 720,000 had jobs in 2017.)

For context, the Congressional Review Act (CRA) regards any new administrative rule as a “major rule” if it will have a likely annual impact of more than $100 million. The CRA requires major rules to go through a 60-day notice and public comment period and gives Congress the opportunity to review and reject them. Because DACA was not created through a rule-making process, its termination likely does not require this process, but its rescission would still impose $3.2 billion in annual costs. Thus, ending DACA would be the equivalent of more than 30 major regulations.

President Trump is considering DACA rescission only under the threat of a lawsuit that claims DACA was unconstitutionally implemented. If that claim is valid, Congress should immediately act to pass legislation to extend employment authorization and legal status for these young immigrant workers. It should not choose to impose massive costs on employers and immigrants.

An urban fairytale is emerging in the aftermath of Hurricane Harvey. Commentators claim that because Houston lacks a traditional zoning code, Houstonians recklessly built a city with too many roads, buildings, and parking lots, and these impervious surfaces collected water rather than absorbing it, exacerbating flooding. They argue Houston doesn’t have enough absorbent surfaces with trees, grasses, and soil because of the lack of zoning.

The facts don’t support this story. It’s true Houston is the only major U.S. city without conventional Euclidean “separate-all-land-uses” zoning. But this has not reduced absorbent surface cover relative to other cities with more aggressive regulation.

In fact, a map of Houston indicates the city has a low level of impervious surface cover across more than 90% of its area. Most of the remaining 10% falls under the “average” impervious/pervious surface ratio category, and hardly any falls under the “high levels of pavement” category.

 

Source: Houston-Galveston Area Council Planning & Development Department

Of course, a more important question is how Houston stacks up against similarly sized cities that have comprehensive zoning regulation. On CNN, the chair of Georgia Tech’s School of Regional and City Planning argued that “when you have a less dense urban fabric, you’re going to have more impervious surface and you’re going to have more runoff … That’s clearly an important consideration in Houston.”

But on the contrary, Houston has substantially less impervious surface cover from buildings, roads, and parking lots (39.2%) and substantially more absorbent surface cover from trees, grasses, and soils (60.6%) than similarly populated American cities.

| City | Impervious Surface Cover (buildings, roads, parking lots, sidewalks) | Absorbent Surface Cover (vegetation, soil) |
|------|-----------------------------------------------------------------------|--------------------------------------------|
| Houston | 39.2% | 60.6% |
| New York | 61.1% | 38.8% |
| Chicago | 58.5% | 41.3% |
| Los Angeles | 54.0% | 45.8% |
| New Orleans | 41.7% | 57.8% |

 

Data Source: USDA Forest Service, 2012

Still, is it possible that urban planners would have preserved even more green space? That seems extremely unlikely: New Orleans and New York City experience hurricane and flood risks, and both have more impervious surface cover than Houston despite conventional planning and zoning. 

And it’s not as if Houston is without planners or land-use regulation. What Houston planners currently regulate, like parking requirements, minimum lot sizes, and paved easement requirements, drives impervious surface cover up, not down.

Houston should do exactly the opposite of what commentators suggest in order to reduce impervious surface cover. It should eliminate existing parking requirements and paved easement requirements, not add to them. Conventionally zoned cities would benefit from the same approach.

The idea that more zoning is a solution to Houston’s Harvey problem is wishful thinking. 

Since its publication in 1963, Milton Friedman and Anna Jacobson Schwartz’s A Monetary History of the United States has stood as a monumental scholarly accomplishment. Even critics of Friedman’s Monetarism have admired the work’s meticulous historical research, particularly its reconstruction of a data series on several measures of the U.S. money stock going back to 1867. Almost all subsequent researchers have accepted and employed Friedman and Schwartz’s numbers.

Recently, however, some have implied that Friedman and Schwartz fudged their data or cooked their numbers. For example, Joe Salerno, in a Mises Institute post entitled “Milton Friedman Debunked — by Econometricians,” explicitly accuses Friedman and Schwartz of “fudging” their data. Another post at the Institute for New Economic Thinking blog makes a similar claim with the title “Did Milton Friedman Cook His Numbers?”

These charges are very serious, but on close examination they turn out to be based entirely on misrepresentations or misunderstandings of some relatively minor and arcane criticisms of the velocity-of-money analysis that Friedman and Schwartz published in a 1982 volume, nearly two decades after their Monetary History came out. These criticisms, whether valid or not, do not challenge the accuracy of the numbers in the Monetary History but in fact rely on those very numbers.

Both posts are reporting on an article at VOX, CEPR’s Policy Portal, entitled “Milton Friedman and Data Adjustment.” Written by Neil Ericsson, David Hendry, and Stedman Hood, the article is a summary of their longer chapter in Milton Friedman: Contributions to Economics and Public Policy, edited by Robert A. Cord and J. Daniel Hammond (Oxford University Press, 2016). What follows is an extended discussion of Ericsson, Hendry, and Hood’s criticisms, but those interested in only a summary can just read the next and final sections.

Money Stock Figures

Although Salerno’s description of the VOX article is generally accurate, his post overall, like the title of the Institute for New Economic Thinking post, leaves the unwary reader with an exaggerated and misleading impression.

To begin with, the criticisms raised by Ericsson, Hendry, and Hood do not even apply to the data series on the U.S. money stock that Friedman and Schwartz presented in their classic Monetary History (1963) or in their subsequent, massive Monetary Statistics of the United States: Estimates, Sources, and Methods (1970). Indeed, the three econometricians in their VOX article are not challenging Friedman and Schwartz’s money stock figures at all.

Instead, they are challenging the analysis of money’s velocity that Friedman and Schwartz made in the later, much neglected, 1982 volume, Monetary Trends in the United States and the United Kingdom: Their Relation to Income, Prices, and Interest Rates, 1867-1975. Moreover, their criticisms of Friedman and Schwartz’s velocity analysis are based entirely on Friedman and Schwartz’s own money stock figures.

Velocity Analysis

The 1982 volume that Ericsson, Hendry, and Hood are critiquing was initially supposed to be the first of two volumes looking at the relationship between money and other economic variables. Monetary Trends, as the first of these, looks at those relationships over the long run. The second volume was going to take up the relationship between money and other economic variables over the business cycle, but by the time Monetary Trends appeared, Friedman and Schwartz had unfortunately abandoned this final volume.

Monetary Trends, in its analysis of the factors affecting the demand for money, employs the well-known equation of exchange: MV = Py. The variable y captures the impact of real income or output on the real demand for money (M/P), whereas velocity (V) is a residual variable. This makes the equation of exchange an identity, true by definition, with velocity reflecting all factors other than real income that affect money demand and the price level. If velocity falls, ceteris paribus, the demand for money rises, and vice versa. Much of the early debate between Keynesians and Monetarists was about the behavior of velocity, with Friedman long contending that it had a predictable relationship with a small number of other variables.
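
In symbols, with M the nominal money stock, P the price level, y real income, and V velocity, the identity can be rearranged to make velocity’s residual role explicit (this is just standard bookkeeping, not anything particular to Monetary Trends):

```latex
MV = Py
\quad\Longrightarrow\quad
V \equiv \frac{Py}{M},
\qquad
\frac{M}{P} = \frac{y}{V}.
```

Read this way, a fall in V with Py held fixed is the same thing as a rise in desired real money balances M/P.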

The Ericsson, Hendry, and Hood online critique of Friedman and Schwartz displays a striking chart showing the level of velocity in the U.S. from 1872 to 1975. It contains two sets of lines, one showing a much more drastic decline in velocity over the long run than the other.

Figure 1: Unadjusted and adjusted US annual and phase-average observations for velocity. Data source: VOX CEPR; Friedman and Schwartz (1982).

The set with the least drastic decline is from Monetary Trends (p. 186) and, as the VOX authors report, results from Friedman and Schwartz “adjusting the US money stock series by a linear trend of 2.5% per annum for observations before 1903, with no trend adjustment thereafter.” In other words, Friedman and Schwartz derived a more stable linear trend for velocity by adjusting upward their money stock figures between 1867 and 1903 and then re-calculating velocity. The VOX post continues: “while the unadjusted money stock for 1867 is $1.28 billion, its adjusted value is $3.15 billion: 246% of its original value.”
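
The arithmetic behind that adjusted figure is easy to check. Below is a minimal sketch, under my assumption that the 2.5 percent per annum “linear trend” is linear in logarithms (i.e., compounds continuously) over the 36 years from 1867 to 1903:

```python
import math

# Reconstructing the 1867 adjustment. Assumption (mine): the 2.5%-per-annum
# "linear trend" is linear in logs, i.e., continuous compounding over the
# 36 years from 1867 to 1903.
unadjusted_1867 = 1.28                 # billions of dollars, old M2
years = 1903 - 1867                    # 36 years of trend adjustment
adjusted_1867 = unadjusted_1867 * math.exp(0.025 * years)

print(f"adjusted 1867 stock: ${adjusted_1867:.2f} billion")         # $3.15
print(f"share of original: {adjusted_1867 / unadjusted_1867:.0%}")  # 246%
```

The output matches the $3.15 billion (246% of the original value) quoted from the VOX post.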

Although at first glance this may seem like a drastic and perhaps unwarranted adjustment, it becomes far less so in light of several observations.

Relying on Friedman and Schwartz to Criticize Friedman and Schwartz

The raw, unadjusted number of $1.28 billion for the total money stock in 1867 also appears in Friedman and Schwartz’s Monetary Trends (p. 122). It is exactly the same number in their earlier Monetary Statistics (p. 61) and (with trivial differences arising from timing) approximately the same number reported in Monetary History (p. 704; $1.31 billion). All these estimates are for the old M2, and it is only in Monetary Trends that Friedman and Schwartz make the adjustment criticized in the VOX post.

In other words, the authors of the VOX post, as they freely admit, had to rely on Friedman and Schwartz’s own money stock estimates to create the unadjusted estimates of velocity that appear in their graph. Moreover, reading from their graph, their unadjusted series for velocity is roughly identical to the velocity series used in Friedman and Schwartz’s older Monetary History (p. 774).

There was absolutely nothing deceptive about the Friedman and Schwartz velocity adjustments in Monetary Trends. Friedman and Schwartz describe in lengthy detail (pp. 216-218) how they made the adjustments and, contrary to the impression left by the VOX authors, provide a persuasive rationale for doing so. Thus, the real question is not whether Friedman and Schwartz’s data was faulty. It is whether their subsequent analysis was correct.

Financial Sophistication

What was the rationale for these adjustments? Friedman and Schwartz noticed a huge difference in the long-run velocity trend for M2 between the U.K. and U.S. during the second half of the 19th century, whereas the long-run trend was similar in both countries thereafter. They attributed this difference to greater financial sophistication in the U.K. relative to the U.S. as of 1867, with the U.S. converging to the U.K. level over the next half century. As Friedman and Schwartz put it, “the more rapid spread of financial institutions in the United States than in the United Kingdom after 1880 was probably the main reason for the near elimination by 1903 of the wide difference in velocity that prevailed in 1876-77” (p. 216). In other words, increased financial sophistication in the U.S. was generating an increased demand for money as measured by M2, with a concomitant fall in velocity.

Friedman and Schwartz are not the only scholars who have noticed this difference or have tried to explain it. The alternative but complementary explanation that I find the most plausible is provided by Richard Timberlake in chapter 9 of his Monetary Policy in the United States: An Intellectual and Institutional History (1993). He attributes the high velocity (or low demand) for M2 at the end of the Civil War to the shortages of official currency in small denominations that plagued the U.S. economy at that time.

He points out that “denominational hindrances encouraged swaps, barter, payment in kind, and use of unaccounted and unaccountable moneys to a much greater degree than … can be measured” (p. 125). Consequently, the measured money stock was not capturing all transactions in the U.S. To give just two examples, privately issued “shinplasters” used as money were quite common during this period, and the prevalence of sharecropping, essentially a barter transaction, throughout the southern states was in large part the result of postwar debilities in the South’s financial system. Notice that Timberlake’s alternate explanation gives an even more straightforward justification for adjusting the money stock upward.

Isolating a Bone of Contention

Whatever the explanation for the drastic decline in velocity in the U.S., why did Friedman and Schwartz adjust their series to eliminate this effect?

The primary answer is that they wanted to isolate the impact of interest rates on velocity. After all, this was a major bone of contention between Monetarists and Keynesians. Even today, many macro models assume, implicitly or explicitly, that interest rates are the dominant, if not the only, factor affecting velocity.

This is not to claim that Friedman and Schwartz’s adjustment is necessarily the best approach to this question. But there is certainly nothing illegitimate or misleading about what they did.

Cyclical Variability

Ericsson, Hendry, and Hood also offer some sophisticated econometric criticisms that apply to Friedman and Schwartz’s entire velocity series through 1975 and not just to the period prior to 1902. For instance, the VOX authors point out that Friedman and Schwartz “also employed data adjustment to remove cyclical variability.”

Removing cyclical variability was fully justified, given that Friedman and Schwartz were interested in long-run relationships in this volume and had intended to take up cyclical relationships in the never-completed final volume. But the VOX authors show in their graph that the statistical technique employed failed to “fully eliminate the data’s short-run variability [emphasis mine].”

In this case, their almost contradictory complaint is that Friedman and Schwartz did not adjust their data series enough.

Minor Minutiae

More significant, Ericsson, Hendry, and Hood also argue that these data adjustments undermined Friedman and Schwartz’s “empirical model constancy and goodness of fit.” I don’t think it is necessary to get into these arcane statistical quibbles. It is sufficient to point out that econometric techniques have seen major innovations since Monetary Trends was published in 1982 and that even the volume’s initial publication witnessed debates about its statistical methodology. For example, Thomas Mayer raised such issues as early as his overall favorable review of Monetary Trends in the December 1982 issue of the Journal of Economic Literature.

Ultimately, the VOX article’s criticism boils down to the claim that a “random walk model” provides a better fit. Given that Friedman and Schwartz never contended that velocity was perfectly constant, notice how the argument has now been reduced to relatively minor minutiae.

Moot Modeling

Finally, there is an important sense in which these technical questions about how to precisely model velocity have become somewhat moot. Everyone recognizes that the financial deregulation of the 1980s caused the velocity of money to behave in unpredictable ways. Friedman himself conceded as much in, among other places, a Wall Street Journal article on August 19, 2003. This led to the widespread abandonment of monetary targeting by central banks (to the extent that they ever actually practiced it). Indeed, it is one of the reasons Friedman altered his preferred monetary policy from increasing M2 at some fixed rate to instead freezing the monetary base while permitting banks to issue banknotes.

Conclusion

To sum up, claiming that Friedman and Schwartz “fudged their data” or “cooked their numbers” is a gross misrepresentation. Even critics of their theoretical conclusions rely on their raw numbers. Friedman and Schwartz can be challenged on minor econometric issues, particularly their analysis of velocity’s behavior. But the erratic behavior of velocity beginning in the 1980s has diminished the relevance of even these questions. In contrast, Friedman and Schwartz’s estimates of the money stock in the U.S. (prior to the Fed’s reporting those numbers) not only remain the best we are likely ever to have but set a standard for historical and statistical research that has been rarely, if ever, matched.

[Cross-posted from Alt-M.org]

Nobel laureate James Buchanan has been in the news lately, especially because of a book that seeks to link his 7000 pages of economic writing to both Dixiecrat segregationists and Charles Koch’s secret plan “to radically alter our government in ways that will be devastating to millions of people.” The thesis of Democracy in Chains by Nancy MacLean is that public choice economics is a radical plan to “shackle the people’s power,” “to put democracy in chains.” Oddly, she claims (without evidence) that he set out on this project because he resented the Supreme Court’s decision in Brown v. Board of Education – which of course used “undemocratic” means to overturn the democratic decisions of legislatures in various states.

Buchanan certainly was concerned with how to achieve justice, efficiency, and “prevention of discrimination against minorities” in the context of majority rule. Throughout his work he explored how to design constitutional rules to bring about optimal outcomes, including a balanced budget requirement, supermajorities, and constitutional protection of individual rights. He worried that both majorities and legislatures would be short-sighted, economically ignorant or inefficient, and indifferent to the imposition of burdens on others.

And today a Washington Post column by Dana Milbank illustrates one of the big problems that Buchanan sought to solve: the temptation of legislatures to spend money with little regard for what two of his students called “deficits, debt, and debasement.” Looking outward from Hurricane Harvey to the upcoming congressional session, Milbank wrings his hands:

Harvey makes landfall in Washington as soon as next week, when President Trump is expected to ask for what could be tens of billions of dollars in storm relief. And paying for storm recovery — probably with few offsetting spending cuts — will be but the first blow to fiscal discipline in what looks to be a particularly active, and calamitous, spending season.

It’s not just disaster relief. The Pentagon is hoping for tens of billions of additional dollars. And Republicans may pivot from “tax reform” to mere tax cuts. It’s easier just to spend money and cut taxes than to reform the flood insurance program, make the tax system more efficient, and focus military spending on actual defense needs, much less to think about the national debt and the next generation.

Trump, who came to power promising to eliminate the $20 trillion debt, or at least to cut it in half, is poised to oversee an exponential increase in that debt. Republicans, who came to power with demands that Washington tackle the debt problem, could wind up doing at least as much damage to the nation’s finances as the Democrats did….

If the red ink rises according to worst-case forecasts, “we’re talking additions to the debt in the trillions,” Maya MacGuineas, president of the Committee for a Responsible Federal Budget, tells me. All from actions to be taken in the next few months. “It turns out the Republican-run Congress is not willing to make the hard choices,” she says. “It is a fiscal free-lunch mentality on all sides.”

We’ve heard a lot over the past few years about a “dysfunctional” Congress. Many critics mean that Congress doesn’t pass enough laws. But this is the real dysfunction: a Congress that spends money with little thought to the future. The national debt doubled under President George W. Bush and doubled again under President Barack Obama. President Trump and the Republican Congress are just getting started, but the prospects don’t look good.

Milbank, MacGuineas, and others who worry about the “fiscal free-lunch mentality” should read some Buchanan. As one scholar put it in a reflection on Buchanan’s work, “Perhaps legislatures would do better if supermajorities were required whenever transfers to current recipients will burden future generations.” Perhaps so. And perhaps constitutional guarantees of individual rights, judicial protection of those rights, and limits on the legislature’s free-lunch mentality are all part of what Buchanan called the constitutional political economy of a free society.

In July, my Cato colleague Ari Blask and I wrote a study critiquing the National Flood Insurance Program.

We made what—to us—seem like obvious critiques of a broken program: it doesn’t charge an actuarially fair price for many homeowners who live in flood plains, and those who get the best deal seem to be the wealthy. It also fails to use updated maps that detail the current geography and risks to homeowners, and generally doesn’t charge enough to cover the costs of major catastrophes. The result of this is that we have too much development in the flood-prone areas of the country.

After the report came out I appeared on a few radio and TV shows and received a few emails about our research, and the main complaint I received—in fact, virtually the only feedback I got—was from people who said they were in a 100-year floodplain but had never seen any flood. They were angry that their bank made them buy this insurance to get a mortgage, and they didn’t think it was fair.

On Thursday, I was a guest on another radio show to talk about the aftereffects of Hurricane Harvey, and the second caller complained about… having been forced to purchase flood insurance despite his belief that his house couldn’t flood. The next caller chimed in with the same complaint.

Apparently not too many people see Harvey as a precautionary tale. But that’s only natural: human beings aren’t great at perceiving risk. Surveys show, for instance, that people worry much more about dying in a terrorist attack or a plane crash than about slipping in the shower or being struck by lightning, even though the latter are far more deadly.

However, insurance companies tend to be pretty good at discerning risk—they go bankrupt if they’re bad at it. Life insurers hire teams of people to try to understand longevity risk, for instance, and property and casualty insurers do the same to understand the risk to homes.

They use this information in pricing life insurance and homeowners insurance, and they have every incentive to get it right: charge too high a price and they won’t get much business; charge too low a price and they will lose money.

The federal government—which administers the National Flood Insurance Program—does not have such a keen incentive to get prices right. In fact, in many regions of the country it uses flood insurance maps that are decades out of date, because it would anger homeowners—who are also voters—to force them to pay more than they think is fair.

I am not sure this is going to change. One thing that Harvey will do is bring more pressure on the federal government to provide “affordable” flood insurance to more people.

However, not all places merit cheap flood insurance—certainly not the Gulf Coast or the North Carolina beaches, where flooding is not uncommon. The people who live there tend to be well off already, and many of the properties at issue are vacation homes.

Harvey is unusual in that it created a monumental flood in a place where most of the homes were not in a flood zone. Some meteorologists described it as a “1,000 year flood.” It is not at all clear how the government will react to this disaster—hopefully it decides that it should be a priority to accurately measure flood risk, for starters. While the government is clearly not beyond subsidizing rich people (that’s what the mortgage interest deduction is all about, after all) perhaps we shouldn’t use this disaster to expand the National Flood Insurance Program and its inherent subsidies, as I fear people will begin to suggest.

Most people agree that the federal government should assist the public after unforeseen natural disasters such as Hurricane Harvey.

However, the government’s role in the flood insurance market has exacerbated the damage done by catastrophic floods. Because flood insurance premiums are set too low, homeowners forgo cost-effective mitigation efforts and we see more development in flood-prone areas than would otherwise be the case.

As Congress begins to deliberate the re-authorization of the National Flood Insurance Program, it is worth asking why the federal government feels obligated to provide flood insurance in place of the private market, and whether concentrating its efforts on disaster mitigation and relief, rather than poorly administering an insurance program, might be a more appropriate role.

Having such a debate in the aftermath of a tragedy like Hurricane Harvey may lead the government simply to throw money at the problem. But the reality is that extricating itself from the flood insurance market would be the best thing the federal government could do in the long run, both to mitigate the damage caused by future floods and to reduce its own obligations during such disasters.

Public schooling monopolists such as Randi Weingarten, president of the American Federation of Teachers, argue that private school choice programs undermine our democratic society. One fundamental argument, made frequently, is that, given the opportunity, self-interested individual families would choose a “less-than-socially optimal” level of schooling, since education may be a merit good.

In other words, if my children receive an education, the rest of society benefits from the transaction without having to pay for it directly. After all, other members of society will benefit if my children grow up to be well-informed voters and law-abiding citizens. The conclusion made by some economists is that the government ought to be able to force the rest of society – the free-riders – to subsidize schooling so that the collective could reach some “socially optimal” level of education.

However, such a conclusion assumes that having a more educated populace is the only externality associated with traditional schooling. In order to better understand the overall effect, I have created a list of possible positive and negative externalities associated with government schooling.

Positive Externalities:

  • A more educated citizenry – the rest of society benefits when they have educated people to interact with. Also, democracy might function more effectively with highly educated and informed voters.
  • Obedience – public schools were originally designed to create more obedient citizens. If a person is more obedient to the state, they may be less likely to break the law. As a result, third parties benefit from not having their property damaged or stolen.

Negative Externalities:

  • A less educated citizenry – third parties are harmed if the compulsory levels of schooling do not maximize children’s education levels. After all, schooling is but one channel to achieve an education, and government schools do not have an incentive to provide children with optimal educational experiences.
  • Obedience – if citizens are trained to be obedient, they may be less likely to invent technologies that benefit the rest of society. In addition, obedient employees may be less productive if their job requires them to think on their feet.
  • Legitimized coercion through voting – the voting booth allows advantaged groups to exercise coercion over less fortunate members of society. Politically powerful groups can mobilize and extract resources from third parties, producing, at best, a zero-sum game.
  • Opportunity costs of the political process – citizens must use excessive amounts of time and effort in order to become politically knowledgeable about various educational policies. These scarce resources could be more efficiently allocated towards generating an income or spending time strengthening bonds within the family.
  • Inefficiency – government schools do not have an incentive to spend taxpayer resources efficiently. Consequently, we have observed public school spending increase substantially without discernible effects on observed student outcomes.

These expected externalities are summarized in Table 1 below:

Table 1: Expected Externalities of Government Schooling

Positive                                 Negative
A more educated citizenry                A less educated citizenry
Obedience (more law-abiding citizens)    Obedience (less innovation and productivity)
                                         Legitimized coercion through voting
                                         Opportunity costs of the political process
                                         Inefficiency

So is government schooling a merit good?

It certainly doesn’t seem like it if we consider all relevant externalities. Indeed, if I had to guess the sign of the net externality, I would argue that it is more likely to be negative overall. Nonetheless, since all of these positive and negative externalities are uncertain and likely to be very large in magnitude, I do not believe it is even possible to accurately calculate the sign of the net externality.

Rather than attempt to reach some imaginary socially optimal level of schooling, we ought to acknowledge that externalities exist in education, but that government intervention likely creates more negative effects than it eradicates. Instead, we can improve overall social welfare by eliminating many of the negative externalities produced by government involvement in the education system.

Earlier this year my colleague Logan Albright and I estimated the economic and fiscal costs that a full and immediate repeal of DACA would impose on the federal government and the economy as a whole. DACA stands for Deferred Action for Childhood Arrivals, an executive action by President Obama that allowed the foreign-born children of illegal immigrants who migrated with their families to remain in the U.S. if they stayed in school and subsequently obtained gainful employment.

We found that the aggregate economic cost would be over $200 billion and the cost to the government would be $60 billion, numbers we suggest are conservative. Most of this high cost is driven by the fact that the “dreamers” tend to do well in school and as a result do well in the job market after they complete their education.

To shed some further light on this issue we recently updated our analysis to break down these costs by the individual states.

We began our original analysis by comparing DACA recipients to immigrants who hold H-1B visas. These are high-skilled, well-educated immigrants who are demographically analogous to DACA students, all of whom must enroll in higher education programs in order to be eligible.

The average DACA recipient is 22 years old, employed, and a student; 17 percent of them are on track to complete an advanced degree. The college attrition rate of DACA recipients is minuscule compared to that of domestic students, an indication of the exceptional caliber and motivation of DACA students, no doubt partly driven by the fact that dropping out of school can result in deportation.

H-1B holders are generally between 25 and 34, have an employment rate of nearly 100%, and have usually completed a college education. We posit that they are akin to what DACA recipients will look like in a few years’ time.

We used a study from the Hoover Institution that estimated the economic impact of expanding the H-1B visa program as our baseline for estimating the cost of DACA repeal.[1] There are two differences between that study and our exercise: Hoover was considering an increase in numbers while we contemplate a (dramatic) decrease—an irrelevant difference for our purposes—and the two populations differ somewhat in size and salary, which does matter but is something we can easily adjust for.

If DACA recipients were completely analogous to H-1B holders, their removal would constitute a budgetary loss of $127 billion and a GDP loss of $512 billion.

DACA recipients, being younger and not completely finished with their education, earn on average roughly 43 percent of what H-1B holders earn. Also, the population of DACA recipients is about 750,000, compared to the 660,000 H-1B holders the Hoover study examined. Accordingly, we adjust our numbers by the lower wage and the higher population.

From this, we determined that, over a ten-year window, a repeal of DACA would cost the federal government $60 billion in lost revenue, and the impact on the economy would total $215 billion in lost GDP.[2]
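As a rough check on that scaling (a back-of-the-envelope sketch, not the authors’ actual model, whose published figures reflect further adjustments), the budgetary number can be approximated in a few lines:

```python
# Back-of-the-envelope scaling of the Hoover-based H-1B estimate to the DACA
# population, per the two adjustments described above. This is only a sketch;
# the published figures reflect the authors' own, more detailed adjustments.
h1b_budget_loss = 127e9                # dollars, if DACA recipients matched H-1B holders
wage_ratio = 0.43                      # DACA earnings as a share of H-1B earnings
population_ratio = 750_000 / 660_000   # DACA population vs. H-1B study population

budget_loss = h1b_budget_loss * wage_ratio * population_ratio
print(f"${budget_loss / 1e9:.0f} billion")  # ~$62 billion, in line with the $60 billion reported
```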

Our results were consistent with other work on the impact of DACA on the economy. For instance, a 2016 study published by the National Research Council[3] estimated the average long-term fiscal impact for immigrants who remain in the country for an extended period of time to be $59.3 billion, or within one percent of our own estimate.

To provide a bit more relevant data for policymakers, we have supplemented our original work by breaking down the fiscal and economic costs at the state level. Using data from a 2015 survey completed by the Center for American Progress,[4] we estimated the total cost of repealing DACA for each state based on the proportion of DACA recipients in each state.[5] Table 1 contains the breakdown of these state-level costs.

Of the 50 states, California will bear the highest cost, as it is home to over 30 percent of DACA recipients. Factoring in budgetary and economic effects, California’s total cost over a ten-year window would be $84.2 billion.
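The allocation method is simple proportional scaling, as the sketch below shows (the 30.62 percent share used here is inferred from the Table 1 figures, not taken directly from the CAP survey):

```python
# A sketch of the state-level allocation described above: national costs are
# split in proportion to each state's share of DACA recipients.
TOTAL_BUDGET_COST = 60_000     # millions of dollars, ten-year federal revenue loss
TOTAL_ECONOMIC_COST = 215_000  # millions of dollars, ten-year GDP loss

def state_costs(share):
    """Allocate the national costs to a state by its share of DACA recipients."""
    budget = TOTAL_BUDGET_COST * share
    economic = TOTAL_ECONOMIC_COST * share
    return budget, economic, budget + economic

budget, economic, total = state_costs(0.3062)  # California's approximate share
print(f"CA: ${budget:,.0f}M budget, ${economic:,.0f}M economic, ${total:,.0f}M total")
# -> about $18,372M, $65,833M, and $84,205M, matching California's row in Table 1
```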

It is important to note that these estimates are conservative, as DACA recipients will likely end up being more productive than their current salaries indicate, as they complete their degrees and gain experience in the workplace. Nor does this analysis factor in the enforcement cost of physically deporting recipients should the program be eliminated, which we believe would be significant.

The repeal or rollback of the DACA program would have a significant and negative fiscal and economic impact on the country, and disproportionately affect the various states in which DACA recipients are most prevalent.

Table 1: Cost of DACA Repeal by State, 2018–2028[6]

State   Budget Cost (Millions $)   Economic Cost (Millions $)   Total Cost (Millions $)
AL      258                        924.5                        1182.5
AZ      2826                       10126.5                      12952.5
CA      18372                      65833                        84205
CO      768                        2752                         3520
CT      642                        2300.5                       2942.5
DC      900                        3225                         4125
DE      258                        924.5                        1182.5
FL      5910                       21177.5                      27087.5
GA      1158                       4149.5                       5307.5
HI      126                        451.5                        577.5
IA      258                        924.5                        1182.5
IL      1926                       6901.5                       8827.5
IN      642                        2300.5                       2942.5
KS      384                        1376                         1760
MA      258                        924.5                        1182.5
MD      642                        2300.5                       2942.5
MI      768                        2752                         3520
MN      126                        451.5                        577.5
MO      126                        451.5                        577.5
NE      126                        451.5                        577.5
NJ      384                        1376                         1760
NM      258                        924.5                        1182.5
NV      126                        451.5                        577.5
NY      10794                      38678.5                      49472.5
NC      2184                       7826                         10010
OH      126                        451.5                        577.5
OK      126                        451.5                        577.5
OR      384                        1376                         1760
PA      258                        924.5                        1182.5
SC      258                        924.5                        1182.5
TN      258                        924.5                        1182.5
TX      5142                       18425.5                      23567.5
UT      384                        1376                         1760
VA      1026                       3676.5                       4702.5
WA      1800                       6450                         8250

Logan Albright, Director of Research at Free the People, co-authored this report.


[1] http://www.hoover.org/sites/default/files/uploads/aafs/2013/05/Estimating-the-Economic-and-Budgetary-Effects-of-H-1B-Reform-In-S.744.pdf

[2] To conform to congressional budget procedures, we compiled a ten-year aggregate cost.

[3] The Economic and Fiscal Consequences of Immigration, National Academies Press, 2016.

[4] “Results of a Tom K. Wong, National Immigration Law Center, and CAP Survey,” Center for American Progress Memo, June 2015.

[5] The CAP survey found that nearly the entire DACA population resides in 35 states.

[6] Several states have the same estimates because they happen to have the same number of survey respondents in their states.

President Trump’s executive order attempted to temporarily ban all refugees and all travelers or immigrants from six African and Middle Eastern countries due to a concern over widespread vetting failures. The purpose of the temporary ban was to give the administration time to “improve the screening and vetting protocols and procedures.” The order grounded this concern in one fact:

Recent history shows that some of those who have entered the United States through our immigration system have proved to be threats to our national security. Since 2001, hundreds of persons born abroad have been convicted of terrorism-related crimes in the United States.

These statements contain five clear implications: 1) that these “hundreds of persons born abroad” committed acts of terrorism in the United States; 2) that they came to the United States “through our immigration system”; 3) that they entered since 2001; 4) that better “screening and vetting protocols” could have prevented their entry; and 5) that these offenders pose a significant threat to Americans. Each one of these implications is false. Here are the facts:

1) Not “hundreds of persons” committing terrorism in the United States: Only 55 percent of the people convicted of “terrorism-related” offenses according to the federal government are, in fact, convicted of involvement in terrorism.

2) Not “hundreds” through our immigration system: Fewer than 200 foreigners convicted of or killed during terrorism offenses since 9/11 entered “through our immigration system.”

3) Not “hundreds” entering since 9/11: Only 34 foreigners convicted of or killed during terrorism offenses since 9/11 entered “through our immigration system” since 2001.

4) Not “hundreds” slipping through “screening” since 9/11: Only 18 likely radicalized prior to entry—just six refugees, and only four offenders from the six banned countries.

5) Not a significant threat: No refugee or national of the banned countries has successfully carried out a deadly terrorist attack in the United States in over four decades.

In the aftermath of the world’s worst terrorist attack on September 11, 2001, the U.S. government rapidly responded with much stricter vetting for foreign visitors, immigrants, and refugees. It created new terrorist watch lists, required biometric verification of identities, instituted mandatory visa interviews, hired thousands of new consular officers, improved inter-agency intelligence sharing, and much else. America’s pre-9/11 visa vetting system has almost nothing in common with today’s system. For this reason, it is appropriate to begin the analysis of immigration vetting failures with 9/11.

The government’s terrorism-“related” definition inflates the number of terrorism convictions

The executive order does not reveal the source for the claim that “hundreds of persons born abroad have been convicted of terrorism-related crimes,” but the National Security Division (NSD) of the Department of Justice (DOJ) has published a list of 627 unsealed “terrorism-related convictions” from October 2001 to December 2015. Of this list, however, nearly half—45 percent—were not convicted of a terrorism offense. NSD includes them because the prosecution began with a terrorism investigation, even if it did not end with a terrorism conviction. The non-terrorism convictions consist mainly of false statements to investigators, ID fraud, immigration violations, and drug offenses. Other “terrorism-related” offenses include child pornography, social security fraud, and stealing truckloads of cereal.

Because the NSD list is overbroad, incomplete, and not fully up to date, I also reviewed all terrorism offenders whose convictions were publicized on the DOJ website since 2015, as well as those included in George Washington University’s Program on Extremism (GW) or in the New America Foundation’s International Security Program (NAF). NAF includes offenders who lived in the United States for a period before being killed, whether in the United States or abroad. I created a combined list from NSD, DOJ, GW, and NAF that includes only those convicted of or killed during terrorism offenses. I used court filings and news reports to identify the dates and places of birth and the years of entry for each of them. In two cases, I was unable to nail down exact entry years, but the fact that these individuals naturalized or were in the process of naturalizing means they had to have been in the country with legal permanent residency for at least five years.

Many foreign-born terrorism offenders did not go “through the immigration system”

Of the actual terrorism offenders, nearly 60 percent were either born in the United States or brought into the country by U.S. law enforcement for prosecution or arrest, leaving 195 other foreign-born terrorism offenders who entered “through the immigration system” at some point, not the “hundreds” claimed in the executive order. Of these, however, only 34 entered through the system after 9/11 (another one entered illegally), again far fewer than hundreds.

Finally, these 34 were not all vetting failures, either. To begin with, 14 entered as juveniles, including nine who entered at 15 years old or younger (Abdul Artan’s exact age is uncertain, so I included him as an adult). Six of the juveniles converted from Christianity to extremist Islam. Focusing solely on the 20 adults, we find that the government determined that radicalization occurred prior to entry in just 11 cases. In another nine cases, no determination was made, but in two of them it is apparent from biographical details and post-entry behavior that the offenders most likely did not radicalize until after their entry to the United States (both entered as teenagers and lived here for eight years before their offenses). Thus, even if we assume that all seven of the remaining uncertain cases radicalized prior to entry, there were at most 18 vetting failures since 9/11—not hundreds.
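The tally behind that figure can be made explicit (a simple restatement of the counts given above):

```python
# Tally of post-9/11 vetting failures, using the counts described above.
post_911_entrants = 34    # entered through the immigration system after 9/11
juveniles = 14            # entered as juveniles, treated separately
adults = post_911_entrants - juveniles               # 20

radicalized_before_entry = 11   # government determined pre-entry radicalization
no_determination = 9            # cases with no official determination
likely_post_entry = 2           # biography suggests post-entry radicalization
uncertain = no_determination - likely_post_entry     # 7

# Worst case: assume every remaining uncertain adult radicalized before entry.
vetting_failures = radicalized_before_entry + uncertain
print(vetting_failures)   # 18, the "at most 18 vetting failures" cited above
```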

Very few terrorism vetting failures were from banned countries

The 34 terrorism offenders came from 22 different countries. Notably, the list includes eight individuals from non-Muslim-majority countries. Of the banned countries—Iran, Libya, Syria, Somalia, Sudan, and Yemen—only three are represented on the list. The 18 vetting failures came from 13 countries, and no single country accounted for more than three. Four of the six banned countries had no likely vetting failures since 9/11, which means that eleven countries with vetting failures are not on the banned list—and they account for 78 percent of all vetting failures.

Terrorism offenders have entered or received status in the United States through several avenues. President Trump’s executive order specifically targets the refugee program, which accounts for 26 percent of post-9/11 terrorism offenders and a third of all vetting failures; all other avenues combined account for the remaining 67 percent of vetting failures. In absolute terms, the refugee program’s share was just six people. Six deviants simply cannot justify shutting down a program that has admitted nearly a million new U.S. residents since 9/11.

Terrorism vetting failures from banned countries caused zero deaths since 9/11

Vetting failures from refugees or the six banned countries represent a tiny portion of the terrorism offenders since 9/11—to be precise, less than 2 percent. More importantly, these offenders caused no deaths. Refugees and nationals of these countries simply have not successfully killed anyone in a terrorist attack in the United States in the last four decades. In fact, 14 of the 34 terrorism offenders were not involved in a plot to kill anyone in the United States—they were mainly either going overseas to join a terrorist organization or sending money to one. Among the 18 vetting failures, fully half were not attempting to kill anyone in the U.S., and only one actually killed anyone: Tashfeen Malik, a Pakistani woman who immigrated using a family-based nonimmigrant visa (a fiancé K visa).

These facts contradict the administration’s claims that vetting failures are widespread and that a total rewrite of the system is necessary. My colleague has previously noted that the risk of foreign-born terrorism is minuscule: just a 1 in 3.6 million chance of dying in a terrorist attack on U.S. soil per year. The risk from a post-9/11 vetting failure is more than a hundred times smaller.
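In numbers, the comparison looks like this (a simple sketch using only the figures just cited):

```python
# Relative-risk arithmetic for the figures cited above.
foreign_terror_risk = 1 / 3.6e6                     # annual death risk, all foreign-born attackers
vetting_failure_risk = foreign_terror_risk / 100    # "more than a hundred times smaller"
print(f"at most 1 in {1 / vetting_failure_risk:,.0f} per year")   # 1 in 360,000,000
```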

Table 1: Foreign Terrorism Offenders Killed or Convicted Who Entered Through the Immigration System After 9/11

     Country of Birth    All Post-9/11 Entries    Likely Vetting Failures
 1   Albania                  1     2.9%               1     5.6%
 2   Bangladesh               1     2.9%               1     5.6%
 3   Cuba                     1     2.9%               0     0.0%
 4   Ethiopia                 1     2.9%               0     0.0%
 5   India                    1     2.9%               0     0.0%
 6   Iran†                    1     2.9%               0     0.0%
 7   Iraq                     3     8.8%               2    11.1%
 8   Jordan                   1     2.9%               1     5.6%
 9   Kenya                    1     2.9%               0     0.0%
10   Kuwait                   1     2.9%               1     5.6%
11   Kyrgyzstan               2     5.9%               0     0.0%
12   Lebanon                  1     2.9%               1     5.6%
13   Libya†                   0     0.0%               0     0.0%
14   Mexico                   2     5.9%               0     0.0%
15   Nicaragua                1     2.9%               0     0.0%
16   Nigeria                  1     2.9%               1     5.6%
17   Pakistan                 3     8.8%               2    11.1%
18   Philippines              1     2.9%               0     0.0%
19   Saudi Arabia             1     2.9%               1     5.6%
20   Somalia†                 4    11.8%               3    16.7%
21   Sudan†                   2     5.9%               1     5.6%
22   Syria†                   0     0.0%               0     0.0%
23   United Kingdom           1     2.9%               1     5.6%
24   Uzbekistan               3     8.8%               2    11.1%
25   Yemen†                   0     0.0%               0     0.0%
     Total*                  34     100%              18     100%

† = banned country. *One offender who entered illegally, from Kazakhstan, is not represented.
Sources: Department of Justice, National Security Division; George Washington University; New America Foundation.

Table 2: Foreign Terrorism Offenders Killed or Convicted Who Entered Through the Immigration System or Illegally After 9/11

Status                         All Terrorism Offenders    Likely Vetting Failures
Resident                           14    40.0%                 5    27.8%
Refugee                             9    25.7%                 6    33.3%
Student                             4    11.4%                 3    16.7%
Asylum & Other Humanitarian         3     8.6%                 0     0.0%
Tourist                             2     5.7%                 2    11.1%
Family-Based Temporary              1     2.9%                 1     5.6%
Visa Waiver                         1     2.9%                 1     5.6%
Employment Temporary                0     0.0%                 0     0.0%
Cultural Exchange                   0     0.0%                 0     0.0%
Diplomatic                          0     0.0%                 0     0.0%
Illegal                             1     2.9%                 0     0.0%

Sources: See Table 1.

Table 3: Foreign Terrorism Offenders Killed or Convicted After 9/11 Who Entered Through the Immigration System As Adults

     Name                   Offense   Born in        Charge Year   Entry Year   Entry Age   Years in U.S.   Status     Deaths   Vet. Fail
 1   Reid, Richard          US Plot   Britain        2001          2001         28          0               VWP        0        YES
 2   Mohammed, Gufran       Abroad    India          2013          2003         20          10              Resident   0        NO
 3   Mohamud, Ahmed Nasir   Abroad    Somalia†       2011          2004         28          7               Resident   0        NO
 4   Ahmad, Jubair          Abroad    Pakistan       2011          2007         19          4               Resident   0        YES
 5   Mohamed, Ahmed         Abroad    Kuwait         2007          2007         25          0               Student    0        YES
 6   Alwan, Waad            Abroad    Iraq           2011          2007         28          4               Refugee†   0        YES
 7   Aldawsari, Khalid      US Plot   Saudi Arabia   2011          2008         18          3               Student    0        YES
 8   Hassoun, Sami Samir    US Plot   Lebanon        2010          2008         20          2               Resident   0        ?
 9   Hasbajrami, Agron      Abroad    Albania        2011          2008         24          3               Resident   0        ?
10   Ibrahim, Abdinasir     Abroad    Somalia†       2014          2008         32          6               Refugee†   0        YES
11   Hammadi, Mohanad       Abroad    Iraq           2011          2009         20          2               Refugee†   0        YES
12   Kodirov, Ulugbek       US Plot   Uzbekistan     2011          2009         20          2               Student    0        NO
13   Abdulmatallab, Umar    US Plot   Nigeria        2010          2009         23          1               Tourist    0        YES
14   Kurbanov, Fazliddin    US Plot   Uzbekistan     2013          2009         27          4               Refugee†   0        ?
15   Fazeli, Adnan          Abroad    Iran†          2016          2009         31          7               Refugee†   0        NO
16   Esse, Amina            Abroad    Somalia†       2014          2009         35          5               Refugee†   0        ?
17   Juraboev, Abdurasul    Abroad    Uzbekistan     2015          2011         21          4               Resident   0        ?
18   Nafis, Quazi           US Plot   Bangladesh     2012          2012         21          0               Student    0        YES
19   Elhassan, Mahmoud      Abroad    Sudan†         2016          2012         22          4               Resident   0        YES
20   Malik, Tashfeen        US Plot   Pakistan       2015          2014         28          1               Fiancé     14*      YES
21   Artan, Abdul Razak     US Plot   Somalia†       2016          2014         ~16         2               Refugee†   0        ?

† = banned country or refugee status.
*She carried out the attack with her husband, but all of their victims are represented here.
Sources: See Table 1.

Table 4: Foreign Terrorism Offenders Killed or Convicted After 9/11 Who Entered Through the Immigration System As Juveniles

     Name                    Offense   Born in       Charge Year   Entry Year   Entry Age   Years in U.S.   Status          Deaths   Vet. Fail
22   Tsarnaev, Dzhokhar      US Plot   Kyrgyzstan    2013          2002         9           11              Asylum          3        NO
23   Suarez, Harlem*         US Plot   Cuba          2015          2002         11          13              Asylum          0        NO
24   Daud, Abdirahman        Abroad    Kenya         2015          2003         9           12              Refugee         0        NO
25   Deleon, Ralph*          Abroad    Philippines   2012          2003         14          9               Resident        0        NO
26   Tsarnaev, Tamerlan      US Plot   Kyrgyzstan    2013          2003         16          10              Asylum          3        NO
27   Martinez, Antonio*      US Plot   Nicaragua     2010          2004         15          6               Resident        0        NO
28   Melaku, Yonathan*       Other     Ethiopia      2011          2005         16          6               Resident        0        NO
29   Santana, Miguel*        Abroad    Mexico        2012          2007         <16         >5              Resident        0        NO
30   Smadi, Hosam            US Plot   Jordan        2009          2007         16          2               Tourist         0        ?
31   Badawi, Muhanad         Abroad    Sudan         2015          2007         16          8               Resident        0        NO
32   Khalid, Mohammad        Abroad    Pakistan      2011          2008         ~9**        8               Resident        0        NO
33   Al Hardan, Omar         Abroad    Iraq          2016          2008         17          8               Refugee         0        Likely No
34   Garcia, Sixto Ramiro*   Abroad    Mexico        2015          2010         <15         >5              Resident        0        NO
35   Saidakhmetov, Akhror    Abroad    Kazakhstan    2015          2011         15          4               Illegal Entry   0        NO

*Converted to Islam. **Verified through personal correspondence with attorney.
Sources: See Table 1.

Net primary production (NPP) represents the net carbon that is fixed (sequestered) by a given plant community or ecosystem. It is the combined product of climatic, geochemical, ecological, and human effects. In recent years, many have expressed concerns that global terrestrial NPP should be falling due to the many real (and imagined) assaults on Earth’s vegetation that have occurred over the past several decades—including wildfires, disease, pest outbreaks, and deforestation, as well as overly-hyped changes in temperature and precipitation.

The second “National Assessment” of the effects of climate change on the United States warns that rising temperatures will necessarily result in the reduced productivity of major crops, such as corn and soybeans, and that crops and livestock will be “increasingly challenged.” Looking to the future, the National Assessment suggests that the situation will only get worse, unless drastic steps are taken to reduce the ongoing rise in the air’s CO2 content (e.g., scaling back on the use of fossil fuels that, when burned, produce water and CO2).

But is this really the case? If growing crops are increasingly affected, damage should also be showing up in the global ecosystem. Is the productivity of the biosphere in decline?

In a word, no! Observational data indicate that just the opposite is occurring (see, for example, the many studies reviewed previously on this topic here). Rather than withering away, biospheric productivity is increasing, thanks in large measure to the growth-enhancing, water-saving, and stress-ameliorating benefits of atmospheric CO2 enrichment.

The latest study to confirm as much comes from the research team of Li et al. (2017). Working with a total of 2,196 globally-distributed databases containing observations of NPP, as well as the five environmental variables thought to most impact NPP trends (precipitation, air temperature, leaf area index, fraction of photosynthetically active radiation, and atmospheric CO2 concentration), Li et al. analyzed the spatiotemporal patterns of global NPP over the past half century (1961–2010).

Results of their analysis are depicted in the figure below, which shows that global NPP increased significantly from 54.95 Pg C yr-1 in 1961 to 66.75 Pg C yr-1 in 2010 (Figure 1a). That represents a linear increase of 21.5 percent over the last half-century. In quantifying the relative contribution of each of the five variables impacting NPP trends (Figure 1b), Li et al. report that “atmospheric CO2 concentration was found to be the dominant factor that controlled the interannual variability and to be the major contribution (45.3%) of global NPP.” Leaf area index, which is also enhanced by increasing atmospheric carbon dioxide, was the second most important factor, contributing an additional 21.8 percent, followed by climate change (precipitation and air temperature together) and the fraction of photosynthetically active radiation, which accounted for 18.3 and 14.6 percent of the increase, respectively. Li et al. also report that the vast majority of the observed rise in NPP occurred in the middle and high latitudes, with 61.1 percent of the increase occurring between 30 and 60 degrees of latitude and 26.4 percent between 60 and 90 degrees of latitude of both hemispheres (see Figure 1c).

Figure 1. (A) Annual variations in global NPP between 1961 and 2010. (B) Changes in NPP in recent decades that resulted from multiple environmental factors including climate, leaf area index (LAI), fraction of photosynthetically active radiation (fPAR), and CO2, and the relative contribution rate (%) of each factor during the study period. (C) Spatial distribution of the trend in NPP during the period 1961–2010. Source: Li et al. (2017).
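A quick arithmetic check of the headline numbers quoted above from Li et al. (2017):

```python
# Check of the Li et al. (2017) figures quoted above.
npp_1961, npp_2010 = 54.95, 66.75   # global NPP, Pg C per year

print(f"{(npp_2010 - npp_1961) / npp_1961:.1%}")   # 21.5%, the cited increase

# Relative contributions of the drivers (percent), which partition the trend
drivers = {"CO2": 45.3, "leaf area index": 21.8, "climate": 18.3, "fPAR": 14.6}
print(round(sum(drivers.values()), 1))             # 100.0
```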

The observed increase in global NPP over the past five decades is quite an accomplishment for the terrestrial biosphere, especially when one considers all the negative stories—nary a day goes by without notice of some environmental disaster (human- or naturally-caused) occurring somewhere in the world and wreaking havoc on nature. Since 1980, the Earth has experienced three of the warmest decades in the modern instrumental temperature record, has weathered a handful of intense and persistent El Niño events, and suffered large-scale deforestation, “unprecedented” forest fires, disease and pest outbreaks, and episodes of persistent, widespread, and severe droughts and floods. Yet, despite each of these factors, and every other possible negative influence that has occurred over the past half century, terrestrial net primary productivity has increased by 21.5 percent! And it has done so largely because of the ongoing rise in atmospheric CO2. How ironic it is, therefore, that the supposed chief culprit behind the many real (and imagined) assaults on Earth’s vegetation—rising atmospheric CO2—has been found to be the primary cause of an ever-greener planet.

Reference

Li, P., Peng, C., Wang, M., Li, W., Zhao, P., Wang, K., Yang, Y. and Zhu, Q. (2017) “Quantification of the response of global terrestrial net primary production to multifactor global change.” Ecological Indicators 76: 245–255.

Imagine you are an employee and you suspect another employee, or your employer, has violated federal securities laws. You might want to report these violations to your employer (internal reporting), or you might want to tell the federal government (external reporting). But if you report the violation, you run the risk of being retaliated against by your employer.

When Congress passed the Dodd-Frank Act in 2010, it included an “anti-retaliation” provision to protect those employees who externally report securities violations to the Securities and Exchange Commission (SEC). Indeed, the statutory text clearly defines a reporting employee—a “whistleblower”—as an “individual who provides … information relating to a violation of the securities laws to the Securities and Exchange Commission.” The statute is unambiguous: if a person reports a violation of the covered laws to the SEC, Dodd-Frank provides them a remedy to protect themselves from retaliating employers.

In 2014, Paul Somers sued his former employer Digital Realty in the United States District Court for the Northern District of California. Somers claimed that he was fired for complaining to senior management that his supervisor had violated the Sarbanes-Oxley Act of 2002 (one of the securities laws covered by Dodd-Frank). It is undisputed that Somers did not report any violation of the securities laws to the SEC, but he nevertheless asserted in his complaint that Digital Realty retaliated against him in violation of Dodd-Frank’s anti-retaliation provision. Digital Realty moved to dismiss the case because, as noted, it’s clear that Dodd-Frank only protects people who report violations to the SEC. The district court disagreed, however, holding that the definition of “whistleblower” was ambiguous and that Chevron deference was owed to a 2011 SEC rulemaking that had redefined the term “whistleblower” to include not only those who report violations to the SEC but also those who internally report violations to their employer. Digital Realty appealed to the U.S. Court of Appeals for the Ninth Circuit, but lost there as well. The Ninth Circuit not only agreed with the district court that the statute was ambiguous and that Chevron deference should apply to the SEC’s rulemaking, but also—incredibly—upheld Somers’s claim on the grounds that a better reading of the statute’s text protected internal reporting. Digital Realty petitioned the Supreme Court to hear the case and the Court granted its request.

Cato has filed a brief supporting Digital Realty. We agree with Digital Realty that the statutory language of Dodd-Frank clearly protects only those who report externally to the SEC. If any ambiguity exists, however, the SEC’s 2011 rulemaking should not be granted Chevron deference. The Administrative Procedure Act (APA) requires that final rules be the “logical outgrowth” of proposed rules when agencies conduct notice-and-comment rulemaking. In other words, the SEC cannot include things in its final rule that were not in the proposed rule, because doing so does not give the public “fair notice” and an opportunity to comment on how the SEC intends to interpret the law. When the SEC promulgated its notice of proposed rulemaking, it defined “whistleblower” in line with the statutory definition: Dodd-Frank applies only to those who report externally to the SEC. It did not mention in its proposed rule that it was considering changing the definition, and it did not ask the public for comments on whether it should do so. But when the SEC promulgated its final rule, it expanded the definition of “whistleblower” to cover not only people who report externally but also those who report a violation internally to their employers. The SEC thus did not give the public fair notice or an opportunity to comment on that change, in violation of the APA. Just two terms ago, in Encino Motorcars v. Navarro (2016), the Supreme Court reinforced that agency rules that violate the APA do not receive Chevron deference, and the SEC regulation here should not either. Chevron deference is a powerful tool for agencies, and it should not be applied when they run afoul of the procedural protections Congress has put in place for the regulated public.

In The New York Times Magazine, Nicholas Confessore writes about the new lobbying stars in Washington. A new president always creates opportunities for new players. When that president is a non-politician without an established Washington entourage, there’s a lot of uncertainty. Who knows the new president? Who knows the people who know the president?

Confessore tells great stories about newly famous Trumpists such as one-time campaign manager Corey Lewandowski and about “Washington backbenchers, B-listers and understudies” who suddenly realized they knew somebody who had been part of the Trump campaign.

USA Today has reported on people close to Vice President Pence who have opened or expanded lobbying businesses this year.

It’s a sordid story of how fixers and their handsome fees survive even in an administration that came in promising to “drain the swamp.” But how much has really changed? As Confessore reviews:

There are about 10,000 registered lobbyists in Washington — roughly 20 for every member of Congress — and thousands more unregistered ones: consultants and ‘‘strategic advisers’’ who are paid to help shape government policy but do not disclose their clients. By whatever name, they are the people companies and countries hire to help roll back regulations, unstick bids, tweak legislation or get meetings. Lobbying is at once Washington’s most maligned, enduring and essential industry. Underpaid young politicos and retiring lawmakers depend on Beltway lobby shops — known as ‘‘K Street’’ after the city boulevard that once housed many of them — for the high-six-figure salaries that will loft them into Washington’s petite aristocracy… .  But the private sector needs lobbyists the most. The modern federal government is so sprawling and complex that it practically demands a specialized class of middlemen and -women.

Over the decades, lobbying has evolved from a niche trade of fixers and gatekeepers to a sleek, vertically integrated, $3-billion-a-year industry.

Total reported spending on lobbying peaked in 2009 and 2010, the first two years of President Barack Obama’s administration, when trillions of dollars were being handed out or moved around by the stimulus package, the omnibus spending bill, the Dodd-Frank financial regulation bill, the Affordable Care Act, and an ultimately unsuccessful 1200-page energy bill stuffed with taxes, regulations, loopholes, and subsidies. The Washington Post found that “more than 90 organizations hired lobbyists to specifically influence provisions of the massive stimulus bill.” Well-connected Democratic lobbyists like former House majority leader Richard Gephardt and Tony Podesta, the brother of Obama transition director John Podesta, did especially well.

And of course it didn’t start with Obama. As federal spending soared under President George W. Bush, the number of registered lobbying firms climbed. In six years the number of companies with Washington lobbyists rose 58 percent. After the Republicans took control of Congress in 1994, party leaders created the “K Street Project” to pressure lobbying firms to replace Democrats with Republicans. They made it clear that lobbyists needed to shift their political contributions toward Republican candidates, or lose their access to Republican policymakers. By 2003, the Washington Post reported, the GOP had in fact placed Republicans in a significant number of the most influential positions at trade associations and corporate government affairs offices—and were getting their contributions. 

Every new administration threatens to shake up some policies, and that creates a demand for lobbyists to get a piece of the new action. It also means opportunities for people who are well connected among the new White House and agency staffs. But the biggest reason that lobbying grows is that federal spending and regulation grow.

As Craig Holman of the Ralph Nader-founded Public Citizen told Marketplace Radio after a report on rising lobbying expenditures during the financial crisis, “the amount spent on lobbying … is related entirely to how much the federal government intervenes in the private economy.”

Marketplace’s Ronni Radbill noted then, “In other words, the more active the government, the more the private sector will spend to have its say… . With the White House injecting billions of dollars into the economy [in early 2009], lobbyists say interest groups are paying a lot more attention to Washington than they have in a very long time.”

Big government means big lobbying. When you lay out a picnic, you get ants. And today’s federal budget is the biggest picnic in history.

The Nobel laureate F. A. Hayek explained the process 70 years ago in his prophetic book The Road to Serfdom: “As the coercive power of the state will alone decide who is to have what, the only power worth having will be a share in the exercise of this directing power.”

That’s the worst aspect of the growth of lobbying: it indicates that decisions in the marketplace are being crowded out by decisions made by lobbyists and politicians, which means a more powerful government, less freedom, and less economic growth. 

When Oil States Energy Services, LLC filed its patent-infringement suit against Greene’s Energy Group, LLC in federal court back in 2012, the far-reaching negative consequences of the new America Invents Act (AIA) were not yet readily apparent. As the private dispute between these parties has wound its way through the AIA’s legal labyrinth in the subsequent half-decade, however, the structural problems inherent in this new administrative scheme have become increasingly obvious. 

The passage of the AIA has resulted in a substantial transfer of power from the judiciary to the executive branch through the creation of the Patent Trial and Appeal Board (PTAB), an administrative-law body housed within the Patent and Trademark Office (PTO) and vested with the extraordinary power to cancel already-issued patents. Although Congress has constitutional authority to determine the kinds of inventions that merit patents, patents themselves (whatever their legislatively determined scope and strength) are and have always been a form of private property. Patents cannot properly be characterized as public rights, as they neither involve the government setting conditions under which it waives its own sovereign immunity nor implicate a statutorily created cause of action that was unknown at common law. Patents are thus necessarily subject to the same protections as a piece of privately held land—and disputes over patents must be handled in the same manner as disputes over other kinds of property, with full judicial review rather than some lesser administrative process.

This means that the PTAB is fundamentally incompatible with the purposes of Article III of the Constitution in at least two important ways. First, the PTAB denies patent litigants their right to a fair and impartial adjudication, as the administrative patent judges who comprise the PTAB are fully under the control of the PTO director (a political appointee) and serve at his pleasure. Second, Article III was designed to protect the independence of the judiciary itself, but the creation of the PTAB draws power away from the judicial branch in favor of the executive. The inordinate powers exercised by the PTAB reach far beyond anything previously accepted by the Supreme Court, a concentration of power further exacerbated by the lack of meaningful judicial review. Such a distortion of the separation of powers creates a sort of unevenness and instability akin to a three-legged stool after one leg has been cut short and then attached to the end of another.

This tenuous arrangement cannot stand, and so the Cato Institute, joined by the American Conservative Union Foundation, has filed an amicus brief seeking to restore both the proper role of federal courts in patent disputes and the property rights of patent-holders. The Supreme Court will hear Oil States Energy Services, LLC v. Greene’s Energy Group, LLC, this fall.

The Professional and Amateur Sports Protection Act (PASPA), which Congress passed in 1992, forbids states from “authorizing” sports betting “by law.” As every middle-schooler learns, however, our Constitution establishes dual sovereignty between the states and the federal government. And as the Supreme Court most recently held in New York v. United States (1992) and Printz v. United States (1997), the Constitution forbids Congress from “commandeering” state officials to serve federal ends, whether by forcing states to enforce federal laws or to pass new state laws (or to refrain from repealing old ones), which is exactly what PASPA does.

In 2011, New Jerseyans voted overwhelmingly—two to one—to legalize sports betting in a statewide referendum. The next year, the state legislature responded to the will of the people by enacting a law allowing sports wagering at casinos and racetracks. The four major professional sports leagues, plus the National Collegiate Athletic Association (NCAA), sued under PASPA to prevent the state from moving forward with legalizing sports betting. In 2016, the U.S. Court of Appeals for the Third Circuit ruled for the NCAA, reasoning that if the state were to repeal its pre-PASPA sports gambling laws, it would be “authorizing” the activity “by law,” which PASPA forbids. Unwilling to be forced to continue enforcing a law overwhelmingly rejected by its populace, New Jersey appealed to the Supreme Court.

Cato has now joined the Pacific Legal Foundation and Competitive Enterprise Institute on a brief (written by former Cato intern Jonathan Wood) in support of the Garden State. We argue that PASPA unconstitutionally commandeers state officials and undermines the core concepts of federalism.

If the federal government wants to enforce its chosen policy, it must find a way to do so that doesn’t involve having New Jersey do its dirty work. There are several options: Congress could regulate sports betting itself (at least across state lines), or it could use its spending power to provide incentives for states to adopt more restrictive schemes. Instead, PASPA forces states to enforce and maintain policies that have become outdated and unpopular, in defiance of both popular will and state sovereignty.

PASPA and other overweening federal laws pose a serious problem for accountability because they tie the hands of state officials, forcing them to enforce policies they do not want. The people of the state then blame state officials for bad outcomes, not knowing that those officials’ hands are tied by Congress. Moreover, the same issue comes up again and again, in areas ranging from immigration to guns, from health care to marijuana. The federal government should not be forcing one-size-fits-all solutions on a large and diverse country—and indeed the Constitution was designed to prevent such abuse.

The Supreme Court will hear Chris Christie v. NCAA this fall.

I fully expected Larry White’s recent post challenging the state theory of money, and particularly that theory’s understanding of the origins of metallic coinage, to generate some critical feedback. In particular, I expected it to raise the hackles of “Lord Keynes” (henceforth LK), the otherwise anonymous author of the blog Social Democracy for the 21st Century, who has discussed the same topic on several occasions (e.g., here, here, and here), and who is inclined to favor the alternative, “cartalist” (or “chartalist”) perspective.

Nor was I disappointed. Indeed, within moments of tweeting a link to Larry’s post I found myself in a Twitter debate with LK regarding the origins of Lydia’s electrum coins, which are generally considered the world’s earliest. In response to my tweet, LK replied that “The consensus of modern ancient historians is that coined money in Anatolia and Greece was invented by the state.”

LK has since published a post specifically countering White’s claims, including the claim that, although sovereigns eventually monopolized ancient coinage,

as far as we know coins were already in use among merchants before that happened. Very early coins from ancient Lydia, in what is now Turkey, were not inscribed with human faces but rather animal figures. The Ancient History Encyclopedia states: “It appears that many early Lydian coins were minted by merchants as tokens to be used in trade transactions. The Lydian state also minted coins.” Regarding Lydian coins inscribed with the names Walwel and Kalil, the British Museum comments: “It is unclear whether these are names of kings or just rich men who produced the earliest coins.” Regarding a nearly contemporary ancient Greek coin bearing the legend “I am the badge of Phanes,” the Museum comments: “We cannot be certain who this Phanes was, but it seems that he was placing his badge on coins as a guarantee of their quality.”

According to LK, White here is “clearly asserting that coined money was invented by the private sector in ancient Lydia and Greece.” That seems to me a problematic interpretation, since White’s qualifier, “as far as we know,” makes his statement tentative: to say that X is true “as far as we know” is not to say that X is definitely true. It is merely to observe that we have no good reason for believing that X is not true. Consequently, the fact that the positive evidence for the private beginnings of coinage is, as LK goes on to declare, “feeble at best,” doesn’t itself refute White’s claim, for the positive evidence for kings having been the first coiners could be even more “feeble.”

But is it?

That Supposed Consensus

The one thing we know for sure is that a fair portion of all known electrum coins — one inventory places the share at about 25 percent — bear markings that point to official origins. As for the rest, although expert opinion is divided concerning their sources, most authorities continue to allow that they may bear private markings. “We do not know,” Koray Konuk observes (in his contribution, on coinage in Asia Minor, to the Oxford Handbook of Greek and Roman Coinage), “whether there was a state monopoly of issuing coinage or whether some wealthy private individuals such as bankers or merchants were also allowed to strike coins of their own.” The British Museum’s ancient coin curators, with whom I once had a lengthy discussion of the subject, are of the same opinion. Another relatively recent source, finally, sums matters up by observing how “the enormous bibliography on the origins of coinage partly serves to highlight the continued absence of definitive answers to the fundamental questions of ‘who, what, when, why, where?’”

Naturally this lack of definitive answers hasn’t prevented authorities from taking sides in the debate. But despite LK’s remarks, their doing so can hardly be said to have resulted in a “consensus of modern ancient historians” favoring the view that coinage was a state invention. Although some authorities (notably Robert Wallace) clearly favor that view, others, no less recent or authoritative, lean the other way. According to David Schaps, the author of the superb monograph The Invention of Ancient Coinage and the Monetization of Ancient Greece (2004), “the prevailing opinion,” far from holding that the first coins were official issues,

is that the types of the coins (there are some twenty, many more than the two or three kings who reigned from the time coins were invented until the end of the Lydian empire) identify not the king under whom they were struck, but the producer of the coin — perhaps a royal functionary, more likely an independent gold merchant (“The Invention of Coinage in Lydia, in India, and in China,” 2006, emphasis added).

John Kroll, another highly-regarded, contemporary expert on ancient coins, also maintains that the “profusion of type symbols” found on early electrum coins suggests

that in addition to the coins that were minted by Lydian monarchs and Greek city states, much early electrum may have been struck by local dynasts, large landholders, and other petty rulers in Lydia and neighboring regions — anyone, in short, with wealth in electrum and a need to spend it.

A recent paper by Peter Van Alfen, the American Numismatic Society’s curator of Greek coins, directly challenges one of the main arguments offered in support of the “state invention” hypothesis, namely the claim that only state authorities could command the “trust” needed to make coins circulate. Although he recognizes that kings were probably not the only source of early electrum coins, John Kroll supplies a typical instance of this view in his contribution to the Oxford Handbook of Greek and Roman Coinage:

The key factor, which made coinage possible and distinguished it from all earlier forms of money, was the involvement of the state. Unlike anonymously supplied bullion, coins were supplied by the state and were guaranteed by its authority. As small, preweighed and hence prevalued ingots of precious metal that were stamped with the certifying badge of the issuing government, they were instantly acceptable in payment on trust.

Had he reflected on such names as Browne & Brind, Johnson Matthey, and Engelhard, Kroll might not have been so quick to claim that bullion must either be supplied by the state or “anonymously.” His perspective is, nonetheless, all too common. To his credit, Van Alfen will have none of it. “The generation of trust and guarantees,” he observes,

does not always require state intervention or backing. Indeed, in some cases, state intervention is decidedly to be avoided. While states can serve to mitigate transactional chaos through their various formal institutions, like market regulators and courts, there are numerous non-state institutional responses to the same problems, including reputation and trust networks, that can be just as effective, particularly when the geographical scope and population size in question is comparatively small.

Moreover, he adds,

there is no necessary relationship between states and monetary instruments, like coinage; there often is a functional relationship between the two, but the state is not a necessary component for generating trust, even for fiduciary instruments…. In cases where we have contextual evidence, problems of trust were overcome primarily through private guarantee mechanisms.

As an alternative to the view that coinage began as a state innovation, Van Alfen proposes and defends the hypothesis that originally “the so-called right of coinage was not limited to the state alone, but was rather a (property) right held universally.”

Within the larger context of archaic state formation and the more specific dynamic of Asia Minor monarchies, we should not then expect to find a single established set of relationships between the individual polities and coinage ab initio, but rather a process working out what that set of relationships might become. Coinage, with its potential to enhance social, political and economic might, was no doubt one of many sites where the extension and centralization of power was being negotiated between monarchs and their competitors, and monarchs and the ruled.

In Lydia, Van Alfen speculates, “as state capacity increased, so too did political stability along with general elite consent to Mermnad rule.” Eventually — by Croesus’ time — the Mermnads’ political influence was such that they had “achieved monopolization over coin production, not so much by decree, but by default.”

If Van Alfen’s account is indeed correct, the notion that coinage was a “state” invention makes little sense, for at the birth of coinage the distinction between “the state” on one hand and relatively important individuals (“elites”) on the other was itself murky. All that can be said is that the consolidation of power in certain rulers tended to coincide with the monopolization of coinage — a claim no one has ever contested.

Counter-Counterarguments

So much for the “consensus” that supposedly contradicts White’s stand. Now let’s consider the particular “counterarguments” LK offers against it. The first concerns the sources of electrum itself. According to LK, Lydian kings “either controlled the mines in their kingdom directly and/or levied taxes on mining or extraction of metals.” Therefore, he says, “it is most probable the kings also minted the first electrum coinage.” But the conclusion is a plain non-sequitur: no less than jewelry-making (concerning which more anon), mining and coining are each distinct, specialized activities, which have historically been undertaken by separate outfits; and this has been no less true when mines themselves have been nationalized than when they have been privately owned and operated.

Moreover the premise that kings alone had access to sources of electrum and other precious metals is itself contentious. In his previously-cited paper Van Alfen observes that “While state control of mining by the end of the archaic period seems to have been fairly widespread…there are as well indications that archaic elites individually could gain access to mines far away from the oversight of their home state, and might have had unfettered access to mines within their home territory as well.”

LK’s second counterargument, that the presence of coins not bearing the images or names of kings is no proof that those coins weren’t minted by kings, because “people knew perfectly well that [these coins] had been minted by the state,” begs the question. Since marking coins took some effort, why, in that case, should kings have bothered to mark any of their coins?

LK’s third counterargument, that the names not belonging to any known king on some of Lydia’s coins may be either those of mints or those of persons who minted coins on behalf of some Lydian king or kings, is almost equally question-begging. Why identify a coin with a mint, or a coiner, when it was the king’s status that supposedly lent value to the coin? And, if kings did indeed allow private agents to coin for them, does that not itself suggest that those agents, rather than the kings who employed their skills, may have “invented” the first coins?

According to LK, electrum coins were unlikely to have been manufactured by or on behalf of merchants, because most of them were made in denominations too large to be used in ordinary commercial transactions: a Lydian trite, or one-third of a stater, he notes, is supposed to have been capable of buying 10 sheep.

In fact, a trite may actually have been worth considerably less: if some experts have said that a trite could buy 10 sheep, others say it could buy only one sheep, or three jars of wine. More importantly, as reported in a very recent paper by Ute Wartenberg, the ANS’s Executive Director, the denominations of even the earliest known electrum coins are now understood to have ranged “from a stater to a 1/192 stater.”[1] On the lower of those valuations a jar of wine cost about one-ninth of a stater. It might, in other words, have taken about 21 of the smallest coins, each containing just 0.06 grams of electrum, to buy a single jar of wine.[2] Furthermore, as François Velde points out in his paper “On the Origin of Specie,” extant electrum coins of various denominations display a weight-loss pattern suggesting that the coins did in fact “circulate like modern coinages.”
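For readers who want to check the arithmetic behind that “about 21,” here is a minimal sketch in Python. It assumes only the figures already cited above: a trite is one-third of a stater, the lower estimate has it buying three jars of wine, and the smallest known denomination is 1/192 of a stater.

    # Rough arithmetic behind the "about 21 coins per jar of wine" figure.
    from fractions import Fraction

    TRITE = Fraction(1, 3)            # a trite = 1/3 stater
    JARS_PER_TRITE = 3                # lower estimate: a trite bought ~3 jars of wine
    SMALLEST_COIN = Fraction(1, 192)  # smallest known denomination (Wartenberg)

    jar_price = TRITE / JARS_PER_TRITE        # 1/9 stater per jar
    coins_per_jar = jar_price / SMALLEST_COIN

    print(coins_per_jar)          # 64/3
    print(float(coins_per_jar))   # ~21.33 of the smallest coins per jar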

Precious Tokens?

The last of LK’s counterarguments starts from the premise that, instead of being “full-bodied” coins, Lydia’s electrum globules were actually fiduciary or “token” coins, commanding considerably more than their metallic value in payments, including payments to the state, and goes on to insist that they could not possibly have commanded such value had they not been official products.

In accepting this premise, LK appears to completely ignore (he certainly does not address) White’s observation that it

fails to explain…why governments chose bits of gold or silver as the material for these tokens, rather than something cheaper, say bits of iron or copper or paper impressed with sovereign emblems. In the market-evolutionary account, preciousness is advantageous in a medium of exchange by lowering the costs of transporting any given value. In a Cartalist pay-token account, preciousness is disadvantageous — it raises the costs of the fiscal operation — and therefore baffling. Issuing tokens made of something cheaper would accomplish the same end at lower cost to the sovereign.

Recent research casts further doubt on the claim that electrum coins must have been tokens. That claim rests on the once widely-held belief that electrum coins, though representing uniform weights, did not represent a consistent alloy of gold and silver. Instead, the blend, and hence the commodity value, of coins of any given weight was understood to vary considerably. It would therefore have been quite inconvenient for the coins to circulate by weight, that is, at their true metallic worth, rather than by tale, that is, at nominal values independent of that worth.
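A small sketch may make the old view’s logic concrete. The coin weight and alloy fractions below are hypothetical, and the 13:1 gold-to-silver value ratio is merely a commonly cited ballpark for the period, not a figure taken from the sources discussed here.

    # Why variable fineness frustrates circulation "by weight" (illustration only).
    GOLD_TO_SILVER = 13.0  # assumed gold:silver value ratio (rough ancient ballpark)

    def metal_value(weight_g: float, gold_fraction: float) -> float:
        """Metallic value of a coin, in silver-equivalent grams."""
        return weight_g * (gold_fraction * GOLD_TO_SILVER + (1.0 - gold_fraction))

    # Two hypothetical trites of identical weight but different natural alloys:
    v_rich = metal_value(4.7, 0.55)  # 55% gold
    v_poor = metal_value(4.7, 0.45)  # 45% gold

    print(round(v_rich, 1), round(v_poor, 1))    # ~35.7 vs ~30.1
    print(round((v_rich - v_poor) / v_poor, 2))  # ~0.19

On these assumed figures, two identical-looking coins of equal weight differ in metallic worth by nearly a fifth, which is why, on the old view, passing coins by weight rather than by tale would have required assaying them at every payment.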

This once-common view has recently been challenged.  As Wartenberg reports in her aforementioned paper,

Current investigations by a number of scientists and scholars shed critical new light on the question of how the earliest coins were minted, how their production was organized, and how alloys were produced. By using a variety of new analytical methods and techniques, some of these processes are beginning to be better understood.

Among other things, the new methods and techniques to which Wartenberg refers reveal that Lydia’s electrum coins were made, not from a naturally occurring and variable alloy, but from “an alloy deliberately created for coinage.” Using a technique called “Synchrotron X-ray photoelectron spectroscopy,” Wartenberg discovered that Lydia’s electrum coins were in fact “more consistent in their metal composition than previously thought”:

What these different results all show is a fully organized system, in which a specific composition of electrum for a coin series was created. All this was clearly done deliberately, and the desired gold:silver ratio was achieved by combining pure gold and silver, which was previously refined. The discovery that it was not naturally found electrum, which was used, illustrates a highly sophisticated process, but not only of metallurgical technology in the 7th and 6th century BC, but also an understanding of monetary systems.

Although these findings alone don’t suffice to establish that Lydia’s electrum coins, instead of being mere (if costly) tokens, were valued at their metallic worth, or at that worth plus a premium reflecting coinage costs, and perhaps some seigniorage, they certainly make this view appear more plausible than before. Taking the trouble to regulate the blend of gold and silver contained in what were in fact mere tokens would have been yet another pointless expense, on top of that involved in making tokens from any blend of precious metal instead of from less costly materials.

A Misplaced Burden

I’d like to conclude with some remarks concerning, not LK’s particular arguments, but the presumption, implicit in most versions of the “state invention” hypothesis, that sovereigns are at least as capable as other persons, and perhaps more capable, of coming up with monetary innovations. Such a belief flies in the face of all experience. The story of money’s evolution — or that part of it concerning which we have certain knowledge — is, essentially, one of recurring private inventions followed, in many instances, by public appropriation of those inventions. It was not kings or governments but private-sector innovators who came up with manual screw presses, as alternatives to hammers, for striking coins, and with their later steam-driven and electrical counterparts. It was private goldsmiths, and not public bankers, who, in the west, issued the first banknotes. Private innovators also gave us the first lines of credit, the first clearinghouses, the first electronic payments (consisting of telegraphic wire transfers), the first credit and debit cards, the first ATMs, and, most recently, the first blockchain-based means of payment. Governments, in contrast, pioneered little, if anything. Instead, they observed what private markets did, and then stuck their mitts in, sometimes regulating, sometimes prohibiting, and sometimes nationalizing, private-sector innovations.

Consider again, in light of these observations, those tiny electrum coins. According to Wartenberg, their existence “begs the question how such blank metal flans were produced to such precision.” In answer, Wartenberg notes that

The technique of granulation was well-known for Lydian and Achaemenid jewelry, and it is likely that a similar method was used for these coins, which were also struck with obverse and reverse dies. … The dies used for many of these objects have simple emblems, which are stylistically close to archaic gems.

Wartenberg’s remarks suggest a link between early coins and jewelry that appears to be just another instance of the even more ancient connection between ornament and money, as described in detail in chapter two of William Carlile’s Evolution of Modern Money. But to recognize that linkage is to raise what ought to be an obvious question: if anyone was likely to be the true “inventor” of the first electrum coins, why not a Lydian manufacturer of jewelry, who would have possessed the skill and instruments, as well as access to the metal, required for the purpose?

Allowing, as John Kroll (and most other authorities) does, that “electrum in the form of nuggets, weighed ingots, and bags of electrum ‘dust’ must have been put to use in all sorts of payments for goods and services” well before coins were first made from it, and that “because it was a mixed metal whose gold-silver proportions varied in nature and could be artificially manipulated by adding refined silver to dilute the gold content, it was poorly suited as a dependable means of exchange,” would it not have been perfectly natural for some jeweler to have employed familiar techniques, including the augmentation of natural electrum with silver, not in order to deceive, but to make coins of standardized alloy to supply to merchants for use in exchange?[3] Why suppose instead that some Lydian king came up with the idea?

In short, to treat coinage as an exception to the general rule that private parties are the source of technical monetary innovations, on the grounds that we lack affirmative evidence to the contrary, is, in my humble opinion, to place the burden of proof in this controversy precisely where it doesn’t belong.

_____________________

[1] It had previously been supposed that the smallest coins were those of 1/96 stater.

[2] For further criticism of the argument that early coins were unsuited for commercial use, see this article by Alain Bresson.

[3] Making coins conform, at least roughly, to a particular standard was a simple matter of employing a touchstone — a device in common use in ancient Greece long before the birth of coinage, and so closely associated with the Lydians that it is known even today as a “Lydian” stone.

[Cross-posted from Alt-M.org]
