Cato Op-Eds

Individual Liberty, Free Markets, and Peace

One person was murdered in a likely terrorist attack in Charlottesville, Virginia on Saturday when a suspected white nationalist named James Alex Fields Jr. drove his car into a group of protesters. Prominent people on both sides of the political spectrum have condemned the politically motivated violence. However, some commentators have pointed out that left wing terrorists and rioters have also committed violence in recent years, without providing any information that actually compares the violence of the two sides. This post fills that gap by breaking down terrorist murders by the political ideology of the perpetrators. The chance of being murdered in a terrorist attack is small, but there is wide variation by ideology.

Data and Methodology

This post examines 25 years of terrorism on U.S. soil, from 1992 through August 12, 2017. Fatalities and injuries in terrorist attacks are the most important measures of the cost of terrorism. The information sources are the Global Terrorism Database at the University of Maryland and the RAND Corporation. Other organizations seem to count many religiously or racially motivated crimes as terrorist offenses, an overcounting I attempted to avoid. I estimated the number of murders committed by terrorists in 2017 from online sources, although those estimates may be incomplete. As much as possible, I excluded terrorists who died or were injured in their own attacks, as they are not victims.

I grouped the ideologies of the attackers into four broad categories: Islamists, Nationalists and Right Wingers, Left Wingers, and Unknown/Other. Global Terrorism Database descriptions of the attackers and news stories were my guide in assigning attacks to ideologies. The Islamist and Unknown/Other categories are straightforward. Left Wing terrorists include Communists, Socialists, animal rights activists, anti-white racists, LGBT extremists, attackers inspired by Black Lives Matter, and ethnic or national separatists who also embrace Socialism. Nationalist and Right Wing terrorists include white nationalists, Neo-Confederates, non-socialist secessionists, nationalists, anti-Communists, fascists, anti-Muslim attackers, anti-immigration extremists, Sovereign Citizens, bombers who targeted the IRS, militia movements, and abortion clinic bombers. Some of the marginal attacks are open to reinterpretation, but the ideological classification of the attackers responsible for nearly all of the deaths and injuries is straightforward.

Findings

Terrorists have murdered 3,350 people on U.S. soil from 1992 through August 12, 2017 (Figure 1). Islamists committed 92 percent of all those murders and are, far and away, the deadliest group of terrorists by ideology. The 9/11 attacks accounted for 2,983 of the 3,086 Islamist-inspired terrorist deaths—an overwhelming 97 percent. The chance of being murdered in a terrorist attack committed by an Islamist during this period was about 1 in 2.5 million per year (Table 1). 

Nationalist and Right Wing terrorists are the second deadliest group by ideology, accounting for 228 murders, or 6.8 percent of all terrorist deaths. The chance of being murdered in a Nationalist or Right Wing terrorist attack was about 1 in 33 million per year. The 1995 Oklahoma City bombing, the second deadliest terrorist attack in U.S. history after 9/11, killed 168 people and accounted for 74 percent of the murders committed by Nationalist and Right Wing terrorists. Left Wing terrorists killed 19 people in terrorist attacks during this period, 15 of them since the beginning of 2016. Nationalist and Right Wing terrorists have killed 5 people over the same recent stretch, including the victim in Charlottesville. The annual chance of being murdered by a Left Wing terrorist was about 1 in 400 million. Despite the recent upswing in deaths from Left Wing terrorism since 2016, Nationalist and Right Wing terrorists have killed about 12 times as many people since 1992. Terrorists with unknown or other motivations were the least deadly.

Figure 1

Murders in Terrorist Attacks by the Ideology of the Attacker, 1992-2017.

Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, and author’s calculations.

Table 1

Annual Chance of Dying in a Terrorist Attack by Ideology of Perpetrator, 1992-2017

Terrorist Ideology             Terrorism Deaths per Ideology    Annual Chance of Being Murdered
Islamist                       3,086                            1 in 2,461,464
Nationalist and Right Wing     228                              1 in 33,316,130
Left Wing                      19                               1 in 399,793,565
Unknown/Other                  17                               1 in 446,828,102
Total                          3,350                            1 in 2,267,486

Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, United States Census, and author’s calculations.
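
The arithmetic behind the “1 in N” figures in Tables 1 and 2 is simple to reproduce. Below is a minimal sketch, assuming the annual chance is cumulative person-years (average U.S. population multiplied by the length of the period) divided by the number of victims; the average-population figure is an illustrative assumption of mine, not a number taken from the sources. Swapping in the injury counts from Table 2 reproduces those rates the same way.

```python
# A rough reconstruction of the "1 in N" annual risks in Tables 1 and 2:
# cumulative person-years (average U.S. population times the length of the
# period) divided by the number of victims. The average-population figure
# below is an illustrative assumption, not a number from the sources.
YEARS = 25.6                   # 1992 through August 12, 2017
AVG_POPULATION = 296_700_000   # assumed period-average U.S. population

deaths_by_ideology = {
    "Islamist": 3_086,
    "Nationalist and Right Wing": 228,
    "Left Wing": 19,
    "Unknown/Other": 17,
}

for ideology, deaths in deaths_by_ideology.items():
    one_in_n = AVG_POPULATION * YEARS / deaths  # annual chance as "1 in N"
    print(f"{ideology}: 1 in {one_in_n:,.0f} per year")
```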

The distribution of injuries committed by terrorists is similarly ideologically skewed (Figure 2). Attacks committed by Islamists are responsible for almost 94 percent of the 17,414 injuries during the entire period. Nationalist and Right Wing terrorists are responsible for 992 injuries, or 5.7 percent of the total. Left Wing terrorists are responsible for 27 injuries, or 0.16 percent of the total. Nationalist and Right Wing terrorists injured about 37 times as many people in terrorist attacks as Left Wingers did during this time. 

Injuries are a less clear measure of damage, as they can range from a few scratches to amputations or brain damage. The annual chance of being injured in a terrorist attack does not tell us as much as the annual chance of being murdered in one, but I included it anyway (Table 2).

Figure 2

Injuries in Terrorist Attacks by the Ideology of the Attacker, 1992-2017.


Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, and author’s calculations.

Table 2

Annual Chance of Being Injured in a Terrorist Attack by Ideology of Perpetrator, 1992-2017

Terrorist Ideology             Terrorism Injuries per Ideology    Annual Chance of Being Injured
Islamist                       16,334                             1 in 465,047
Nationalist and Right Wing     992                                1 in 7,657,336
Left Wing                      27                                 1 in 281,336,213
Unknown/Other                  61                                 1 in 124,525,865
Total                          17,414                             1 in 436,205

Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, United States Census, and author’s calculations.

The risk of being killed or injured in a terrorist attack on U.S. soil is small. A comparison to other intentional harms puts that risk in perspective: the chance of being murdered in a non-terrorist homicide from 1992 through 2017 was about 1 in 17,000 per year, roughly 133 times as great as the chance of being murdered in a terrorist attack during that time.

Islamism is an ideology created overseas, while much of the ideology that inspires Nationalist, Right Wing, and Left Wing terrorism is homegrown, or has been here for so long that it might as well be.

Conclusion

Islamist terrorists have been the deadliest by far since 1992. They killed about 13.5 times as many people as Nationalist and Right Wing terrorists, who in turn killed about 12 times as many people as Left Wing terrorists did. The deadliness of terrorists by ideology has changed over time and will continue to do so. Charlottesville was a tragedy, and the person responsible should be tried and, if convicted, punished to the fullest extent of the law. However, it is important to realize that the actual scale and scope of the recent terrorist threat differs significantly by ideology, even though the annual chance of being murdered in such an attack is still small.


Timothy Carpenter and Timothy Sanders were convicted in federal court on charges stemming from a string of armed robberies in and around the Detroit area. They appealed on the ground that the government had acquired detailed records of their movements through cell site location information (“CSLI”) from their wireless carriers in violation of the Fourth Amendment. The U.S. Court of Appeals for the Sixth Circuit turned their appeal aside, finding that “[t]he government’s collection of business records containing these data … is not a search.”

The Fourth Amendment states that “[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.” Presumably, when called on to determine whether a Fourth Amendment violation has occurred, courts would analyze the elements of this language as follows: Was there a search? Was there a seizure? Was any such search or seizure of “their persons, houses, papers, [or] effects”? Was any such search or seizure reasonable?

In cases involving familiar physical objects, courts usually do just that. In harder cases dealing with unfamiliar items such as communications and data, however, courts retreat to the Supreme Court’s “reasonable expectation of privacy” doctrine, which emerged from Katz v. United States (1967). The Supreme Court has now agreed to review the important criminal-procedure and digital-privacy issues this case raises.

Cato and the Competitive Enterprise Institute, joined by Reason Foundation and the Committee for Justice, filed an amicus brief urging the Court to return to the text of the Fourth Amendment. The reasonable expectation of privacy test is outdated because it lacks a strong connection to the text and asks courts to conduct a sociological exercise rather than a judicial one. This is especially true in the context of new technology, where societal expectations have not been fully formed yet and will change based on the Court’s judgment, leading to circular reasoning.

Courts have also used the “reasonable expectation of privacy” test to undermine the very things the Fourth Amendment was designed to protect. For instance, dog sniffs looking for drugs have been held not to “compromise any legitimate interest in privacy” because they only detect contraband. But just because a search is designed to look for illegal activity doesn’t mean that the Fourth Amendment is inapplicable.

Likewise with the “third-party doctrine,” which holds that constitutional protections end once otherwise protected information is shared with another party.

The Carpenter case deals with records of a person’s location spanning more than 100 days, and yet the government claims that no privacy is violated when it seizes and searches that data. The Court should return to the text of the Fourth Amendment and recognize that data and digital communications are property protected by the “papers and effects” language of the Fourth Amendment, much as it did in Riley v. California, the 2014 case in which the justices unanimously required a warrant to search a phone seized during an arrest.

Here, the government ordered the information on Mr. Carpenter’s location turned over (a seizure) and then processed that data for the location of the defendants (a search). The defendants had a contract with the phone company prohibiting the distribution of the data and the Court should recognize the property interest that the defendants had based on that contract.

In sum, the Fourth Amendment presumes that a warrant is required absent exceptional circumstances. There was no exigency here that threatened the destruction of the data, no threat to officer safety, nor any other reason that law enforcement officers could not have gotten a warrant if they had probable cause. Focusing on the actual text of the Fourth Amendment demonstrates that the government’s actions here violated it.

The Supreme Court will hear Carpenter v. United States this fall.

The federal government spends more than $4 trillion a year on programs in hundreds of agencies. Which are the largest agencies, and how fast are they growing?

You can find out using the charting tool at DownsizingGovernment.org/charts. The tool plots spending on hundreds of federal agencies and programs in real, or inflation-adjusted, 2017 dollars. The charts cover 1970 to 2017, based on data from the 2018 federal budget.

The following are seven charts from the tool showing spending by the 21 largest agencies in order by size.

The first chart shows the largest departments: Defense, Health and Human Services, and Social Security. The three used to vie for the top spot, but Defense has been left in the dust in recent years as the two entitlement-dispensing agencies have continued to grow. The federal government now has two $1 trillion agencies. Wow.

The second chart shows that spending by Veterans Affairs, Agriculture, and the IRS has soared in recent years. Veterans Affairs spending has doubled in a decade—again, this is in real dollars. Yikes. Agriculture spending includes food stamps and farm subsidies. IRS spending is fueled by outlays on “refundable” tax credits, particularly the earned income tax credit.

The third chart shows Education, Transportation, and the Office of Personnel Management. For the latter agency, spending includes the retirement and health costs of federal workers. Education spending gyrates widely because of recalculations in the costs of student loans. Transportation spending shows a solid upward trend, despite all the claims that we shortchange infrastructure investment. Either way, federal transportation spending should be cut.

The fourth chart shows Labor, Homeland Security, and Other Defense Civil Programs. The latter includes spending on military retirement and health care. The spike in Labor around 2010 was due to the extra unemployment insurance benefits passed by Congress during the recession. Homeland Security spending spiked during the early Bush years and remains high.

The fifth chart shows that State Department spending has tripled in constant dollars since 2000. HUD spending gyrates due to the accounting for housing finance subsidies. Justice spending tripled from 1990 to the mid-Bush years. If you go to the chart tool, you can see that HUD subsidies for rental aid and community development remain at high levels.

The sixth chart shows Energy, NASA, and International Aid. The Energy spike from 2010-2012 stemmed from President Obama’s “stimulus” legislation. Remember Solyndra?

The seventh and final chart shows Interior, Commerce, and the EPA. The spikes in Commerce surround Census years. You can see this if you go to the charting tool, click open Commerce, and plot Census separately.

Similarly, use the chart tool to see that the Commerce spike in the late 1970s was for the Economic Development Administration, which, by the way, is one of the dumbest agencies in the government.

Finally, if you click open EPA on the charting tool, you can see that the spike in the late 1970s was due to a surge in grants to state governments.

Which of these departments and agencies should be cut? I suggest starting with these.


Can we talk about the story of whether Google, a company entrusted with everyone else’s personal secrets, should let its own employees’ confidential data be thrown open to the scrutiny of a vengeful world in the course of trying to show that its workplace is not rife with discrimination?

No, not that Google story. Not the one about the firing of Google Memo author James Damore, which has been taking up oxygen in online conversation all week.  I’ve already had my say in Wednesday’s USA Today on how existing federal law would have helped shape Google’s incentives in handling that furor. (“Now, as then, government pressure on employers to ban speech consists less of direct you-must-ban mandates and more of litigation incentives whose contours are not explicitly announced.”) And since you can read that piece here, I won’t retrace the ground it covers. 

My purpose here instead is to relate another Google bias-claims-and-employee-privacy story from last month, which would have counted as fairly significant news in its own right had it not soon been eclipsed by the memo episode. 

In 2015 the U.S. Department of Labor launched a contract-compliance review of Google’s employment practices related to diversity. This past January, in its final month in office, the Obama administration’s DoL followed up with a lawsuit alleging that Google had not been forthcoming enough in providing employee information in response to the review, and asking a court to order it to comply. (As I mentioned in my Wednesday piece, for a company like Google to actually be in litigation over its employment practices “means lawyerly caution would be at a zenith on whether to let its corporate culture be portrayed in a future courtroom as tolerant of sexist argumentation.”)

How forthcoming had Google already been? Per the law firm of Michael Best & Friedrich in the National Law Journal:

Up until June 2016, Google had complied with all of OFCCP’s information requests, producing over 1 million data points and approximately 740,000 pages. This production cost Google approximately $500,000 and 2,300 man hours.

In June 2016, OFCCP sent Google two letters requesting a large amount of information and materials. This request came after Google had already provided an incredibly large volume of documents requested during the audit. Google complied with all but three of OFCCP’s requests: (1) a salary history for every person employed by Google during two snapshot periods, going back to each person’s date of hire (which for some extended back to 1998); (2) another snapshot period that included not only the OMB-approved list of information, but also an additional 38 categories for each of the 19,539 people employed by Google on September 1, 2014; and, (3) the name, address, telephone number, and personal email of every employee reflected on either of the two snapshots.

The Labor Department’s suit was heard by one of its own in-house administrative law judges (ALJs), who ruled last month that the demands were “over-broad, intrusive on employee privacy, unduly burdensome, and insufficiently focused on obtaining the requested information.” While allowing some of the requests to go forward in pared-down form, the ALJ drastically cut back their scope and said that “OFCCP offered nothing credible or reliable to show that its theory … is based … on anything more than speculation.”

Since DoL’s own ALJs have (to understate matters) little reason to lean against the department’s interests, this is a pretty good indication that the requests were indeed overbroad, maybe even an example of the widely suspected federal agency practice of going on subpoena “fishing expeditions” meant to find some rule violation.  

In a statement, Google said it was “concerned that providing personal contact information for more than 25,000 Google employees could have privacy implications, and the judge agreed, citing the history of government data breaches and recent hacking of Department of Labor data.”   

Now if only Google could get its own employees to be as careful about not spilling confidential information about co-workers to parties on the outside. 

The first installment of this blog post was a preliminary look at a Washington Post article, “Is Amazon Getting Too Big?” by Steven Pearlstein. That article promoted the strong opinions of Yale law school student Lina Khan, based largely on (1) faulty market concentration estimates from President Obama’s Council of Economic Advisers and (2) a selective 40-year survey of mergers offered as evidence of some current problem linking concentrated markets to rising prices.

There was another bit of indirect evidence in the Obama CEA memo which merits discussion.  A graph from former CEA Chair Jason Furman showed large recent gains in “returns on invested capital” among public nonfinancial firms, as measured by McKinsey & Co.  The CEA insinuated that this shows a recent surge in “rents” (receipts larger than needed to attract capital) which they wrongly defined as “greatly in excess of historical standards.”  There is a simpler explanation.

Return on invested capital is notoriously difficult to estimate and, as McKinsey explains, returns look relatively larger by this measure because invested capital has become smaller as the economy shifted from capital-intensive manufacturing to services and software:

“What if the invested-capital side of the equation approaches zero, as it increasingly does among companies that use outsourcing and alliances and thus reduce the capital intensity of parts of their businesses? Other businesses, such as software development and services, also have inherently low capital requirements or take advantage of atypical working-capital dynamics, including prepayment by customers for licenses and payment by suppliers for inventory. Even traditional businesses are shedding capital: the median level of invested capital for US industrial companies dropped from around 50 percent of revenues in the early 1970s to just above 30 percent in 2004.”
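
A toy calculation makes McKinsey’s point concrete: hold profit fixed while the capital base shrinks, and measured ROIC balloons without any change in underlying profitability. The numbers below are hypothetical, chosen only to illustrate the mechanism.

```python
# A toy illustration of the McKinsey point above: with profit held constant,
# measured return on invested capital rises mechanically as the capital base
# shrinks. All numbers are hypothetical.
nopat = 10.0  # after-tax operating profit, $ millions

invested_capital_by_firm = {
    "capital-intensive manufacturer": 50.0,   # invested capital, $ millions
    "asset-light software/services firm": 5.0,
}

for firm, capital in invested_capital_by_firm.items():
    roic = nopat / capital  # return on invested capital
    print(f"{firm}: ROIC = {roic:.0%}")
```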

Like the CEA’s mention of a rising share of sales by Top 50 firms in 10 industries between two selected years, the CEA’s return on invested capital data also failed to uncover any new “market concentration” problem to be solved by the Democratic Party’s mysterious “Better Deal.”

Neither Khan, Pearlstein, nor their cited sources provide any evidence of (1) the alleged widespread increase in market share held by 3 or 4 firms, nor of (2) higher prices outside of federally-regulated and subsidized industries, nor of (3) any connection between concentration and monopoly pricing, nor of (4) any connection between return on invested capital and concentration or monopoly pricing.

Despite no evidence that market share (concentration) predicts higher pricing, Pearlstein and Khan talk about Amazon’s share of markets. Pearlstein claims Amazon sells 40% of books in this country, but such estimates vary depending on how we handle Kindle’s share of e-books, Amazon’s consumer-friendly discounting of new books, and Amazon’s utility as a market for used books. Many people, like me, sell very good used books on Amazon and eBay, making the market for new books more competitive than ever. Stacy Mitchell’s “Big-Box Swindle” costs $14.32 on Amazon Prime, but very good used copies sell for below $2.

Khan claims “Amazon now controls 46% of all e-commerce in the United States.” That inflated figure includes sales that Amazon processes for other retailers and producers. And retail alone is not “all e-commerce,” even ignoring the biggest e-commerce market – China. In U.S. business-to-business sales, Amazon Business is ranked 37th in the B2B E-Commerce 300 by Digital Commerce. Besides, e-commerce amounted to only 11.7% of U.S. retail sales last year, according to the Commerce Department.

The source of Amazon’s alleged 46% share of something or other is Stacy Mitchell (BA in History), a publicist for “local self-reliance” (which was also the unwise goal [zi li geng sheng] of Mao Tse-tung’s 1958-59 “Great Leap Forward”). As Ms. Mitchell and her co-author explain, “Amazon’s market share was calculated by the authors, drawing on information from Amazon and Channel Advisor.”

E-commerce is not a market, much less a U.S. market, and it’s not something that can be “controlled.”  But the Khan-Mitchell-Pearlstein complaint is not about a big market share causing higher prices for buyers but about lower prices for sellers.  Mitchell applauds the Democrats’ new tough talk on antitrust because, “With most industries now dominated by just a few firms, it is harder…  for small businesses to compete and for farmers and producers to get a fair price” [emphasis added].

“In considering whether a proposed merger or business practice would harm competition,” writes Mr. Pearlstein, “courts and regulators narrowed their analysis to ask whether it would hurt consumers by raising prices.” He and Khan blame such a “narrow” emphasis on consumer welfare on what they call “The Chicago School.” They want courts and regulators to instead ask whether business structures or practices would harm competitors by reducing prices. Khan thinks “antitrust enforcers” should take a “holistic” approach, taking into account the interests of rival sellers or producers who might have a tough time competing with Amazon’s low prices, huge inventories, or fast delivery.

As I noted in the first blog, antitrust law is a large and lucrative industry, which has long been famously generous to politicians and think tanks who try to enlarge the volume and value of antitrust activities.  “Amazon is reported to be in the market for an antitrust economist,” notes Pearlstein, and “has engaged the services of two former heads of the Justice Department antitrust division, one Democrat and one Republican.” Indeed. That’s the way the antitrust extortion racket is played.  Skilled antitrust lawyers are equally eager to prosecute or defend, since they win either way.

Antitrust works very much like any other regulatory bureaucracy. Business lobbyists recruit antitrust agencies to get them to damage or constrain their more-successful rivals. Key members of congressional committees use antitrust threats to shake down corporate executives for campaign contributions and discourage them from supporting a rival Party’s policies or candidates. Redundant federal agencies, the DOJ and FTC, drag companies into costly litigation battles for years, with the usual result being multi-billion-dollar fines rather than noticeable change in business practices. Ambitious prosecutors use the publicity from high-profile cases as a red-carpet invitation into the revolving door of high-paying positions as antitrust defense attorneys or as executives in the affected industries.

Pearlstein suggests antitrust should be more about “fair play” and a “level field” to protect competitors who would much prefer to charge higher prices for less selection and slower service. “Some observers point to the E.U.’s Google case as an example of the difference between the American and European approach: They protect competitors; we protect consumers. … To me, this view betrays a naïve belief that in our open market system, every person and every company has the same opportunity to succeed. … Leveling the playing field is a legitimate policy goal.”  

To me, Pearlstein’s view betrays naïve optimism that ambitious politicians and prosecutors are omniscient and incorruptible. He thinks future antitrust cops should have great discretionary authority to “block Amazon” from competing too effectively with UPS, Oracle or Comcast.  If that threat ever materialized, it would surely attract generous donations from UPS, Oracle, or Comcast to sympathetic politicians and think tanks.  Antitrust law is not supposed to be about dividing the spoils but frequently is.

The federal government plays a large role in the nation’s highways by funding aid programs for the states and imposing top-down regulations. Congress passed a major highway bill in 2015 that authorized $305 billion in spending over five years, of which $226 billion was for highways and most of the rest for urban transit.

The Trump administration is promising a fresh approach to highway spending and regulation. What are the main problems with current highway policies, and what reforms should the administration pursue?

Transportation expert Gabriel Roth and I examine these questions in a new study at DownsizingGovernment.org. We review the history of federal highway interventions, describe the inefficiencies of federal aid and regulations, and discuss possible reforms.

We argue that Americans would be better off if federal highway and transit spending, fuel taxes, and related regulations were cut. The states can more efficiently tackle their transportation needs with a reduced federal role, and they would be more likely to pursue privatization and other market-based reforms.

Our primer on federal highway policies is here.

As expected, and thanks to the big 2015-16 El Niño, the National Oceanic and Atmospheric Administration (NOAA) has announced that 2016 was the warmest year in its 150-year-long global surface temperature record. NOAA didn’t mention that there are signs that global average temperatures are headed back to pre-El Niño values, which may put them near the range of the long “pause” in warming that began in 1997 and ended with the recent El Niño.

There are several sources showing this. Here’s the satellite data from the University of Alabama-Huntsville through last month:

Temperatures have fallen to within approximately 0.15°C of the average that prevailed between the end of the last big El Niño (1998) and the beginning of the recent one. These are “bulk” data for the lower atmosphere.

You can see similar behavior in the surface record from the University of East Anglia:

In this record, the “pause” from mid-1997 through 2013 is obvious. It will be interesting to see where this record settles out, as the early 2017 data look very “pause-y”.

We also have the problem that NOAA’s record (the one behind today’s announcement) is the “pause-buster” version, which used a new record of sea-surface temperatures (designated ERSSTv4) that became progressively warmer, beginning in 1998, compared with its predecessor (ERSSTv3). It also adjusted very good buoy temperatures upward to match very bad ship intake-tube temperatures. Just inside a large hunk of conductive metal sitting in the sun isn’t a good place to take the water temperature.

One increasingly popular recent surface temperature history is “reanalysis” data, in which temperatures are transformed onto a tight latitude/longitude grid that provides a spatially “level playing field,” bypassing the problems that occur as weather stations move or go off- or on-line. You can also see the temperature peak here, and that we are approaching pre-El Niño values.

It looks like the warm party is breaking up.

The U.S. Postal Service (USPS) is a major business enterprise operated by the federal government. It has a legal monopoly over first-class mail, which prevents entrepreneurs from competing to improve quality and reduce costs.

I describe the postal system’s inefficiencies here, and discuss how European countries have privatized their systems and/or opened them to competition.

In this country, privatization is needed more than ever because the USPS is increasingly distorting the booming package delivery business. My study discusses USPS cross-subsidies between its mail and package activities, and a recent article in the Wall Street Journal explored the problem further.

Josh Sandbulte argued that the USPS gives Amazon an unfair advantage over brick-and-mortar firms:

The U.S. Postal Service delivers [Amazon’s] boxes well below its own costs. Like an accelerant added to a fire, this subsidy is speeding up the collapse of traditional retailers in the U.S. and providing an unfair advantage for Amazon.

… The 2006 Postal Accountability and Enhancement Act made it illegal for the Postal Service to price parcel delivery below its cost. But with a networked business using shared buildings and employees, calculating cost can be devilishly subjective.

… An April analysis from Citigroup estimates that if costs were fairly allocated, on average parcels would cost $1.46 more to deliver. It is as if every Amazon box comes with a dollar or two stapled to the packing slip—a gift card from Uncle Sam.

In a response to Sandbulte here, the USPS claims that it does not cross-subsidize. The solution for this dispute? Privatize the USPS, repeal the monopoly, and let competitive markets decide on product pricing.

Another way that the USPS distorts the marketplace was explored in the Washington Post recently. Stamps.com appears to have a sweetheart deal with the USPS related to selling postage, preparing labels, and servicing shippers:

At the heart of this dispute is the Postal Service’s use of discount deals, called negotiated service agreements, that allow some companies to sell postage for less than others even though the underlying service — having the Postal Service deliver a package to a particular address within a specified period of time — is identical. The details of these deals, and even the identities of companies receiving them, are not public because of the Postal Service’s broad exemption from public disclosure laws when it comes to its dealings with private businesses, leaving rivals to guess at who is getting better terms and why.

Several current and former industry officials say they believe that Stamps.com, through several subsidiary companies, has gotten particularly lucrative discount postage deals from the Postal Service and is using them in novel ways that give Stamps.com an unfair competitive advantage over other companies.

The WaPo story describes USPS dealings in language such as “opaque,” “closely held secrets,” and “details of which are not made public.” I’m confused—I thought the USPS was the people’s mail service, operated in the public interest?

Alas, as I explored in this study on privatization, government agencies are usually less transparent than private enterprises. The WaPo says, “The Postal Service declined multiple requests for interviews regarding its postage-discount programs and did not respond to written questions from The Post.”  Public agencies are often less responsive to the public than private ones.

Bottom line: We do not need a giant and secretive distortion machine in the middle of our mail and package industry, as the Europeans are showing us. Postal-socialism makes no sense in the Internet age.

I never expected to have trouble distinguishing the rhetoric of America’s president and North Korea’s leader. Nor did I ever imagine it would be unclear which official was more impulsive, emotional, blustering, and reckless. But these are not normal times.

For anyone contemplating the odds in a war between the U.S. and the Democratic People’s Republic of Korea, a few numbers are instructive. Last year the U.S. had a GDP of almost $19 trillion, roughly 650 times the GDP of the Democratic People’s Republic of Korea. The latter is equivalent to the economy of Portland, Maine or Anchorage, Alaska. America’s population is around 13 times as large as that of the DPRK.

The U.S. military spends upwards of 100 times as much as the North’s armed forces. With the world’s most sophisticated nuclear arsenal and 1,411 warheads (down from a peak of 31,255 a half century ago), Washington could incinerate the North in an instant. Pyongyang is thought to possess around 20 nukes, of uncertain status and deliverability.

Does the DPRK’s “Supreme Leader” Kim Jong-un recognize this reality? There’s plenty of evidence that he is ruthless and cruel. But none that he is blind or suicidal. Like his father and grandfather, who ruled before him, he most assuredly prefers his virgins in this world.

The North’s rhetoric is bombastic, splenetic, confrontational, and fantastic. But it always has been thus. Even before Pyongyang possessed deployable nukes and long-range missiles, it was promising to turn New York (as well as Seoul) into a “lake of fire.” The North Koreans even distributed a video showing precisely that result. If calm ever descends upon the Supreme Leader and his minions, then perhaps Americans should really worry.

The North’s rhetoric and behavior are determined at least in part by domestic considerations. Politics is all-consuming and militaristic images are everywhere. (I visited in June and put up a bunch of photos on Forbes. We are holding a CatoConnects session on Tuesday, August 15 to discuss my visit.) The regime seeks support by portraying itself as heroically defending—against overwhelming odds—a society under siege by imperialistic Americans and their South Korean puppets. The constant mantra I heard, almost irrespective of subject, place, or person, was “under the wise leadership of the Supreme Leader.” Whether the population believed it seemed secondary.

As for Pyongyang’s international interactions, the regime is acting out of weakness. Even North Korean officials, while proudly proclaiming that their nation was prospering despite sanctions, admitted that becoming an “economic power” was an objective, not an accomplished fact. The Victorious Fatherland War Museum displays pictures of the devastation wreaked by U.S. air attacks during the Korean War. Even when it comes to nuclear weapons officials talk of “matching the U.S.,” not outpacing Washington.

Indeed, this helps explain North Korea’s weapons development. Of course, there is nothing good to say about the Kim dynasty, now into its third generation. It runs a totalitarian regime which holds an entire people in bondage. Although state controls are slipping a bit, especially when it comes to the economy, individuals have essentially no civil, political, or religious liberties. Nevertheless, it is important to try to understand the DPRK in order to develop sensible policies.

North Korean elites know Washington’s power. After all, the U.S. intervened to defend the Republic of Korea after the 1950 DPRK invasion and would have liberated the entire peninsula had China not joined the conflict. Gen. Douglas MacArthur then advocated using nuclear weapons, a threat also employed by the incoming Eisenhower administration to “encourage” China to conclude an armistice.

Once that agreement was reached, the U.S. forged a “Mutual Defense” treaty (in practice it runs only one way, of course) with the South and maintained a garrison, backed by nuclear weapons on the peninsula, joint military exercises with the South, and ample supplemental forces nearby. Such measures posed an obvious existential threat to the North Korean regime.

The dangers resulting from America’s policies were enhanced by the end of the Cold War, when first Moscow and then Beijing opened diplomatic relations with South Korea. Today the DPRK is truly alone, even as U.S. officials say “all options are on the table”—and the American president threatens “fire and fury.”

Moreover, the Kim regime sees Washington engage in promiscuous military intervention around the globe. American administrations most recently used the armed forces to promote regime change in Afghanistan, Iraq, and Libya. Notably, the government of the latter traded away its nukes and missiles, leaving it vulnerable to outside intervention. North Korean officials are quick to cite these examples.

If there ever was a case of a paranoid state having a real enemy, it is North Korea.

Obviously, anything said by the DPRK should be taken with several grains of salt, but there is little reason to doubt the concerns Pyongyang expresses over potential U.S. military action. When I visited the North in June, officials dismissed criticism of their nuclear program, pointing to America’s “hostile policy,” highlighted by “military threats” and “nuclear threats.”

The DPRK’s nuclear weapons obviously defend against such “threats,” but have other uses as well, such as international status and extortion. Long-range missiles have only one purpose: deterring U.S. military intervention against the North.

The Kims want to avoid, not wage, war with America. If the U.S. was not “over there,” the North’s safest course would be to ignore Washington. But with America already involved and threatening intervention, Pyongyang’s only sure defense is deterrence.

The North is moving to do what no potential adversary other than China and Russia has done: effectively foreclose the possibility of U.S. military action. Once the Kim regime has a reasonable chance of turning at least a couple of major American cities into “lakes of fire,” would any administration risk even conventional involvement in a Korean conflict?

Indeed, Washington should have turned South Korea’s defense over to Seoul years ago—the South possesses 40 times the GDP and twice the population of the North. Pyongyang’s nuclear ambitions should spur Washington to withdraw from the Korean imbroglio, phasing out both the U.S. treaty guarantee and troop deployment. But even then the DPRK tops the list of regimes that should not possess nukes. And as long as America is involved, the possibility of misjudgment and mistake remains real. Especially with two leaders possibly suffering from impulse control issues.

What to do? Military strikes would risk triggering the Second Korean War, which could kill tens or hundreds of thousands, result in widespread devastation, and spread chaos, instability, and refugees. The war would be in Northeast Asia, “not here in America,” as Sen. Lindsey Graham (R-S.C.) proclaimed, but Americans still would suffer.

Better to lower the regional temperature, in both action and rhetoric; begin phasing out America’s security treaty and military garrison, shifting any Korean military struggle from the U.S. to the peninsula; open a dialogue with the North, especially over security issues; engage China over its interests, encouraging it to more fully cooperate over the North; and develop a regional development/security package in return for North Korea’s denuclearization, enlisting Beijing as the chief muscle behind an offer impossible to refuse.

Perhaps most important, the U.S. should continue to set as its prime objective supporting peace on the peninsula. That has been America’s objective since the end of the Korean War 64 years ago. That should remain Washington’s goal as it seeks to extricate itself from an outdated military commitment that now threatens to go nuclear.

Seventy-two years after the United States dropped atomic bombs on Hiroshima and Nagasaki, the specter of nuclear war once again hangs over the world. In the span of a few hours, both the United States and North Korea made nuclear threats against one another. Donald Trump went first, saying “North Korea best not make any more threats to the United States. They will be met with fire and fury like the world has never seen.”

Shortly after Trump’s “fire and fury” comments, North Korea’s KCNA carried a statement from the Strategic Force of the Korean People’s Army (KPA) that threatened the “air pirates” stationed at Guam with a nuclear strike. The KCNA statement closed with the warning, “[The United States] should immediately stop its reckless military provocation against the state of the DPRK so that the latter would not be forced to make an unavoidable military choice.” While KCNA did not reference Trump’s comments, the timing of its release creates the impression that the two countries had issued dueling nuclear threats.

At their core, both Trump’s “fire and fury” comment and the KCNA statement are deterrent threats, which seek to prevent a certain action by threatening a high cost in retaliation. If the target of a deterrent threat takes the action that the threat’s issuer deems unacceptable, the target will suffer a worse fate. The credibility of a deterrent threat depends on whether the targeted state believes that the issuer will follow through on its rhetoric.

While both Trump and KCNA issued deterrent threats, the quality of the threats is markedly different. Trump’s threat is incredibly vague, both in terms of what it is trying to prevent and what costs the United States would inflict on North Korea in response. A lack of clarity about what Trump wants to deter could prevent North Korea from taking any escalatory actions, but given the high stakes involved for North Korea, it is unlikely to view Trump’s threat as credible. Kim Jong Un will keep making nuclear threats because the vulnerability of the United States to nuclear attack deters America from attacking North Korea in the first place.

Ambiguous deterrent threats can work, but such threats are not usually issued by powerful countries. Meanwhile, the uncertainty created by Trump’s comment is not reassuring to the other parties involved in the North Korea issue. The “fire and fury” statement could complicate relationships with U.S. allies if they feel that Trump’s rhetoric increases the likelihood of escalating the crisis and putting their security at risk. Additionally, efforts to convince China to do more to help the United States solve the North Korea problem could suffer if Trump’s rhetoric is seen as an indication of unpredictable U.S. policy.

In contrast to Trump’s ambiguous language about “fire and fury,” the KCNA statement is very detailed and precise. The statement references specific U.S. actions that North Korea wants stopped and says what costs North Korea is prepared to inflict. Flights of strategic bombers out of Guam are interpreted as “muscle-flexing in a bid to strike the strategic bases of [North Korea]. This grave situation requires the KPA to closely watch Guam…and necessarily (sic) take practical actions of significance to neutralize it.” In order to neutralize this threat, the KPA is preparing an operational plan to make “an enveloping fire at the areas (sic) around Guam with medium-to-long-range strategic ballistic rocket (sic) Hwasong-12 in order to contain the U.S. major military bases on Guam.”

There is little ambiguity in the KCNA statement. The deployment of strategic bombers to Guam is seen as a major threat to North Korea’s security. In the event of a conflict, the KPA plans to neutralize the threat with nuclear-armed ballistic missiles, both to prevent the bombers from carrying out an attack on North Korea and to send a “serious warning signal to the U.S.” Pyongyang is trying to deter strikes by aircraft stationed at Guam by threatening to destroy the island’s air force base with nuclear weapons. An unprovoked North Korean attack against Guam is not credible because it would invite devastating retaliation, but threats to attack the U.S. base early in a conflict are credible. Such threats can deter the United States by making the costs of preemptive action prohibitively high.

Trump’s bombastic statement about “fire and fury” is a clumsy threat that is unlikely to change North Korea’s calculus or behavior. Ambiguity can be a very valuable tool for deterrence, but ambiguous deterrent threats are ill-suited for addressing America’s North Korea problem. Trump doesn’t have to give up his colorful language, but if he wants his deterrent threats to be effective he needs to be precise about what actions the United States deems unacceptable. He should leave the “fire and fury” talk to the North Koreans.

North Korea’s Kim Jong Un is doing everything in his power to ensure that he remains atop the United States’ enemies list. For months, his government has been test-launching missiles and issuing threats. This week the rhetoric got even hotter. President Trump pledged to rain “fire and fury like the world has never seen” on North Korea. The North Koreans responded with a promise to attack the U.S. base at Guam.

Notwithstanding Secretary of State Rex Tillerson’s statements last week and in April that the United States does not seek regime change in Pyongyang, other tin-pot dictators have heard similar assurances before. If KJU doesn’t want to go the way of Slobodan Milosevic, Saddam Hussein and Muammar Gaddafi, he’ll hold onto his nukes.

Unsurprisingly, hawks in Washington – who don’t like being so deterred – are urging President Trump to launch a preventive war, and denude the latest Crazy Kim of his dangerous toys.

For example, John Bolton explained last week that, since diplomacy is unlikely to be successful, Trump has only three options: “pre-emptively strike at Pyongyang’s known nuclear facilities, ballistic-missile factories and launch sites, and submarine bases”; “wait until a missile is poised for launch toward America, and then destroy it”; or launch “airstrikes or [deploy] special forces to decapitate North Korea’s national command authority, sowing chaos, and then sweep in on the ground from South Korea to seize Pyongyang, nuclear assets, key military sites and other territory.”

To summarize: small war now, small war later, or big war now. And, of the middle option, Bolton warns that a preemptive strike would “provide more time but at the cost of increased risk” and that “Intelligence is never perfect” – so that leaves war now (or soon).

Bolton grudgingly admitted “All these scenarios pose dangers for South Korea, especially civilians in Seoul,” and that “The U.S. should obviously seek South Korea’s agreement (and Japan’s) before using force, but no foreign government, even a close ally, can veto an action to protect Americans from Kim Jong Un’s nuclear weapons.”  

Along similar lines, Lindsey Graham explained “Japan, South Korea, China would all be in the crosshairs of a war if we started one with North Korea. But if [North Korea gets] a missile they can hit California, maybe other parts of America.”

“If there’s going to be a war to stop [Kim Jong Un],” Graham continued, “it will be over there. If thousands die, they’re going to die over there. They’re not going to die here.”

This leaves aside the rather obvious fact that the American troops carrying out a war with North Korea would be risking death. That factor should also weigh heavily on the president’s mind. The American people, bitten by other wars that Bolton and Graham championed, are highly averse to new ones–especially those that are likely to result in large numbers of Americans getting killed. 

As I point out in a new article at The Skeptics:

A recent paper finds that in the 2016 election, Donald Trump performed well in those communities that paid the heaviest price during America’s post–9/11 wars in Afghanistan and Iraq. Voters in these communities may even have provided the margin he needed to win the presidency.

“Trump’s ability to connect with voters in communities exhausted by more than fifteen years of war,” write Douglas Kriner and Francis Shen, “may have been critically important to his narrow election victory.”

“If Trump wants to win again in 2020, his electoral fate may well rest on the administration’s approach to the human costs of war. Trump should remain highly sensitive to American combat casualties.”

Could public sentiment really constrain a president convinced that military action is the last best course of action? Maybe, argue Kriner and Shen. “The significant inroads,” they write, “that Trump made among constituencies exhausted by fifteen years of war—coupled with his razor thin electoral margin (which approached negative three million votes in the national popular vote tally)—should make Trump even more cautious in pursuing ground wars.”

I’m skeptical.

“Trump” and “cautious” are two words that rarely go together. And not all U.S. wars are ground wars.

The human cost of war should factor into any president’s decision to start one. But Donald Trump’s limited understanding of modern warfare and international politics might convince him that he can pick a few cheap and easy fights to boost his popularity and secure a few quick wins. Though he might be disinclined to initiate a major conflict, that doesn’t mean that Trump is reluctant to use force. And those superficially limited military engagements have a nasty tendency to morph into honest-to-goodness full-blown wars.

You can read the whole thing here.

Maranda O’Donnell was arrested for driving with a suspended license and bail was set at the prescheduled amount of $2,500, which she could not pay. Ms. O’Donnell was not alone in having bail set at an amount that could not be paid. Robert Ford’s bail was set at $5,000 for misdemeanor theft of property and Loetha McGruder’s bail was set at $5,000 for the misdemeanor of giving a false name to a police officer. There are many other such examples; all of these bail amounts were set according to a predetermined schedule based on the offense. None of the defendants could afford the bail and so were forced to stay in jail.

According to one report, 81% of misdemeanor arrestees in Harris County (Houston), Texas, were unable to post bail at booking, and 40% were never able to post bail. Ms. O’Donnell sued Harris County and various government officials on behalf of herself and all others similarly situated for violating the Fourteenth Amendment’s Due Process and Equal Protection Clauses by setting bail amounts higher than defendants could pay, which detained indigent people much longer than those financially able to pay.

The federal district court found that the predetermined bail schedule was treated as a “nearly irrebuttable presumption in favor of applying secured money bail at the prescheduled amount.” Further, the court found that Harris County did not even provide “timely hearings” at which defendants could prove their inability to pay or learn the reasons why they were being denied bail they could afford. The court issued a preliminary injunction ordering the county to release misdemeanor defendants on personal bond—not secured by cash in advance—within 24 hours of arrest.

The county appealed to the U.S. Court of Appeals for the Fifth Circuit, where Cato has now filed an amicus brief supporting the injunction based on the history of bail.

Bail has ancient roots going back to before Magna Carta. Even before the United States existed, English courts required that bail be set on an individualized basis based on the financial ability of the defendant. When the king’s sheriffs issued bail that was too high to be paid, the prohibition on excessive bail was created in the English Bill of Rights, which was then incorporated into the U.S. Constitution in the Eighth Amendment. As both the Supreme Court and D.C. Circuit held in 1835, “to require larger bail than the prisoner could give would be to require excessive bail, and to deny bail” in violation of the Constitution. This was the understanding in America for more than 100 years after the Founding.

The modern Supreme Court has continued to recognize the protection that these ancient requirements for bail provide: “In our society, liberty is the norm, and detention prior to trial or without trial is the carefully limited exception.” United States v. Salerno (1987). These long-standing bail customs require an individualized determination of the bail amount, which was not provided by Harris County, violating the defendants’ right to the due process of law as guaranteed by the Fourteenth Amendment. The Fifth Circuit should maintain the preliminary injunction in O’Donnell v. Harris County.

In a recent column, AEI scholar Abby McCloskey claims that “[m]ost people on the right and on the left” want government-sponsored paid family leave. McCloskey links to an admiring summary of a 2016 public opinion poll as evidence.

The summary does not provide the associated poll topline (questionnaire), but Morning Consult kindly provided some questions upon request. They included “Do you support or oppose requiring employers to offer paid parental leave for new parents?” and “if the federal government required employers to offer paid parental leave for new parents, how long should that leave be?”

Unfortunately, the poll’s questions are not sound from a psychology of survey response perspective. As analysts know, the question’s language makes an enormous difference in poll results. When people are asked whether or not they would like a particular benefit sans the process or cost, many will respond affirmatively.

But if costs are mentioned, public opinion transforms (see polling on healthcare for example). As a result, polling can be confusing at best and calculated to elicit certain responses at worst. In the first question, the Morning Consult poll does not describe who will be requiring employers to provide paid family leave or how they will do so. It does not mention tradeoffs. In the second question it asks respondents to accept that government is providing paid leave and then pick the length.

Usually it would be hard to know how much the absence of the “whos” or “hows” mattered for the results. But fortunately, Pew Research asked the same questions and made the details explicit.

Specifically, Pew asked whether A) the federal government should require employers to provide paid family leave or B) employers should be able to decide for themselves whether to provide paid family leave. 

Pew found “there is no consensus,” and public opinion was split evenly. Contrary to McCloskey’s summary article, which claims both political parties unanimously approve of a government mandate for paid family leave, Pew described public opinion as divided along political lines. Democrats were the only group in which a majority strongly favored even the lightest of paid family leave measures: using government tax credits as an incentive for employers to provide paid leave.

Far from being a top policy priority, another recent Pew poll suggests that “expanding access to paid family and medical leave ranks at the bottom of a list of 21 policy items.” Seven in ten workers (69%) are at least somewhat satisfied with the benefits their employer already provides, and Americans prioritize other issues over creating an entitlement to a benefit most already receive: 63% of workers who took parental, family, or medical leave say their employer paid for part or all of it.

It seems that the public is divided after all. That said, do Americans like or want paid family leave benefits? Of course, and when asked they say so. But the right question is not whether they like or desire leave, but whether they desire federal involvement. The answer is often “no.”

One fortunate aspect of President Trump’s bill to reduce legal immigration by 50 percent is that it has started the conversation on how to reform the nation’s legal immigration system—even if it started it on the wrong foot. Members of Congress now have an opportunity to respond with legislation that would increase legal immigration and fix the system’s numerous problems.

1) Employment-based quotas haven’t changed since 1990, even as the economy doubled in size. Unlike many other countries, the United States has its legislative branch establish hard ceilings on immigration, rather than flexible targets or administratively determined limits. The Immigration Act of 1990 established the current limit of 140,000 visas for immigrants whom employers sponsor for legal permanent residency. Since then, U.S. real Gross Domestic Product has increased from $8.9 trillion to $17 trillion. At the same time, the computer and Internet revolutions transformed the economy, yet the quota remained the same.

  • Congress should double the 1990 employment-based quota to at least 280,000 and index the quota to GDP growth. Senators Ron Johnson and John McCain incorporate GDP indexing in their State-Sponsored Pilot Program Act, which would allow states to sponsor temporary workers (see p. 24).
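To see why doubling and GDP indexing arrive in the same neighborhood, here is a back-of-the-envelope calculation using the figures above (my arithmetic, not the bill’s actual formula):

    \text{indexed quota} = 140{,}000 \times \frac{\text{GDP}_{2017}}{\text{GDP}_{1990}} \approx 140{,}000 \times \frac{\$17\ \text{trillion}}{\$8.9\ \text{trillion}} \approx 267{,}000

Indexing from a 280,000 floor would then let the ceiling grow automatically with the economy instead of waiting another quarter-century for Congress to act.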

2) Half the quota for immigrant workers is filled by family. Of the 140,000 employer-sponsored visas, sponsored employees actually use less than half. That’s because the George H.W. Bush administration in 1991 adopted an interpretation of the law that found that spouses and children of the immigrants—who are entitled to a visa with the primary applicant as well—count against the quota. As I’ve written before, it is far from clear that this is the correct interpretation of the law, but it makes little sense in any case. The quota is targeting the number of workers that the economy needs. Why should married workers take away slots from other applicants? If the quota is hit after a worker receives his visa but before his family does, why should we separate them? For these reasons, all temporary worker categories exempt spouses and children from those caps.

  • Congress should clarify that spouses and children of immigrant workers do not count against the green card limits. This would require amending 8 U.S.C. 1153(d) with a statement that the visas or status issued under that subsection don’t reduce the number of visas available to primary applicants.

3) America discriminates against applicants from more populous countries. The law states that the number of permanent residency visas made available to nationals of any single country may not exceed 7 percent of the total. This means that countries with few applicants, like Iceland and Moldova, receive priority over countries with many applicants, like China and India, creating massive wait times for specific nationalities. The wait for Indian employer-sponsored immigrants is so long that many of the applicants will die before they see their visas. Nativists, however, openly admit that they prefer the per-country limits specifically because they make the system so frustrating that immigrant workers from certain countries want to give up.

  • Congress should remove the per-country limits. The Fairness for High Skilled Immigrants Act (H.R. 392), which has 238 cosponsors, would phase out the per-country limits for employment-based immigrants and double the limits for family-based.

4) Foreign workers can work here for 10 years legally, but still not receive the right to live here permanently. As I’ve said before, America treats its high-skilled immigrants worse than it treats its lowest skilled refugees, who receive permanent residency after one year. Employers can sponsor H-1B high skilled temporary workers for permanent residency, and the law allows those workers to extend their status indefinitely. Yet despite living and working in the country for many years—even a decade—they cannot enjoy their full rights. They cannot work for whomever they want. They cannot start businesses. They must apply for extensions every single year. Holders of other work visas, such as the E-2 temporary visa for entrepreneurs and investors and the O-1 visa for entrepreneurs and other outstanding achievers, have no permanent residency category for which they can apply because they have no employer to sponsor them. They can work for decades and receive no path to permanent residency.

  • Congress should create a path to permanent residency for any worker who has worked legally in the United States for an aggregate period of more than 10 years. It should similarly allow anyone who is waiting abroad to enter if they have waited for 10 years. That would place a hard limit on backlogs and create an incentive for legal immigrants to stay and not abandon the American dream for the Canadian dream.

5) Children of legal foreign workers grow up in the United States but are deported at adulthood—even if they were already waiting in line for permanent residency. This “aging out” problem is one of the crueler aspects of America’s immigration system. H-1B foreign workers and their spouses and minor children receive a temporary visa that is renewable indefinitely if their employer sponsors the worker for permanent residency. The worker may add his spouse and minor children to the permanent residency application, and the whole family waits together in the U.S. Thus, many children of H-1B workers grow up in the United States, graduate from U.S. high schools, and attend U.S. colleges. You can read here about how accomplished these young people are. Yet because only minor children are eligible, they receive a removal order as soon as they reach age 21 if their parent has yet to receive permanent residency. As I’ve written before, they are essentially young immigrant “Dreamers.”

  • Congress should end “aging out.” If a person is already waiting in line for permanent residency when they hit 21, they should remain in the green card queue and in legal status in the United States. This could be done in a number of ways, but perhaps the best is section 3(c) of the Johnson-McCain state-sponsored visa bill (pp. 36–37). The Johnson-McCain bill language would also solve a number of other problems for high-skilled workers, including the inability to change jobs and the prohibition on spousal work (which President Obama partially ended).

6) If the administration fails to issue the required permanent residency visas in a given year, immigrants and employers are out of luck. This fact is unbelievable, but true. Every year from 1992 to 2009, with the exception of 2008, the government simply failed to issue the full allotment of visas. According to the 2010 U.S. Citizenship and Immigration Services Ombudsman report, nearly 750,000 visas went unused during this time. Immigrants who are beneficiaries of an approved immigrant petition from a U.S. employer cannot apply for permanent residency until their “priority date” comes up. The State Department estimates the priority date, but it cannot know the date for certain. It depends on how many people are waiting and how many of those who are waiting apply once their number does come up. Both of these factors are unknown. Thus, the State Department must guess. If it guesses wrong, not enough immigrants apply, and visa slots are lost.

  • Congress should recapture all of the lost visas since 1992 and create a provision that increases the quota in the following year by the number of visas that go unused in the prior year. These provisions were included in Section 2304 of the Senate-passed 2013 immigration bill (p. 371).

7) The quota for immigrant workers without a bachelor’s degree is just 5,000. This figure is laughably low in light of the more than 11 million unauthorized immigrants in the United States—85 percent of whom have no college degree. It’s also absurd given that even in 2020, only 35 percent of job openings will require a four-year degree, while 36 percent will require no education at all after high school. These positions are not all “low-skilled” either. Dozens of occupations, like these, require no bachelor’s degree, but pay over $70,000, which is close to the threshold for getting “points” under the Trump immigration bill. Opponents of low-skilled immigration claim that these workers are a detriment to U.S. workers, but the empirical evidence indicates that this is false.

  • Congress should make available 100,000 visas for workers without a college degree. This is where a points system actually makes more sense. For college grads, the degree is a decent predictor of labor market success. For those with less than a college degree, there is significantly more variability in outcomes, so a points system could be a better predictor. The Senate bill’s Merit-Based Track 1, Tier 2 (pp. 354–356) provides a model for this type of point system.

8) The U.S. educates and trains a million foreign students and then sends them home to compete with us. This must rank highly among America’s worst economic policies. According to the National Academies of Sciences, Engineering, and Medicine’s 2016 report on the fiscal effects of immigration, each foreign bachelor’s degree holder contributes, in net present value terms, between $210,000 and $330,000 more in taxes than they receive in benefits over their entire lifetime. For those with advanced degrees, it’s between $427,000 and $635,000 (p. 341). As my colleague Alex Nowrasteh has detailed, immigrants contribute massively to innovation, entrepreneurship, and economic growth. Yet if the U.S. continues its current policy, they will do those things in other countries.

  • Congress should exempt from the immigration quotas foreign graduates of U.S. universities, at least for all science, technology, engineering, and math fields. The Senate bill’s Section 2307 would have exempted foreign physicians, doctorate degree holders from U.S. universities, and all advanced degree holders in science, technology, engineering, and math (pp. 407–409). This would be a good start.

9) The U.S. has a limit on the number of “extraordinary” immigrants that it will admit. The EB-1 visa category is for immigrants with “extraordinary ability,” “outstanding professors and researchers,” and multinational executives. These include Nobel Prize winners and those with “original scientific, scholarly, artistic, athletic, or business-related contributions of major significance to the field.” Yet bafflingly, we subject these immigrants to the same quota as other immigrants.

  • Congress should exempt all employment-based first preference immigrants from the quota system. The Senate bill’s Section 2307 would have implemented this change (pp. 404–407). Congress should immediately adopt these changes.

10) America has no entrepreneurship visa. Immigrants are roughly twice as likely to start a business in the United States as native-born Americans. Immigrants founded more than half of all new businesses in Silicon Valley from 1995 to 2005. In 2011, nearly 70,000 New York City immigrants owned more than 60 percent of the city’s small businesses. Almost all of the city’s dry cleaning and laundry services and taxi and limo services were immigrant owned. Yet somehow there is no permanent residency category for entrepreneurs. It goes without saying that this is exceptionally counterproductive. It’s important to emphasize that most immigrant entrepreneurs will not start the next Google, but even small business owners play an important role in keeping America’s economy competitive and innovative.

  • Congress should create a visa category for business owners and entrepreneurs. Sen. Jerry Moran’s Startup Act is the best available option to do so.

Note that these are just the reforms related to the process for permanent residency; just as many reforms are needed in the temporary work visa system. Moreover, the RAISE Act, the president’s preferred legal immigration reform, contains only one of these reforms (#3). It also makes #7 worse by completely eliminating all permanent residency visas for non-college grads. The situation in #9 would be worse as well because the RAISE Act does not increase skilled visas at all. Instead, it would completely eliminate the EB-1 extraordinary ability category and replace it with a point system that is so convoluted that Nobel Prize winners may do worse than certain bachelor’s degree holders, as I’ve explained before.

CSBA’s Katherine Blakeley has published a brief but highly informative analysis of the prospects for a major military spending boost.

Bottom line up front: The combination of “procedural and political hurdles” in Congress makes an increase along the lines of what the Trump administration requested (approximately $54 billion) unlikely. The substantially larger increases passed out of the House and Senate Armed Services Committees (roughly $30–33 billion more than the president’s request) seem even more fanciful.

Blakeley concludes:

The wide gulfs between the political parties, and between the defense hawks and the fiscal hawks, will not be closed soon. Additionally, the full legislative calendar of the Congress before September 30, 2017, including Obamacare repeal, FY 2018 appropriations, and an impending debt ceiling debate, increase the likelihood that FY 2018 will begin with a several-month-long continuing resolution, rather than a substantial increase in defense spending.  

This aligns with what I’ve suspected all along – but Blakeley provides critical details to back up her conclusions.

For years now, we’ve heard defense hawks say that adequately funding the defense budget shouldn’t be a struggle for a country as wealthy as the United States. A mere 4 percent of GDP, for example, should be a piece of cake. And, at one level, that is absolutely correct. It should be easy. But when you dig into it, as Blakeley has done, you discover that even 3 percent is a real struggle. After all, $50 billion – a rounding error in a $19 trillion economy – threatened to bring the entire budget process to a screeching halt in late June, and may do so again.

If and when a final budget deal is hammered out, the Pentagon’s Overseas Contingency Operations (OCO) account may provide at least some of the additional billions that the HASC and the SASC want. Because OCO is exempted from the bipartisan Budget Control Act’s spending caps, additional defense dollars do not have to come at the expense of non-defense discretionary spending, as President Trump’s budget proposed.

But many billions from the Pentagon’s base budget (i.e. non-war spending) have been shoved into the OCO for years now, and the gimmick is starting to wear thin – after all, the wars in Iraq and Afghanistan peaked years ago. The voices in Congress and beyond who pushed the BCA in the first place, and who remain committed to reducing the deficit (e.g. current OMB chief Mick Mulvaney), are likely to feel that they’re being played.

The defense vs. non-defense spending debate is, and always has been, about politics, not math. And it isn’t obvious that the Pentagon will win this political battle. Given this uncertainty, we should adapt our military’s objectives to the means available to achieve them. We should prioritize U.S. security and defending vital national interests, and approach foreign adventures that don’t advance these interests with great caution. Expecting our soldiers, sailors, airmen and Marines to do the same – or more – with less money isn’t fair to them, and isn’t likely to work.

The rising opioid overdose death rate is a serious problem and deserves serious attention. Yesterday, during his working vacation, President Trump convened a group of experts to give him a briefing on the issue and to suggest further action. Some, like New Jersey Governor Chris Christie, who heads the White House Drug Addiction Task Force, are calling for him to declare a “national public health emergency.” But calling it a “national emergency” is not helpful. It only fosters an air of panic, which all too often leads to hastily conceived policy decisions that are not evidence-based and have deleterious unintended consequences.

While most states have made the opioid overdose antidote naloxone more readily available to patients and first responders, policies have mainly targeted health care practitioners who are trying to help patients suffering from genuine pain, along with efforts to cut back on the legal manufacture of opioid drugs.

For example, 49 states have established Prescription Drug Monitoring Programs (PDMPs) that track the prescriptions written by providers and filled by patients. These programs are aimed at getting physicians to reduce their prescription rates so they are not “outliers” in comparison with their peers. And they alert prescribers to patients who have filled multiple prescriptions within a given timeframe. In some states, opioid prescriptions for most conditions are limited to a 7-day supply.

The Drug Enforcement Administration continues to seek ways to reduce the number of opioids produced legally, hoping to negatively impact the supply to the illegal market.

Meanwhile, as patients suffer needlessly, many in desperation seek relief in the illegal market where they are exposed to dangerous, often adulterated or tainted drugs, and oftentimes to heroin.

The CDC has reported that opioid prescriptions are consistently coming down, while the overdose rate keeps climbing and the drug predominantly responsible is now heroin. But the proposals we hear are more of the same.

We need a calmer, more deliberate and thoughtful reassessment of our policy towards the use of both licit and illicit drugs. Calling it a “national emergency” is not the way to do that.

Last week, the Trump Justice Department announced that it would scrutinize colleges’ consideration of applicants’ race in their admissions decisions. The announcement suggests the DOJ’s current leadership believes school policies intended to boost enrollments of some minority groups violate anti-discrimination laws and improperly reduce admissions for other groups.

Over the weekend, Washington Post columnist Christine Emba responded that “Black People Aren’t Keeping White Americans Out of College. Rich People Are.” She argues that some wealthy parents “buy” their kids’ way into selective colleges when those kids don’t have strong applications. As a result, fewer seats are available for non-wealthy kids with stronger applications.

Regardless of what one might think of the consideration of race in the application process, one should understand that Emba’s analysis is incorrect. “Rich kid admissions” help non-rich kids to attend college, and reducing the number of enrolled rich kids would reduce the enrollment of other students, whatever their demographics.

Last year, Regulation published a pair of articles debating the Bennett hypothesis, the idea that colleges raise their tuition and fees whenever government increases college aid to students. One of the articles, by William & Mary economists Robert Archibald and David Feldman, includes an insightful discussion of the economics of college admissions and price setting (i.e., scholarship decisions).

Selective colleges practice what economists call price discrimination, in which admissions and prices are set with an eye to a student’s willingness (and ability) to pay – what schools politely call “need aware” admissions. Applicants with limited admission prospects but wealthy parents may be admitted, but they will be charged a high price. These are the kids and parents who pay the staggering $50,000+ a year “list price” that selective private schools are quick to say few of their students pay. Most other enrollees, on the other hand, had applications that admissions officers considered more desirable but less willingness to pay, so they were awarded scholarships, i.e., large price discounts. The discounts, in turn, are financed in part by the high prices paid by the rich kids and their parents.

Archibald and Feldman explain:

In order to meet revenue and enrollment goals, almost all selective programs admit and enroll students with lower admission ratings [than their ideal applicants]. Knowing the odds of enrolling students with successively lower admission ratings, schools can eventually craft a class with the highest possible average admission rating that satisfies the tuition revenue requirement while filling the seats in the entering class. In its enrollment decisions, a school may find that many of its [mid-tier applicants] have a higher willingness to pay than many or most of the [top tier]. These lower-ranked applicants have fewer opportunities to earn merit scholarships at more selective schools, and many come from high-income families that do not qualify for need-based aid. For some schools this means that a student from the [mid tier] with a very high willingness to pay may get preference over a student from [an upper tier] with a very low willingness to pay.

If the rich kids were denied admission, fewer non-rich kids would gain admission because schools would have less money to subsidize them. And the non-rich students who did attend would have to pay higher prices because, again, there would be less money for discounts.
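A stylized example, with invented round numbers rather than any school’s actual budget, makes the cross-subsidy concrete. Suppose a college must average $30,000 per seat across 100 seats, a $3,000,000 revenue requirement:

    \underbrace{20 \times \$50{,}000}_{\text{full payers}} = \$1{,}000{,}000 \quad\Rightarrow\quad \frac{\$3{,}000{,}000 - \$1{,}000{,}000}{80} = \$25{,}000\ \text{per discounted seat}

Turn the 20 full payers away and the same $3,000,000 must come from 100 discounted seats, raising the average price for everyone else to $30,000 – a $5,000 cut in each non-rich student’s effective scholarship.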

It may be frustrating that rich parents buy their kids’ way into college. But it would be far more frustrating if many of the non-rich kids who benefit from those payments were to lose their way into selective schools. So, contra Emba, rich kids aren’t taking seats away from non-rich kids; they’re helping to put non-rich kids – black and white – through college.

So far, throughout this primer, I’ve claimed that central banks have one overarching task to perform:  their job, I said, is to “regulate the overall availability of liquid assets, and through it the general course of spending, prices, and employment, in the economies they oversee.” I’ve also shown how, prior to the recent crisis, the Fed pursued this task, sometimes competently, and sometimes ineptly, by means of “open-market operations,” meaning routine purchases (and occasional sales) of short-term Treasury securities.

But this picture isn’t complete, because it says nothing about central banks’ role as “lenders of last resort.” It overlooks, in other words, the part they play as institutions to which particular private-market firms, and banks especially, can turn for support when they find themselves short of cash, and can’t borrow it from private sources.

For many, the “lender of last resort” role of central banks is an indispensable complement to their task of regulating the overall course of spending. Unless central banks play that distinct role, it is said, financial panics will occasionally play havoc with nations’ monetary systems.

Eventually I plan to challenge this way of thinking. But first we must consider the reasoning behind it.

The Conventional Theory of Panics

The conventional view rests on the belief that fractional-reserve banking systems are inherently fragile. That’s so, the argument goes, because, unless it’s squelched at once, disquiet about any one bank or small group of banks will spread rapidly and indiscriminately to others. The tendency is only natural, since most people don’t know exactly what  their banks have been up to. For that reason, upon hearing that any bank is in trouble, people have reason to fear that their own banks may also be in hot water.

Because it’s better to be safe than sorry, worried depositors will try to get their money back, and — since banks have only limited cash reserves — the sooner the better. So fear of bank failures leads to widespread bank runs. Unless besieged banks can borrow enough cash to cover panicking customers’ withdrawals, the runs will ruin them. Yet the more widespread the panic, the harder it is for affected banks to secure private-market credit; if it spreads widely enough, the whole banking system can end up going belly-up.

An alert lender of last resort can avoid that catastrophic outcome, while also keeping sound banks afloat, by throwing a lifeline, consisting of a standing offer of emergency support, to any solvent bank that’s threatened by a run. Ideally, the standing offer alone should suffice to bolster depositors’ confidence, so that in practice there needn’t be all that much actual emergency central bank lending after all.[1]

It’s a Wonderful Theory

A striking feature of this common understanding is its depiction of a gossamer-like banking system, so frail that the merest whiff of trouble is enough to bring it crashing down. At the very least, the depiction suggests that any banking system lacking a trustworthy lender of last resort, or its equivalent, is bound to be periodically ravaged by financial panics.

And therein lies a problem. For however much it may appeal to one’s intuition, the conventional theory of banking panics is simply not consistent with the historical record.  Among other things, that record shows

  • that banks seldom fail simply because panicking depositors rush to get their money out. Instead, runs are almost always “information based,” with depositors rushing to get money out of banks that got themselves in hot water beforehand;
  • that individual bank runs and failures generally aren’t “contagious.”  Although trouble at one bank can lead to runs on banks that are affiliated with the first bank, or ones that are known to be among that bank’s important creditors, panic seldom if ever spreads to other banks that would otherwise be unscathed by the first bank’s failure;
  • that, while isolated bank failures, including failures of important banks, have occurred in all historical banking systems, system-wide banking crises have generally been relatively rare events, though they have been much more common in some banking systems than in others;
  • that the lack of a central bank or other lender of last resort is not a good predictor of whether a  banking system will be especially crisis-prone; and
  • that the lack of heavy-handed banking regulations is also a poor predictor of the frequency of banking crises. Instead, some heavily-regulated banking systems have endured crisis after crisis, while some of the least regulated systems have been famously crisis-free.

That the conventional theory of banking panics is not easily reconciled with historical experience may help to explain why its proponents often illustrate it, as Ben Bernanke did in the first of a series of lectures he gave on the subprime crisis, not by instancing some real-world bank run, but by referring to the run on the Bailey Bros. Building & Loan in “It’s a Wonderful Life”! In the movie, although George Bailey’s bank is fundamentally sound, it suffers a run when word gets out that Bailey’s absent-minded Uncle Billy mislaid $8,000 of the otherwise solvent bank’s cash.

The Richmond Fed’s Tim Sablik likewise treats Frank Capra’s Christmas-movie bank run as exhibit A in his account of what transpired during the 2007-8 financial crisis:

George Bailey is en route to his honeymoon when he sees a crowd gathered outside his family business …. He finds that the people are depositors looking to pull their money out because they fear that the Building and Loan might fail before they get the chance. His bank is in the midst of a run.

Bailey tries, unsuccessfully, to explain to the members of the crowd that their deposits aren’t all sitting in a vault at the bank — they have been loaned out to other individuals and businesses in town. If they are just patient, they will get their money back in time. In financial terms, he’s telling them that the Building and Loan is solvent but temporarily illiquid. The crowd is not convinced, however, and Bailey ends up using the money he had saved for his honeymoon to supplement the Building and Loan’s cash holdings and meet depositor demand…

As the movie hints at, the liquidity risk that banks face arises, at least to some extent, from the services they provide. At their core, banks serve as intermediaries between savers and borrowers. Banks take on short-maturity, liquid liabilities like deposits to make loans, which have a longer maturity and are less liquid. This maturity and liquidity transformation allows banks to take advantage of the interest rate spread between their short-term liabilities and their long-term assets to earn a profit. But it means banks cannot quickly convert their assets into something liquid like cash to meet a sudden increase in demand on their liability side. Banks typically hold some cash in reserve in order to meet small fluctuations in demand, but not enough to fulfill all obligations at once.

There you have it: banks by their very nature are vulnerable to runs. Hence banking systems are inherently vulnerable to crises. Hence crises like that of 2007-8. Hence the need for a lender of last resort (or something equivalent, like government deposit insurance) to keep perfectly sound banks from being gutted by panic-stricken clients.
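To make that conventional logic concrete, here is a minimal numeric sketch in Python (the balance-sheet figures are illustrative assumptions, not data on any actual bank):

    def run_shortfall(deposits, reserve_ratio, share_withdrawing):
        """Cash shortfall when a fraction of deposits is demanded at once."""
        reserves = deposits * reserve_ratio        # cash on hand
        demanded = deposits * share_withdrawing    # withdrawals to honor
        return max(0.0, demanded - reserves)

    # A bank with $100 million in deposits and a 10 percent reserve survives
    # a 10 percent withdrawal, but a 25 percent run leaves a $15 million hole
    # it must borrow to fill -- or fail:
    print(run_shortfall(100e6, 0.10, 0.10))   # 0.0
    print(run_shortfall(100e6, 0.10, 0.25))   # 15000000.0

On the conventional view, the lender of last resort exists precisely to supply that $15 million against good collateral before the shortfall becomes a failure.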

But is that really all there is to it? Were the runs of 2007-8 triggered by nothing more than some minor banking peccadilloes, if not by mere unfounded fears? Not by any stretch! For starters, the most destructive runs that took place during the 2007-8 crisis were runs, not on ordinary (commercial) banks or thrifts (like George Bailey’s outfit), but on non-bank financial intermediaries, a.k.a. “shadow banks,” including big investment banks such as Bear Stearns and Lehman Brothers, and money-market mutual funds, such as Reserve Primary Fund.

Far from having been random or inspired by sheer panic, all of these runs were clearly information based: Bear and Lehman were both highly leveraged and heavily exposed to subprime mortgage losses when the market for such mortgages collapsed, while Reserve Primary — the money market fund that suffered most in the crisis — was heavily invested in Lehman Brothers’ commercial paper.

As for genuine bank runs, there were just five of them in all, and every one was triggered by well-founded news that the banks involved — Countrywide, IndyMac, Washington Mutual, Wachovia, and Citigroup — had suffered heavy losses in connection with risky mortgage lending. Indeed, with the possible exception of Wachovia, the banks were almost certainly insolvent when the runs on them began. To suggest that these banks were as innocent of improprieties, and as little deserving of failure, as the fictitious Bailey Bros. Building and Loan, is worse than misleading: it’s grotesque.

Not having been random, the runs of 2007-8 also weren’t contagious. The short-term funds siphoned from failing investment banks and money market funds went elsewhere. Relatively safe “Treasuries only” money market funds, for example, gained at riskier funds’ expense. The same thing happened in banking: for every bank that was perceived to be in trouble, many others were understood to be sound. Instead of being removed, as paper currency, from the banking system, deposits migrated from weaker to stronger banks, such as JP Morgan, Wells Fargo, and BB&T. While a few bad apples tried to fend off runs, in part by seeking public support, other banks struggled to cope with unexpected cash inflows.

Yet because the runs were front-page news, and the corresponding redeposits weren’t, it was easy for many to believe that a general panic had taken hold. That sound and unsound banks alike were forced to accept TARP bailout money only reinforced this wrong impression. Evidently we have traveled far from the quaint hamlet of Bedford Falls, where George Bailey’s bank nearly went belly-up.

Nor were we ever really there. During the Great Depression, for example, most of the banks that failed, including those that faced runs, were rural banks that suffered heavy losses as fallen crop prices and land values caused farmers to default on their loans. Few if any unquestionably solvent banks failed, and bank run contagions, with panic spreading from unsound to sound banks, were far less common than is often supposed. Even the widespread cash withdrawals of February and early March, 1933, which led FDR to proclaim a national bank holiday, weren’t proof of any general loss of confidence in banks. Instead, they reflected growing fears that FDR planned to devalue the dollar upon taking office. Those fears in turn led bank depositors to cash in their deposits for Federal Reserve notes, in order to convert those notes into gold. What looked like distrust of commercial banks’ ability to keep their promises was really distrust of the U.S. government’s ability to keep its promises.

Regulate, Have Crisis, Repeat

If bank runs are mainly a threat to under-diversified or badly-managed banks, it’s no less the case that banking crises, in which relatively large numbers of banks all find themselves in hot water at the same time, are mainly a problem in badly-regulated banking systems. To find proof of this claim, one only has to compare the records, both recent and historical, of different banking systems. Do that and you’ll see that, while some systems have been especially vulnerable to crises, others have been relatively crisis free. Any theory of banking crisis that can’t account for these varying experiences is one that ought not to be trusted.

But just how can one account for the different experiences? The conventional theory of panics implies that the more crisis-prone systems must have lacked a lender of last resort or deposit insurance (which also serves to discourage runs) or both. It may also be tempting to assume that they lacked substantial restrictions upon the activities banks could engage in, the interest rates they could charge and offer, the places where they could do business, and other aspects of the banking business.

Wrong; and wrong again. Central banks, deposit insurance, and relatively heavy-handed prudential regulations aren’t the things that distinguished history’s relatively robust banking systems from their crisis-prone counterparts. On the contrary: central bank misconduct, the perverse incentives created by both explicit and implicit deposit guarantees, and misguided restrictions on banking activities including barriers to branch banking, portfolio restrictions, and mandated business structures, have been among the more important causes of banking-system instability. Some of the most famously stable banking systems of the past, on the other hand, lacked either central banks or deposit insurance, and placed relatively few limits on what banks were allowed to do.

Northern Exposures

It would take a treatise to review the whole, gruesome history of financial crises for the sake of revealing how unnecessary and ill-considered, if not corrupt, regulations of all sorts contributed to  every one of them.[2] For our little primer we must instead settle for four especially revealing case studies: those of the U.S. and Canada on the one hand and of England and Scotland on the other. The banking systems of Scotland between 1772 and 1845 and Canada from 1870 to 1914 and again from 1919  until 1935 were remarkably free of both crises and government interference. In comparison, the neighboring banking systems of England and the United States were both more heavily regulated and more frequently stricken by crises.

To return to the Great Depression: in the U.S. between 1930 and 1933, some 9,000 mostly rural banks failed. That impressive record of failure could never have occurred had it not been for laws that prevented almost all U.S. banks from opening branch offices, either in their home states or elsewhere. The result was a tremendous number of mostly tiny and severely under-diversified banks.

Canada’s banks, in contrast, were allowed to establish nationwide branch networks. Consequently, not a single Canadian bank failed during the 1930s, despite the fact that Canada had no central bank until 1935, and no deposit insurance until 1967, and also despite the fact that Canada’s depression was especially severe. The few U.S. states that allowed branch banking also had significantly lower bank failure rates.

Comparing the performance of the Canadian and U.S. banking  systems between 1870 and 1914 tells a similar story. Although the U.S. didn’t yet have a central bank, and so was at least free of that particular source of financial instability (yes, you read that last clause correctly), thanks to other kinds of government intervention in banking, and especially to barriers to branch banking and to banks’ ability to issue circulating notes put in place during the Civil War, the U.S. system was shaken by one financial crisis after another. Yet during the same period Canada, which also had no central bank, but which didn’t subject its commercial banks to such restrictions,  avoided serious banking crises.

Although naturally different in its details, the Scotland-vs.-England story is remarkably similar in its broadest brushstrokes. Scotland’s banks, like Canada’s, were generally left alone, while in England privileges were heaped upon the Bank of England, leaving other banks enfeebled and at its mercy. In particular, between 1709 and 1826, the so-called “six partner rule” allowed only small partnerships to issue banknotes, effectively granting the Bank of England a monopoly of public or “joint stock” banking. In an 1826 Parliamentary speech Robert Jenkinson, the 2nd Lord Liverpool, described the system as one having “not one recommendation to stand on.” It was, he continued, a system

of the fullest liberty as to what was rotten and bad; but of the most complete restriction, as to all that was good. By it, a cobbler or a cheesemonger, without any proof of his ability to meet them, might issue his notes, unrestricted by any check whatever; while, on the other hand, more than six persons, however respectable, were not permitted to become partners in a bank, with whose notes the whole business of a country might be transacted. Altogether, this system was one so absurd, both in theory and practice, that it would not appear to deserve the slightest support, if it was attentively considered, even for a single moment.

Liverpool made these remarks in the wake of the financial panic that struck Great Britain in 1825, putting roughly 10 percent of the note-issuing cobblers and cheesemongers of England and Wales out of business. Yet in Scotland, where the six-partner rule didn’t apply, that same panic caused nary a ripple.

Although Scotland and Canada offer the most well-known instances of relatively unregulated and stable banking systems, other free banking experiences also lend at least some support to the thesis that those governments that governed their banks least often governed them pretty darn well.

Bagehot Bowdlerized

The aforementioned Panic of 1825 was one of the first instances, if not the first instance, in which the Bank of England served as a “lender of last resort,” albeit too late to avert the crisis. It was that intervention by the Bank, as well as the lending it did during the Overend-Gurney crisis of 1866, that inspired Walter Bagehot to formulate, in his 1873 book Lombard Street, his now-famous “classical” rule of last-resort lending, to wit: that when faced with a crisis, the Bank of England should lend freely, while taking care to charge a relatively high rate for its loans, and to secure them by pledging “good banking securities.”

Nowadays central bankers like to credit Bagehot for the modern understanding that every nation, or perhaps every group of nations, must have a central bank that serves as a lender of last resort to rescue it from crises. Were that actually Bagehot’s view, he might be grateful for the recognition if only he could hear it.  In fact he’s more likely to be spinning in his grave.

How come? Because far from having been a fan of the Bank of England, or (by implication) of central banks more generally, Bagehot, like Lord Liverpool, considered the Bank of England’s monopoly privileges the fundamental cause of British financial instability. Contrasting England’s “one reserve” system, which was a byproduct of the Bank of England’s privileged status, with a “natural,” “many-reserve” system, like the Scottish system (especially before the Bank Act of 1845 thoughtlessly placed English-style limits on Scottish banks’ freedom to issue notes), Bagehot unequivocally preferred the latter. That is, he preferred a system in which no bank was so exalted as to be capable of serving as a lender of last resort, because he was quite certain that such a system had no need for a lender of last resort!

Why, then, did Bagehot bother to offer his famous formula for last-resort lending? Simply: because he saw no hope, in 1873, of having the Bank of England stripped of its destructive privileges. “I know it will be said,” he wrote in the concluding passages of Lombard Street,

that in this work I have pointed out a deep malady, and only suggested a superficial remedy. I have tediously insisted that the natural system of banking is that of many banks keeping their own cash reserve, with the penalty of failure before them if they neglect it. I have shown that our system is that of a single bank keeping the whole reserve under no effectual penalty of failure. And yet I propose to retain that system, and only attempt to mend and palliate it.

I can only reply that I propose to retain this system because I am quite sure that it is of no manner of use proposing to alter it… . You might as well, or better, try to alter the English monarchy and substitute a republic.

Perhaps today’s Bagehot-loving central bankers didn’t read those last pages. Or perhaps they read them, but preferred to forget them.

The Flexible Open-Market Alternative

If Great Britain was stuck with the Bank of England by 1873, as Bagehot believed, then we are no less stuck with the Fed, at least for the foreseeable future. And unless we can tame it properly, we may also be stuck with its “unnatural” capacity to destabilize the U.S. financial system, in part by being all too willing to rescue banks and other financial firms that have behaved recklessly, even to the point of becoming insolvent.

Consequently, getting the Fed to follow Bagehot’s classical last-resort lending rules may, for the time being, be our best hope for securing financial stability. But doing that is a lot easier said than done. For despite all the lip service central bankers pay to Bagehot’s rules, they tend to honor those rules more in the breach than in the observance. One need only consider the relatively recent history of the Fed’s last-resort lending operations, especially before 2003 (when it finally began setting a “penalty” discount rate) and during the subprime crisis, to uncover one flagrant violation of Bagehot’s basic principles after another.

There is, I believe, a better way to make the Fed abide by Bagehot’s rules for last-resort lending. Paradoxically, it would do away altogether with conventional central bank lending to troubled banks, and also with the conventional distinction between a central bank’s monetary policy operations and its emergency lending. Instead, it would make emergency lending an incidental and automatic extension of the Fed’s routine monetary policy operations, and specifically of what I call “flexible” open-market operations, or “Flexible OMOs,” for short.

The basic idea is simple. Under the Fed’s conventional pre-crisis set-up, only a score or so of so-called “primary dealers” took direct part in its routine open-market operations aimed at regulating the total supply of money and credit in the economy. Also, those operations were — again, traditionally — limited to purchases and sales of short-term U.S. Treasury securities. Consequently, access to the Fed’s routine liquidity auctions was very strictly limited. A bank in need of last-resort liquidity that was not a primary dealer, or even a primary dealer lacking short-term Treasury securities, would have to go elsewhere, meaning either to some private-market lender or to the Fed’s discount window, where to borrow was to risk being “stigmatized” as a bank that might well be in trouble.

“Flexible” open-market operations would instead allow any bank that might qualify for a Fed discount-window loan to take part, along with non-bank primary dealers, in its open-market credit auctions. It would also allow the Fed’s expanded set of counterparties to bid for credit at those auctions using, not just Treasury securities, but any of the marketable securities that presently qualify as collateral for discount-window loans, with the same margins or “haircuts” applied to relatively risky collateral as would apply were it pledged for a discount-window loan. A “product-mix” auction, such as the one the Bank of England has been using in its “Indexed Long-Term Repo Operations,” would allow multiple bids made using different kinds of securities to be handled efficiently, so that credit gets to the parties willing to pay the most for it.

So, instead of having a discount window for emergency loans, not to mention various ad-hoc lending programs, in addition to routine liquidity auctions for the conduct of “ordinary” monetary policy, the Fed would supply liquid funds through routine auctions only, while making those auctions sufficiently “flexible” to allow any illiquid financial institution with “good banking securities” to bid successfully for such funds. Thanks to this setup, the Fed would no longer have to concern itself with emergency lending as such. Its job would simply be to get the total amount of liquidity right, while leaving it to the competitive auction process to put that liquidity where it commands the highest value. In other words, it really would have no other duty save that of regulating “the overall availability of liquid assets, and through it the general course of spending, prices, and employment.”
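Here is a deliberately simplified sketch of how such a flexible auction might allocate a fixed amount of liquidity. The counterparty names, haircuts, and bid rates are all hypothetical, and the Bank of England’s actual product-mix auction is considerably more sophisticated; this only illustrates the basic mechanics:

    # Hypothetical haircuts by collateral class (steeper for riskier paper).
    HAIRCUTS = {"treasury": 0.02, "agency_mbs": 0.10, "corporate_bond": 0.20}

    def allocate(total_liquidity, bids):
        """Award funds to the highest bid rates, capped by collateral capacity.

        bids: list of (name, bid_rate, collateral_type, collateral_value)
        """
        awards, remaining = [], total_liquidity
        for name, rate, ctype, value in sorted(bids, key=lambda b: -b[1]):
            capacity = value * (1 - HAIRCUTS[ctype])  # haircut caps the loan
            amount = min(capacity, remaining)
            if amount > 0:
                awards.append((name, amount, rate))
                remaining -= amount
        return awards

    bids = [("Bank A", 0.030, "treasury", 40e6),
            ("Bank B", 0.035, "corporate_bond", 50e6),
            ("Bank C", 0.025, "agency_mbs", 60e6)]
    print(allocate(100e6, bids))   # highest bidders are funded first

Even in this toy version the point of the design is visible: an illiquid but solvent institution with good collateral simply bids for funds like everyone else, so there is no separate “emergency” facility and no stigma attached to using it.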


_____________________

[1] Deposit insurance can serve a function similar to that of having an alert lender of last resort. Most banking systems today rely on a combination of insurance and last-resort lending.

Diamond and Dybvig’s famous 1983 model — one of the most influential works in modern economics — is in essence a clever, formal presentation of the conventional wisdom, in which deposit insurance is treated as a solution to the problem of banking panics. For critical appraisals of Diamond and Dybvig, see Kevin Dowd, “Models of Banking Instability,” and chapter 6 of Lawrence White’s The Theory of Monetary Institutions.

[2] Although that badly-needed treatise has yet to be written, considerable chunks of the relevant record are covered in Charles W. Calomiris and Stephen Haber’s excellent 2014 work, Fragile by Design: The Political Origins of Banking Crises and Scarce Credit. I offer a much briefer survey of relevant evidence, including evidence of the harmful consequences of governments’ involvement in the regulation and monopolization of paper currency, in chapter 3 of Money: Free and Unfree. I express my (mostly minor) differences with Calomiris and Haber here.

[Cross-posted from Alt-M.org]

Illegal immigration is at its lowest point since the Great Depression. President Trump has claimed success, but nearly all of the decrease occurred under prior administrations. If anything, the president’s campaign rhetoric appears to have caused a small increase in illegal immigration before he assumed office. Because immigrants moved up their arrival dates by a few months, the typical spring surge in illegal entries failed to materialize. But these recent changes are small in the big picture: 98.2 percent of the reduction in illegal immigration from 1986 to 2017 occurred before Trump assumed office.

Naturally, illegal border crossings are difficult to measure. The only consistently reported data are the number of immigrants that Border Patrol catches attempting to cross. Border Patrol has concluded that the number of people who make it across is proportional to the number of people it catches. All else being equal, more apprehensions mean more total crossers. Of course, the agency could catch more people because it has deployed more agents. But we can control for the level of enforcement by focusing on the number of people each agent catches.
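Because the analysis below leans on this enforcement-adjusted measure, here is the entire calculation sketched in Python, using invented illustrative figures rather than the actual Border Patrol series:

    import numpy as np

    # Hypothetical apprehension counts and agent staffing by fiscal year.
    years         = np.array([   1986,      2000,    2005,    2012,    2016])
    apprehensions = np.array([1_600_000, 1_200_000, 700_000, 360_000, 330_000])
    agents        = np.array([   4_000,     9_000,  11_000,  21_000,  20_000])

    per_agent = apprehensions / agents   # enforcement-adjusted flow

    # Fit the kind of exponential trendline shown in Figures 2 and 3 by
    # regressing log(per_agent) on time: per_agent ~ a * exp(b * t).
    b, log_a = np.polyfit(years - years[0], np.log(per_agent), 1)
    print(per_agent.round(1))
    print(f"implied average annual decline: {1 - np.exp(b):.1%}")

The ratio, not the raw apprehension count, is what Figures 1 through 3 plot; dividing by staffing keeps a bigger Border Patrol from being mistaken for a bigger flow.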

Figure 1 provides the number of people that each Border Patrol agent took into custody during each of the last 50 years. As it shows, illegal immigration peaked in the mid-1980s. From 1977 to 1986, each border agent apprehended almost 400 people per year. After the 1986 amnesty legislation, which authorized new agents and walls, the flow fell at a fairly steady rate. Following the bursting of the housing bubble, the 2009 recession, and the concomitant border buildup, the flow has essentially flatlined. In 2016, each border agent nabbed fewer than 17 people over the course of the entire year. That’s about one apprehension every three weeks of work. The “crisis” is over and has been for a decade.

Figure 1: Apprehensions Per Border Patrol Agent, FY 1957-2017

Sources: Apprehensions 1957-2016: Border Patrol; Apprehensions FY 2017 (projected from October-June data): Border Patrol; Border Patrol Staffing: Border Patrol, INS Statistical Yearbooks, and INS Annual Reports

Following Trump’s election, the flow did fall further, but this was mostly a continuation of the existing trend. Before Trump assumed office, there was a slight departure from the trend (represented as a dotted line in Figure 2), but this June’s apprehension figures are roughly where we would expect based on the last decade and a half of data. I interpret this to mean that we saw a Trump effect before he assumed office, when some additional asylum seekers and immigrants came to the border a few months ahead of schedule in fear of the changes that he might bring. But the effect dissipated after he assumed office.

Figure 2: Monthly Apprehensions Per Border Agent and Exponential Trendline, October 1999 to June 2017

Sources: Apprehensions FY 2000-16: Border Patrol; Apprehensions FY 2017: Border Patrol; Border Patrol Staffing: Border Patrol

Zooming in on the Obama and Trump administration months only reinforces the interpretation of a pre-election Trump effect (Figure 3). In every pre-Trump year, illegal flows spiked during the month of May or earlier (a phenomenon that goes back to at least 2000). Donald Trump launched his campaign in June 2015. Instead of waiting until the spring, immigrants started coming to the border during the winter months for the first time, peaking in December. In 2016, there was the typical spike in the spring, but then, after Trump won the Republican nomination, apprehensions rose quickly, peaking in November well above the spring numbers.

Figure 3: Monthly Apprehensions Per Agent and Exponential Trendline, January 2009 to June 2017

Sources: See Figure 2

There were 90,000 more apprehensions from August 2016 to January 2017 than in the pre-Trump period of August 2014 to January 2015. Assuming that this is a sign of a Trump effect, with immigrants moving their travel plans up roughly six months earlier than they otherwise would have, each month from February to June 2017 would have seen 15,000 or so more arrivals, and each month from August to January would have seen 15,000 fewer. This would place the first months of the new president’s tenure right about on the trend line from the Obama administration (orange line in Figure 3).

In this case, Trump is benefiting not so much from his current rhetoric or policies, but from his rhetoric on the campaign trail. Immigrants chose to come earlier than they would have, and so the normal spring rush failed to materialize. If this is the case, then it’s possible that apprehensions will return to the normal trend next year. Of course, the administration’s new policies may have started to make an impact by that point. Only time will tell.

Even if the normal trend returns, large-scale illegal immigration is over. Whoever deserves credit, the job is done. Congress should move on and start talking about real issues.

“Europe’s Taxes Aren’t as Progressive as Its Leaders Like to Think,” wrote the Wall Street Journal’s Joseph C. Sternberg yesterday. Citing tax expert Stefan Bach from the German Institute for Economic Research, Sternberg shows how Germany’s tax system is only mildly progressive overall. Sternberg therefore states that politicians need to “tackle” indirect taxation if they want to have a major impact on the economy.

Now, Sternberg is undoubtedly right that broad-based tax systems which incorporate social contributions and VATs tend to be less progressive than those which rely more heavily on progressive income taxes. That is, if we narrowly look at the effects of taxes alone, rather than government spending. But does it make any economic sense to look at a tax system in isolation?

Good economic theory would suggest that to the extent we care about progressivity and redistribution, revenues should be collected in the least distortionary way possible, with redistribution done via cash transfers. So judging the desirability of a tax system by its degree of progressivity is not a good starting point. From an economic perspective, the assessment should be how distortionary different taxation systems across the world are. European tax systems have huge problems in this regard, but their progressivity or otherwise should not be a major consideration.

The second and more important related point is that assessing progressivity should not seek to separate the issues of taxes from transfers. To judge progressivity, one must look at the position of households across the income spectrum after both, not least because one person’s taxes are (now or later) another person’s cash transfer.

I cannot find figures to do this for Germany, but am familiar with some headline UK and US stats.

Every year when the UK Office for National Statistics (ONS) releases its publication The Effects of Taxes and Benefits on Household Income, a lament similar to Sternberg’s arises. Calculating total taxes paid as a proportion of gross income (market income plus government cash transfers), critics of the tax system assert that the poorest quintile pay 35.0% of their gross income in taxes, on average, which is almost identical to the average 34.1% for the top quintile (2015/16 figures). Like Sternberg, many conclude that the tax system is not progressive enough.

Yet a few seconds’ thought about what these figures show highlights how misleading they are. Gross income (the denominator in the calculation) includes cash transfers, which are transfers from one group to another. That a household uses money redistributed to it to spend, in turn paying what the ONS describes as indirect taxes (things like VAT, beer duty, tobacco duty, the TV license and fuel duty), can hardly be described as “regressive”. This is akin to taking from Peter to pay Paul and then saying that – because Paul spends a large proportion of this money – the tax system is unfair.

Put simply, benefits don’t fall like manna from heaven. One person’s taxes are someone else’s cash transfers. That the tax system is not ultra-progressive then is not what matters – it’s what the overall tax AND transfers system does that counts.

Thought of in this way, we can calculate effective tax rates, which measure the net contribution of the average household in each income quintile as a proportion of its market income. This is the key question: how much of the income that you earn is taxed away and given to others? That is, how progressive is the taxpayer-funded welfare state?

Table 1 shows the poorest fifth of households in the UK on average actually face an effective tax rate (all taxes minus cash benefits, divided by earned income) of -34.1 per cent, while the richest fifth face an average rate of 31.8 per cent. This means that, for every £1 earned in market income, the average household in the poorest quintile is transferred another 34.1p in cash benefits, while the average household in the top quintile pays 31.8p in tax. The tax and cash transfers system, in other words, is very progressive.
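Stated compactly, with T for taxes paid, B for cash benefits received, and Y for market income (so gross income is Y + B), the two competing measures are:

    \text{ONS ratio} = \frac{T}{Y + B} \qquad \text{versus} \qquad \text{effective tax rate} = \frac{T - B}{Y}

For the poorest quintile, T is about 35 per cent of gross income, yet B exceeds T by 34.1 per cent of market income, so the first measure reads 35 per cent while the second reads -34.1 per cent. Only the second captures the direction in which resources actually flow.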

But even this excludes so-called “benefits in kind,” which the UK state provides in abundance, and which disproportionately benefit the poor. Once benefits in kind (education, healthcare, and subsidies for housing, rail and buses) are considered, these effective tax rates for the average household in the poorest and richest fifths become -140.2 per cent and 25.3 per cent respectively (see Table 2).

Now, the UK figures are not completely comprehensive. Unlike the US figures below, they do not seek to assign the impact of corporate income taxes on workers across the income distribution. They also exclude the cash-value benefits of other public goods, such as defense, law and order, and the courts which uphold property rights, where there is an argument that the rich benefit disproportionately (though development work stresses the importance of property rights for the poor too). But overall, it’s clear that welfare states are hugely redistributive.

Can similar figures be found for the US? The best comparator figures I can find come from the CBO’s June 2016 report, “The Distribution of Household Income and Federal Taxes, 2013.” There are three important differences in methodology from the UK figures, which mean the US numbers are likely to look more progressive on the surface: the quintiles are assigned by market income, rather than disposable income; the transfers include transfers from state and local programs but only federal taxes (and sales taxes tend to be more regressive); and the figures presented include in-kind assistance, meaning they are closer in methodology to Table 2 than Table 1. But the results show the same trend.

The average household in the bottom quintile receives around $2.90 in cash transfers or in-kind benefits for every $1 earned, against the top quintile, which faced an effective tax rate of 24.8%. Interestingly, as Greg Mankiw noted before, middle-income Americans have shifted since 1979 from being, on average, net contributors to net beneficiaries under this measure.

Of course, averages hide a lot of information. Government programs redistribute heavily to those with children and to old people. But if we are going to assess crude measures of progressivity by looking across the income spectrum, it makes sense to include transfers too.

In conclusion, there are many problems with tax systems here and in Europe. But aiming to make them more progressive should not be an underlying economic aim. To the extent that redistribution is considered a valid goal, it should be undertaken through spending, and these stats above show countries such as the US and UK are already hugely redistributive or “progressive” in this regard. 
