Cato Op-Eds

Individual Liberty, Free Markets, and Peace

CSBA’s Katherine Blakeley has published a brief but highly informative analysis of the prospects for a major military spending boost.

Bottom line up front: The combination of “procedural and political hurdles” in Congress makes an increase along the lines of what the Trump administration requested (approx. $54 billion) unlikely. The substantially larger increases passed out of the House and Senate Armed Services Committees (roughly $30–33 billion more than the president’s request) seem even more fanciful.

Blakeley concludes:

The wide gulfs between the political parties, and between the defense hawks and the fiscal hawks, will not be closed soon. Additionally, the full legislative calendar of the Congress before September 30, 2017, including Obamacare repeal, FY 2018 appropriations, and an impending debt ceiling debate, increase the likelihood that FY 2018 will begin with a several-month-long continuing resolution, rather than a substantial increase in defense spending.  

This aligns with what I’ve suspected all along – but Blakeley provides critical details to back up her conclusions.

For years now, we’ve heard defense hawks say that adequately funding the defense budget shouldn’t be a struggle for a country as wealthy as the United States. A mere 4 percent of GDP, for example, should be a piece of cake. And, at one level, that is absolutely correct. It should be easy. But when you dig into it, as Blakeley has done, you discover that even 3 percent is a real struggle. After all, $50 billion – a rounding error in a $19 trillion economy – threatened to bring the entire budget process to a screeching halt in late June, and may do so again.
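The “rounding error” point is easy to verify with the approximate figures cited above (these are the text’s round numbers, not official statistics):

```python
# Back-of-the-envelope check of the magnitudes cited above.
gdp = 19_000_000_000_000             # ~$19 trillion U.S. economy
requested_increase = 54_000_000_000  # ~$54 billion administration request

share = requested_increase / gdp
print(f"Requested increase as share of GDP: {share:.4%}")  # ~0.28% of GDP
```

Less than three-tenths of one percent of national output, in other words, is enough to stall the entire budget process.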

If and when a final budget deal is hammered out, the Pentagon’s Overseas Contingency Operations (OCO) account may provide at least some of the additional billions that the HASC and the SASC want. Because OCO is exempted from the bipartisan Budget Control Act’s spending caps, additional defense dollars do not have to come at the expense of non-defense discretionary spending, as President Trump’s budget proposed.

But many billions from the Pentagon’s base budget (i.e. non-war spending) have been shoved into the OCO for years now, and the gimmick is starting to wear thin – after all, the wars in Iraq and Afghanistan peaked years ago. The voices in Congress and beyond who pushed the BCA in the first place, and who remain committed to reducing the deficit (e.g. current OMB chief Mick Mulvaney), are likely to feel that they’re being played.

The defense vs. non-defense spending debate is, and always has been, about politics, not math. And it isn’t obvious that the Pentagon will win this political battle. Given this uncertainty, we should adapt our military’s objectives to the means available to achieve them. We should prioritize U.S. security and defending vital national interests, and approach foreign adventures that don’t advance these interests with great caution. Expecting our soldiers, sailors, airmen and Marines to do the same – or more – with less money isn’t fair to them, and isn’t likely to work.

The rising opioid overdose death rate is a serious problem and deserves serious attention. Yesterday, during his working vacation, President Trump convened a group of experts to give him a briefing on the issue and to suggest further action. Some, like New Jersey Governor Chris Christie, who heads the White House Drug Addiction Task Force, are calling for him to declare a “national public health emergency.” But calling it a “national emergency” is not helpful. It only fosters an air of panic, which all too often leads to hastily conceived policy decisions that are not evidence-based and have deleterious unintended consequences.

While most states have made the opioid overdose antidote naloxone more readily available to patients and first responders, policies have mainly targeted health care practitioners trying to help patients suffering from genuine pain, along with efforts to cut back on the legal manufacture of opioid drugs.

For example, 49 states have established Prescription Drug Monitoring Programs (PDMPs) that monitor the prescriptions written by providers and filled by patients. These programs are aimed at getting physicians to reduce their prescription rates so they are not “outliers” in comparison with their peers. And they alert prescribers to patients who have filled multiple prescriptions within a given timeframe. In some states, opioid prescriptions for most conditions are limited to a 7-day supply.

The Drug Enforcement Administration continues to seek ways to reduce the number of opioids produced legally, hoping to negatively impact the supply to the illegal market.

Meanwhile, as patients suffer needlessly, many in desperation seek relief in the illegal market where they are exposed to dangerous, often adulterated or tainted drugs, and oftentimes to heroin.

The CDC has reported that opioid prescriptions are consistently coming down, while the overdose rate keeps climbing and the drug predominantly responsible is now heroin. But the proposals we hear are more of the same.

We need a calmer, more deliberate and thoughtful reassessment of our policy towards the use of both licit and illicit drugs. Calling it a “national emergency” is not the way to do that.

Last week, the Trump Justice Department announced that it would scrutinize colleges’ consideration of applicants’ race in their admissions decisions. The announcement suggests the DOJ’s current leadership believes school policies intended to boost enrollments of some minority groups violate anti-discrimination laws and improperly reduce admissions for other groups.

Over the weekend, Washington Post columnist Christine Emba responded that “Black People Aren’t Keeping White Americans Out of College. Rich People Are.” She argues that some wealthy parents “buy” their kids’ way into selective colleges when those kids don’t have strong applications. As a result, fewer seats are available for non-wealthy kids with stronger applications.

Regardless of what one might think of the consideration of race in the application process, one should understand that Emba’s analysis is incorrect. “Rich kid admissions” help non-rich kids to attend college, and reducing the number of enrolled rich kids would reduce the enrollment of other students, whatever their demographics.

Last year, Regulation published a pair of articles debating the Bennett hypothesis, the idea that colleges raise their tuition and fees whenever government increases college aid to students. One of the articles, by William & Mary economists Robert Archibald and David Feldman, includes an insightful discussion of the economics of college admissions and price setting (i.e., scholarship decisions).

Selective colleges practice what economists call price discrimination, in which admissions and prices are set with an eye to a student’s willingness (and ability) to pay–what schools politely call “need aware” admissions. Applicants with limited admission prospects but who have wealthy parents may be admitted, but they will be charged a high price. These are the kids and parents who pay the staggering $50,000+ a year “list price” that selective private schools are quick to say that few of their students pay. Most other enrollees, on the other hand, had applications that admissions officers considered more desirable, but the students had less willingness to pay, so they were awarded scholarships, i.e., large price discounts. The discounts, in turn, are financed in part by the high prices paid by the rich kids and their parents.

Archibald and Feldman explain:

In order to meet revenue and enrollment goals, almost all selective programs admit and enroll students with lower admission ratings [than their ideal applicants]. Knowing the odds of enrolling students with successively lower admission ratings, schools can eventually craft a class with the highest possible average admission rating that satisfies the tuition revenue requirement while filling the seats in the entering class. In its enrollment decisions, a school may find that many of its [mid-tier applicants] have a higher willingness to pay than many or most of the [top tier]. These lower-ranked applicants have fewer opportunities to earn merit scholarships at more selective schools, and many come from high-income families that do not qualify for need-based aid. For some schools this means that a student from the [mid tier] with a very high willingness to pay may get preference over a student from [an upper tier] with a very low willingness to pay.

If the rich kids were denied admission, then fewer non-rich kids would gain admission, because schools would have less money to subsidize them. And the students who did attend would have to pay higher prices because, again, there would be less money to subsidize them.
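The cross-subsidy arithmetic can be seen in a deliberately simplified sketch. All figures below are hypothetical, invented purely to illustrate the logic; no real school’s numbers are used:

```python
# Hypothetical figures illustrating how full-pay students finance discounts.
list_price = 50_000     # "sticker" price paid by full-pay students
seats = 100
cost_per_seat = 30_000  # average revenue the school needs per enrolled student

def discounted_price(full_payers):
    """Average price the aided students must pay for the class to break even."""
    aided = seats - full_payers
    revenue_needed = seats * cost_per_seat
    revenue_from_full_pay = full_payers * list_price
    return (revenue_needed - revenue_from_full_pay) / aided

print(discounted_price(40))  # 40 full payers: aided students pay ~$16,667 each
print(discounted_price(20))  # only 20: aided students must pay $25,000 each
```

Cut the number of full payers in half, and the price every aided student faces rises by half: exactly the mechanism Archibald and Feldman describe.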

It may be frustrating that rich parents buy their kids’ way into college. But it would be far more frustrating if many of the non-rich kids who benefit from those payments were to lose their way into selective schools. So, contra Emba, rich kids aren’t taking seats away from non-rich kids, they’re helping to put non-rich kids–black and white–through college.

So far, throughout this primer, I’ve claimed that central banks have one overarching task to perform:  their job, I said, is to “regulate the overall availability of liquid assets, and through it the general course of spending, prices, and employment, in the economies they oversee.” I’ve also shown how, prior to the recent crisis, the Fed pursued this task, sometimes competently, and sometimes ineptly, by means of “open-market operations,” meaning routine purchases (and occasional sales) of short-term Treasury securities.

But this picture isn’t complete, because it says nothing about central banks’ role as “lenders of last resort.” It overlooks, in other words, the part they play as institutions to which particular private-market firms, and banks especially, can turn for support when they find themselves short of cash, and can’t borrow it from private sources.

For many, the “lender of last resort” role of central banks is an indispensable complement to their task of regulating the overall course of spending. Unless central banks play that distinct role, it is said, financial panics will occasionally play havoc with nations’ monetary systems.

Eventually I plan to challenge this way of thinking. But first we must consider the reasoning behind it.

The Conventional Theory of Panics

The conventional view rests on the belief that fractional-reserve banking systems are inherently fragile. That’s so, the argument goes, because, unless it’s squelched at once, disquiet about any one bank or small group of banks will spread rapidly and indiscriminately to others. The tendency is only natural, since most people don’t know exactly what their banks have been up to. For that reason, upon hearing that any bank is in trouble, people have reason to fear that their own banks may also be in hot water.

Because it’s better to be safe than sorry, worried depositors will try to get their money back, and — since banks have only limited cash reserves — the sooner the better. So fear of bank failures leads to widespread bank runs. Unless besieged banks can borrow enough cash to cover panicking customers’ withdrawals, the runs will ruin them. Yet the more widespread the panic, the harder it is for affected banks to secure private-market credit; if it spreads widely enough, the whole banking system can end up going belly-up.

An alert lender of last resort can avoid that catastrophic outcome, while also keeping sound banks afloat, by throwing a lifeline, consisting of a standing offer of emergency support, to any solvent bank that’s threatened by a run. Ideally, the standing offer alone should suffice to bolster depositors’ confidence, so that in practice there needn’t be all that much actual emergency central bank lending after all.[1]

It’s a Wonderful Theory

A striking feature of this common understanding is its depiction of a gossamer-like banking system, so frail that the merest whiff of trouble is enough to bring it crashing down. At the very least, the depiction suggests that any banking system lacking a trustworthy lender of last resort, or its equivalent, is bound to be periodically ravaged by financial panics.

And therein lies a problem. For however much it may appeal to one’s intuition, the conventional theory of banking panics is simply not consistent with the historical record.  Among other things, that record shows

  • that banks seldom fail simply because panicking depositors rush to get their money out. Instead, runs are almost always “information based,” with depositors rushing to get money out of banks that got themselves in hot water beforehand;
  • that individual bank runs and failures generally aren’t “contagious.”  Although trouble at one bank can lead to runs on banks that are affiliated with the first bank, or ones that are known to be among that bank’s important creditors, panic seldom if ever spreads to other banks that would otherwise be unscathed by the first bank’s failure;
  • that, while isolated bank failures, including failures of important banks, have occurred in all historical banking systems, system-wide banking crises have generally been relatively rare events, though they have been much more common in some banking systems than in others;
  • that the lack of a central bank or other lender of last resort is not a good predictor of whether a  banking system will be especially crisis-prone; and
  • that the lack of heavy-handed banking regulations is also a poor predictor of the frequency of banking crises. Instead, some heavily-regulated banking systems have endured crisis after crisis, while some of the least regulated systems have been famously crisis-free.

That the conventional theory of banking panics is not easily reconciled with historical experience may help to explain why its proponents often illustrate it, as Ben Bernanke did in the first of a series of lectures he gave on the subprime crisis, not by citing some real-world bank run, but by referring to the run on the Bailey Bros. Building & Loan in “It’s a Wonderful Life”! In the movie, although George Bailey’s bank is fundamentally sound, it suffers a run when word gets out that Bailey’s absent-minded Uncle Billy mislaid $8,000 of the otherwise solvent bank’s cash.

The Richmond Fed’s Tim Sablik likewise treats Frank Capra’s Christmas-movie bank run as exhibit A in his account of what transpired during the 2007-8 financial crisis:

George Bailey is en route to his honeymoon when he sees a crowd gathered outside his family business …. He finds that the people are depositors looking to pull their money out because they fear that the Building and Loan might fail before they get the chance. His bank is in the midst of a run.

Bailey tries, unsuccessfully, to explain to the members of the crowd that their deposits aren’t all sitting in a vault at the bank — they have been loaned out to other individuals and businesses in town. If they are just patient, they will get their money back in time. In financial terms, he’s telling them that the Building and Loan is solvent but temporarily illiquid. The crowd is not convinced, however, and Bailey ends up using the money he had saved for his honeymoon to supplement the Building and Loan’s cash holdings and meet depositor demand…

As the movie hints at, the liquidity risk that banks face arises, at least to some extent, from the services they provide. At their core, banks serve as intermediaries between savers and borrowers. Banks take on short-maturity, liquid liabilities like deposits to make loans, which have a longer maturity and are less liquid. This maturity and liquidity transformation allows banks to take advantage of the interest rate spread between their short-term liabilities and their long-term assets to earn a profit. But it means banks cannot quickly convert their assets into something liquid like cash to meet a sudden increase in demand on their liability side. Banks typically hold some cash in reserve in order to meet small fluctuations in demand, but not enough to fulfill all obligations at once.

There you have it: banks by their very nature are vulnerable to runs. Hence banking systems are inherently vulnerable to crises. Hence crises like that of 2007-8. Hence the need for a lender of last resort (or something equivalent, like government deposit insurance) to keep perfectly sound banks from being gutted by panic-stricken clients.
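The solvent-but-illiquid logic just described can be put in miniature with a toy balance sheet. The numbers below are invented purely for illustration:

```python
# Stylized fractional-reserve balance sheet (all numbers invented).
deposits = 1_000_000               # short-term liabilities, withdrawable on demand
reserve_ratio = 0.10               # share of deposits held as liquid cash
reserves = deposits * reserve_ratio
loans = deposits - reserves        # long-term, illiquid assets

def can_meet(withdrawals):
    """True if the bank's cash reserves cover a sudden withdrawal demand."""
    return withdrawals <= reserves

print(can_meet(80_000))    # an ordinary day's withdrawals: True
print(can_meet(300_000))   # a run: assets cover liabilities, but cash doesn't: False
```

The bank’s assets still cover its liabilities, so it is solvent; but only a tenth of them are cash, so a large enough wave of withdrawals sinks it anyway.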

But is that really all there is to it? Were the runs of 2007-8 triggered by nothing more than some minor banking peccadilloes, if not by mere unfounded fears? Not by any stretch! For starters, the most destructive runs that took place during the 2007-8 crisis were runs, not on ordinary (commercial) banks or thrifts (like George Bailey’s outfit), but on non-bank financial intermediaries, a.k.a. “shadow banks,” including big investment banks such as Bear Stearns and Lehman Brothers, and money-market mutual funds, such as Reserve Primary Fund.

Far from having been random or inspired by sheer panic, all of these runs were clearly information based: Bear and Lehman were both highly leveraged and heavily exposed to subprime mortgage losses when the market for such mortgages collapsed, while Reserve Primary — the money market fund that suffered most in the crisis — was heavily invested in Lehman Brothers’ commercial paper.

As for genuine bank runs, there were just five of them in all, and every one was triggered by well-founded news that the banks involved — Countrywide, IndyMac, Washington Mutual, Wachovia, and Citigroup — had suffered heavy losses in connection with risky mortgage lending. Indeed, with the possible exception of Wachovia, the banks were almost certainly insolvent when the runs on them began. To suggest that these banks were as innocent of improprieties, and as little deserving of failure, as the fictitious Bailey Bros. Building and Loan, is worse than misleading: it’s grotesque.

Not having been random, the runs of 2007-8 also weren’t contagious. The short-term funds siphoned from failing investment banks and money market funds went elsewhere. Relatively safe “Treasuries only” money market funds, for example, gained at riskier funds’ expense. The same thing happened in banking: for every bank that was perceived to be in trouble, many others were understood to be sound. Instead of being removed, as paper currency, from the banking system, deposits migrated from weaker to stronger banks, such as JP Morgan, Wells Fargo, and BB&T. While a few bad apples tried to fend off runs, in part by seeking public support, other banks struggled to cope with unexpected cash inflows.

Yet because the runs were front-page news, and the corresponding redeposits weren’t, it was easy for many to believe that a general panic had taken hold. That sound and unsound banks alike were forced to accept TARP bailout money only reinforced this wrong impression. Evidently we have traveled far from the quaint hamlet of Bedford Falls, where George Bailey’s bank nearly went belly-up.

Nor were we ever really there. During the Great Depression, for example, most of the banks that failed, including those that faced runs, were rural banks that suffered heavy losses as fallen crop prices and land values caused farmers to default on their loans. Few if any unquestionably solvent banks failed, and bank run contagions, with panic spreading from unsound to sound banks, were far less common than is often supposed. Even the widespread cash withdrawals of February and early March, 1933, which led FDR to proclaim a national bank holiday, weren’t proof of any general loss of confidence in banks. Instead, they reflected growing fears that FDR planned to devalue the dollar upon taking office. Those fears in turn led bank depositors to cash in their deposits for Federal Reserve notes, in order to convert those notes into gold. What looked like distrust of commercial banks’ ability to keep their promises was really distrust of the U.S. government’s ability to keep its promises.

Regulate, Have Crisis, Repeat

If bank runs are mainly a threat to under-diversified or badly-managed banks, it’s no less the case that banking crises, in which relatively large numbers of banks all find themselves in hot water at the same time, are mainly a problem in badly-regulated banking systems. To find proof of this claim, one only has to compare the records, both recent and historical, of different banking systems. Do that and you’ll see that, while some systems have been especially vulnerable to crises, others have been relatively crisis free. Any theory of banking crisis that can’t account for these varying experiences is one that ought not to be trusted.

But just how can one account for the different experiences? The conventional theory of panics implies that the more crisis-prone systems must have lacked a lender of last resort or deposit insurance (which also serves to discourage runs) or both. It may also be tempting to assume that they lacked substantial restrictions upon the activities banks could engage in, the interest rates they could charge and offer, the places where they could do business, and other aspects of the banking business.

Wrong; and wrong again. Central banks, deposit insurance, and relatively heavy-handed prudential regulations aren’t the things that distinguished history’s relatively robust banking systems from their crisis-prone counterparts. On the contrary: central bank misconduct, the perverse incentives created by both explicit and implicit deposit guarantees, and misguided restrictions on banking activities including barriers to branch banking, portfolio restrictions, and mandated business structures, have been among the more important causes of banking-system instability. Some of the most famously stable banking systems of the past, on the other hand, lacked either central banks or deposit insurance, and placed relatively few limits on what banks were allowed to do.

Northern Exposures

It would take a treatise to review the whole, gruesome history of financial crises for the sake of revealing how unnecessary and ill-considered, if not corrupt, regulations of all sorts contributed to every one of them.[2] For our little primer we must instead settle for four especially revealing case studies: those of the U.S. and Canada on the one hand and of England and Scotland on the other. The banking systems of Scotland between 1772 and 1845 and Canada from 1870 to 1914 and again from 1919 until 1935 were remarkably free of both crises and government interference. In comparison, the neighboring banking systems of England and the United States were both more heavily regulated and more frequently stricken by crises.

Returning first to the Great Depression: in the U.S. between 1930 and 1933, some 9,000 mostly rural banks failed. That impressive record of failure could never have occurred had it not been for laws that prevented almost all U.S. banks from opening branch offices, either in their home states or elsewhere. The result was a tremendous number of mostly tiny and severely under-diversified banks.

Canada’s banks, in contrast, were allowed to establish nationwide branch networks. Consequently, not a single Canadian bank failed during the 1930s, despite the fact that Canada had no central bank until 1935, and no deposit insurance until 1967, and also despite the fact that Canada’s depression was especially severe. The few U.S. states that allowed branch banking also had significantly lower bank failure rates.

Comparing the performance of the Canadian and U.S. banking systems between 1870 and 1914 tells a similar story. The U.S. didn’t yet have a central bank, and so was at least free of that particular source of financial instability (yes, you read that last clause correctly). But thanks to other kinds of government intervention in banking, especially the barriers to branch banking and to banks’ ability to issue circulating notes put in place during the Civil War, the U.S. system was shaken by one financial crisis after another. Yet during the same period Canada, which also had no central bank, but which didn’t subject its commercial banks to such restrictions, avoided serious banking crises.

Although naturally different in its details, the Scotland-vs.-England story is remarkably similar in its broadest brushstrokes. Scotland’s banks, like Canada’s, were generally left alone, while in England privileges were heaped upon the Bank of England, leaving other banks enfeebled and at its mercy. In particular, between 1709 and 1826, the so-called “six partner rule” allowed only small partnerships to issue banknotes, effectively granting the Bank of England a monopoly of public or “joint stock” banking. In an 1826 Parliamentary speech Robert Jenkinson, the 2nd Earl of Liverpool, described the system as one having “not one recommendation to stand on.” It was, he continued, a system

of the fullest liberty as to what was rotten and bad; but of the most complete restriction, as to all that was good. By it, a cobbler or a cheesemonger, without any proof of his ability to meet them, might issue his notes, unrestricted by any check whatever; while, on the other hand, more than six persons, however respectable, were not permitted to become partners in a bank, with whose notes the whole business of a country might be transacted. Altogether, this system was one so absurd, both in theory and practice, that it would not appear to deserve the slightest support, if it was attentively considered, even for a single moment.

Liverpool made these remarks in the wake of the financial panic that struck Great Britain in 1825, putting roughly 10 percent of the note-issuing cobblers and cheesemongers of England and Wales out of business. Yet in Scotland, where the six-partner rule didn’t apply, that same panic caused nary a ripple.

Although Scotland and Canada offer the most well-known instances of relatively unregulated and stable banking systems, other free banking experiences also lend at least some support to the thesis that those governments that governed their banks least often governed them pretty darn well.

Bagehot Bowdlerized

The aforementioned Panic of 1825 was one of the first instances, if not the first instance, in which the Bank of England served as a “lender of last resort,” albeit too late to avert the crisis. It was that intervention by the Bank, as well as the lending it did during the Overend-Gurney crisis of 1866, that inspired Walter Bagehot to formulate, in his 1873 book Lombard Street, his now-famous “classical” rule of last-resort lending, to wit: that when faced with a crisis, the Bank of England should lend freely, while taking care to charge a relatively high rate for its loans, and to secure them by pledging “good banking securities.”

Nowadays central bankers like to credit Bagehot for the modern understanding that every nation, or perhaps every group of nations, must have a central bank that serves as a lender of last resort to rescue it from crises. Were that actually Bagehot’s view, he might be grateful for the recognition if only he could hear it.  In fact he’s more likely to be spinning in his grave.

How come? Because far from having been a fan of the Bank of England, or (by implication) of central banks more generally, Bagehot, like Lord Liverpool, considered the Bank of England’s monopoly privileges the fundamental cause of British financial instability. Contrasting England’s “one reserve” system, which was a byproduct of the Bank of England’s privileged status, with a “natural,” “many-reserve” system, like the Scottish system (especially before the Bank Act of 1845 thoughtlessly placed English-style limits on Scottish banks’ freedom to issue notes), Bagehot unequivocally preferred the latter. That is, he preferred a system in which no bank was so exalted as to be capable of serving as a lender of last resort, because he was quite certain that such a system had no need for a lender of last resort!

Why, then, did Bagehot bother to offer his famous formula for last-resort lending? Simply: because he saw no hope, in 1873, of having the Bank of England stripped of its destructive privileges. “I know it will be said,” he wrote in the concluding passages of Lombard Street,

that in this work I have pointed out a deep malady, and only suggested a superficial remedy. I have tediously insisted that the natural system of banking is that of many banks keeping their own cash reserve, with the penalty of failure before them if they neglect it. I have shown that our system is that of a single bank keeping the whole reserve under no effectual penalty of failure. And yet I propose to retain that system, and only attempt to mend and palliate it.

I can only reply that I propose to retain this system because I am quite sure that it is of no manner of use proposing to alter it… . You might as well, or better, try to alter the English monarchy and substitute a republic.

Perhaps today’s Bagehot-loving central bankers didn’t read those last pages. Or perhaps they read them, but preferred to forget them.

The Flexible Open-Market Alternative

If Great Britain was stuck with the Bank of England by 1873, as Bagehot believed, then we are no less stuck with the Fed, at least for the foreseeable future. And unless we can tame it properly, we may also be stuck with its “unnatural” capacity to destabilize the U.S. financial system, in part by being all-too-willing to rescue banks and other financial firms that have behaved recklessly, even to the point of becoming insolvent.

Consequently, getting the Fed to follow Bagehot’s classical last-resort lending rules may, for the time being, be our best hope for securing financial stability. But doing that is a lot easier said than done. For despite all the lip-service central bankers pay to Bagehot’s rules, they tend to honor those rules more in the breach than in the observance. One need only consider the relatively recent history of the Fed’s last-resort lending operations, especially before 2003 (when it finally began setting a “penalty” discount rate) and during the subprime crisis, to uncover one flagrant violation of Bagehot’s basic principles after another.

There is, I believe, a better way to make the Fed abide by Bagehot’s rules for last-resort lending. Paradoxically, it would do away altogether with conventional central bank lending to troubled banks, and also with the conventional distinction between a central bank’s monetary policy operations and its emergency lending. Instead, it would make emergency lending an incidental and automatic extension of the Fed’s routine monetary policy operations, and specifically of what I call “flexible” open-market operations, or “Flexible OMOs,” for short.

The basic idea is simple. Under the Fed’s conventional pre-crisis set-up, only a score or so of so-called “primary dealers” took direct part in its routine open-market operations aimed at regulating the total supply of money and credit in the economy. Also, those operations were — again, traditionally — limited to purchases and sales of short-term U.S. Treasury securities. Consequently, access to the Fed’s routine liquidity auctions was very strictly limited. A bank in need of last-resort liquidity that was not a primary dealer, or even a primary dealer lacking short-term Treasury securities, would have to go elsewhere, meaning either to some private-market lender or to the Fed’s discount window, where borrowing carried the risk of being “stigmatized” as a bank that might well be in trouble.

“Flexible” open-market operations would instead allow any bank that might qualify for a Fed discount-window loan to take part, along with non-bank primary dealers, in its open-market credit auctions. It would also allow the Fed’s expanded set of counterparties to bid for credit at those auctions using, not just Treasury securities, but any of the marketable securities that presently qualify as collateral for discount-window loans, with the same margins or “haircuts” applied to relatively risky collateral as would be applied were they used as collateral for discount-window loans. A “product-mix” auction, such as the one the Bank of England has been using in its “Indexed Long-Term Repo Operations,” would allow multiple bids made using different kinds of securities to be dealt with efficiently, so that credit gets to the parties willing to pay the most for it.
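To make the mechanics concrete, here is a toy sketch of such an auction. Everything in it is hypothetical: the bidder names, rates, collateral values, and haircuts are invented, and the allocation rule is a deliberately simplified single-price auction, not the Bank of England’s actual product-mix design:

```python
# Toy single-price liquidity auction with collateral haircuts.
# All bidders, rates, collateral values, and haircuts below are hypothetical.

def run_auction(total_liquidity, bids):
    """Each bid: (bidder, rate_bid, collateral_value, haircut).
    Highest-rate bids win first; each bidder can borrow at most the
    haircut-adjusted value of its collateral. Returns the allocations
    and the single market-clearing rate (the last accepted bid's rate)."""
    allocations = {}
    remaining = total_liquidity
    clearing_rate = 0.0
    for bidder, rate, collateral, haircut in sorted(bids, key=lambda b: -b[1]):
        if remaining <= 0:
            break
        capacity = collateral * (1 - haircut)  # lendable value after haircut
        amount = min(capacity, remaining)
        allocations[bidder] = amount
        remaining -= amount
        clearing_rate = rate
    return allocations, clearing_rate

bids = [
    ("Bank A", 0.030, 200, 0.02),   # Treasuries: small haircut
    ("Bank B", 0.035, 150, 0.10),   # riskier collateral: larger haircut
    ("Dealer C", 0.025, 300, 0.05),
]
alloc, rate = run_auction(400, bids)
print(alloc, rate)
```

Note how the haircut does the prudential work that discount-window collateral rules do today, while the auction itself routes the fixed pool of liquidity to whoever bids the most for it.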

So, instead of having a discount window for emergency loans, not to mention various ad-hoc lending programs, in addition to routine liquidity auctions for the conduct of “ordinary” monetary policy, the Fed would supply liquid funds through routine auctions only, while making those auctions sufficiently “flexible” to allow any illiquid financial institution with “good banking securities” to successfully bid for such funds. Thanks to this setup, the Fed would no longer have to concern itself with emergency lending as such. Its job would simply be to get the total amount of liquidity right, while leaving it to the competitive auction process to put that liquidity where it commands the highest value. In other words, it really would have no other duty save that of regulating “the overall availability of liquid assets, and through it the general course of spending, prices, and employment.”



[1] Deposit insurance can serve a function similar to that of having an alert lender of last resort. Most banking systems today rely on a combination of insurance and last-resort lending.

Diamond and Dybvig’s famous 1983 model — one of the most influential works in modern economics — is in essence a clever, formal presentation of the conventional wisdom, in which deposit insurance is treated as a solution to the problem of banking panics. For critical appraisals of Diamond and Dybvig, see Kevin Dowd, “Models of Banking Instability,” and chapter 6 of Lawrence White’s The Theory of Monetary Institutions.

[2] Although that badly-needed treatise has yet to be written, considerable chunks of the relevant record are covered in Charles W. Calomiris and Stephen Haber’s excellent 2014 work, Fragile by Design: The Political Origins of Banking Crises and Scarce Credit. I offer a much briefer survey of the relevant evidence, including evidence of the harmful consequences of governments’ involvement in the regulation and monopolization of paper currency, in chapter 3 of Money: Free and Unfree. I express my (mostly minor) differences with Calomiris and Haber here.

[Cross-posted from]

Illegal immigration is at its lowest point since the Great Depression. President Trump has claimed success, but nearly all of the decrease occurred under prior administrations. The president’s campaign rhetoric does appear to have caused a small increase in illegal immigration before he assumed office: because immigrants moved up their arrival dates by a few months, the typical wave of illegal entries failed to materialize in the spring. But these recent changes are small in the big picture: 98.2 percent of the reduction in illegal immigration from 1986 to 2017 occurred before Trump assumed office.

Naturally, illegal border crossings are difficult to measure. The only consistently reported data are the number of immigrants that Border Patrol catches attempting to cross. Border Patrol has concluded that the number of people who make it across is proportional to the number of people it catches. All else being equal, more apprehensions mean more total crossers. Of course, the agency could catch more people because it has deployed more agents. But we can control for the level of enforcement by focusing on the number of people each agent catches.
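The enforcement control described above is simply a ratio. A minimal sketch, using made-up figures rather than actual Border Patrol data:

```python
# Minimal sketch of the enforcement-controlled flow measure described above.
# The numbers are illustrative, not actual Border Patrol statistics.

def apprehensions_per_agent(total_apprehensions, agents):
    """Control for enforcement intensity by dividing total apprehensions
    by the number of agents doing the catching."""
    return total_apprehensions / agents

# Hypothetical years: more agents alone can raise raw apprehensions even
# while the underlying flow per agent is falling.
year_a = apprehensions_per_agent(1_600_000, 4_000)   # 400 per agent
year_b = apprehensions_per_agent(330_000, 20_000)    # 16.5 per agent
print(year_a, year_b)
```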

Figure 1 provides the number of people that each Border Patrol agent brought into their custody during each of the last 50 years. As it shows, illegal immigration peaked in the mid-1980s. From 1977 to 1986, each border agent apprehended almost 400 people per year. After the 1986 amnesty legislation that authorized new agents and walls, the flow fell at a fairly steady rate. Following the housing bubble burst, the 2009 recession, and the concomitant border buildup, the flow has essentially flatlined. In 2016, each border agent nabbed fewer than 17 people over the course of the entire year. That’s one apprehension every two and a half weeks of work. The “crisis” is over and has been for a decade.

Figure 1: Apprehensions Per Border Patrol Agent, FY 1957-2017

Sources: Apprehensions FY 1957-2016: Border Patrol; Apprehensions FY 2017 (projected from October-June data): Border Patrol; Border Patrol Staffing: Border Patrol, INS Statistical Yearbooks, and INS Annual Reports

Following Trump’s election, the flow did fall further, but this was mostly a continuation of the existing trend. Before Trump assumed office, there was a slight departure from the trend (represented as a dotted line in Figure 2), but this June’s apprehension figures are roughly where we would expect based on the last decade and a half of data. I interpret this to mean that we saw a Trump effect before he assumed office, when some additional asylum seekers and immigrants came to the border a few months ahead of schedule in fear of the changes that he might bring. But the effect dissipated after he assumed office.

Figure 2: Monthly Apprehensions Per Border Agent and Exponential Trendline, October 1999 to June 2017

Sources: Apprehensions FY 2000-16: Border Patrol; Apprehensions FY 2017: Border Patrol; Border Patrol Staffing: Border Patrol

Zooming in further on the Obama and Trump administration months only reinforces the interpretation of a pre-election Trump effect (Figure 3). In every pre-Trump year, illegal flows spiked during the month of May or earlier (a phenomenon that goes back to at least 2000). Donald Trump launched his campaign in June 2015. Instead of waiting until the spring, immigrants started coming to the border during the winter months for the first time, peaking in December. In 2016, there was the typical spike in the spring, but after Trump won the Republican nomination, apprehensions rose quickly, peaking in November well above the spring numbers.

Figure 3: Monthly Apprehensions Per Agent and Exponential Trendline, January 2009 to June 2017

Sources: See Figure 2

There were 90,000 more apprehensions from August 2016 to January 2017 than in the pre-Trump period from August 2014 to January 2015. Assuming that this is a sign of a Trump effect—and that immigrants moved their travel plans up roughly six months earlier than they otherwise would have—each month from February to June under the Trump administration would have seen roughly 15,000 more arrivals, and each month from August to January roughly 15,000 fewer. This would place the first months of the new president’s tenure right about on the trend line from the Obama administration (orange line in Figure 3).
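The arithmetic behind this shift is straightforward back-of-the-envelope division:

```python
# Back-of-the-envelope version of the shift calculation above: 90,000 extra
# apprehensions spread over the six months from August 2016 to January 2017.
extra_apprehensions = 90_000
months_shifted = 6
per_month = extra_apprehensions / months_shifted
print(per_month)  # 15,000 more per month before inauguration,
                  # and roughly 15,000 fewer per month afterward
```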

In this case, Trump is benefiting not so much from his current rhetoric or policies, but from his rhetoric on the campaign trail. Immigrants chose to come earlier than they would have, and so the normal spring rush failed to materialize. If this is the case, then it’s possible that apprehensions will return to the normal trend next year. Of course, the administration’s new policies may have started to make an impact by that point. Only time will tell.

Even if the normal trend returns, large scale illegal immigration is over. Whoever deserves credit, the job is done. Congress should move on and start talking about real issues.

“Europe’s Taxes Aren’t as Progressive as Its Leaders Like to Think,” wrote the Wall Street Journal’s Joseph C. Sternberg yesterday. Citing tax expert Stefan Bach from the German Institute for Economic Research, Sternberg shows how Germany’s tax system is only mildly progressive overall. Sternberg therefore states that politicians need to “tackle” indirect taxation if they want to have a major impact on the economy.

Now, Sternberg is undoubtedly right that broad-based tax systems which incorporate social contributions and VATs tend to be less progressive than those which rely more heavily on progressive income taxes. That is, if we narrowly look at the effects of taxes alone, rather than government spending. But does it make any economic sense to look at a tax system in isolation?

Good economic theory would suggest that to the extent we care about progressivity and redistribution, revenues should be collected in the least distortionary way possible, with redistribution done via cash transfers. So judging the desirability of a tax system by its degree of progressivity is not a good starting point. From an economic perspective, the assessment should be how distortionary different taxation systems across the world are. European tax systems have huge problems in this regard, but their progressivity or otherwise should not be a major consideration.

The second and more important related point is that assessing progressivity should not seek to separate the issues of taxes from transfers. To judge progressivity, one must look at the position of households across the income spectrum after both, not least because one person’s taxes are (now or later) another person’s cash transfer.

I cannot find figures to do this for Germany, but am familiar with some headline UK and US stats.

Every year, when the UK Office for National Statistics (ONS) releases its publication The effects of taxes and benefits on household income, a lament similar to Sternberg’s arises. Calculating total taxes paid as a proportion of gross income (market income plus government cash transfers), critics of the tax system assert that the poorest quintile pays 35.0% of its gross income in taxes, on average, which is almost identical to the average 34.1% for the top quintile (2015/16 figures). Like Sternberg, many conclude that the tax system is not progressive enough.

Yet a few seconds’ thought about what these figures show highlights how misleading this is. Gross income (the denominator in the calculation) includes cash transfers, which are transfers from one group to another. That a household uses money redistributed to it to spend, in turn paying what the ONS describes as indirect taxes (things like VAT, beer duty, tobacco duty, the TV license, and fuel duty), can hardly be described as “regressive.” This is akin to taking from Peter to pay Paul and then saying that – because Paul spends a large proportion of this money – the tax system is unfair.

Put simply, benefits don’t fall like manna from heaven. One person’s taxes are someone else’s cash transfers. That the tax system is not ultra-progressive then is not what matters – it’s what the overall tax AND transfers system does that counts.

Thought of in this way, we can calculate effective tax rates, which measure the net contribution of the average household in each income quintile as a proportion of its market income. This is the key question: how much of the income that you earn is being taxed away and given to others? In other words, how progressive is the taxpayer-funded welfare state?

Table 1 shows that the poorest fifth of households in the UK on average actually face an effective tax rate (all taxes minus cash benefits, divided by earned income) of -34.1 per cent, while the richest fifth face an average rate of 31.8 per cent. This means that, for every £1 earned in market income, the average household in the poorest quintile is transferred another 34.1p in cash benefits, while the average household in the top quintile pays 31.8p in tax. The tax and cash transfers system, in other words, is very progressive.
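As a minimal sketch of this calculation (the household figures are illustrative, chosen to reproduce the quintile averages, not actual ONS microdata):

```python
# A minimal sketch of the effective-tax-rate calculation described above;
# the household figures are hypothetical, not the ONS data themselves.

def effective_tax_rate(taxes, cash_benefits, market_income):
    """Net contribution as a share of market income:
    (taxes - cash benefits) / market income.
    Negative values mean the household receives more in cash transfers
    than it pays in tax."""
    return (taxes - cash_benefits) / market_income

# Hypothetical poorest-quintile household: £10,000 earned, £5,000 paid in
# all taxes, £8,410 received in cash benefits.
poor = effective_tax_rate(taxes=5_000, cash_benefits=8_410, market_income=10_000)
# Hypothetical richest-quintile household: £100,000 earned, £31,800 net tax.
rich = effective_tax_rate(taxes=31_800, cash_benefits=0, market_income=100_000)
print(round(poor, 3), round(rich, 3))
```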

But even this excludes so-called “benefits in kind,” which the UK state provides in abundance, and which disproportionately benefit the poor. Once benefits-in-kind (education, healthcare, and subsidies for housing, rail, and buses) are considered, these effective tax rates for the average household in the poorest and richest fifths become -140.2 per cent and 25.3 per cent respectively (see Table 2).

Now, the UK figures are not completely comprehensive. Unlike the US figures below, they do not seek to assign the impact of corporate income taxes on workers across the income distribution. They also exclude the cash-value benefits of other public goods, such as defense, law and order, and the courts which uphold property rights, where there is an argument that the rich benefit disproportionately (though development work stresses the importance of property rights for the poor too). But overall, it’s clear that welfare states are hugely redistributive.

Can similar figures be found for the US? The best comparator figures I can find come from the CBO’s June 2016 report, The Distribution of Household Income and Federal Taxes, 2013. There are three important differences from the UK methodology which mean the US figures are likely to look more progressive on the surface: the quintiles are assigned by market income, rather than disposable income; the transfers include transfers from state and local programs but only federal taxes (and sales taxes tend to be more regressive); and the figures presented include in-kind assistance, meaning they are closer in methodology to Table 2 than Table 1. But the results show the same trend.

The average household in the bottom quintile receives around $2.90 in cash transfers or in-kind benefits for every $1 earned, while the top quintile faces an effective tax rate of 24.8%. Interestingly, as Greg Mankiw noted before, middle-income Americans have shifted since 1979 from being, on average, net contributors to net beneficiaries under this measure.

Of course, averages hide a lot of information. Government programs redistribute heavily to those with children and to old people. But if we are going to assess crude measures of progressivity by looking across the income spectrum, it makes sense to include transfers too.

In conclusion, there are many problems with tax systems here and in Europe. But aiming to make them more progressive should not be an underlying economic aim. To the extent that redistribution is considered a valid goal, it should be undertaken through spending, and these stats above show countries such as the US and UK are already hugely redistributive or “progressive” in this regard. 

AEI scholar Abby McCloskey’s recent column on paid family leave argues that just “12 percent of private-sector employees have access to paid family leave from their employer.” For McCloskey, this is one of many reasons that the federal government should create a paid family leave entitlement program.

The 12 percent figure surely sounds appallingly low. In fact, it is so low that it seems suspect: it doesn’t match well with real-life experience or casual observation. The figure also doesn’t match data from nationally representative surveys. For example, 63 percent of employed mothers said their employer provided paid maternity leave benefits in one national study, a gap of more than 50 percentage points from the most recent BLS figure.[1]

So what gives? It seems many U.S. women take paid parental leave, but the Bureau of Labor Statistics (BLS) doesn’t count it. The BLS requires that paid family leave be provided “in addition to any sick leave, vacation, personal leave, or short-term disability leave that is available to the employee.” This means that when employees take paid leave for family purposes, it doesn’t count if it could have been used for another purpose.

In the real world, parents with conventional benefit programs often save and pool paid personal leave, vacation, sick leave, and short-term disability in the event of a birth or adoption. On average, employees with five years of service are provided 22 days of sick and vacation leave. A majority of private-sector employees can carry over unused sick days from previous years, which adds to the tally. Meanwhile, the median short-term disability benefit is 26 weeks for private-sector workers; six to eight weeks can be used toward paid maternity leave.
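As a rough, hypothetical illustration of the pooling (the 22 days and the six weeks come from the figures above; how any given employer lets workers combine them varies):

```python
# Rough illustration of how conventional benefits can pool into de facto
# paid family leave. The inputs come from the averages cited in the text;
# the pooling itself is hypothetical, since actual policies vary by employer.
sick_and_vacation_days = 22   # average for employees with five years of service
workdays_per_week = 5
std_weeks_usable = 6          # low end of the 6-8 disability weeks usable for maternity

pooled_weeks = sick_and_vacation_days / workdays_per_week + std_weeks_usable
print(pooled_weeks)  # roughly ten weeks of paid leave, before any carried-over sick days
```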

These benefits do exactly the same thing as paid family leave. As Human Resources Inc. puts it, “family-leave is usually created from a variety of benefits that include sick leave, vacation, holiday time, personal days, short-term disability…” And although not all employers, especially small businesses, have official paid family leave policies, “Many employers are flexible and can work out an agreement with you.” Benefits that aren’t spelled out in the company manual are surely undercounted by BLS figures, too.

Paid leave doesn’t always fit neatly under the BLS’s survey categories for other reasons. Unconventional benefit packages, like consolidated paid leave (or PTO banks) allow employees to use paid leave for any reason, family or otherwise. Consolidated paid leave is on the rise; the BLS reports that 35 percent of private-sector employees receive it. In some industries, more than half of employees receive this flexible benefit.

Unlimited paid leave plans are also growing in certain industries. These plans allow employees to take as much leave as they want, whenever they want, assuming they meet performance expectations. But unlimited and consolidated paid leave don’t provide paid family leave separately, so neither counts.

As a result, BLS figures seem to grossly underestimate paid family leave availability. BLS methods penalize employers that provide flexible benefits, by pretending their benefits don’t exist.

This helps to explain why BLS figures differ dramatically from other surveys. In spite of that, don’t expect government-sponsored paid leave advocates to update their figures any time soon.

[1] Note that the Listening to Mothers III study focused on employed mothers; BLS focuses on private-sector employees.

“Is Amazon getting too big?” asks Washington Post columnist Steven Pearlstein, in a 4,000-word column seeking justification for the Democratic Party’s quixotic pledge to “break up big companies” in its recent “Better Deal.” “Just this week,” notes Pearlstein, “Democrats cited stepped-up antitrust enforcement as a centerpiece of their plan to deliver ‘a better deal’ for Americans should they regain control of Congress and the White House.” He concludes by saying “it sometimes takes a little public power to keep private power in check.” But maybe it takes a lot of public power to write antitrust lawyers some big checks.

Politics aside, the question “Is Amazon getting too Big?” should have nothing to do with antitrust, which is supposedly about preventing monopolies from charging high prices. Surely no sane person would dare accuse Amazon of monopoly or high prices. 

Even Mr. Pearlstein has doubts: “Is Amazon so successful, is it getting so big, that it poses a threat to consumers or competition? By current antitrust standards, certainly not… Here is a company, after all, known for disrupting and turbocharging competition in every market it enters, lowering prices and forcing rivals to match the relentless efficiency of its operations and the quality of its service. That is, after all, usually how firms come to dominate an industry…”

That should have ended this story “by current antitrust standards.” But if we simply lower those standards, then “Better Deal” antitrust shakedown threats could become far more numerous, unpredictable, and lucrative for politically-generous antitrust law firms.

Among the 19 largest law firm contributors to political parties in 2015-2016, according to Open Secrets, all but one, Jones Day, contributed overwhelmingly to Democrats. More to the point, all of the law firms contributing most generously to the Democratic Party are specialists in antitrust and mergers: they appear on the U.S. News list of top antitrust attorneys. And the Trial Lawyers Association (now disguised as the “American Association for Justice”) contributed over $2.1 million to Democrats, over $1 million to liberal organizations, and $67,500 to Republicans.

Antitrust law is a very big, profitable, and concentrated industry. Antitrust lawyers have a special interest in greatly expanding the reach and grip of antitrust law. They were surely delighted by Pearlstein’s prominent endorsement of a law journal paper by Lina Khan, a 28-year-old student and fellow at the “liberal-leaning” think tank New America.

Ms. Khan believes it self-evident that low operating profits must prove Amazon is “choosing to price below-cost.” That’s uninformed accounting. What low profits actually show is that Amazon has been plowing its rapidly expanding cash flow back into capital expenditures, such as cloud computing, a movie studio, and unique consumer electronics (Kindle and Echo).

“If Amazon is not a monopolist, Khan asks, why are financial markets pricing its stock as if it is going to be?” That’s uninformed finance theory. Investors rightly see Amazon’s current and future growth of cash flow (the result of expensive investments) as the source of future dividends and/or capital gains (more net assets per share).

Khan believes antitrust has been unduly constrained by “The Chicago School approach to antitrust, which gained mainstream prominence and credibility in the 1970s and 1980s.” She thinks Chicago’s “undue focus on consumer welfare is misguided. It betrays legislative history, which reveals that Congress passed antitrust laws to promote a host of political economic ends.”

The trouble with grounding policy on legal precedent is that Congress passed many laws to promote the special interests of producers at the expense of consumers—including the Interstate Commerce Commission (1887), the National Industrial Recovery Act (1933), the Civil Aeronautics Board (1938), and numerous tariffs and regulations designed to benefit interest groups and the politicians who represent them.

The well-named chapter “Antitrust Pork Barrel” in The Causes and Consequences of Antitrust quotes Judge Richard Posner noting that antitrust investigations are usually initiated “at the behest of corporations, trade associations, and trade unions whose motivation is at best to shift the cost of their private litigation to the taxpayer and at worst to harass competitors.”

To grasp how and why anti-trust is easily abused as a rent-seeking device, it helps to relearn the wisdom of Frederic Bastiat: “The seller wants the goods on the market to be scarce, in short supply, and expensive. The [buyer] wants them abundant, in plentiful supply, and cheap. Our laws should at least be neutral, not take the side of the seller against the buyer, of the producer against the consumer, of high prices against low prices, of scarcity against abundance [emphasis added].” 

Contrary to Bastiat, however, Ms. Khan claims to have found “growing evidence shows that the consumer welfare frame has led to higher prices and few efficiencies.”

Growing evidence turns out to mean three papers, one of which seems to say what she says it does (but only about mergers, not concentration): “Research by John Kwoka of Northeastern University,” Pearlstein writes, “has found that three-quarters of mergers have resulted in [were followed by?] price increases without any offsetting benefits. Kwoka cited industries such as airlines, hotels, car rentals, cable television and eyeglasses.” 

If you believe that, mergers left consumers overcharged by the Marriott hotel and Enterprise Rent-A-Car “monopolies.” Even if that sounds plausible, Kwoka’s evidence does not. Two-thirds of his sample covers just three industries (petroleum, airlines, and professional journal publishing), the price estimates are unweighted and lack standard errors, and several of the mergers date back to 1976-82. As Federal Trade Commission economists Vita and Osinski charitably noted, “Kwoka has drawn inferences and reached conclusions … that are unjustified by his data and his methods.”

Pearlstein turns to another paper in Khan’s trio: “There is little debate that this cramped [Chicago] view of antitrust law has resulted in an economy where two-thirds of all industries are more concentrated than they were 20 years ago, according to a study by President Barack Obama’s Council of Economic Advisers, and many are dominated by three or four firms.”

Nothing in Pearlstein’s statement is even approximately correct. The Obama CEA looked at the shares of revenue earned by two different lists of Top 50 firms (not “three or four”) in just 13 industries (not “all industries”) in 1997 and 2012. Pearlstein’s “two-thirds of all” really means 10 out of 13, though the U.S. has considerably more than 13 industries. In transportation, retailing, finance, real estate, utilities, and education, for example, the Top 50 had a slightly larger share of sales in 2012 than in 1997. So what?

Should we fear monopoly price gouging simply because 50 firms account for a larger share of the nation’s very large number of retail stores, real estate brokers, or finance companies? Of course not. “An increase in revenue concentration at the national level,” the Obama CEA concedes, “is neither a necessary nor sufficient condition for market power.”

The Obama CEA did add that “in a few industries… there is some evidence of increasing market concentration.” How few? Just three: Hospitals, railroads, and wireless providers. Those industries are heavily regulated, as is banking. 

The CEA notes the 10 largest banks had a larger share of bank loans in 2010 than in 1980, which is hardly a surprise. Hundreds of banks that existed before the 1981-82 stagflation and 2008-09 Great Recession had closed by 2010. More lending now flows through nonbanks and securities. And the Internet (e.g., LendingTree) makes shopping for loans or credit cards more competitive than ever.

Did the Obama CEA present any evidence that its extraneous data about industry-level or market concentration “has led to higher prices and few efficiencies”? Certainly not. They made no such claim because so many previous efforts have failed. “The Market Concentration Doctrine” could not explain higher prices when Harold Demsetz examined it in 1973, and it still can’t.


Generally speaking, the Washington Post editorial board does a great job on trade issues. They are pro-trade and they see trade agreements as a way to liberalize trade. However, I want to offer a response to something in a recent Post editorial about one particular technical aspect of the NAFTA renegotiation. Here’s the passage:

Alas, the administration also specified that the trade deficit with Mexico and the (smaller) one with Canada be reduced as a result of the talks, which isn’t possible and wouldn’t necessarily be desirable even if it were. Possibly even more counterproductive, Mr. Trump’s goals include the elimination of the so-called Chapter 19 dispute-resolution mechanism, which creates a special NAFTA-based forum to challenge a member country’s claims that another is selling exports below cost (“dumping”). This check against potentially protectionist litigation brought by U.S. industries in U.S. forums was Canada’s precondition for joining the U.S.-Canada free-trade agreement, upon which NAFTA was built; and it’s one reason that exports from Canada and Mexico are far less likely than those of other nations to face penalties in the United States.

Eliminating Chapter 19 probably would be a dealbreaker for Canada. And why would Mr. Trump seek its elimination? After all, as he said in that call with Mr. Peña Nieto, “Canada is no problem . . . we have had a very fair relationship with Canada. It has been much more balanced and much more fair.” Perhaps he means the proposal as a bargaining chip, to be traded for some other, more valuable concession. Or perhaps he will be willing to finesse it behind closed doors, just as he pleaded with Mr. Peña Nieto to help him wiggle out of his unwise promise to make Mexico pay for a border wall. We certainly hope the administration can be pragmatic on this point, lest it trigger the trade war with our neighbors that Mr. Trump once promised but so far has sidestepped.

Starting with some technical points, let me note that dumping is defined as more than just sales below cost, as it could also mean export sales that are below the price in the home market or a third country market. (It’s a fundamentally arbitrary calculation, which you can read more about here.) Also, Chapter 19 covers countervailing duties (extra tariffs imposed on imports of subsidized goods) as well.

But the main issue is with the Post’s substantive defense of Chapter 19. It’s important to understand how the process works. U.S. agencies—the Department of Commerce (DOC) and the International Trade Commission (ITC)—make decisions about whether anti-dumping and countervailing duties are necessary in particular cases. These decisions, like other administrative decisions, can then be appealed to U.S. federal courts, in this case the Court of International Trade (CIT) in New York. Under NAFTA Chapter 19, however, Canadians and Mexicans have the option to appeal the agency decision to a special NAFTA panel (they can also go to the Court of International Trade, and sometimes do that instead of Chapter 19).

So, when someone says that NAFTA Chapter 19 panels protect against the abuse of antidumping/countervailing duties, in essence this means they think the U.S. courts are not up to the job of reviewing these agency decisions.

Is it possible that U.S. courts are insufficient here? In my opinion, we don’t have enough data on this question yet. What we would need to see in this regard is evidence of the impartiality (or partiality) of U.S. courts. For example, a basic piece of evidence would be how U.S. courts have ruled on these agency decisions. My colleague Dan Ikenson looked at some data on this a while back:

Between January 2004 and June 2005, IA [the Import Administration of the DOC] published 26 redeterminations of antidumping proceedings pursuant to remand orders from the courts. As Table 4 indicates, seven of the remand orders required IA to explain how its decisions were consistent with the law and did not expressly mandate that IA make any changes. But of the 19 remands that did require methodological changes, 14 produced lower antidumping duty rates upon recalculation for at least one of the foreign companies involved.

At first glance, this looks like a functioning judicial review of agency decisions, but we need to expand on this data, and then compare it to the results from NAFTA Chapter 19 panels (that is, compare the results of NAFTA panel decisions to those of CIT decisions; and compare the DOC and ITC responses to each kind of ruling). We have a Cato Trade Policy Center project to gather this data going on right now.

As for the point that “exports from Canada and Mexico are far less likely than those of other nations to face penalties in the United States,” it’s true in a sense. If you compare Canada and Mexico to the rest of the world, fewer of their imports are covered by these special duties. On the other hand, when you compare countries that are similarly situated in terms of development levels, you get a different result. For example, imports from Canada are more likely to be subject to duties than are imports from the United Kingdom and Germany.

And there’s also the question of whether NAFTA Chapter 19 is constitutional—it’s not clear whether this kind of agency decision review can be carried out by anyone other than a U.S. court.

Chapter 19 looks like it might play an outsized role in the NAFTA renegotiation, as people on both sides have latched on to it as a symbol of their cause. The reality is more nuanced, and it’s important to gather some hard evidence in order to assess the real value of the provision.

Vox’s Dylan Scott offers “An oral history of Obamacare’s 7 near-death experiences.” It’s a well-balanced take on how close the Affordable Care Act and ObamaCare—they are different animals—have come to oblivion.

Scott includes an excerpt of an email I sent him on his original, planned theme of “ObamaCare’s nine lives.” But I thought it might be worthwhile to include my entire response to him. I have lightly edited my email for clarity, added two illegal acts I had forgotten (#5 and #7), and added hyperlinks to useful references.

Just nine? The premise is inapposite, though. 

Jonathan Gruber was right: had the public known what the Affordable Care Act does, it never would have passed, because even more Democrats would have voted to kill it. ObamaCare keeps surviving not because it has nine lives, but because the executive and judicial branches keep rewriting the Affordable Care Act, outside the legislative process, to save it from constitutional and political accountability.

These unconstitutional and illegal actions began before the ink was dry on Obama’s signature. They include:

  1. Allowing Congress to remain in the FEHBP from 2010 until 2014;
  2. The bevy of exemptions Sebelius issued unions and other firms from various regulations;
  3. The threats Sebelius made to insurers who spoke publicly and truthfully about the cost of those regulations; 
  4. Sebelius soliciting funds for Enroll America from companies she regulates;
  5. Sebelius raiding the Prevention and Public Health Fund to the tune of $454 million to fund federal Exchanges; 
  6. The Supreme Court rewriting the individual mandate in 2012; 
  7. Sebelius gutting the Supreme Court’s Medicaid ruling by coercing states to implement parts of the Medicaid expansion the Court made optional;
  8. Obama’s illegal “if you like your health plan” fix/grandmothered-plans exemptions;
  9. The IRS issuing subsidies through federal exchanges;
  10. The Supreme Court upholding ObamaCare’s subsidies and penalties in federal-Exchange states;
  11. The Obama administration making illegal CSR payments;
  12. The Obama administration illegally diverting reinsurance payments from the treasury to insurance companies;
  13. The Obama administration declaring Congress to be a small business; 
  14. The Obama administration giving members of Congress an illegal $12,000 premium contribution to their SHOP premiums;
  15. The Trump administration continuing to make illegal CSR payments;
  16. The Trump administration continuing to give Congress an illegal exemption from the ACA…

Etc., etc.

It’s not that Obamacare has nine lives. It’s that ObamaCare has 90 or 900 or 9,000 committed ideologues who are willing to violate the law to protect it from the voters. 

What we have left is no longer the law Congress enacted. The ACA was a legitimate law, duly passed by Congress. ObamaCare is an illegitimate law that no Congress ever passed or ever could have passed. 

The ACA is dead. Long live ObamaCare.

Absent these unlawful actions, Republicans’ recent attempt to repeal ObamaCare would have succeeded.

Years ago.

Trying to tamp down impeachment talk earlier this year, House minority leader Nancy Pelosi (D-CA) insisted that President Donald Trump’s erratic behavior didn’t justify that remedy: “When and if he breaks the law, that is when something like that would come up.” 

Normally, there isn’t much that Pelosi and Tea Party populist Rep. Dave Brat (R-VA) agree on, but they’re on the same page here. In a recent appearance on Trump’s favorite morning show, “Fox & Friends,” Brat hammered Democrats calling for the president’s impeachment: “There’s no statute that’s been violated,” Brat kept insisting. “They cannot name the statute!”

Actually, they did: it’s “Obstruction of Justice, as defined in 18 U.S.C. § 1512 (b)(3),” according to Rep. Brad Sherman (D-CA) who introduced an article of impeachment against Trump on July 12. Did Trump break that law when he fired FBI director James Comey over “this Russia thing”? Maybe; maybe not. But even if “no reasonable prosecutor” would bring a charge of obstruction on the available evidence, that wouldn’t mean impeachment is off-limits. Impeachable offenses aren’t limited to crimes.

That’s a settled point among constitutional scholars: even those, like Cass Sunstein, who take a restrictive view of the scope of “high Crimes and Misdemeanors” recognize that “an impeachable offense, to qualify as such, need not be a crime.” University of North Carolina law professor Michael Gerhardt sums up the academic consensus: “The major disagreement is not over whether impeachable offenses should be strictly limited to indictable crimes, but rather over the range of nonindictable offenses on which an impeachment may be based.” 

In some ways, popular confusion on this point is understandable. Impeachment’s structure echoes criminal procedure: “indictment” in the House, trial in the Senate—and the constitutional text, to modern ears, sounds something like “grave felonies, and maybe lesser criminal offenses too.”

But “high crimes and misdemeanors,” a term of art in British impeachment proceedings for four centuries before the Framers adopted it, was understood to reach a wide range of offenses that, whether or not criminal in nature, indicated behavior incompatible with the nature of the office. For James Madison, impeachment was the “indispensable” remedy for “Incapacity, negligence, or perfidy” on the part of the president—categories of conduct dangerous to the republic, only some of which will also constitute crimes. 

The criminal law is designed to punish and deter, but those goals are secondary to impeachment, which aims at removing federal officers unfit for continued service. And where the criminal law deprives the convicted party of liberty, the constitutional penalties for impeachable offenses “shall not extend further than to removal from Office,” and possible disqualification from future officeholding. As Justice Joseph Story explained, the remedy “is not so much designed to punish an offender, as to secure the state against gross official misdemeanors. It touches neither his person, nor his property; but simply divests him of his political capacity.”

No doubt being ejected from a position of power on the grounds that you’re no longer worthy of the public’s trust can feel like a punishment. But the mere fact that removal is stigmatizing doesn’t suggest that criminal law standards apply. Raoul Berger once illustrated that point with an analogy Donald Trump would probably find insulting: “to the extent that impeachment retains a residual punitive aura, it may be compared to deportation, which is attended by very painful consequences, but which, the Supreme Court held, ‘is not a punishment for a crime.’”

Had the Framers restricted impeachment to statutory offenses, they’d have rendered the power a “nullity” from the start. In the early Republic, there were very few federal crimes, and certainly not enough to cover the range of misdeeds that would rightly disqualify public officials from continued service.

Criminality wasn’t an issue in the first impeachment to result in the removal of a federal officer: the 1804 case of district court judge John Pickering. Pickering’s offense was showing up to work drunk and ranting like a maniac in court. He’d committed no crime; instead, he’d revealed himself to be a man “of loose morals and intemperate habits,” guilty of “high misdemeanors, disgraceful to his own character as a judge.”

As Justice Story noted in 1833, in the impeachment cases since ratification, “no one of the charges has rested upon any statutable misdemeanours.” In fact, over our entire constitutional history, fewer than a third of the impeachments approved by the House “have specifically invoked a criminal statute.” What’s been far more common, according to a comprehensive report by the Nixon-era House Judiciary Committee, are “allegations that the officer has violated his duties or his oath or seriously undermined public confidence in his ability to perform his official functions.”

The president’s violation of a particular criminal statute can serve as evidence of unfitness, but not all such violations do. That’s obvious when one considers the enormous growth of the federal criminal code in recent decades. Overcriminalization may have reached the point where Donald Trump, like everyone else, is potentially guilty of “Three Felonies a Day,” but even in Laurence Tribe’s wildest imaginings, that wouldn’t translate to three impeachable offenses daily. If Trump were to import crocodile feet in opaque containers, fill an (expansively defined) wetland on one of his golf courses, or misappropriate the likeness of “Smokey Bear,” he’d have broken the law, but would not have committed an impeachable offense.

It’s also easy enough to imagine a president behaving in a fashion that violates no law, but nonetheless justifies his removal. To borrow an example from the legal scholar Charles Black, if the president proposed to do his job remotely so he could “move to Saudi Arabia [and] have four wives” (as well as his very own glowing orb), he couldn’t be prosecuted for it. Still, Black asks: “is it possible that such gross and wanton neglect of duty could not be grounds for impeachment”?

A more plausible impeachment scenario presented itself recently, with reports that President Trump had “asked his advisers about his power to pardon aides, family members and even himself” in connection with the special counsel’s Russia investigation. The president’s power to self-pardon is an open question, but his power to pardon others has few limits. There’s little doubt Trump could issue broad prospective pardons for Don Jr., Jared Kushner, Paul Manafort, Mike Flynn, and anyone else who might end up in Mueller’s crosshairs—and it would be perfectly legal. It would also be impeachable, as James Madison suggested at the Virginia Ratifying Convention: “if the President be connected, in any suspicious manner, with any person, and there be grounds to believe he will shelter him, the House of Representatives can impeach him; [and he can be removed] if found guilty.”

Some years ago, I put together a collection of essays on the expansion of the criminal sanction into areas of American life where it doesn’t belong—published under the title, Go Directly to Jail: The Criminalization of Almost Everything. The idea that criminal law concepts had infected and weakened the constitutional remedy of impeachment wasn’t quite what I had in mind with that subtitle, but it seems to fit.

Congress has made the problem worse by outsourcing its investigative responsibilities to the executive branch. As Princeton’s Keith Whittington observes in a recent essay for the Niskanen Center, “relying so heavily on prosecutors to develop the underlying charges supporting impeachment has come at a high cost…it has created the widespread impression that the impeachment power can only appropriately be used when criminal offenses have been proven.”

It’s important to get this straight, because confusing impeachment with a criminal process can be harmful to our political health. It may lead us to stretch the criminal law to “get” the president or his associates, warping its future application to ordinary citizens. And it can leave the country saddled with a dangerously unfit president whose contempt for the rule of law is apparent, even if he hasn’t yet committed a crime.

E-Verify is the federal government’s national identification system that some employers currently use to verify the employment authorization of their new hires either voluntarily or under a requirement by state law. The Legal Workforce Act (LWA) would make this program mandatory for all employers in all states. Its proponents contend that E-Verify is simpler than the current I-9 form process and that it will protect employers from government raids and I-9 audits. But these talking points are false, and seemingly for this reason, employers refuse to use it voluntarily.

In 2017, 729,595 employers participated in E-Verify. Using a calculation from a USCIS-commissioned study, this figure corresponds to 9.5 percent of the 7.7 million private sector employers, which is somewhat higher than the actual level because some E-Verify users are public employers. It also greatly inflates the level of voluntary compliance. Only 10 states and D.C. have greater than 10 percent participation in E-Verify—all of them have expansive E-Verify mandates of some kind. To achieve this level of compliance, states need to require E-Verify for some private sector employers—either by fining non-users or rescinding subsidies from them. President Bush’s 2008 executive order mandating E-Verify for federal contractors drives the relatively high level of compliance in D.C.
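The participation figure above can be sanity-checked with simple arithmetic, using the counts quoted in this paragraph (729,595 participating employers against 7.7 million private sector employers):

```python
# Back-of-the-envelope check of the E-Verify participation rate,
# using the figures quoted above.
participants = 729_595          # employers in E-Verify, 2017
private_employers = 7_700_000   # private sector employers

rate = participants / private_employers
print(f"{rate:.1%}")  # → 9.5%
```

As the text notes, this slightly overstates voluntary private sector uptake, since the numerator also includes public employers.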

Nearly half of all employers who use E-Verify operate in the top 10 E-Verify-using states, while only 17 percent of all businesses operate there. Only 6 percent of employers in the 40 states without the more expansive mandates use E-Verify. As the Figure below shows, the true level of “voluntary” compliance is undoubtedly much lower than 6 percent. Another dozen states have various E-Verify requirements for public employers or contractors, and federal contractors exist in every state, meaning that it’s impossible to determine the precise level of voluntary participation.

Figure: E-Verify Participation Rates, 2017

Sources: Author’s calculations based on USCIS; SBA; BLS (Top 10 states: Alabama, Arizona, Georgia, Mississippi, Missouri, Nebraska, North Carolina, Rhode Island, South Carolina, Tennessee, and Utah, plus D.C.)

Unfortunately, USCIS has not released more recent data since 2013 on the size of the businesses using E-Verify, but assuming the 2013 proportions, 98 percent of businesses with fewer than 10 employees were not using E-Verify in 2017. The numbers also reveal that even when the law requires businesses to use E-Verify, they are reluctant to do so. Even in Alabama, which has achieved the highest E-Verify participation rate with a universal E-Verify requirement, less than half of the state’s businesses actually use the program. Arizona, which has the oldest universal E-Verify requirement and where two intentional violations can result in the “business death penalty,” actually fares worse. Only 39 percent of businesses use the program.

These facts demonstrate that most businesses—especially small businesses—do not consider E-Verify “business friendly.” 

E-Verify is a government-run national identification system that some U.S. employers currently use to verify the employment status of their new hires on a voluntary basis or a compulsory basis if they are federal contractors or operate in states with an E-Verify mandate. The Legal Workforce Act (LWA), which has passed the House Judiciary Committee on three occasions since 2012, would mandate that all employers use the program. In scope, LWA would surpass all other regulations in U.S. history, applying to every single employer and every single worker—illegal and legal—with deleterious consequences for both.

Proponents see E-Verify as an inexpensive silver bullet to end illegal immigration. But naturally, this technocratic dream fails to fit reality. As my colleagues’ recent study shows, E-Verify does slightly reduce unauthorized immigrant wages, but not nearly far enough to “turn off the jobs magnet.” Unsurprisingly, the market finds a way to connect willing workers with willing employers. However, while E-Verify fails to separate illegal workers from their jobs, it does manage to do exactly that for many legal workers—U.S. citizens and work authorized immigrants.

E-Verify salesmen neglect to mention that the program applies to all workers, not just those here illegally, and that U.S. citizens and legal workers can end up caught in the system. I have previously explained how, from 2006 to 2016, legal workers already had 580,000 jobs held up due to E-Verify errors, and that of these, 130,000 lost their jobs completely. These shocking numbers would grow worse under mandatory E-Verify. Under the most conservative estimate, if applied to all employers, E-Verify would delay at least 1.7 million jobs for legal workers and eliminate nearly half a million jobs over 10 years.

How E-Verify already harms U.S. workers

LWA requires employers to submit the information employees provide them on the I-9 forms to E-Verify. If the information fails to match the records of the Department of Homeland Security or Social Security Administration, E-Verify issues a “tentative nonconfirmation” (TNC). Under LWA, people who receive a TNC would need to challenge it within 2 weeks or it would become a “final nonconfirmation” (FNC), which requires an employer to immediately terminate their employment or face major fines or jail time.

Errors can occur because employers enter the name incorrectly. This mistake is particularly common for people with multiple or hyphenated last names or names with difficult spellings. They also happen when bureaucrats incorrectly enter information into their databases or when employees fail to fully update their information after a name change.

To sort out the problem, employees then have to visit in person the Social Security Administration or U.S. Citizenship and Immigration Services (USCIS)—the new DMVs for employment. Employees and employers have to stumble through this process in the dark because E-Verify is unable to tell them the origin of the problem. Workers may need to file Privacy Act requests to access their records and fix the issue. In these cases, it can take more than three months to even obtain a response. LWA allows employers to delay hiring a worker until they clear this bureaucracy.

Even worse, E-Verify can cause legal workers to lose their jobs entirely. Authorized job seekers can receive an FNC if they fail to challenge the TNC or if their employer fails to notify them. According to a USCIS-commissioned study, 17 percent of FNC errors were the fault of the employee not following the regulations. The other 83 percent were the result of employers not informing the worker about the TNC, so they could challenge it.

E-Verify’s boosters tout its 99.8 percent accuracy rate, implying that U.S. workers have little to fear. But even a low rate applied to a population as large as the U.S. workforce would result in hundreds of thousands of errors. Indeed, in 2016, under voluntary use of the system, E-Verify caught up 63,000 legal workers in its regulatory scheme. It will become much worse if Congress mandates it for all employers.

Legal Workforce Act will harm even more U.S. workers

In order to project the number of errors under LWA, we need to know the number of hires that employers will make through the system and the rate at which E-Verify will wrongly not confirm a job applicant.

The Census Bureau reports the number of new hires, almost 100 million in 2015, but LWA would also allow employers for the first time to voluntarily check their existing workers, a practice that the current regulations prohibit. This means that the number of E-Verify checks will exceed the number of annual hires that the Census records. Unfortunately, we cannot know by how much because it depends on the desire of employers to use this procedure, but it could raise the estimated number of new checks by tens of millions. For this estimate, I take the most conservative position and assume that no company will check any of its existing employees.

The future error rate is more difficult to assess. The E-Verify system’s accuracy has improved over time, so this trend will likely continue as the bureaucracy, employers, and employees figure out the system. However, there is likely a natural floor beneath which the system cannot improve—perfect accuracy is almost certainly impossible (especially considering the causes of the errors). For this reason, it is unlikely that the error rates will continue to improve at the current rate indefinitely.

On the other hand, the rates could grow much worse, especially following the initial rollout, because LWA would flood the system with new employers who have no desire or ability to use the program. For the last 10 years, the system has mostly incorporated larger employers. About 90 percent of employers have fewer than 15 employees, compared to only 8 percent of E-Verify businesses, as another USCIS-commissioned study found. Smaller employers have fewer human resources staff to implement these types of regulations, and there will certainly be a learning curve regardless.

LWA would also likely incorporate a large new population of employees who are more likely to be the victims of E-Verify errors. According to the USCIS-commissioned study, this group includes legal immigrants and Hispanics. Because many Hispanics and legal immigrants live in states where E-Verify is not currently mandatory at the state level, it is likely that LWA’s national mandate would increase errors for them. Given the uncertainties, this estimate makes the conservative assumption that E-Verify’s improvement would continue at the same pace as under the Obama administration (2009-2016), an 8.5 percent annual reduction in the error rate, even though this is unlikely to continue.

Finally, LWA will definitely increase the rate of job loss due to E-Verify. FNCs currently only happen to legal workers if they fail to challenge or an employer fails to notify them about a TNC. LWA would introduce a new way to receive a FNC: inability to resolve the issue in time. LWA requires that a person prove their right to work in less than 10 working days or be fired. The bill allows a single one-time extension at the discretion of the Secretary of Homeland Security (p. 36). Yet as explained above, we know that it often takes much longer than that to resolve a TNC error.

Unfortunately, USCIS only provides case resolution details in groups: less than 3 business days, 3 to 8 days, or more than 8 days. In 2012, the last year for which we have data, 36 percent of all erroneous TNCs took 9 or more days to resolve. If we assume that the full 36 percent cannot obtain an extension of the 10-day limit, then roughly 57,000 legal workers would lose their jobs due to this one provision of LWA alone.

However, for this estimate, I will assume that all of these workers will receive the one-time extension. If we further assume that the daily number of TNC resolutions in the 3 to 8 day period continued at the same rate thereafter, then 13.3 percent of all TNCs currently take more than 20 business days to resolve. The true share is likely higher than this because if someone cannot sort the error out in 8 days, it likely has a more complicated origin than those resolved in the first week. Nonetheless, this projection assumes that this share will continue into the future.
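As a rough illustration of how a projection like this works, the sketch below compounds a Year 1 erroneous-TNC rate downward at 8.5 percent per year and applies a fixed share of erroneous TNCs that end in job loss. The specific input rates are illustrative assumptions chosen to land in the neighborhood of the totals discussed here; they are not the actual figures underlying the estimate.

```python
# Illustrative sketch of the projection method described above.
# The input rates are assumptions for demonstration only.

HIRES_PER_YEAR = 100_000_000   # approx. annual hires (Census, 2015)
YEAR1_TNC_RATE = 0.0022        # assumed Year 1 erroneous-TNC rate
ANNUAL_IMPROVEMENT = 0.085     # 8.5% annual decline in the error rate
JOB_LOSS_SHARE = 0.29          # assumed share of erroneous TNCs ending in job loss

def project(years: int = 10) -> tuple[float, float]:
    """Return (total erroneous TNCs, total job losses) over `years`."""
    total_tncs = 0.0
    rate = YEAR1_TNC_RATE
    for _ in range(years):
        total_tncs += HIRES_PER_YEAR * rate
        rate *= 1 - ANNUAL_IMPROVEMENT  # error rate improves each year
    return total_tncs, total_tncs * JOB_LOSS_SHARE

tncs, losses = project()
# On these illustrative inputs: roughly 1.5 million erroneous TNCs
# and something over 400,000 job losses across the 10-year window.
```

The point of the exercise is that even a sub-tenth-of-a-percent error rate, applied to 100 million hires a year, compounds into hundreds of thousands of affected legal workers.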

Estimate of the number of job delays and losses due to E-Verify

The Legal Workforce Act imposes the mandate on all U.S. employers in stages based on the size of the firm over a 2-year period. The table below begins the year in which E-Verify becomes fully mandatory for all employers. Year 1 uses the 2016 error rate as the starting point. Under these assumptions, nearly 1.5 million legal workers over 10 years would receive erroneous TNCs, and of these, nearly 430,000 would lose their jobs completely.

Projection of E-Verify Errors Under Mandatory E-Verify

[Table: columns — Total Hires; TNC Job Delays; FNC Error Job Loss; LWA Job Loss From TNC Delays; Total Job Losses; Total Errors. Rows — Year 1 through Year 10. Table values not preserved in the text.]
Sources: Author’s calculations based on: Total hires: Census Bureau; TNC share overcome (2011-2016): U.S. Citizenship and Immigration Services (USCIS) USCIS archived pages; E-Verify Erroneous TNC and FNC rates (2006-2010): Westat; LWA Job Loss From TNC Delays: Cato Institute. (Note FNC error rate in Westat expressed in terms of share of FNCs. See here for how the FNC rate was calculated for years without data.)

The number of TNCs reported above comes from public USCIS figures, but the non-public numbers USCIS provided in response to Cato’s 2013 Freedom of Information Act request show 50,000 more TNCs for 2008 to 2012. Again, this means that this estimate is the lowest possible outcome for mandatory E-Verify.

LWA also increases the consequences of a TNC for a legal worker relative to current law. The legislation would allow the employer to delay hiring the job applicant until after they clear this process, which means that people would lose wages throughout the job delay period. Moreover, anyone attempting short-term employment could lose their job completely, even if they ultimately cleared E-Verify (p. 20). This provision highlights how E-Verify purports to be pro-U.S.-worker legislation, but actually is anti-U.S. worker.

Congress should reject mandatory E-Verify. It’s a big government waste of resources. It won’t accomplish its intended goal, but it will punish Americans seeking jobs.

Leaders at all levels of government and civil society are alarmed at the continued rise, year after year, in the death rate from opioid overdose. The latest numbers for 2015 report a record 33,000 deaths, the majority of which are now from heroin. Health insurers are not a disinterested party in this matter.

Cigna, America’s fifth largest insurer, recently announced it has made good progress towards its goal of reducing opioid use by its patients by 25% by mid-2019. To that end, Cigna is limiting the quantities of opioids that can be dispensed to patients and requiring authorizations for most long-acting opioid prescriptions. Cigna is encouraging its providers to curtail their use of opioid prescriptions for pain patients and is providing them with data on the opioid use patterns of their patients (prescription drug monitoring programs) with an aim towards reducing abuse.

In a Washington Post report on this announcement Cigna CEO David Cordani is quoted as saying, “We determined that despite no profit rationale—in fact it’s contrary to that—that societally we needed to step into the void and we stepped in pretty aggressively.”

No profit rationale?

Paying for fewer opioids saves the insurer money in the short run. And opioids have become costlier as “tamper-resistant” reformulations, encouraged by the FDA, have led to new patents allowing manufacturers to demand higher prices.

There is growing evidence that, as doctors curtail their opioid prescriptions for genuine pain patients, many in desperation seek relief in the illegal market, where they are exposed to adulterated opioids as well as heroin. For the same reason, recent studies on the effect of state-based Prescription Drug Monitoring Programs (PDMPs) suggest they have not led to reductions in opioid overdose rates and may actually be contributing to the increase. It is reasonable to be skeptical that Cigna’s internal prescription drug monitoring program will work any differently.

All of this intersects with a problem generated by the community rating regulations of the Affordable Care Act. The ACA requires insurance companies to sell their policies to people who have very expensive health conditions for the same premiums they charge healthy people. In addition, the ACA’s “risk-adjustment” programs, aimed at reimbursing insurers for losses due to a disproportionate share of the sickest patients, systematically underpay insurers for many of these enrollees. This penalizes insurers whose networks and drug formularies are desirable to those who are sick. Insurers respond to this disincentive by designing policies with provider networks, drug formularies, and prescription co-payment schedules that are unattractive to such patients, hoping they will seek their coverage elsewhere. This “race to the bottom” between the health plans results in decreased access and suboptimal health care for many of the sickest patients.

Researchers at the University of Texas and Harvard University, in a National Bureau of Economic Research working paper, show “some consumers are unprofitable in a way that is predictable by their prescription drug demand,” and “…Exchange insurers design formularies as screening devices that are differentially unattractive to unprofitable consumer types,” resulting in lower levels of coverage for patients in those categories. They rank drug classes by net loss to the insurer (per capita enrollee spending minus per capita enrollee revenue). Opioid antagonists, used to treat opioid addiction, exact the third highest penalty on insurers, about $6,000 for every opioid antagonist user. (See Table 2.)

This suggests that patients suffering from opioid dependency and/or addiction (there is a difference) are victims of the race to the bottom spawned by the ACA’s community rating mandate.

Thus, the opioid overdose crisis and the ACA mandates—especially community rating—combine to make the “perfect storm.” Insurers team up with state and federal regulators to curtail the prescription of opioids for chronic pain patients, leading many to suffer needlessly and driving some, in desperation, to the illegal drug market and the risk of death from overdose. Meanwhile, those seeking rescue from the torment of dependency and addiction must access a health insurance system that is penalized for providing help.

Supporters of the RAISE Act are counting on the media, voters, and policy makers to focus on talking points supporting the bill rather than its actual substance.  Those supporters want the debate over this bill to be “skilled or merit based immigrants versus family-based immigrants” – a debate that they could win.  But they can only do that if everybody focuses on the talking points and remains ignorant of the actual contents of the bill.  The RAISE Act talking points are grossly deceptive, at best, and do not accurately describe the bill’s contents or what its effects would be.  Each heading below is a major talking point that RAISE’s supporters are using, followed by the facts.

“The RAISE Act creates a merit and skills-based immigration system.”

The RAISE Act does not increase merit and skills-based immigration over the existing cap.  The bill sets an annual cap of 140,000 green cards annually for merit and skills-based immigrants – the exact same number apportioned to the current employment-based green card for skilled workers.  The RAISE Act merely cuts other immigration categories, such as family-based green cards, while creating a points system for obtaining one of the 140,000 merit and skills-based green cards.  Cutting family reunification does not create a merit or skills-based immigration system.  

“The RAISE Act is very similar to the merit-based Canadian and Australian immigration systems”

The Canadian and Australian merit-based immigration systems are far more open than either current U.S. immigrant law or what the RAISE Act would create.  As a percent of the population, which is the only meaningful way to compare the size of immigrant flows in different countries or across time, the Australian and Canadian merit-based immigration policies allow about 3.5 and 2.4 times as many immigrants annually as the United States, respectively.  The RAISE Act would widen this gulf even further whereby the annual immigrant flows to Australia and Canada would be about 7.9 and 5.3 times as great as to the United States.  

The comparison is even more lopsided when it comes to skilled immigrants.  The annual skilled immigrant flow to the United States under current law (and if the RAISE Act is implemented) is equal to 0.04 percent of the population – and that includes their family members!  The workers themselves are only 0.02 percent of the American population.  The annual skilled immigrant inflow to Canada is equal to 0.18 percent of their population and in Australia, it is a whopping 0.26 percent – 9 and 13 times as great as in the United States, respectively.

The Canadian and Australian systems even allow more family-based immigrants as an annual percentage of their population at 0.23 and 0.26 percent, respectively, than the United States.  In Canada, immigrants are awarded more points if they have distant relatives living there already.  The RAISE Act does no such thing. 

The Australian and Canadian immigration systems are far more open than either current U.S. immigration law or what the RAISE Act proposes, and they have very little in common with the RAISE Act.

“The RAISE Act will increase wages for working Americans.”

The last time Congress cut legal migration in order to raise the wages of lower-skilled Americans, wage growth actually slowed down.  Michael Clemens, Ethan Lewis, and Hannah Postel examined Congress’ 1964 cancellation of the Bracero program for low-skilled Mexican farm workers and discovered that farm worker wages rose more slowly after the migration was cut because farmers turned toward mechanization and planted crops that required less labor.  Not only did the supposed wage gains from cutting legal migration fail to materialize after 1964, the rate of wage increase actually slowed.

The details and justifications for the RAISE Act and Congress’ 1964 cancellation of Bracero are eerily similar.  The Bracero program allowed in half a million workers a year before it was eliminated – roughly the number of green cards that RAISE would cut.  Bracero workers also had a skill level, relative to Americans, similar to that of the workers who currently enter on the family-based green cards that the RAISE Act intends to cut.  And Braceros were concentrated in a handful of states, just as new immigrants are today.

More fundamentally, immigration bears little blame for low wages.  This point is not controversial among economists who study this issue.  The National Academy of Sciences’ (NAS) literature survey on the economic effects of immigration concluded that:

When measured over a period of 10 years or more, the impact of immigration on the wages of native-born workers overall is very small.  To the extent that negative impacts occur, they are most likely to be found for prior immigrants or native-born workers who have not completed high school—who are often the closest substitutes for immigrant workers with low skills.

Immigration’s long-run relative wage impact on native-born American workers is close to zero.  The only potential exception by education group is high school dropouts, who might face more labor market competition from immigrants and could see a maximum relative wage decline of about 1.7 percent from 1990 to 2010.  All other education groups of native-born Americans, who account for 91 percent of adults, saw relative wage increases from immigration over the same period.

Restricting immigration doesn’t raise wages and, even if it did for those Americans who directly compete with immigrants, it would lower wages for the roughly 91 percent of Americans who do not.

“The RAISE Act would restore immigration to historical numerical norms.”

This statement only makes sense if American history began in 1925 – a year after the National Origin Quota Act became law.  Because the United States is more populated than it was in the past, the only way to reasonably compare immigrant flows over time is to view them as a percentage of the population.  After all, a million immigrants in 1790 would have a far greater impact on the American population than a million immigrants in 2017. 

As my colleague David Bier pointed out, the average historical immigration rate from 1820 to 2017 (the government did not count annual immigrant entries before 1820) was 0.45 percent of the U.S. population – higher than the approximately 0.32 percent of today.  In other words, current U.S. immigrant flows are actually 29 percent below the historical norm and would have to rise by over 400,000 a year to match it.  That’s very different from RAISE’s proposed cut of approximately 500,000 green cards annually.
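The gap between the historical rate and today’s rate is easy to verify. A quick sketch (the U.S. population figure below is an illustrative assumption, not a number from the post):

```python
# Back-of-envelope check of the immigration-rate comparison.
# The population figure is an assumption for illustration only.
us_population = 325_000_000          # approximate 2017 U.S. population

historical_rate = 0.0045             # 0.45% of population (1820-2017 average)
current_rate = 0.0032                # ~0.32% of population today

# How far below the historical norm are current flows?
shortfall = (historical_rate - current_rate) / historical_rate
print(f"Current flows are {shortfall:.0%} below the historical norm")

# How many additional immigrants per year would match the norm?
extra_needed = (historical_rate - current_rate) * us_population
print(f"About {extra_needed:,.0f} more immigrants per year")
```

The shortfall works out to roughly 29 percent, and closing it would require a bit over 400,000 additional immigrants per year – consistent with the figures above.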

“Current immigrants are very low skilled.”

New immigrants to the United States are more highly educated than native-born Americans.  About 39 percent of immigrants admitted to the United States in 2015 had a college degree or above, compared to about 31 percent of adult natives.  The share of new immigrants to the United States with at least a college degree is almost double the 21 percent that it was in 1995.  About 29.4 percent of all immigrant adults living in the United States in 2015 had a college degree or above, slightly below the 30.8 percent of all native adults with the same education.  These facts show that new immigrants are more educated than people realize, that they are increasingly better educated over time, and that the annual immigrant flow is making the immigrant stock more educated – all under current immigration policy.

“Our current immigration system has increased economic inequality.”

This is an odd argument for Republicans to make, but senior Trump administration aide Stephen Miller did so at a press briefing earlier this week.  The evidence on how immigration affects economic inequality in the United States is actually mixed – which is more than you can say for Miller’s other economic claims, where he’s dead wrong.  Some research finds relatively small effects of immigration on economic inequality, while other research finds substantial ones.  The variance in findings can largely be explained by differences in research methods.  For instance, outcomes vary considerably between studies that measure how immigration affects inequality among natives only and those that include immigrants and their earnings.  Both methods seem reasonable, but the effects on inequality are small compared to other factors.  And I don’t see the problem if an immigrant quadruples his income by coming to the United States, barely affects the wages of native-born Americans here, and increases measured inequality as a result.  The standard of living matters much more than the distribution of earnings or wealth – especially when the gains are so vast. 

“Amnesty for illegal immigrants punishes legal immigrants who got in line and tried to do things the right way.  That’s unfair.”

RAISE Act supporters don’t make this argument in support of this specific bill, but many of them have in opposition to legalizing illegal immigrants.  However, the RAISE Act punishes virtually every wannabe immigrant who is currently in line to get a green card by kicking them out of the running.  If RAISE becomes law, it would continue to clear the green card backlog for one year and then eject those who have been waiting in line but didn’t make the cut in that first year.  As a meaningless concession, the RAISE Act awards those who are eliminated from the backlog with 2 points under the new points system, which requires a total of 20 to get in the new line for another green card.  How can RAISE Act supporters complain about the unfairness of an amnesty while supporting this bill? 

The RAISE Act says, “you’ve been in line for a green card for 20 years and followed all of the rules?  Too bad!  You’re out.  Here are a few points that will not add up to a green card in compensation.”  What kind of message does that send to immigrants who tried to follow the rules the right way?

Why are Americans less likely to move to better opportunities than they used to be? The Wall Street Journal reports:

When opportunity dwindles, a natural response—the traditional American instinct—is to strike out for greener pastures. Migrations of the young, ambitious and able-bodied prompted the Dust Bowl exodus to California in the 1930s and the reverse migration of blacks from Northern cities to the South starting in the 1980s.

Yet the overall mobility of the U.S. population is at its lowest level since measurements were first taken at the end of World War II, falling by almost half since its most recent peak in 1985.

In rural America, which is coping with the onset of socioeconomic problems that were once reserved for inner cities, the rate of people who moved across a county line in 2015 was just 4.1%, according to a Wall Street Journal analysis. That’s down from 7.7% in the late 1970s.

One particular problem with today’s immobility is that people find themselves in areas where jobs are dwindling and pay tends to be lower. Why don’t they move to where the jobs are? This comprehensive article for the Journal by Janet Adamy and Paul Overberg points to a few factors:

For many rural residents across the country with low incomes, government aid programs such as Medicaid, which has benefits that vary by state, can provide a disincentive to leave. One in 10 West Branch [Mich.] residents lives in low-income housing, which was virtually nonexistent a generation ago.

And then there are regulations that discourage mobility:

While small-town home prices have only modestly recovered from the housing market meltdown, years of restrictive land-use regulations have driven up prices in metropolitan areas to the point where it is difficult for all but the most highly educated professionals to move….

Another obstacle to mobility is the growth of state-level job-licensing requirements, which now cover a range of professions from bartenders and florists to turtle farmers and scrap-metal recyclers. A 2015 White House report found that more than one-quarter of U.S. workers now require a license to do their jobs, with the share licensed at the state level rising fivefold since the 1950s.

Brink Lindsey wrote about both land-use regulations and occupational licensing as examples of “regressive regulation”—regulatory barriers to entry and competition that work to redistribute income and wealth up the socioeconomic scale—in his Cato White Paper, “Low-Hanging Fruit Guarded by Dragons: Reforming Regressive Regulation to Boost U.S. Economic Growth.”

The Journal notes that 

the lack of mobility has become a drag on the entire U.S. economy.

“We’re locking people out from the most productive cities,” says Peter Ganong, an assistant professor of public policy at the University of Chicago who studies migration. “This is a force that widens the urban-rural divide.”

Ganong made similar points in a Cato Research Brief, “Why Has Regional Income Convergence in the U.S. Declined?”

Declining mobility hurts U.S. innovation and economic growth and widens the urban-rural income and culture gap. Government regulation plays a major role in declining mobility. But as Lindsey noted, those regulations are “guarded by dragons”—”the powerful interest groups that benefit from the status quo, all of which can be counted upon to defend their privileges tenaciously.” Despite the potential for agreement by right, left, and libertarian policy analysts on the problems with regressive regulation, all those wonks together may be no match for organized dentists, barbers, massage therapists, and homeowners who perceive that they benefit from keeping others out.


The Department of Housing and Urban Development (HUD) spends $10 billion a year on “community planning and development” subsidies to state and local governments. Community development sounds uplifting, but it involves mundane activities such as filling potholes.

Budget expert Tad DeHaven alerted me to this article in the Altoona Mirror yesterday:

City Council recently approved a fairly standard plan for spending its annual entitlement money from the federal Department of Housing and Urban Develop­ment.

… This year’s funding for [Community Development Block Grant] CDBG was $1.42 million — not much different from last year’s amount.

Loan paybacks of $162,000 brought this year’s CDBG total to $1.58 million, according to a summary provided by CDBG program Manager Mary Johnson.

Of that, $346,000 will go for rehabilitation of single-family homes; $306,000 for demolition of blighted properties; $301,000 for program administration; $332,000 for street paving; $235,000 for the city’s bike patrol and $67,000 for code enforcement all in low- to moderate-income areas.

Of $193,000 in HOME funding, $129,000 will go for rehabilitation of rental properties, $44,000 for an upgrade of Improved Dwellings for Altoona’s Woodrow Wilson Gardens parking lot in Garden Heights and $19,000 for program administration.

These activities are entirely local in nature, so why involve the federal government? The Woodrow Wilson apartment company pays $674,000 a year in local property taxes. Why not let the company keep some of that cash and pave its own parking lot? That would be easier than imposing federal income taxes on Altoona residents, sending the money to Washington to pay for the HUD bureaucracy, and having some of it trickle back down to Altoona.

Check this out: in the news story, “program administration” costs are $301,000 + $19,000 = $320,000 a year. Those costs consume a rather stunning 20 percent of the $1.58 million in federal aid. It goes toward the salaries of Mary Johnson and other officials, plus the costs of putting together 77-page “action plans,” 292-page “consolidated plans,” 48-page “downtown plans,” and other items listed here.

If Altoona is representative, then $2 billion of the $10 billion in federal aid is consumed by such local bureaucracy.

In addition, there are federal bureaucratic costs. The HUD “budget justification” says that 729 workers are in “community planning and development,” costing $103 million in wages and benefits, or $141,000 each. These expensive paper-pushers simply hand out recycled money to the states, thus adding nothing to the nation’s overall output. Indeed, because they are not producing useful goods and services in the private sector, their compensation represents a loss to the economy.
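The arithmetic in the last few paragraphs is easy to check (all dollar figures come from the cited news story and HUD budget justification; the extrapolation assumes Altoona is representative, as stated above):

```python
# Altoona's CDBG administration share
admin = 301_000 + 19_000             # the two program-administration line items
total_aid = 1_580_000                # CDBG total, including loan paybacks
admin_share = admin / total_aid
print(f"Administration consumes {admin_share:.0%} of the aid")   # ~20%

# If Altoona is representative, extrapolate to the national program
national_aid = 10_000_000_000
print(f"Implied local overhead: ${admin_share * national_aid:,.0f}")  # ~$2 billion

# Federal-side overhead per employee
payroll = 103_000_000                # wages and benefits for CPD staff
workers = 729
print(f"Average compensation: ${payroll / workers:,.0f}")        # ~$141,000
```

The 20 percent local administration share and the roughly $141,000 average federal compensation both match the figures quoted in the text.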

For a discussion of why HUD’s community development activities ought to be abolished, see DeHaven’s analysis here.

Talk of oil sanctions is in the air. Some would like the Trump administration to ban the importation of crude oil from Venezuela in response to that country’s recent fraudulent election. And some are predicting that if such a boycott were implemented, gasoline prices would increase by 10–15 percent, or 25–30 cents a gallon.

Venezuelan production is about 800,000 barrels a day, approximately 1 percent of the 80.4 million barrels a day world output. If 1 percent of world output were suddenly and permanently removed from the world market, then a 10 to 20 percent increase in price would certainly be a reasonable prediction, given what economists know about the relationship between reduced quantity and increased price in oil markets in the short run.
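That short-run price sensitivity can be sketched with the standard constant-elasticity approximation. The elasticity values below are illustrative assumptions chosen to reproduce the range in the text, not figures from the post:

```python
# Short-run oil price response to a supply cut, using the approximation
#   %change_in_price ~= %change_in_quantity / price elasticity of demand
# Elasticity values here are illustrative assumptions.
supply_cut = 0.01                    # 1% of world output removed

for elasticity in (0.05, 0.10):      # plausible short-run demand elasticities
    price_rise = supply_cut / elasticity
    print(f"elasticity {elasticity}: price rises ~{price_rise:.0%}")
```

With short-run demand elasticities between 0.05 and 0.1, a permanent 1 percent supply cut implies a price increase of roughly 10 to 20 percent – which is why that prediction would be reasonable if a boycott were a true supply reduction.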

But boycotts are not true supply reductions; they are supply rearrangements. The United States and Venezuela both purchase and sell oil on a world market. In such a large market, country-of-origin and country-destination information quickly become blurred as crude oil and its refined products slosh from buyers to sellers, oftentimes via third parties. And even if the United States could somehow be a stickler at tracking and avoiding Venezuela-originated products, they would simply get re-routed to some other buyer—perhaps China or India—while other oil products would reroute to the United States.

Another factor is the nature of Venezuelan oil. Oil is not a homogeneous, uniform worldwide product, but an idiosyncratic mixture of different hydrocarbons (and impurities) that varies from one country to the next and one oil field to the next. Venezuelan crude is “heavy,” meaning it contains more of the larger hydrocarbon molecules that are difficult to break down into usable products like gasoline. And some of the products it does yield aren’t especially valuable, like bunker oil. Refineries have to be specially configured to process the oil they receive, and heavy crude configurations are especially demanding. The United States does have such refineries, but the availability of light crude from the shale oil boom, which is much easier to break down into valuable products, has made the use of the extra configurations unnecessary.

Still, a U.S. boycott of Venezuelan oil could cause a short-lived price increase as markets adjust to the news. That increase could be extended depending on the actions of petroleum investors, who are a notoriously skittish lot. They could respond to a boycott by stockpiling oil in fear of lasting supply constraints. But U.S. inventories are currently relatively high, and stockpiling oil—pumping it into large storage tanks or leaving it on tankers at sea—is costly, which means the investors would shoulder a cost for their skittishness. Meanwhile, world oil production—including Venezuela’s—would likely not change from its current high level, except perhaps to produce a little extra to sell to the skittish investors. So we doubt any gas price increase would last for long.

Does this mean the United States should go ahead with a boycott? We are wholly agnostic on the wisdom and justice of boycotts, embargoes, and other such sanctions. (For a discussion of this topic, click here.) Rather, we argue that, economically, these policies seem to be pointless. Venezuela would simply sell its oil to other nations; a U.S. boycott would be as ineffective as similar policies against Cuba, Russia, and other countries.

World oil prices shift because of changes in world supply and demand, e.g., wars that block trade routes, recessions that reduce demand, expansions that increase demand, or producer collusion that constrains supply. A rift between the United States and Venezuela would be small potatoes in comparison, and any boycott would be made meaningless by the world market. Hence our skepticism that there would be much of a boycott price spike.

On August 3, The American Conservative ran a lengthy piece of mine dealing with the whistleblower protection nightmare that is the Department of Defense. One of the subjects of that piece is now former NSA IG George Ellard, and because I had even more on his case than I could fit into the TAC piece, I wanted to share the rest of what I know–and don’t know–about the allegations against Ellard, the final disposition of the case, why the Obama administration’s whistleblower retaliation “fix” is itself broken, and what might be done to actually provide meaningful protections for would-be national security whistleblowers in the Pentagon and elsewhere in the national security establishment.

Regarding what little we know about the specifics of Ellard’s case, I had this to say in the TAC piece:

As the Project on Government Oversight first reported in December 2016, a three-member interagency Inspector General External Review Panel concluded in May 2016 that the then-Inspector General of the National Security Agency (NSA), George Ellard, had, according to POGO, “himself had previously retaliated against an NSA whistleblower[.]” This apparently occurred during the very same period that Ellard had claimed that “Snowden could have come to me.” The panel that reviewed Ellard’s case recommended he be fired, a decision affirmed by NSA Director Mike Rogers. 

But there was a catch: the Secretary of Defense had the final word on Ellard’s fate. Outgoing Obama administration Defense Secretary Ash Carter, apparently indifferent to the magnitude of the Ellard case, left office without making a decision.

In the months after Donald Trump became president, rumors swirled inside Washington that Ellard had, in fact, escaped termination. One source, who requested anonymity, reported that Ellard had been seen recently on the NSA campus at Ft. Meade, Maryland. That report, it turns out, was accurate.

On July 21, in response to the author’s inquiry, the Pentagon public affairs office provided the following statement:

“NSA followed the appropriate procedures following a whistleblower retaliation claim against former NSA Inspector General George Ellard. Following thorough adjudication procedures, Mr. Ellard continues to be employed by NSA.”

After I’d finished the TAC piece, Ellard’s attorney, Terrence O’Donnell of the Washington mega law firm of Williams & Connolly, sent me the following statement about his client, George Ellard:

The Office of the Assistant Secretary of Defense (ASD) examined and rejected an allegation that former NSA Inspector General, George Ellard, had retaliated against an NSA employee by not selecting that employee to fill a vacancy in the OIG’s Office of Investigations.

In a lengthy, detailed, and well-reasoned memorandum, the ASD concluded that Dr. Ellard had not played a role in that personnel decision or, in the terms of the applicable laws and regulations the ASD cited, Dr. Ellard “did not take, fail to take, or threaten to take or fail to take any action” associated with the personnel decision.

This judgment echoes the conclusion reached by the Department of Defense’s Office of the Inspector General.  An External Review Panel (ERP) later came to the opposite conclusion, leading to the ASD review.  The ASD concluded that “the evidence cited in the ERP report as reflective of [Dr. Ellard’s] alleged retaliatory animus toward Complainant … is of a character so circumstantial and speculative that it lacks probity.”

In assessing Dr. Ellard’s credibility and in rendering its decision, the ASD also considered Dr. Ellard’s “distinguished career of public service, spanning more than 21 years of service across the executive, legislative, and judicial branches, culminating in almost 10 years of service as the NSA IG.”  Dr. Ellard, the ASD noted, has been “entrusted to address some of our nation’s most challenging national security issues”; successive NSA Directors have consistently rated Dr. Ellard’s performance as “Exceptional Results” and “Outstanding”; and he has been “commended by  well-respected senior officials with whom [he has] worked closely over the years for [his] ability and integrity.”

Dr. Ellard is serving as the NSA Chair on the faculty of the National War College, a position he held prior to the ERP review.

Quite a bit to unpack in that statement. Let’s start with the ASD’s decision to overrule the External Review Panel (ERP), a key component of the Obama-era PPD-19, the directive designed to prevent in all government departments or agencies the very kind of thing Ellard allegedly did. Here are the key paragraphs of PPD-19 with respect to ERP recommendations:

If the External Review Panel determines that the individual was the subject of a Personnel Action prohibited by Section A while an employee of a Covered Agency or an action affecting his or her Eligibility for Access to Classified Information prohibited by Section B, the panel may recommend that the agency head take corrective action to return the employee, as nearly as practicable and reasonable, to the position such employee would have held had the reprisal not occurred and that the agency head reconsider the employee’s Eligibility for Access to Classified Information consistent with the national security and with Executive Order 12968. (emphasis added)

An agency head shall carefully consider the recommendation of the External Review Panel pursuant to the above paragraph and within 90 days, inform the panel and the DNI of what action he or she has taken. If the head of any agency fails to so inform the DNI, the DNI shall notify the President. (emphasis added)

Taking the ERP’s recommendations is strictly optional.

What’s so significant about the ERP recommendation in Ellard’s case is that the ERP not only apparently believed that the whistleblower in question should be given a fair chance at the position he or she originally applied for within the IG’s office, but also found Ellard’s actions so severe – in the view of the three non-DoD IGs who examined the case – that it recommended he be terminated.

O’Donnell quoted from a Pentagon memo clearing Ellard that is not public. The ERP’s findings, along with their record of investigation, are not public. Nor do we know how thorough–or cursory–the ASD’s review of the Ellard case was prior to the decision to clear Ellard. Given all of that, who are we to believe?

There are some key facts we do know that lead me to believe that the ERP’s recommendations were not only likely soundly based, but that the whistleblower retaliation problem inside the Pentagon is deeply entrenched.

O’Donnell’s statement also claimed that the ASD’s decision to reverse the ERP and clear Ellard of wrongdoing “…echoes the conclusion reached by the Department of Defense’s Office of the Inspector General.” But it’s the DoD IG itself, as an institution, that is also under a major cloud because of other whistleblower retaliation claims coming from former NSA or DoD IG employees – specifically former NSA senior executive service member Thomas Drake and former DoD Assistant Inspector General John Crane. As I’ve noted previously, the independent Office of Special Counsel found adequate evidence of whistleblower retaliation and document destruction to refer the matter to the Justice Department’s own IG; Crane’s case is getting a look from the Government Accountability Office (GAO), Congress’s own executive branch watchdog.

The DoD and NSA IGs have clear conflicts of interest when employees from within their own ranks are implicated in potential criminal wrongdoing. PPD-19 was supposed to be the answer to such conflicts of interest, but its lack of teeth from an enforcement standpoint renders it a badly flawed remedy for an extremely serious integrity problem.

And what about Congress? PPD-19 speaks to that as well:

On an annual basis, the Inspector General of the Intelligence Community shall report the determinations and recommendations and department and agency head responses to the DNI and, as appropriate, to the relevant congressional committees. 

But Congress doesn’t need to wait for the IC IG to tell it what is already publicly known about the Ellard, Drake, and Crane cases. It has ample cause not only to investigate these cases, but to take action to replace PPD-19 with a whistleblower protection system that actually protects those reporting waste, fraud, abuse, or criminal conduct and punishes those who attempt to block such reporting. Two options that deserve consideration are 1) empowering the OSC to examine these kinds of cases and issue unreviewable summary judgments itself, or 2) reviving the expired Independent Counsel statute, rewritten with a focus on whistleblower reprisal investigations.

One thing is beyond dispute. The PPD-19 process is not the answer for protecting whistleblowers and punishing those who retaliate against them. We need a credible system that will do both. The only question now is whether anybody in the House or Senate will step up to the task of building a new one. 

For years, Randal O’Toole has warned governments that urban rail systems usually make no economic or practical sense. They are more expensive and less flexible than bus systems. But cities keep making wildly optimistic assumptions about rail costs and ridership, and new lines keep getting built. It is a triumph of politics over experience.

The other day, the Washington Post reported ridership data on phase 1 of D.C. Metro’s Silver Line:

But of the five stations that opened in July 2014, only the end-of-line Wiehle-Reston station has come close to projected ridership. At three stops in Tysons — McLean, Greensboro and Spring Hill — ridership is a mere fraction of what planners projected in a 2004 environmental impact report. In May of this year, for example, average daily weekday ridership was 1,618 at the McLean station, slightly below the 1,634 in May 2015 and well below the 3,803 the Silver Line was projected to serve in its first year of operation, according to the 2004 report.

So actual ridership on some parts of this Northern Virginia line is less than half of the original estimate. By the way, the cost of the project ended up almost doubling from what the planners and politicians had promised. Federal taxpayers picked up part of the tab.
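The “less than half” claim follows directly from the ridership figures quoted above:

```python
# Ridership shortfall at the McLean station (figures from the quoted
# Washington Post analysis of the 2004 environmental impact report).
actual_riders = 1_618                # average daily weekday riders, May of this year
projected_riders = 3_803             # projected first-year ridership (2004 report)

share = actual_riders / projected_riders
print(f"Actual ridership is {share:.0%} of the projection")
assert share < 0.5                   # i.e., less than half of the estimate
```

Actual ridership comes out to roughly 43 percent of the 2004 projection.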

Phase 2 of the project is under construction, and it will extend the Silver Line to Dulles Airport, 28 miles from D.C. The project never made sense to me. The airport already has the dedicated and congestion-free Dulles Access Road that connects the airport to the inner suburbs and downtown.

Let’s say you are a NYC businesswoman flying into Dulles for some lobbying in D.C. If you take the rail system, it will probably take you much longer to get downtown than if you took a taxi along the Access Road. Then when you get off the Metro downtown, you may still need a cab to get to your final destination.

Or let’s say you are a Virginia family flying out of Dulles on vacation. Would you want to drive to a Metro station with all your bags, leave your car parked there, and then risk missing your plane by taking the unreliable rail system? I don’t think so. I’ll bet ridership on Phase 2 of the Silver Line will come in low as well.

For decades, federal subsidies have induced state and local officials to build costly and inefficient light- and heavy-rail systems when bus systems and highway expansion generally make more sense. Congress should end the bias in favor of rail by ending federal aid for urban transit, as discussed at