Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Yesterday President Trump fired FBI Director James Comey.  Although the firing was handled in ham-fisted fashion, it is likely to be seen, at least in retrospect, as a wise move.

The warning signs about James Comey were there all along.  The Wall Street Journal summarized some of his spectacular misjudgments in a 2013 editorial titled “The Political Mr. Comey.”  The overzealous pursuit of Frank Quattrone and Steven Hatfill.  The appointment of Patrick Fitzgerald, who then ran amok in the Valerie Plame and Robert Novak case.

I disagree with the Journal’s take on Comey’s fight with then-White House Counsel Alberto Gonzales over the reauthorization of Bush’s warrantless surveillance program—that goes on the plus side of Comey’s ledger.  But there are even more bad judgments that the Journal did not mention. For example, Comey went after Martha Stewart in a case of ruthless ambition.

When the high stakes “enemy combatant” controversy was pending before the Supreme Court, Comey pulled one of his stunts, holding a press conference to “inform” the public of the gravity of the case.  Attorney and author Scott Turow rightly called out Comey’s outrageous trial by news conference.

We can do much better than James Comey.  If Trump can repeat the careful process by which he selected Neil Gorsuch for the Supreme Court and secure a fairly swift confirmation vote, this matter will soon be forgotten.  If the selection process is mishandled, the political storm clouds will hang over the White House for quite some time.

My own review of the troubled history of the FBI can be found here and here.

Traditional educators frequently claim that public charter schools are failing, even when the evidence indicates that they perform no worse than traditional institutions on student test scores. That logic ignores costs, which matter enormously to educational success, because wasted funds could otherwise be allocated toward further academic achievement. If charter students achieve the same results while receiving less public funding, then choice schools are significantly outperforming residentially assigned institutions.

I just released a study with Patrick Wolf, Larry Maloney, and Jay May examining disparities in funding between students in charters and traditional public schools in 15 metropolitan areas in the 2013–14 school year. As shown in figure 1 from the report, students enrolled in a public charter school receive substantially less funding than those in traditional public schools in all but one location. In fact, we find that students in charter schools receive about $5,721 less in total annual funding than their peers in district schools.

Source: Wolf, Maloney, May, and DeAngelis (2017). “Charter School Funding: Inequity in the City.” School Choice Demonstration Project, Department of Education Reform, University of Arkansas.

Critics of this type of evaluation often argue that funding disparities are due to differences in types of students. After all, traditional public schools (TPS) may have a larger proportion of students requiring additional educational resources. While the TPS in our study do enroll more special needs children, we find that these differences do not fully explain the funding gap between traditional public schools and public charter schools.

Funding inequity across the two sectors has only gotten worse over time. Eleven years after the research team first revealed that public charter schools receive less funding than their traditional public school peers, the funding disparity had grown by about 79% in eight cities.

Should these results surprise us? If you could force your customers to buy your product at a high price, would you need to reduce expenses? Perhaps more importantly, if your customers could not leave, how would you know which costs to cut? The traditional system of schooling makes it impossible to allocate resources efficiently, even if local public school leaders are highly competent and benevolent.

Nonetheless, these findings are important for decision-makers to consider, especially if they care about improving student outcomes through efficiently allocating educational funding. Just imagine what would happen to the education sector if families could choose which institution to send their funds to. Schools would be rewarded for quality and efficiency, freeing up the resources necessary to improve the lives of millions of children around the nation.

In four little panels Steve Kelley punctures the government’s bizarre claims about its powers and our rights.

Although many false arrests are exposed in court, that’s cold comfort when you’re getting handcuffed and realize you’ll be locked up a while. Here’s a fairly recent example of such a false arrest caught on tape:

For related Cato work, go here and here.

H/T: Jacob Sullum at Hit & Run.


Two front-page stories in the Washington Post today tell a depressing story:

President Trump’s most senior military and foreign policy advisers have proposed a major shift in strategy in Afghanistan that would effectively put the United States back on a war footing with the Taliban…more than 15 years after U.S. forces first arrived there.

Seventeen years and $10 billion after the U.S. government launched the counternarcotics and security package known as Plan Colombia, America’s closest drug-war ally is covered with more than 460,000 acres of coca. Colombian farmers have never grown so much, not even when Pablo Escobar ruled the drug trade. 

There are high school students about to register for the draft who have never known a United States not at war in Afghanistan and Iraq. And of course the policy of drug prohibition has now lasted more than a century, though the specific Colombian effort began only under President Clinton, around 1998, and got underway in 2000.

I wrote an op-ed, “Let’s Quit the Drug War,” in the New York Times in 1988. Cato scholars and authors have been writing about the seemingly endless war(s) in the Middle East for years now. Maybe it’s time for policymakers to start considering whether endless war is a sign of policy failure.

And maybe one day, a generation from now, our textbooks will not tell our children, “We have always been at war with Eastasia.”

One of the key concerns about climate change is ecosystem resilience. This is particularly true for ecosystems anchored in place over large areas, with little ability to move. Ecological communities in the Chesapeake Bay come to mind.

According to the U.S. National Climate Assessment report published in 2014 (Melillo et al., 2014), there is “very high confidence that coastal ecosystems are particularly vulnerable to climate change because they have already been dramatically altered by human stresses, as documented in extensive and conclusive evidence” (Moser et al., 2014). Additionally, the report claims there is “very high confidence that climate change will result in further reduction or loss of the services that these ecosystems provide, as there is extensive and conclusive evidence related to this vulnerability” (Moser et al., 2014).

That Assessment has been criticized as being far too alarmist, too political, and very incomplete with regard to its summarization of important scientific literature. It didn’t help that when it was released, the National Oceanic and Atmospheric Administration (whose bailiwick includes coastal ecosystems), called the report “a key deliverable in President Obama’s Climate Action Plan” in the press release for its rollout.

It’s important to quantify claims like the ones made above, and one type of ecosystem that has received considerable attention in this regard is the seagrass biome. These dense underwater meadows are found in numerous coastal waters, including those of the United States. They form the foundation of ecosystems as diverse and variegated as those associated with coral reefs, but they get little public attention because they aren’t nearly as showy. But they are important. Their presence helps to reduce coastal erosion, improve water quality and mediate ocean chemistry, all of which adds economic value. Given the important functions that they perform within their coastal ecosystems, it should come as no surprise that concerns have arisen over the current and future ability of seagrass ecosystems to withstand rising atmospheric CO2 concentrations – i.e., global warming and ocean acidification.

A new study by Shelton et al. (2017) sheds some important light in this regard. Working with over 160,000 observations from Puget Sound, Washington, USA, the team of six researchers created a database of eelgrass, a common constituent of seagrass ecosystems worldwide. They surveyed data along hundreds of kilometers of shoreline over the 41-year period 1972-2012 in Puget Sound, home to millions of people as well as tourism, transportation and recreation. It’s fair to call it the Chesapeake Bay of the Northwest, and it faces all kinds of pressure to stay healthy. Their long survey period includes rapid economic development as well as increases in dissolved carbon dioxide as atmospheric concentrations rose. Their goal was to quantify the natural and anthropogenic factors contributing to eelgrass change across various spatial and temporal scales.

Shelton et al. did indeed report “substantial changes” in eelgrass populations over the four decades of study. But a look at smaller spatial scales yielded “no obvious geographic coherence in [the] trends,” with adjacent eelgrass sites sometimes showing opposite trends. This lack of geographic coherence, according to Shelton et al., “would [not] be expected if shared oceanographic or climate drivers controlled eelgrass trends.”  Those drivers would include, especially, climate change and ocean acidification.

Scaling up to the regional level and covering the entire estuary, Shelton et al. report, as illustrated in the figure below, that “over the past 40 years, eelgrass in Puget Sound has proven resilient to large-scale climatic and anthropogenic change,” confirming once again that “we do not see coincident changes in eelgrass populations that would indicate a major shared climatic driver across sites.”

This large-scale stability of eelgrass populations in the Puget Sound estuary has endured despite (1) a more than doubling of the human population in the area and (2) multiple major oceanographic anomalies (including several major El Niño and La Niña events). It is a testament to the adaptability and resistance of this keystone marine species to human influence.

Perhaps more important, this undermines the “very high confidence” the U.S. National Climate Assessment assigns to predictions of future coastal ecosystem demise in response to CO2-induced global warming and ocean acidification. The reality is that estimates of such vulnerability are largely overstated. One can only hope that the forthcoming 2018 U.S. National Climate Assessment will temper such projections by incorporating the realism observed in nature in studies like that of Shelton et al.

The discussion around private school choice legislation is almost always framed as an intense battleground with teachers on one side and families on the other. Political scientists are quick to point out that teachers win the skirmish more often than not because their interests are concentrated amongst a few, while their enemies, the parents, bear costs that are widely dispersed. While the political theory behind the claim is strong, the argument that school choice programs are at odds with the interests of professional educators is feeble.

Discouragement & Hostile Work Environments

The traditional public school system has utterly failed teachers in the United States. Educators operate in a system that does not reward them for performance or determination. Instead, their motivation levels are shattered after they find out that time served and meaningless credentials, rather than effort, lead to career success.

Perhaps even worse, public school teachers must function within a hostile environment where children are compelled to attend and parents are forced to pay. If citizens were forced to read my blog posts, I am sure that many of them would stress and complain. It would be impossible to please the diverse set of required readers, especially if they were grouped primarily by their zip codes. Alternatively, if families could choose their educational services, they could match with educators based on interests and learning styles, creating a friendly and feasible work environment for teachers.


As critics of the U.S. education system often contend, current levels of teacher pay do not entice large quantities of highly skilled labor to enter the field. Perhaps more importantly, the uniform pay scale does not incentivize teachers to perform above minimal levels. Alternatively, as Andrew Coulson pointed out in School, Inc., high quality teachers in places like South Korea can earn millions of dollars each year through the system of voluntary exchange.

Private school choice can benefit teachers by increasing motivation, improving work environments, and rewarding high performance. In an educational system of voluntary schooling selections, institutions would need to compete for high-quality talent by improving job satisfaction and compensation. Instead of searching for enemies within the education sector, we should realize that teachers have every reason to embrace school choice.

Steven Camarota of the Center for Immigration Studies (CIS) responded to our criticism of his claim that the border wall will pay for itself. Most of Camarota’s comments confuse the multiple and different simulations that I published with David Bier. He only responds to a handful of our points and then spends most of his space attacking a section called “A Better Cost Estimate Should Include These Variables.” We did not incorporate any of the suggestions from that section into our corrected version of his fiscal analysis.

The only changes we made in our headline findings, relative to Camarota, were that we adjusted for the border crosser age of arrival in 2015, adjusted for the education level for 2015 border crossers, and used an actual cost estimate for the border wall. We also copied Camarota’s methods for our additional simulations but clearly stated the changes we made and why.

Camarota’s comments are in the block quotes, my responses are below.

“[D]espite the Cato blog post being titled ‘The Border Wall Cannot Pay for Itself’, their own cost estimates would simply mean that a border wall would have to stop 16 to 20 percent of those expected in the next decade to pay for itself (as opposed to 9 to 12 percent in my estimate).”

Camarota misread our response. The point of generating a new estimate from his assumptions was to demonstrate how flawed his report was by showing that small changes drastically change his results.  These are not our “own estimates,” but rather, they would have been his estimates if he had bothered to use more up-to-date and precise numbers.  Instead, Camarota pretends that our updates are a comprehensive fiscal cost estimate despite the fact that we have an entire section dedicated to explaining what sorts of other factors a good estimate would need to include.

“Cato argues for excluding state and local costs. Cato makes the argument that costs at the state and local level should not be counted, even though this information is available from the NAS study and I included it in my analysis. The only reason they give for not including these costs is that ‘the federal government will actually be paying for the wall.’ This is a very odd argument. The federal government often considers the costs of its policies at the state and local level, so why should building a wall be any different? These costs are real and have to be paid for by the same taxpayers who pay for the federal government.”

Camarota’s comment is perplexing. In the “Calculating the Fiscal Cost” section of our blog post, we used the average net present value flows for consolidated federal, state, and local governments in Table 8-12 of the NAS report. Camarota used that same table in his paper. We even averaged the net fiscal costs for all eight tables, as Camarota did. The only exception is that we controlled for the age of the border crossers.  Camarota’s passage is actually criticizing one of the three additional simulations we ran later in the blog post with different assumptions.  A person reading his criticism would inaccurately assume that we used a different NAS table than we really did.

“[T]he Cato authors argue that my analysis assumes that legal and illegal immigrants cost the same. For example, they say my analysis assumes that illegal immigrants will retire in the United States at the same rate as legal immigrants. In fact, my analysis very much takes this into account. Nowrasteh and Bier do mention the reduction in fiscal costs associated with illegal vs. legal immigrants that I included in my analysis, but they do not seem to understand the implications.”

Camarota’s statement is false. We never argue that legal and illegal immigrants impose the same fiscal costs.  Camarota does attempt to adjust downward the cost of border crossers, but he drew his estimate from a 2013 Heritage report that provides an estimate for a single year, not a lifetime. Thus, it does not take into account the emigration rate of each group.  The NAS report takes into account only the average emigration rate for all immigrants and not the emigration rate for illegal immigrants, meaning that this is Camarota’s assumption as well.

Furthermore, Camarota does not respond to our point that the Heritage report is methodologically incompatible with the NAS report.  Heritage’s report focuses on households headed by illegal immigrants while the NAS estimate measures individuals.  NAS also discounts a 75-year projection to the present value while the Heritage report does not discount a 50-year projection and, thus, reports a meaningless figure.  There is no sound justification for combining the figures from these two incompatible reports.

“Cato inflates cost of the wall. Cato argues the cost of the wall will be much higher than the $12 to $15 billion Senate Majority Leader Mitch McConnell (R-Ky.) has said Congress will spend, and the senator is certainly in a good position to know what Congress is likely to spend. A wall is not an entitlement program that grows on its own without Congress specifically allocating money. Further, Congress and the president will determine the structure, design, and length of the wall, as well as spending levels. In some sense “the wall” is whatever Congress and the president decide.”

David Bier and I decided to rely on actual DHS cost estimates that included maintenance and eminent domain.  Camarota relied on a quote by Senator Mitch McConnell.  Camarota confused what Senator McConnell said the Senate would spend on a border wall with what a complete border wall would actually cost. The two are not the same. Camarota then assumed that whatever Congress decides to spend would complete whatever project Congress sets out to complete.  Following Camarota’s line of thinking, if Congress wanted to build a complete border wall out of sunshine and puppy dogs, then it would be so because Congress decreed it.

Camarota twists the words of a Senator to fit his own meaning while we take the average per-mile costs of construction and maintenance. The reader can decide which method produces a fairer cost estimate.   

“Cato recalculated the education level of illegal immigrants in order to reduce their costs, but they do not explain how they did this.”

Camarota correctly guessed how we estimated the education of illegal border crossers.  This is Camarota’s strongest point, but it accounts for less than half of the difference in our estimates.  Adjusting for the age of arrival accounts for most of the difference between our Camarota-inspired estimate of -$43,444 and Camarota’s actual estimate of -$74,722 (more on this below).  Adjusting for age of arrival is important.

Camarota did not respond to some important points:

  • Cato’s adjustment for illegal immigrant age of entry. This adjustment accounts for slightly more than half of the difference between CIS and Cato and means that each border crosser produces a -$59,210 net fiscal impact. At that figure, the border wall would have to deter about 739,092 border crossers, without incurring additional costs, to pay for itself. That is approximately 44 percent of all estimated future border crossers over the next decade – more than twice as high as Camarota’s worst-case-scenario estimate.
  • Illegal immigrant border crossers are younger than Camarota estimates, if Border Patrol apprehension data are a meaningful guide. The surge in unaccompanied children (UAC) since 2010 has lowered those ages even further, making the net fiscal impact more positive. Age is an important adjustment that Camarota should take into account.
  • Border crossings are down in the first few months of the Trump administration. This might or might not continue, depending on whether President Trump’s words turn into action as well as on myriad economic factors. Recent research by Warren and Kerwin found that approximately 140,000 border crossers entered annually from 2011 to 2013. If those lower numbers hold, then the 1.7 million estimated border crossers over the next decade that Camarota relies upon may already be too high, even without factoring in President Trump’s other non-wall immigration enforcement actions.  In that case, the border wall would have to deter a much larger percentage of border crossers than he claims, even without any changes to his model.
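The breakeven arithmetic behind these bullets can be sketched in a few lines of Python. This is a rough illustration using the figures quoted above; the total wall cost used here (about $43.8 billion, roughly what the quoted per-crosser cost and deterrence figures imply) is an assumption for illustration, not an official estimate.

```python
# Breakeven sketch: how many crossers must a wall deter to "pay for itself"?
# All inputs are assumptions taken from the figures quoted in this post.

WALL_COST = 43.8e9             # assumed total wall cost (construction + maintenance), dollars
NET_FISCAL_COST = 59_210       # assumed net fiscal cost per border crosser (NPV), dollars
EXPECTED_CROSSERS = 1_700_000  # Camarota's assumed crossers over the next decade

# The wall "pays for itself" once the fiscal costs avoided equal its price tag.
breakeven = WALL_COST / NET_FISCAL_COST   # crossers that must be deterred
share = breakeven / EXPECTED_CROSSERS     # fraction of expected crossers

print(f"Breakeven deterrence: {breakeven:,.0f} crossers ({share:.0%} of expected)")
```

The point of the sketch is that the breakeven share moves roughly one-for-one with the per-crosser cost estimate, which is why the age and education adjustments matter so much.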

Camarota’s response to our blog post is disappointing. He is correct that we are arguing that illegal immigrants have a smaller negative fiscal impact than legal immigrants when controlling for age and education. His estimates of new border crossers’ education levels could also be better than ours. However, Camarota misses our biggest criticisms when he ignores actual border wall cost estimates and refuses to acknowledge that a border crosser’s age of arrival is important to determining his net fiscal impact.  There is no good reason to rely on Senator McConnell’s quote for a border wall cost estimate while ignoring real cost projections and failing to adjust for the age of arrival of the border crossers.

In March, the Federal Open Market Committee (FOMC) signaled it could begin shrinking the Fed’s balance sheet sometime later this year. However, with limited official details about what that means and none forthcoming from last week’s FOMC press release, many questions remain:

  • How will the Fed decide exactly when to begin shrinking its balance sheet, and will the move be data or date dependent?
  • Once the wind-down begins, how rapidly will the balance sheet shrink and to what new normal level?
  • How will the Fed dispose of its assets: by simply refraining from reinvesting the proceeds from maturing securities, passively shrinking the balance sheet, or by actively disposing of some assets to ensure a smoother path for balance sheet reduction?
  • And would asset sales, should they occur, include both mortgage-backed securities (MBS) and Treasuries or would the Fed initially focus on a single asset class?

Back in September 2014, the FOMC released its Policy Normalization Principles and Plans (henceforth “the Framework”), its official statement outlining a three-step normalization strategy, including balance sheet reduction. First, the Fed would raise policy rates[1] to “normal levels.” Second, the Fed would begin to shrink the balance sheet in a “gradual and predictable manner” by ending the reinvestment policy. And third, the wind down would continue until the Fed holds only enough securities to conduct monetary policy “efficiently and effectively” with a portfolio consisting primarily of Treasuries. There is, of course, a caveat that the Fed can deviate from the Framework as economic conditions change. Since December 2015 the Fed has raised policy rates three times, but it has yet to update the Framework to provide further details on the next steps for balance sheet normalization.

With only the broad principles in the Framework as yet available, more detailed information must be gleaned from elsewhere. Fortunately, nearly every Federal Reserve official has discussed the balance sheet to some extent recently; but while their attention may be uniform, the policy discussion is not. Some officials have said nothing beyond the Framework, while others, particularly those regional bank presidents that do not vote on this year’s FOMC, have offered additional comments about the timing, speed, and ultimate target size associated with reducing the balance sheet. This essay examines the views of FOMC permanent voters first, then regional Fed presidents voting in 2017, followed by non-voting regional presidents.

PERMANENT FOMC VOTING MEMBERS

Federal Reserve Chair Janet Yellen

Chair Yellen has said very little beyond the Framework, and, as the leader of the Fed, keeping to the official talking points is no surprise. In a March speech, Yellen reiterated that the balance sheet would remain elevated until “sometime after” rates rise, though she declined to add specific benchmarks. When asked for additional clarity during the March FOMC press conference she said only that shrinking the balance sheet is not predicated on a pre-specified level for the federal funds rate and that overall monetary policy normalization would be “well under way” before shrinking the balance sheet commenced. George Selgin did not think much of her remarks.

New York Federal Reserve Bank President & FOMC Vice Chairman William Dudley

Dudley, a dove who is a close ally of Chair Yellen, gave a slight preview of the Framework in a May 2014 speech indicating that he wanted to see rates quite a bit higher before the cessation of reinvestments. This was a break from the 2011 Exit Strategy Principles that had called for ending reinvestments first and raising rates secondarily. In that talk, Dudley downplayed the potential adverse consequences of the larger for longer balance sheet approach, believing it prudent to tolerate those risks as the Fed moved off the zero lower bound. Dudley’s preferred order, to raise rates before touching the balance sheet, is, of course, the order now in the Framework.

More recently Dudley has discussed balance sheet actions beyond the Framework. In March he mentioned that shrinking the balance sheet and raising interest rates are, “…two different, yet related, ways of removing monetary policy accommodation.” Because ending reinvestments could act similarly to a rate hike, Dudley cautioned, “…when we begin to end reinvestment, we will have to consider the implications for the appropriate short-term interest rate trajectory.” He has also commented on the mechanics of shrinking the balance sheet, saying he does not see “a strong need to differentiate between mortgages and Treasuries” as the reinvestment policy ends, which he believes might happen this year or in early 2018. Nonetheless, the New York Fed’s trading desk has conducted very small MBS sales to test the operational readiness of such transactions.

Federal Reserve Vice Chairman Stanley Fischer

In a February 2016 speech, Fischer said that because the federal funds rate is now adjusted using two new tools, interest on excess reserves (IOER) and overnight reverse repurchases (ON RRP), the Fed can change the size of the balance sheet independently from interest rate policy. At that time, Fischer saw benefits to maintaining a larger balance sheet, remarking that when to “…begin phasing out reinvestment will depend on how economic and financial conditions and the economic outlook evolve.”

In November, Fischer reiterated the Framework position, saying that shrinking the balance sheet would commence when “…the short-term interest rate approaches more normal levels.” However, he also offered a position different from Dudley’s, explicitly stating that the Fed would begin by ending reinvestments in mortgage-backed securities while continuing to roll over Treasuries. Just last month, Fischer said he does not expect significant market disturbances, such as another taper tantrum, when reinvestments end, given the muted market responses to Fed officials’ discussions of shrinking the balance sheet thus far.

Federal Reserve Governor Jerome Powell

Powell said in a recent interview that he wants the Fed “well into the normalization process” before the balance sheet begins to shrink. With rates far from zero, “removing accommodation” by ending reinvestments would then proceed in “a very predictable almost automatic way.”

Federal Reserve Governor Lael Brainard

Though Brainard is widely considered to be the most dovish Federal Reserve official, she voted with the rest of her colleagues to raise interest rates at the March FOMC meeting. She has also signaled a willingness to increase the speed of rate increases provided the new administration makes good on its campaign pledges of expansionary fiscal policy.

Brainard offered more details about the normalization strategy than her colleagues on the Board when she identified two available strategies in a recent speech. The first is the complementarity strategy, in which balance sheet adjustments would be viewed as an independent and thus second tool for conducting monetary policy. As Brainard says, “Under this strategy, both tools would be actively used to help achieve the Committee’s goals…to take advantage of the ways in which the balance sheet might affect certain aspects of the economy or financial markets differently than the short-term rate.” The Fed might deploy the balance sheet to affect term premiums on longer-term securities and use the policy rates to affect money markets. The second option is the subordination strategy, in which the policy rates would remain the primary tool for the Fed’s conduct of monetary policy. Once normalization of short-term rates was “well under way” the balance sheet could begin to shrink in a “gradual [and] predictable way.” When reinvestments end, the balance sheet would then shrink on “autopilot.”

Brainard is an advocate of the subordination strategy and supports the automatic process that Powell discussed, though she does maintain that were the economy to be hit with a large adverse shock restarting reinvestments could be prudent in order to preserve traditional policy space in the federal funds rate.

2017 VOTING REGIONAL BANK PRESIDENTS

Minneapolis Federal Reserve Bank President Neel Kashkari

Kashkari made national headlines when he posted an essay explaining his dissent at the March FOMC meeting, where all of his colleagues voted for a rate increase. In dissenting, he noted that a 2% inflation target was no reason to raise rates as though 2% were a ceiling. His preferred strategy was for the Fed to publish a detailed plan for shrinking its balance sheet, allow some time to gauge the market reaction, and then continue to use short-term rates as the primary policy lever. Kashkari supports Brainard’s subordination strategy when he says, “…we can return to using the federal funds rate as our primary policy tool, with the balance sheet normalization under way in the background.”

Philadelphia Federal Reserve Bank President Patrick Harker

Harker, an engineer by training, has been more precise than his colleagues. In January he said, “When we are at or above 100 basis points — and we are moving toward that — I think it is time to start serious consideration of first stopping reinvestment and then over a period of time unwinding the balance sheet.” In March, Harker said that the right number for interest rates could be 1.5%, but that balance sheet reduction is not going to be dependent on a trigger or a target and that it will also depend on the “momentum” of the economy — a position similar to Chair Yellen’s at the March FOMC press conference. Harker does prefer the “Treasury-heavy” portfolio called for in the Framework, though he is not sure that the Fed should completely get out of the MBS market.

Chicago Federal Reserve Bank President Charles Evans

Evans, who originally gained national prominence when the Fed began to employ Forward Guidance, is one of the more dovish members, believing that only two hikes in 2017 are possible, while, by contrast, Eric Rosengren of Boston is predicting four. When it comes to shrinking the balance sheet, though, Evans is one of the few to comment, not on the timing, but on a new target size. Recently, he estimated a target size for the balance sheet of $1-1.5 trillion, requiring as much as $3 trillion of securities to roll off. That is drastically different from former Federal Reserve Chairman Ben Bernanke’s estimate for a new normal balance sheet of $2.5-$4 trillion. Despite the potential reduction, Evans has yet to say when reinvestments might actually end.
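The gap between these two visions is simple arithmetic. As a rough sketch (the ~$4.5 trillion figure for the 2017 balance sheet is an assumption not stated in the text; the target ranges are those attributed to Evans and Bernanke above):

```python
# Back-of-the-envelope check of the roll-off arithmetic.
# ASSUMPTION: the Fed's balance sheet stood at roughly $4.5 trillion in 2017.
CURRENT_BALANCE_SHEET = 4.5  # $ trillions, approximate

def rolloff_needed(target_low, target_high, current=CURRENT_BALANCE_SHEET):
    """Return (min, max) securities roll-off, in $ trillions, to reach a target range."""
    return current - target_high, current - target_low

evans = rolloff_needed(1.0, 1.5)     # Evans target: $1-1.5 trillion
bernanke = rolloff_needed(2.5, 4.0)  # Bernanke "new normal": $2.5-4 trillion

print(evans)     # (3.0, 3.5) -- roughly the "$3 trillion" of roll-off cited
print(bernanke)  # (0.5, 2.0) -- far less shrinkage required
```

Under these assumed figures, Evans’s target implies roughly twice to six times the roll-off that Bernanke’s would.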

Dallas Federal Reserve Bank President Robert Steven Kaplan

Kaplan has become more vocal on balance sheet action throughout this year. In January, he said that 2017 would be a good year to discuss a “plan of action” to “slim” the balance sheet, but that nothing should actually be done until rate hikes were “further along.” Kaplan echoed those sentiments in February: “…as we make further progress in removing accommodation, I believe we should be turning our attention to a discussion of how we might begin the process of reducing the size of the Federal Reserve balance sheet.”

After the March rate hike, Kaplan went even further, saying that as rates rise the Fed should publish a plan to shrink the balance sheet. He added that he does not want balance sheet normalization to “unduly affect” financial market conditions, suggesting that securities rolling off ought to be kept to a percentage of daily trading volumes in MBS and Treasuries. Such a strategy would require more active management of the balance sheet than the autopilot strategy proposed by Brainard and Powell. For Kaplan, one of the most important considerations as the balance sheet begins to shrink is to “minimize disruption” to markets.


As mentioned, the most varied opinions about the next move for the balance sheet come from the regional bank presidents who do not have a vote on the FOMC in 2017.

St. Louis Federal Reserve Bank President James Bullard

Bullard is known to be the low dot on the “dot plot,” as he believes the economy is stuck in a low rate regime likely to persist for years. He differs from many of his colleagues in other important ways. For example, Bullard believes that the policy rates are currently at the appropriate levels and that the Fed has “…delayed a little bit too long in reducing the size of the balance sheet.” While he doesn’t necessarily oppose another hike this year, Bullard thinks the FOMC’s priority should be reducing the balance sheet in an effort to increase the Fed’s ability to react to the next downturn.

San Francisco Federal Reserve Bank President John Williams

Recently, Williams offered perhaps the most comprehensive assessment of the future of the Fed’s balance sheet, with a call for the reinvestment policy to end this year. Like Evans, Williams offered a target, saying that a balance sheet around $2 trillion is likely appropriate, though he added that no decision had been made. But, unlike Evans, Williams also offered a timeframe, remarking that getting to a balance sheet that size would likely take five years. Like Dudley, Williams also believes that with the policy rate and the balance sheet moving contemporaneously, the path of each will be slower than if either were operating alone. He thinks the Fed will raise rates twice more this year, though leaves open the possibility for a third hike if the data support it — a position held by his colleague in Boston.

Boston Federal Reserve Bank President Eric Rosengren

Rosengren is now one of the leading hawks, having announced in a recent speech that he anticipates three more rate hikes this year, likely at every other FOMC meeting. While Dudley and Williams believe shrinking the balance sheet might slow rate hikes, all else equal, and Bullard thinks balance sheet reduction can replace a rate increase, Rosengren believes the path for rate increases is not affected much by gradually shrinking the balance sheet and that the process can begin soon. As Ben Bernanke has noted, Rosengren also differs from his colleagues in being one of the very rare Fed officials to discuss asset sales — though he stopped short of actually advocating them in the speech. However, Rosengren also thinks it is likely that the Fed would resume asset purchases during future recessions, “…unless they are very, very mild.”

Kansas City Federal Reserve Bank President Esther George

George is the most hawkish member on the FOMC, having said that the Fed was behind the curve in raising rates in December 2015, having repeatedly voted to trim asset purchases during QE3, and having cast far and away the most dissenting FOMC votes — now that Jeffrey Lacker has stepped down. And yet, at a recent event George indicated that she did not think that any decision regarding the balance sheet would be made soon. She wants the Fed to spend more time analyzing its path toward normalization, stating that in the meantime the size of the balance sheet is not likely to change. This is a change from her position back in 2014, when she thought it was appropriate to begin shrinking the balance sheet via “passive runoff” before the first rate hike, following the policy articulated in the original 2011 Exit Strategy Principles.

Cleveland Federal Reserve Bank President Loretta Mester  

In three recent speeches Mester has shown an increasing comfort level with shrinking the balance sheet this year. She wants to end reinvestments in 2017 and believes this move is consistent with the Framework, putting the Fed on a path towards a balance sheet consisting primarily of Treasuries. And just yesterday, Mester supported her colleagues’ notion to announce a plan for balance sheet reduction, which will take “several years,” as well as a return to using the federal funds rate as the “main tool” for monetary policy. Mester added that the balance sheet will eventually be “considerably smaller than it is today.”


How Federal Reserve officials view the balance sheet will change as new data come in. There are also potential shifts at the Fed via new personnel. With the retirement of Governor Tarullo in April, President Trump can appoint three new Fed Governors. Additionally, Raphael Bostic will assume leadership of the Atlanta Fed in early June and sit on the FOMC next year, while the Richmond Fed continues its search for Jeffrey Lacker’s successor, who will also have a vote in 2018.

Whoever comes to the Fed and however the views of those already there change, the important questions about the balance sheet will remain. These questions can be grouped into four buckets: Timing, Mechanics, Interest Rates, and the Endgame.

On timing, the most important question is when the reinvestment policy ends. There is a growing chorus suggesting that 2017 will see the end of the reinvestment policy, as laid out in the Framework. However, many officials stress that their balance sheet plans are data dependent. It is unknown how much the data would need to soften to move a Fed official’s view away from Mester’s position and towards George’s.

The mechanics of the balance sheet wind-down are extremely uncertain. Will the Federal Reserve simply allow for passive shrinking when securities mature, or will it actively manage the process and shrink the balance sheet on a smoother path, perhaps limited by trading volume ratios as Kaplan suggested? These questions require clear answers in the kind of public, detailed plan called for by Kashkari and Kaplan. Another mechanical issue to address is distinguishing between Treasuries and other securities. Is that distinction less important, as Dudley has implied, or will the Fed start by paring back its MBS holdings, as Fischer has suggested?

Related to the mechanics is how shrinking the balance sheet will affect the path of interest rates. Will the Fed adopt the subordination strategy advocated by Brainard? Or will the balance sheet runoff tighten financial market conditions such that the paths for rate hikes and shrinking the balance sheet could be slower together, as Dudley and Williams have considered? Or could ending reinvestments be a substitute for a rate hike, as Bullard prefers?

And lastly, what is the Fed’s endgame when it comes to balance sheet normalization; what is the proper size? Many Fed officials have noted an elevated demand for currency, compared to what existed before the crisis, but only a few have offered specifics as to the balance sheet’s final size. Will the balance sheet stay quite large, something Ben Bernanke advocates, or will it pare down to $2 trillion, as Williams suggests, or even beyond that to $1.5 trillion, as Evans estimates?

As most officials concede, the Federal Reserve is about to take actions with which it has virtually no experience. Providing further details on how and when they will normalize the balance sheet would go a long way to reducing uncertainty. But even then, it will remain critical to track where Fed officials stand on this issue and how those views evolve with the data.

[1] The Framework discusses, “…steps to raise the federal funds rate and other short-term interest rates to more normal levels…” That language, however, is ambiguous as the federal funds market has shrunk dramatically in a financial system awash in reserves. Consequently, interest rate policy is now conducted using two new policy rates to create a federal funds rate target “range”: the interest paid on excess reserves (IOER) creates the target ceiling while the overnight reverse repurchase (ON RRP) rate creates the target floor. Both rates are set administratively by the Fed. For further reading on the Fed’s new monetary control mechanism using IOER and ON RRP for a federal funds rate range, see “A Monetary Policy Primer, Part 9: Monetary Control, Now” by George Selgin.


The Wall Street Journal reports, “the Pentagon has endorsed a plan to invest nearly $8 billion to bulk up the U.S. presence in the Asia-Pacific region over the next five years by upgrading military infrastructure, conducting additional exercises and deploying more forces and ships.”

The reasons behind such a military build-up in Asia are not entirely clear. Here are Senator John McCain’s statements justifying it:

“This initiative could enhance U.S. military power through targeted funding to realign our force posture in the region, improve operationally relevant infrastructure, fund additional exercises, pre-position equipment and build capacity with our allies and partners,” Mr. McCain told Adm. Harris in an April hearing.

Dustin Walker, a spokesman to Mr. McCain, described the plan in an email as a way to make the American posture in the region more “forward-leaning, flexible, resilient and formidable.”

This is essentially a garbled word salad of Pentagon jargon that emphasizes tactical justifications while omitting any strategic rationale. The reporter gets a bit closer to a clear strategic justification here: “The effort is seen by backers as one way to signal more strongly the U.S. commitment to the region as Washington confronts an increasingly tenuous situation on the Korean peninsula, its chief security concern in the area.”

To be clear, spending almost $8 billion to boost U.S. military presence in Asia will have precisely zero utility in resolving the “tenuous situation on the Korean peninsula,” and in fact would likely be detrimental to that goal. Furthermore, signaling “more strongly the U.S. commitment to the region” is unnecessary even on the terms of our current strategy. The United States already maintains more than 154,000 active-duty military personnel in the region. Washington keeps scores of major bases throughout Asia, five aircraft carrier strike groups, including 180 ships and 1,500 aircraft, two-thirds of the Marine Corps’ combat strength, five Army Stryker Brigades, and more than half of overall U.S. naval power. And finally, the United States is treaty-bound to defend most of the region’s major nations, including Japan, South Korea, the Philippines, Thailand, Australia, and New Zealand. Do we really need $8 billion worth of more troops, equipment, exercises, and infrastructure to signal our commitment? Hardly.

Rather than a buildup, Washington should be debating how and when to draw down forces in Asia. The massive U.S. military presence in the Asia-Pacific region is not necessary to protect America’s core economic and security interests. And staving off a rising China or upholding the “liberal world order” are bad reasons for maintaining preponderant military power in the region. Indeed, in some ways it exacerbates tensions by making China feel encircled and motivating Pyongyang to obtain deliverable nuclear weapons. China is a long way from achieving a hegemonic position in Asia and the region generally is in a state of defense dominance where conquest is hard, offense is risky, and deterrence is robust. American military dominance is simply not needed to keep the region peaceful, to protect trade flows, or solve myriad local disputes. 

Last week, Senator Ron Johnson (R-WI) introduced the State Sponsored Visa Pilot Program Act of 2017. Senator John McCain (R-AZ) is an official co-sponsor. If enacted, this bill would create a flexible state-sponsored visa system for economic migrants whereby states would regulate the type of visas and the federal government would handle admissions and issue the actual visas. Representative Ken Buck (R-CO) plans to introduce a companion version in the House in the near future. 

This is an innovative bill but we have encountered one persistent question from conservatives, libertarians, and others who are sympathetic to the idea of immigration federalism: Is a state-sponsored visa constitutional? 

The state-sponsored visa is perfectly consistent with the current migration system. The Johnson-Buck bill does not actually end federal control of migration; it merely creates a visa category whereby the states select the migrants through whatever processes they establish. The federal government remains in full control of visa issuance and admission at ports of entry. Thus, states would act as sponsors on behalf of migrants in the same way that they currently sponsor foreign-born students at state universities and other workers in their capacity as employers.

In 2014, Brandon Fuller and Sean Rust authored a policy analysis for Cato that explored how a state-sponsored visa program could operate in the United States. They wrote a section addressing the constitutionality of such a program:

Historically, the Supreme Court has interpreted Congress to have “plenary power” over immigration, generally giving deference to the political branches of the federal government as an extension of the Naturalization Clause under Article 1, section 8, clause 4, which gives Congress the power “To establish an uniform Rule of Naturalization.”[1] Under current interpretations, this gives Congress the sole power to establish naturalization guidelines. However, Congress can also allow states to be involved in immigration policy in areas besides naturalization, such as managing a state-based visa within federal guidelines. Some immigration policies, with the exception of naturalization, can be partly devolved to the states within a range of powers permitted by the federal government.

The recent case of Arizona v. United States, which decided the constitutionality of Arizona’s strict immigration laws, reiterates the point that states are allowed to participate in immigration policy and enforcement, but only within the scope permitted by the federal government.[2] In debating the case of Arizona v. United States, Peter Spiro, an immigration law scholar at Temple University’s Beasley School of Law, wrote, “[I]n Arizona, the Supreme Court constricted the possibilities for unilateral state innovation on immigration, both good and bad. That does not stop the federal government from affirming state discretion.” A state-based visa program does just that—allowing states to participate in the selection of immigrants under guidelines permitted by the federal government, which is consistent with current interpretations of the Supremacy Clause and the plenary power of the federal government in the matter of immigration.

It is also important to note that U.S. law defines a nonimmigrant visa holder as “an alien who seeks temporary entry to the United States for a specific purpose,” and the federal government may set conditions in accordance with this purpose. For example, in the current immigration system a foreign entrant may be required to be attached to a singular petitioning employer under a number of employer-based non-immigrant visas, such as the H-1B. Like holders of employment-based visas, state-based visa holders would be nonimmigrants with a temporary right to live and work in the United States and an option to pursue permanent residency. As such, the state-based system is simply a variation on the condition being attached to the foreign entrant.

The Johnson-Buck bill creates a federal visa that allows states to sponsor migrants, and it would operate within the guidelines established by the Supreme Court cases over Arizona’s immigration enforcement laws. The same precedents that established that states can increase immigration enforcement beyond what the federal government intended, within the confines of a federal program, also allow states to choose whether to have more legal migrants under a federally managed system.

Naturalization is a solely federal power that the state-sponsored bill does not interfere with. If a worker on a state-sponsored visa finds an employer or a family member to sponsor him for lawful permanent residency then he will have full mobility, employment, and residence rights just like any green card holder.

The federal government currently runs the visa system in the United States and the Supreme Court has interpreted the Constitution to give Congress that power. There is nothing unconstitutional with Congress asking the states to play a role in the process of selecting migrants for visas.

[1] INS v. Chadha, 462 U.S. 919 (1983).

[2] Chamber of Commerce v. Whiting, 563 U.S. (2011); Arizona v. United States, 567 U.S. (2012).

A decade ago an errant pass in a basketball game hit my thumb hard along the nail. After a couple days of intense pain, the thumbnail fell off and then grew back misshapen. It turned out that the injury killed a portion of the nail bed. As afflictions go it is pretty minor, but it is a tad grotesque and makes a few tasks a bit more difficult.

An orthopedic surgeon suggested I either opt for surgery—which may not have worked or been covered by insurance—or else have the entire nail permanently removed for aesthetic reasons. I opted to leave it alone and began getting a regular manicure to keep the thumbnail under control.

A couple months ago, the owner of the salon I frequent asked if a new employee could do my manicure. The issue was that he spoke no English and had no license, but they assured me he had been doing manicures for years in Vietnam and was quite talented. I agreed.

The owner explained my thumbnail issue to him, and he spent several minutes on the digit. A few days later, to my surprise, the dead nail bed began growing again. The nail now looks almost normal.

The story of my healing nail raises a question: to what extent should states license manicurists, or other professions that by and large have nothing to do with health and safety? Wisconsin—and many other states—requires graduation from an accredited institution that teaches the trade as well as hundreds of hours of experience. It does not automatically recognize licenses issued by another state or country either. In other words, there would be no clear path for this manicurist to legally practice his profession in the state.

The typical state licenses hundreds of professions. Some of those are unobjectionable—most people want doctors and anesthetists to undergo a licensing regime before assuming their professions, for instance. But other licenses are problematic. For instance, many states require interior designers and florists to be licensed. Do we really need to be protected from a rogue designer who might do damage to the color scheme of our homes? The same question can also be asked of manicurists, barbers, aestheticians, and other professions that have little to do with health or safety.

The harm in excessive licensing is twofold. First, people with an aptitude for a profession but without the means to take the classes to obtain the license are effectively shut out of a way to earn a decent living. A license for an interior designer, for instance, requires six years of training, including at least two years of school.

Second, the higher wages from excessive licensing translate to higher costs for these services as well. A manicure in Oshkosh—a former home of mine—costs more than in Washington DC, where I currently reside. While not everyone might need or want such services, the disparity in prices between my high-cost current home and my former low-cost residence suggests that someone’s getting a bad deal.

A study I wrote with my colleague Logan Albright, published last month by the Wisconsin Policy Research Center, examines the inexorable expansion of licensing in the state—driven both by the expansion of the service sector as well as the increase in the number of occupations in the state requiring a license. We suggest that in an economy where states have been ratcheting up their efforts to attract jobs and boost economic growth, it is time for Wisconsin to examine the current licensing regime and think cogently about which tasks merit licensing and which can do without. Many other states have begun to do precisely this—and are concluding that their current licensing regime has gone too far.

Such an exercise should be a bipartisan affair. Unnecessary licensing hurts the entire state, but those who come from low-income households or lack the means to obtain the training to get such jobs suffer the most.

Governor Walker and the state legislature have both announced they will look at this issue. There’s a lot to look at, we submit.

Ike Brannon is president of the consulting firm Capital Policy Analytics.


At a Cato Institute Capitol Hill Briefing today, Senate Homeland Security Committee Chairman Ron Johnson (R-WI) and Congressman Ken Buck (R-CO) announced their intention to introduce new immigration legislation that would allow states to sponsor workers, entrepreneurs, and investors. Sen. Johnson introduced his version this afternoon. In 2014, Cato wrote a policy analysis about this idea. My colleague Alex Nowrasteh and I have published blog posts and op-eds about it, and Cato’s Handbook for Policymakers urged Congress to implement such a policy.

State-sponsored visas would build much-needed flexibility and adaptability into the federal immigration system. We are pleased that members of Congress are finally taking up this innovative and important idea.

The federal government’s monopoly over legal immigration fails to address the diversity of economic needs among the states. A more decentralized visa program could head off local problems before they build into a national crisis, building flexibility into the system that exists in every other area of the market. Giving states greater control would also increase political support for immigration programs and allow Congress to reform the system without needing to agree on every issue.

The federal government determines the number of foreign workers, the type of work that they can perform, and the terms under which they must live. The question today is whether any of these functions could be better handled at the state level.

As a legal matter, this is a question that Congress may answer. Most recently, in the Arizona v. U.S. decision, the Supreme Court held that the states are limited in this area only to the extent that Congress chooses to limit them.

From an economic perspective, the static federal monopoly makes little sense. In a market economy, you want systems that adjust quickly to changes at the local level. The federal system doesn’t change until local problems build into a national one, while a decentralized system could head off issues before a crisis develops. Despite widespread agreement that there has been a crisis for more than a decade, no changes have occurred.

The federal-only system also makes little sense politically. Giving states greater control would increase political support for immigration programs. The fights in Congress that have killed reform efforts in the past could be effectively transferred to state Capitols. Congress could fix the system without finding total agreement.

From an enforcement perspective, guest worker programs have historically reduced illegal immigration, creating an incentive for people to come to the United States legally. And limiting workers to a single state is actually less of a challenge than limiting them to a single employer, as the current federal guest worker programs do. More importantly, according to the Government Accountability Office, about 90 percent of overstays are tourists, not guest workers. Guest workers want to be invited back to work legally, an incentive that has kept their overstay rate well below 3 percent.

As is detailed in the Cato policy analysis, this idea has been implemented successfully in two other geographically diverse, former British colonies—Canada and Australia. Both countries use regional visa programs to distribute immigration more fairly and allow rural areas to obtain labor for difficult jobs.

The popularity of these programs can be seen in their rapid growth over the last two decades. They are now the second largest source of economic immigration to these countries.

The United States has a long history of federalism and federal-state partnerships, yet it has so far not applied this tradition to immigration. But some states have already passed bills advocating state-based visas. All states already directly sponsor visa applicants as students through their public universities or workers in their capacity as employers. These protocols could be expanded to allow states to sponsor workers on behalf of their industries.

Hopefully, the fact that it is two conservative members of Congress who are pushing this proposal will change the game politically.

In 2015, following the lead of many other states, Virginia passed a “law that says women have a right to breast-feed anywhere they have a legal right to be,” as the Washington Post reports. The law provides “no exemption for religious institutions,” as well as no quarter, it would seem, for owners’ ordinary rights to set terms and conditions when they invite visits from the general public. Now a mother and her attorney say Summit Church in Springfield, in the D.C. suburbs, had no right to ask her to use a private room after she began feeding her baby without a cover during a sermon.

Should Annie Peguero, of Dumfries, Va., press a claim in court, she might have to contend with Virginia’s version of the Religious Freedom Restoration Act, which provides in relevant part (h/t Ann Althouse): “No government entity shall substantially burden a person’s free exercise of religion even if the burden results from a rule of general applicability unless it demonstrates that application of the burden to the person is (i) essential to further a compelling governmental interest and (ii) the least restrictive means of furthering that compelling governmental interest.” But since not all states have a version of RFRA—and particularly since, if the Post’s readers are typical, a large sector of polite opinion is taking Ms. Peguero’s side and appears to see nothing wrong with applying such laws to churches in Summit’s position—it seems likely that this will not be the last such claim.

Personally, I’m fine with public breast-feeding no longer being classed as an automatically shocking thing. But why is government dictation of how a church may arrange its worship services no longer classed as an automatically shocking thing?

[cross-posted and adapted from Overlawyered]

Results of last week’s DC voucher study, showing some significant negative effects on standardized math tests, have school choice opponents in overdrive writing voucher obituaries. But at least some commentators, like the New York Times’ David Leonhardt, concede that choice works, but only if it is shackled to regulations they like. That choice works if carefully managed is perhaps an inevitable concession with the broad notion of school choice clearly in ascendance. However, the idea that regulated choice produces better outcomes flies in the face of basic economic theory and choice research.

Negative Impacts of Regulation

State requirements often come in the form of standardized test scores or restrictions on the types of teachers that may be employed. These regulations force schools to focus narrowly on state tests, which do not appear to matter in the long-run, and limit the supply of teachers, lowering educational quality while increasing costs.

As I illustrate below, programs with less restrictive voucher laws lead to more impressive experimental evaluations of student math achievement, perhaps because the costs of regulation ward off high-quality private institutions:

Vouchers > Charters

In “free” public charter schools, bureaucrats decide how much education ought to cost per child, and each charter school is limited to accepting that amount, even if it is a high performing institution.  The lack of price differentiation in charter schools is detrimental since it does not give institutions the incentive, nor the information, necessary to innovate or excel.

Unsurprisingly, the empirical evidence supports economic theory.  While the DC Opportunity Scholarship has mixed results on test scores, the effects are clear for long-term outcomes: a massive 21-percentage point increase in graduation rates.  Perhaps more importantly, private school choice programs in the United States have significantly improved communities overall through decreased criminal activity, reduced state expenses, and racial integration.

It should not astonish us that families are selecting schools that do not specialize in producing obedient test-taking machines. Naturally, it is likely that parents care less about standardized tests than the overall development of their children. In that sense, the recent evidence only reinforces the fact that markets behave as expected, even in the realm of education. If we really want to ensure that children have access to high quality schools, we ought to use the most powerful form of regulation that we have: parental choice.

Police shootings are back in the news. Michael Slager has pleaded guilty to federal charges involving the killing of Walter Scott. Federal officials have declined to bring charges against the officers involved in the shooting death of Alton Sterling. Meanwhile, Texas officer Roy Oliver has been fired in the wake of the shooting death of 15-year-old Jordan Edwards.

Each shooting incident has to be considered separately to take account of all the surrounding circumstances. There is a range of possibilities—from self-defense on the part of the officer, to tragic accident or mistake, to manslaughter or even first-degree murder. To ensure just outcomes, one of the most important things is to have independent, impartial investigations whenever there is a questionable shooting, especially where someone is killed or injured. Preferably, this will be done by a completely separate police department or the state attorney general’s office, rather than the federal government. Another best practice for police shootings involves transparency. Police departments should identify the shooter and disclose his or her record, such as previous involvement in shootings or previous lawsuits alleging wrongdoing. Authorities should also make videos available. Mayor Rahm Emanuel tried to make the Laquan McDonald case go away with a quiet legal settlement. It was only when a reporter went to court to seek the release of the video that the scandal was exposed and real movement for police reform could begin.

For related Cato work, go to our police misconduct web site. Still more here, here, and here.

At the contentious interface of climate science and policy, there’s one thing that people of all flavors agree upon: if Greenland were to shed all its ice in a century, that would be an unmitigated catastrophe, raising sea levels an average of 22 feet from the sheer volume of water locked up in its ice sheet.

It therefore stands to reason that a melting Greenland makes for good copy, and The Economist’s witty scribes have just published two alarming articles (here and here). Alarming, that is, because they left out a few pertinent facts.

“The most worrying changes are happening in Greenland, which lost an average of 375bn tonnes of ice per year between 2011 and 2014.”

This is a finely selected cherry that The Economist plucked. 2012 was an exceedingly warm year averaged over the island-continent. Had they included all the recent data, they would have shown that accumulation of ice on Greenland recently reached a record high compared to the previous three decades:

Source:  Danish Meteorological Institute

“This is equivalent of over 400 massive icebergs measuring 1km on each side disappearing each year.”

At the turn of this century, the U.S. Geological Survey reported the volume of Greenland’s ice at 2,600,000 km3. The maximum melt of 400 km3 per year is a grand total of about 1/6,500th of that ice.

The United Nations’ Intergovernmental Panel on Climate Change (IPCC) projects that melting of Greenland’s ice will raise sea level a bit under one-tenth of a meter, or three inches by 2100. 

“Even if current emissions remain stable, the consensus is that global sea levels will rise 74cm [29 in] by the end of the century.”

Nope. The same IPCC science summary projects around 38cm [15in] for this scenario.

With regard to the much-feared sudden loss of Greenland’s ice, we read that “little is known about how Greenland’s vast ice sheet will react to future warming.”

Well, when it comes to the big one, no, not really. A 2013 ice core reached all the way back to the beginning of the previous interglacial, when there was a 6,000-year period of very warm summer temperatures. Prior to this work by Dorthe Dahl-Jensen and her colleagues, it was thought that summer temperatures in this period, called the Eemian, were about 2°C warmer than the 20th-century average over northwest Greenland, where the core was drilled.

Instead, Dahl-Jensen found that it averaged a whopping 6°C warmer. She estimated this was associated with a maximum loss of about 30% of Greenland’s ice. That may be charitable, because where she drilled, in the cold northwestern region, the ice core revealed that the thickness of the ice was reduced by about 10%. Another, more recent study also found 6°C of warming, this time at Summit, where Greenland’s ice cap reaches its maximum elevation. That work did not use the ice itself to estimate the Eemian altitude, but a computer model, which showed around a 70% loss of total Greenland ice.

So this much heat rained down on Greenland during the Eemian: 6°C × 6,000 summers, or 36,000 degree-summers. And let’s say, maybe, humans could warm it 5°C for 500 summers, or 2,500 degree-summers. That would only knock 2,500/36,000 of the maximum 30% of the ice off, or about two percent. This works out to five inches of sea-level rise from the melting of Greenland in the first study and 13 inches in the second one.
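That back-of-envelope arithmetic is easy to check for yourself. Here is a sketch in Python using only the figures quoted above (it is simple proportionality, not a climate model):

```python
# Back-of-envelope check of the "degree-summers" argument above.
# All figures come from the text; this is arithmetic, not a climate model.

eemian_heat = 6.0 * 6000   # 6 C warmer for 6,000 summers = 36,000 degree-summers
human_heat = 5.0 * 500     # assumed 5 C warmer for 500 summers = 2,500 degree-summers
fraction = human_heat / eemian_heat

total_rise_inches = 22 * 12  # a full Greenland melt = 22 feet of sea-level rise

# First study: at most 30% of Greenland's ice lost during the Eemian.
ice_loss_share = fraction * 0.30                        # about 2 percent of the ice
rise_first_study = ice_loss_share * total_rise_inches   # roughly 5 inches

# Second study: around 70% of the ice lost.
rise_second_study = fraction * 0.70 * total_rise_inches  # about 13 inches

print(round(ice_loss_share * 100, 1),
      round(rise_first_study, 1),
      round(rise_second_study))
```

The proportionality assumption (ice loss scales linearly with degree-summers) is the text’s, not a physical law, which is why the answers should be read as rough magnitudes.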

Most readers likely will not click through the links and read the journal articles behind the headlines, nor do they have a friendly local climatologist to debunk these sorts of stories. No wonder so many are so worried.

Writing in The Wall Street Journal on April 27–making another last-ditch pitch for a 20% border tax on business imports–Martin Feldstein asserts that unless corporate tax rate cuts are “offset” by tax increases on imports or payrolls then larger projected deficits would crash the stock market by raising long-term interest rates. “The markets’ current fragility,” he writes, “reflects overpriced assets–the S&P 500 price/earnings ratio is now 70% above its historical average–after a decade of excessively low long-term interest rates engineered by the Federal Reserve.” 

The odd notion that the Fed could somehow depress bond yields for a decade is an irrelevant ambiguity, since the whole point of Feldstein’s story is to claim budget deficits raise bond yields and higher bond yields threaten “overpriced” stocks.

In a recent blog, I found no evidence to support the dogma that bond yields rise and fall with rising or falling budget deficits (actual or projected). Wall Street Journal columnist Greg Ip opines that “interest rates haven’t responded to deficits lately because private investment has been so lackluster.” But that excuse makes interest rates dependent on private investment, not deficits, and leaves us tangled in circular illogic. If interest rates depend on private investment and deficits “crowd out private investment,” then interest rates could never respond to deficits because private investment would always be lackluster when deficits were large (which would also make deficits the opposite of a “fiscal stimulus”).

Switching from bonds to stocks in this blog, I find no evidence that the S&P 500 stock index is “overpriced” relative to long-term interest rates (which is the only meaning of “overpriced” that relates to Feldstein’s argument about deficits and bond yields).

Feldstein claims stocks are “overpriced” because “the S&P 500 price/earnings ratio is now 70% above its historical average.” But there is no reason to expect the p/e ratio to revert to its long-term average unless bond yields revert to their long-term average.

The graph illustrates this connection by inverting the trailing S&P 500 price/earnings ratio and expressing it as an earnings/price ratio. This became known as “The Fed Model,” though I prefer to call it “The Reynolds Model,” because I first used it in March 1991 (to suggest bonds, rather than stocks, were overpriced). From 1970 to 2016, the average e/p ratio was 6.52 (equivalent to a p/e ratio of 15.2) while the average 10-year bond yield was almost identical at 6.57%. That connection between stocks and bonds has been quite close over the long haul (though not before August 1971, when the dollar was convertible into gold).

An oversimplified rule of thumb from Investopedia says, “If the earnings yield is less than the rate of the 10-year Treasury yield, stocks as a whole may be considered overvalued.” In 2016, the earnings yield of 4.17 was about twice as high as the 10-year Treasury yield of 1.84, which suggests the earnings/price ratio was then too high and therefore the price/earnings ratio (or bond yield) was too low.

On May 1, 2017, the p/e ratio was 25.26, which is equivalent to an e/p ratio of 3.96 (=1/25.26). Since an earnings yield of 3.96 is obviously much higher than recent bond yields of 2.3%, the market is still “undervalued”–not “fragile.”
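The comparison is mechanical enough to sketch in a few lines of Python, using the figures quoted above (the 2.3% bond yield is the approximate value cited in the text):

```python
# "Fed Model" comparison sketch: invert the trailing p/e ratio into an
# earnings yield and compare it with the 10-year Treasury yield.

def earnings_yield(pe_ratio):
    """Trailing price/earnings ratio inverted into an earnings/price yield, in percent."""
    return 100.0 / pe_ratio

pe_may_2017 = 25.26          # S&P 500 trailing p/e on May 1, 2017
bond_yield_may_2017 = 2.3    # approximate 10-year Treasury yield, percent

ep = earnings_yield(pe_may_2017)   # about 3.96 percent

# Rule of thumb: stocks look undervalued when the earnings yield
# exceeds the bond yield, overvalued when it falls below it.
stocks_undervalued = ep > bond_yield_may_2017

print(round(ep, 2), stocks_undervalued)
```

By this measure, the May 2017 market was on the “undervalued” side of the rule of thumb, which is the point of the paragraph above.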

If bond yields rose to 3.96% (much less to the historical average of 6.57%), that might indeed pose a risk to stocks–unless there was an offsetting rise in expected earnings. However, Feldstein wouldn’t dare to predict that U.S. bond yields may approach 4-7% in the foreseeable future regardless of what happens to budget deficits. And his assumed connection between deficits and bond yields is pure conjecture, without credible empirical support.

Feldstein’s latest argument for adding new import or payroll taxes relies on budget deficits pushing up bond yields and thus threatening “overpriced” stocks. Unfortunately, those claims about deficits, bonds, and stocks all rest on faulty theories and nonexistent evidence. 

Border apprehensions of illegal immigrants are substantially down in the first few months of the Trump administration. In fact, the border apprehension figure for the month of March is only 16,600, the lowest monthly figure since 2000. Apprehensions are an important proxy for the inflow of illegal immigrants. Many are giving the Trump administration credit for this rapid and seemingly historic collapse in illegal immigration.

There was another historic decline in border apprehensions that was even quicker, more dramatic, and far cheaper than the decline that began in 2006 and has trended downward into the Trump administration. It occurred in the 1950s, when the government streamlined the Bracero guest worker visa program and allowed more legal migration. The two periods lasted about the same number of years and provide an easy comparison between two means of diminishing illegal immigration: making it legal or doubling down on enforcement.

The border patrol apprehended virtually the same number of illegal immigrants in 1954, when the deregulated Bracero program began operation, as in 2006. The deregulated Bracero program lasted from 1954 to about 1965, and the 2006 decline has lasted until today. Apprehensions fell after both of those years, but they fell further and much more rapidly in the 1950s (Figure 1). Apprehensions declined by 93 percent from 1954 to 1956 but by only 34 percent from 2006 to 2008. Figure 2 shows the same numbers indexed to 1 in the first year. Clearly, the Bracero Era witnessed a much more rapid and complete decline in illegal immigrant entries than the crackdown from 2006 to today.

Figure 1
Border Patrol Apprehensions

Sources: USCIS, CBP, and INS.

Figure 2
Border Patrol Apprehensions, Indexed

Sources: USCIS, CBP, and INS.

There were far more border patrol agents in the 2006-2017 period than during the Bracero Era. The 2006-2017 period began with 11.4 times as many agents as the Bracero Era and ended with 13.3 times as many, yet the decline in apprehensions was slower and less complete (Figure 3). From the beginning of each period, the number of border patrol agents climbed by 38 percent during the Bracero Era and by 61 percent during the 2006-2017 period. The indexed number of border patrol agents shows that the agency grew more during the 2006-2017 period (Figure 4).

Figure 3
Border Patrol Officers

Sources: USCIS, CBP, and INS.

Figure 4
Border Patrol Officers, Indexed

Sources: USCIS, CBP, and INS.

These two periods are not perfectly comparable. The decline in border apprehensions since 2006 was aided by the housing market collapse, the Great Recession, the 61 percent increase in the number of border patrol agents, the relative demographic decline in Mexico, numerous state-level immigration enforcement laws, Secure Communities, and the increase in interior deportations under Presidents Bush and Obama. The Bracero Era collapse in illegal immigration occurred during an expanding domestic economy, a 38 percent increase in the number of border patrol agents, accelerating Mexican fertility, and a fiercely named but relatively moderate interior immigration enforcement scheme that also legalized many illegal workers. Although the periods are imperfectly comparable, many natural, political, and macroeconomic factors conspired to lower illegal immigration after 2006, while most of those factors pushed in the opposite direction during the Bracero Era–providing further evidence that Bracero was more effective than enforcement alone.

The Bracero Era’s immigration enforcement policy was called “Operation Wetback,” a nasty immigration enforcement operation begun in 1954 that removed almost two million illegal Mexican migrants. While brutal and unnecessary, many of the migrants rounded up under Operation Wetback were legalized on the spot, a process derogatorily referred to as “drying out” illegal migrant workers, and given a Bracero work visa. The number of migrants “dried out” is not well recorded, but 96,239 were processed this way in 1950. The Department of Labor actually preferred legalized illegal migrants over newly admitted Braceros. Other apprehended illegal immigrants were made to “walk around the statute”–basically taking a single step into Mexico and then returning under the watchful eye of a border patrol agent, who then handed the migrant a work visa and drove him back to his farm. The government did not tolerate illegal immigration, but it made it simple for migrants to get a guest worker visa and used the border patrol to funnel the migrants into the legal system. Many border patrol agents and other officials testified to what a success Bracero was in reducing illegal immigration.

The main goal of modern immigration policy is to end illegal immigration. The government should choose the cheapest way to accomplish that task–which is by expanding guest worker visas. Border patrol agents and walls are expensive, they decrease economic growth, and they have never completely stopped illegal immigration. Guest worker visas are cheap, they increase economic growth, and they reduced illegal immigration far more rapidly and effectively than the modern enforcement-only method. If it were easier for would-be illegal immigrants to instead earn a legal work visa similar to the Bracero visa, then almost all future and current illegal immigrants could be funneled into the legal market without increased enforcement. This was the policy followed in the Bracero Era, and it worked faster, more efficiently, more completely, and more cheaply than the modern enforcement-only approach.

If the Trump administration really wants to end illegal immigration, it should copy Eisenhower’s policy of enforcement combined with liberalized guest worker visas rather than Obama’s enforcement-only approach.

Private school choice programs have been proposed in state legislatures all across the nation, and public interest in the term “school choice” reached an all-time high earlier this year. Since school choice programs create accountability to parents and children, education scholars have discussed whether state-driven accountability is on the wane. While robust accountability to the state is essential in traditional public schooling institutions, it is inferior to accountability to every single family.

Necessary in Involuntary Settings

Accountability to the public is necessary in schools with compulsory attendance based on age and zip codes. What would happen if state officials did not set minimum standards? Public schools could serve children inadequately and even harm them to a certain degree before parents were forced to decide whether to pay out of pocket for a private institution or move. In many cases, parents would not be able to afford to opt out of the free school due to income constraints.

Suppose you were required to send your child to a residentially assigned public restaurant until they were eighteen years old, because, after all, nutrition may be the most basic right of them all. If your child becomes sick from food poisoning, you may still decide to keep them there based on income restrictions and perceived differences in quality. Of course, the state would need to intervene in order to keep the compulsory public restaurants accountable to minimum safety and, perhaps, taste standards.

Political Process Problems

While state accountability is necessary in the public sphere, we should recognize the shortcomings. First, who is deciding what the standards ought to look like, and how do we keep those people accountable? The commonly cited answer is that state officials are held accountable to the public through the political process. The main problem with that argument is that it assumes that the political process is efficient in holding bureaucrats accountable. 

Inefficiency runs rampant in the political sphere because voters do not have an incentive to become politically knowledgeable. If I am voting in a presidential election, for example, I have around a 1 in 60,000,000 chance of determining the outcome. On the other hand, it is extremely costly to gain information on every policy that a given politician talks about and influences. The counterintuitive result is that voters actually make a rational decision to be politically irrational.

Even if all voters were completely rational, we would still face the problems associated with majority rule. Policies around educational standards result from the most politically powerful groups in society. The consequence is that children from disadvantaged groups are harmed by the uniform set of standards decided by the elites. 

Similarly, suppose we went into the grocery store and voted on the cart that we received. Even if we were in the majority and got the cart that we preferred, we would still end up with some of the things we wanted, and much of what we did not care to have. 

Consequences of Central Planning

I have sat in many rooms filled with intelligent people attempting to determine what educational accountability systems ought to look like. What measures should we focus on? What weights should we assign to each measure? What do we do to schools that do not meet goals? Each individual truly tries their best to improve the educational experiences of all children. However, it is sadly an impossible problem to solve, especially given the constraints of the traditional system of schooling.

Perhaps most importantly, something as small as altering the weight of a certain accountability measure is likely to change the life trajectory of many children. Should character skills assessments count for 11% or 10%? If we arbitrarily decide on 10% rather than 11%, we may very well harm children at the margin who desperately needed behavioral development. The result? Moving the needle in the wrong direction could mean that one more child, at the margin, ends up in prison for the rest of their life.

We should not force children to suffer the consequences of the political process. Instead, we should allow all families to get what they want through voluntary educational choices, regardless of income level or political power.

There was–perhaps still is–a Cuban aphorism that “Sugar is made with blood.” Few people were better situated to comment on what went into producing sugar (consumed all the way from England to India) than those whose labor created it. None knew more intimately than the slave just how much human misery was squeezed into every cup. Sugar and tobacco were the New World’s primary cash crops because their stimulating and addictive chemistries gave European aristocrats incredible amounts of wealth and power. Factory workers dumped sugar into their tea to up calorie counts and make it through the day while corporatists and slave masters reaped a harvest of stimulated profits. The slave’s blood fed the production of cane, and cane fed the new generations of drudge workers. Sugar, in many regards, was made with blood, and history is much the same. But to find out just how sanguine our cup is, we have to be willing to ask disturbing questions. If we want to enjoy tales about the good times and the pleasant things, the heroes and victories, we also have to be direct and honest about our past.

Our newest podcast, Liberty Chronicles, will present listeners with a humane history of Liberty and Power, neither romanticizing the present nor failing to bluntly analyze the past. The saga of human history is incredibly painful and, often, not terribly inspirational. In many ways, it is a long train of cautionary tales, each of which has failed to adequately instruct successive generations. Despite the constant stream of evidence that prosperity requires peaceful cooperation, we consistently fail to improve ourselves. We ignore our true histories–the painful catalog of who exercised violence against whom–to tell myths that temporarily bandage any serious wounds.

To understand more fully who did what to whom and why, we have to be willing to jettison our preconceived notions about the world we know and love. We have to stop trying to justify history and begin really listening to its record. We have to break from the nationalistic, hopeful narratives of an ever-improving synthesis and recognize that the past offers us no nice, neat little lessons or predetermined end-points. Having done these ideological exercises, we can commit ourselves to exploring the past from the perspectives of those actual human beings who created and lived it. With a bit of practice, we can start training ourselves to practice empathy and sympathy by straining to understand people so radically different from ourselves.

Liberty Chronicles combines libertarian methodology with a variety of historical theories and perspectives. We will help listeners eschew academic gatekeepers and propagandizers, taking up Carl Becker’s famous invitation that “Everyman” become “His Own Historian.” We begin today with a discussion of H.L. Mencken’s history of the bathtub and over the next several weeks we will broaden our ideological toolkit to prepare for investigations of our own. Having covered history from above, history from below, Marxism vs. Classical Liberalism, methodological individualism, and conspiracy theory, we will move to the Early Modern period and the development of Liberty and Power in colonial America. From there and then, the battle between those seeking liberty and those seeking power has remained an open contest. Subscribe on your favorite podcatcher, add us on Facebook and Twitter, send us your questions, share the news far and wide all across the land! The history of libertarianism and its war on power is more relevant and necessary now than perhaps ever before.