Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The National Academies of Sciences, Engineering, and Medicine released a major new report on the fiscal and economic impacts of immigration on the United States yesterday. The report is being heralded by all sides of the immigration debate as the most important collection of research on this issue. That reception may owe something to the Academies’ meticulous avoidance of drawing any policy implications from their research, which leaves policy wonks free to draw their own conclusions. Here are my top four policy implications of the new research:

1) Dramatically expanded high-skilled immigration would improve federal and state budgets, while spurring economic growth. The fiscal and economic benefits of high-skilled immigration are tremendous. The net value to the federal budget is between $210,000 and $503,000 for each immigrant with a bachelor’s degree over their lifetime (the full chart below highlights the overall impact). The sections on immigrant entrepreneurship and innovation are also universally positive. “High-skilled immigrants raise patenting per capita, which is likely to boost productivity and per capita economic growth,” they conclude (p. 205).

Exempting the spouses and children of legal immigrants from the numerical caps, as Congress intended, would double the flow of high-skilled immigrants, allowing the United States to capture these benefits.

2) Legalization could hasten assimilation. One conclusion of the report is that wage and language assimilation was slower among the 1995-1999 cohort of immigrants than among the 1975-1979 cohort. The rise of illegal immigration likely explains much of this difference. More than one in four immigrants today is illegally present in the United States. As Douglas Massey has shown, documented and undocumented immigrants earned roughly the same wages until the 1986 law banning the employment of undocumented immigrants depressed the latter group’s wages. Legalization would reverse this.

Moreover, other studies have shown that immigrants who are legalized rapidly increase their earnings and invest in skills, including language acquisition. A legalization program that specifically required language classes, education, and workforce participation while restricting welfare, as the 2013 Senate-passed bill did, would further enhance the gains from legalization.

3) A large guest worker program can mitigate the negative fiscal impacts of low-skilled immigration. The most negative finding in the report is that the lowest skilled immigrants have negative fiscal impacts, but those impacts are entirely driven by costs in childhood and retirement, as the figure below from the report shows (p. 331). A large guest worker program that allowed low-skilled immigrants with less than a high school degree to enter during their prime years and retire in their home country would be a strong fiscal gain for the United States.

4) Governments should strengthen the wall around the welfare state. The positive fiscal gains from immigration could be improved by limiting immigrants’ access to benefits. As I have shown before, immigrants overall did very well after benefits were partially restricted in 1996, and my colleagues have detailed a number of ways that these barriers could be reinforced. One particular insight of the report is that most of the welfare usage comes after retirement, so that should be a focus of reform.

There are many other implications of this report, but these four are enough for Congress to get started on.

On Tuesday, President Obama delivered a short address to the Leaders Summit on Refugees at the United Nations. He went out of his way to praise the Mexican government, stating: “Mexico … is absorbing a great number of refugees from Central America.”

In reality, the Mexican government has done very little to absorb refugees. From 2013 to 2015, Mexico recognized only 720 refugees from Honduras, 721 from El Salvador, and 62 from Guatemala. During the same period, Mexico granted asylum to 129 Hondurans, 82 Salvadorans, and 17 Guatemalans. That’s a total of 1,731 refugees and asylum recipients from those countries. Only 83 of them were children.

In 2015 alone, Mexico deported 175,136 people to Honduras, Guatemala, and El Salvador - more than 100 times as many as were accepted by the humanitarian visa programs from 2013 to 2015.    
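For readers who want to check the arithmetic, here is a quick sketch using only the figures reported above (a verification of the totals, not new data):

    # Figures reported above for Mexico's humanitarian programs, 2013-2015.
    refugees_recognized = 720 + 721 + 62      # Honduras, El Salvador, Guatemala
    asylum_granted = 129 + 82 + 17
    total_protected = refugees_recognized + asylum_granted
    print(total_protected)                    # 1,731

    # Deportations to those three countries in 2015 alone.
    deported_2015 = 175_136
    print(deported_2015 / total_protected)    # roughly 101, i.e. "more than 100 times"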

Instead, President Obama should have thanked the Mexican government for enforcing American immigration laws in a way that shields his administration from criticism.  Mexico has improved its immigration laws in recent years but refugee and asylum laws are one area still in desperate need of reform.  Let’s not let flowery speeches obscure the reality.

Thanks to Bryan Johnson for bringing this to my attention and Guillermina Sutter Schneider for her translation of Mexican government documents. 

George Will writes in his column today about the importance of the Port of Charleston – and by extension, trade – to the economy of South Carolina. The recent completion of the 10-year project to widen the Panama Canal, which now accommodates more traffic and the passage of a new class of container ships with nearly triple the capacity of their immediate predecessors, has exposed a logistics snafu that could cost South Carolina’s economy billions of dollars: Charleston Harbor is too shallow to accommodate these much larger, “Post-Panamax” ships efficiently (only limited sections of the harbor are deep enough, and only during high tide).

According to the American Society of Civil Engineers, these vessels can lower shipping costs by 15-20 percent, but harbors need to be at least 47 feet deep to accommodate them. The U.S. Army Corps of Engineers reports that only seven of the 44 major U.S. Gulf Coast and Atlantic ports are “Post-Panamax ready.” American ports must be modernized if the United States is going to continue to succeed at attracting investment in manufacturing and if U.S. companies are going to compete successfully in the global economy.

As I wrote in the Wall Street Journal last year:

The absence of suitable harbors, especially in the fast-growing Southeast, means fewer infrastructure- and business-development projects to undergird regional growth. It also means that Post-Panamax ships will have to continue calling on West Coast ports, where their containers will be put on trucks and railcars to get products from Asia to the U.S. East and Midwest—a slower and more expensive process.

The problem can be traced to one major issue: funding. And that issue is made more complicated by another problem: protectionism. Most infrastructure funding inevitably comes from federal and state budgets – that is, from taxpayers, who should have a voice in the debate about whether these infrastructure projects constitute wise public investments. But a couple of long-standing, though obscure, protectionist laws have conspired to reduce capacity in dredging services, ensuring that projects take twice as long and cost twice as much as they should.

As I wrote in the WSJ:

This capacity shortage is the result of the Foreign Dredge Act of 1906 and the Merchant Marine Act of 1920 (aka the Jones Act). These laws prohibit foreign-built, -chartered, or -operated dredgers from competing in the U.S. The result is a domestic dredging industry that is immune to competition, has little incentive to invest in new equipment, and cannot meet the growing demand for dredging projects at U.S. ports.

For the next few years, federal, state and local government spending on dredging is expected to be about $2 billion annually. That spending will be supplemented by investments from U.S. ports and their private terminal partners to the tune of $9 billion a year to build and upgrade harbors, docks, terminals, connecting roads and rail, and storage facilities, as well as to purchase cranes and other equipment. There would be a lot more of these job-creating investments if European dredging companies were allowed to offer their services.

The Transatlantic trade talks offer a great opportunity to fix this problem. The best dredging companies in the world are European, mainly from the low-lying countries of Belgium and the Netherlands, where mastery of marine engineering projects has been developed over the centuries.

Industry analysts at Samuels International Associates estimate that European dredgers could save U.S. taxpayers $1 billion a year on current projects, and enable more projects to be completed more quickly. The European Dredging Association boasts that its member companies win 90% of the world’s projects that are open to foreign competition.

In a global economy where capital is mobile, workforce skills, the cost of regulation, taxes, energy costs, proximity to suppliers and customers and dozens of other criteria factor into where a company will invest. And for companies with transnational supply chains, transportation costs are crucial considerations.

Today the U.S. is falling behind…

Over at Café Hayek today, Don Boudreaux assesses Will’s piece and offers an excellent analogy between administrative protectionism (tariffs and the like) and physical protectionism (harbor disrepair), which reminded me of this masterful passage in which Frédéric Bastiat, with sweeping brilliance and simplicity, equates tariffs with physical impediments to trade:

Between Paris and Brussels obstacles of many kinds exist. First of all, there is distance, which entails loss of time, and we must either submit to this ourselves, or pay another to submit to it. Then come rivers, marshes, accidents, bad roads, which are so many difficulties to be surmounted. We succeed in building bridges, in forming roads, and making them smoother by pavements, iron rails, etc. But all this is costly, and the commodity must be made to bear the cost. Then there are robbers who infest the roads, and a body of police must be kept up, etc. Now, among these obstacles there is one which we have ourselves set up, and at no little cost, too, between Brussels and Paris. There are men who lie in ambuscade along the frontier, armed to the teeth, and whose business it is to throw difficulties in the way of transporting merchandise from the one country to the other. They are called Customhouse officers, and they act in precisely the same way as ruts and bad roads.

A recent police-involved shooting in Charlotte, North Carolina helps illustrate the importance of body cameras and why North Carolina’s body camera law is misguided and unhelpful. Last night, Governor Pat McCrory declared a state of emergency following protests over the shooting of Keith Scott, who was shot and killed by an officer on Tuesday. The protests have left one citizen on life support and numerous police officers injured. The National Guard has been deployed. Making footage of the shooting publicly available would show that the Charlotte-Mecklenburg Police Department is dedicated to accountability and transparency while providing Charlotte residents with valuable information about the police who are tasked with protecting their rights.

Although the officer who shot Scott was not wearing a body camera, three officers at the scene were. There are concerns associated with making this body camera footage available to the public. But in my Cato Policy Analysis “Watching the Watchmen” I outline policies that I think balance citizens’ privacy with the need to increase accountability and transparency.

Many routine police interactions with citizens present significant privacy concerns. What about when police talk to informants, children, or victims of sexual assault? What if a citizen is naked, or is the victim of a traffic accident? What about footage that shows someone’s bedroom or living room? Police regularly interact with people experiencing one of the worst days of their lives, and it would be irresponsible to think that a desire to increase police accountability outweighs these privacy concerns.

Nonetheless, the Scott shooting is an example of the kind of police encounter that presents few privacy concerns. Scott was outside, in a parking lot. He and the responding officers did not have a reasonable expectation of privacy. The shooting, like the Walter Scott shooting, could have been filmed by passersby. Nothing in the reporting I have seen suggests that Scott was naked, intoxicated, or blurting out confidential information. According to Charlotte-Mecklenburg Police Chief Kerr Putney, Scott refused repeated demands to put down a gun. Scott’s family claims he was reading a book in his vehicle.

A North Carolina law that takes effect next week heavily restricts access to body camera footage, requiring members of the public to obtain a court order before accessing video. Although the law is not yet in force, Charlotte-Mecklenburg Police Chief Kerr Putney has cited it while discussing his decision not to release the footage. He has, however, stated that Scott’s family will be able to view the relevant footage, which he says does not show Scott definitively pointing a gun at anyone.

Footage of the Keith Scott shooting is not the kind of footage sometimes mentioned in discussions about which body camera footage should be exempt from public release requests. Scott was outside, and his shooting is clearly of interest to the public. As such, footage of the shooting should be released.

Yesterday, Hillary announced her latest policy prescription to increase low-cost housing. Don’t hold your breath: it’s anything but original. The basic prescription is simply to double down on tax subsidies for housing developers.

To that end, Hillary proposes enlarging the Low Income Housing Tax Credit (LIHTC) program and shifting the tax burden from housing developers and financial institutions back to taxpayers.

Here are a few reasons she should reconsider:

  1. The Low Income Housing Tax Credit program (hereafter “the subsidy”) crowds out market-provided low-cost housing. That means taxpayers are paying for low-cost housing that the market would otherwise provide at no cost to them.
  2. The IRS has proven entirely inept in its role as administrator of the subsidy. This is not a controversial point (the Government Accountability Office agrees).
  3. The subsidy has a highly fragmented, complex system of delivery, which means it is inefficient, and by extension, expensive.
  4. As a consequence, the subsidy doesn’t even stack up well against comparable housing subsidies: research describes the subsidy as 19-44% more expensive than comparable housing subsidies.
  5. To make matters worse, the subsidy is often not viable as a stand-alone. Forty percent or more of housing units receiving this subsidy end up utilizing other subsidies, while they’re at it.
  6. The subsidy is a tax expenditure and as such does not appear as an outlay on the federal budget. This means that Congress never has to confront any of the problems noted to this point.

Still unconvinced? Here are a few more reasons why expansions of the Low Income Housing Tax Credit program should be opposed.

In yesterday’s Washington Post, a headline proclaimed: “Saudi Arabia is Facing Unprecedented Scrutiny from Congress.” The article focused on a recently defeated Senate resolution that sought to express disapproval of a pending $1.15 billion arms sale to Saudi Arabia. Though the emergence of a genuine debate on U.S. support for Saudi Arabia – and the ongoing war in Yemen – is a good sign, Congress has unfortunately been unable so far to turn that debate into any meaningful action.

Yesterday’s resolution, proposed by Kentucky Senator Rand Paul and Connecticut Senator Chris Murphy, would have been primarily symbolic. Indeed, support for the bill wasn’t really about impacting Saudi Arabia’s military capacity. As co-sponsor Sen. Al Franken noted, “the very fact that we are voting on it today sends a very important message to the kingdom of Saudi Arabia that we are watching your actions closely and that the United States is not going to turn a blind eye to the indiscriminate killing of men, women and children.” This message was intended as much for the White House as for the Saudi government, with supporters arguing that the Obama administration should rethink its logistical support for the war in Yemen.

Unfortunately, opponents of the measure carried the day, and the resolution was defeated 71-26. These senators mostly argued that the importance of supporting regional allies outweighed any problems. Yet in doing so, they sought to avoid debate on the many problems in today’s U.S.-Saudi relationship. In addition to the war in Yemen – which is in many ways directly detrimental to U.S. national security interests, destabilizing that country and allowing for the growth of extremist groups there – Saudi Arabia’s actions across the Middle East and its funding of fundamentalism around the world are often at odds with U.S. interests, even as the kingdom works closely with the United States on counterterror issues. As a recent New York Times article noted, in the world of violent jihadist extremism, the Saudis are too often “both the arsonists and the firefighters.”

Despite these problems, the growing debate over the U.S.-Saudi relationship in Congress has yielded few results. A previous arms bill proposed by Senators Murphy and Paul also failed to gain traction. That measure would have barred the sale of air-to-ground munitions to the Saudis until the President could certify that they were actively working against terrorist groups and seeking to avoid civilian casualties inside Yemen. Though it had the potential to actually slow the Saudi war effort and protect civilians inside Yemen, the bill has languished in committee since April.

Worse, the only concrete measure passed by Congress on this issue is counterproductive. The Justice Against Sponsors of Terrorism Act (JASTA) would lift various sovereign immunity protections, allowing the families of 9/11 victims to directly sue the Saudi government. Yet even with the release of the 9/11 report’s missing 28 pages, there is no evidence that the Saudi government - as opposed to individuals within the country - financed al Qaeda. The JASTA bill thus has few positive impacts, but creates a worrying precedent for the United States: allowing citizens to sue a foreign government implies that other states may let their citizens sue the United States over issues like drone strikes. A presidential veto is expected as a result, and many in Congress are having second thoughts about a veto override.

So despite the headlines, Congress has had a fairly limited impact on the U.S.-Saudi relationship. But the simple fact that debate is occurring on Capitol Hill is a positive sign. Perhaps it can convince the Saudi government to reconsider some of their destabilizing actions in the Middle East, particularly the horrible humanitarian toll of the conflict in Yemen. At the very least, an active debate in Congress can help to remind the White House that our interests and Saudi Arabia’s don’t always align.  

In the latest issue of Cato Journal, I review Casey Mulligan’s book, Side Effects and Complications: The Economic Consequences of Health-Care Reform.

Some ACA supporters claim that, aside from a reduction in the number of uninsured, there is no evidence the ACA is having the effects Mulligan predicts. The responsible ones note that it is difficult to isolate the ACA’s effects, given that it was enacted at the nadir of the Great Recession, that anticipation and implementation of its provisions coincided with the recovery, and that administrative and congressional action have delayed implementation of many of its taxes on labor (the employer mandate, the Cadillac tax). There is ample evidence that, at least beneath the aggregate figures, employers and workers are responding to the ACA’s implicit taxes on labor…

Side Effects and Complications brings transparency to a law whose authors designed it to be opaque.

Have a look (pp. 734-739).

I was just about to treat myself to a little R&R last Friday when — wouldn’t you know it? — I received an email message from the Brookings Institution’s Hutchins Center. The message alerted me to a new Brookings Paper by former Minneapolis Fed President Narayana Kocherlakota. The paper’s thesis, according to Hutchins Center Director David Wessel’s summary, is that the Fed “was — and still is — trapped by adherence to rules.”

Having recently presided over a joint Mercatus-Cato conference on “Monetary Rules for a Post-Crisis World” in which every participant, whether favoring rules or not, took for granted that the Fed is a discretionary monetary authority if there ever was one, I naturally wondered how Professor Kocherlakota could claim otherwise. I also wondered whether the sponsors and supporters of the Fed Oversight Reform and Modernization (FORM) Act realize that they’ve been tilting at windmills, since the measure they’ve proposed would only require the FOMC to do what Kocherlakota says it’s been doing all along.

So, instead of making haste to my favorite watering hole, I spent my late Friday afternoon reading, “Rules versus Discretion: A Reconsideration.” And a remarkable read it is, for it consists of nothing less than an attempt to champion the Fed’s command of unlimited discretionary powers by referring to its past misuse of what everyone has long assumed to be those very powers!

To pull off this seemingly impossible feat, Kocherlakota must show that, despite what others may think, the FOMC’s past mistakes, including those committed during and since the recent crisis, have been due, not to the mistaken actions of a discretionary FOMC, but to that body’s ironclad commitment to monetary rules, and to the Taylor Rule especially.

Those who have paid any attention to John Taylor’s own writings on the crisis and recovery will not be surprised to discover that his own response to Kocherlakota’s article is less than enthusiastic, to put it gently. As Taylor himself exposes many of the more egregious shortcomings of Kocherlakota’s paper, I’ll concentrate on others that Taylor doesn’t address.

A Fanciful Consensus

These start with Kocherlakota’s opening sentence, declaring that “Over the past forty years, a broad consensus has developed among academic macroeconomists that policymakers’ choices should closely track predetermined rules.” That sentence is followed by others referring to “the consensus that favors the use of rules over discretion in the making of monetary policy” and to the “conventional wisdom” favoring the same.

That such a broad consensus favoring rules exists is news to me; I suspect, moreover, that it will come as a surprise to many other monetary economists. For while it’s true that John Taylor himself claimed, in a passage cited by Kocherlakota, that a “substantial consensus” exists regarding the fact “that policy rules have major advantages over discretion,” Taylor wrote this in 1992, when both the Great Moderation and Taylor’s own research, not to mention the work of earlier monetarists, appeared to supply a strong prima-facie case for rules over discretion. To say that this strong case had as its counterpart a “broad consensus” favoring strict monetary rules in practice seems to me to be stretching things even with regard to that period. In any case it can hardly be supposed that the consensus that may have been gathering then has remained intact since!

Instead, as everyone knows, the crisis, whether for good reasons or bad ones, led to a great revival of “Keynesian” thinking, with its preference for discretionary tinkering. To suggest, as Kocherlakota does, that monetarist ideas — and a preference for monetary rules over discretion is essentially monetarist — have remained as firmly in the saddle throughout the last decade as they may have been in 1992 is to indulge in anachronism.

How does Kocherlakota manage to overlook all of this? He does so, in part, by confusing the analytical devices employed by most contemporary macroeconomists, including new-Keynesians, with the policy preferences of those same macroeconomists. Thus he observes that “Most academic papers in monetary economics treat policymakers as mere error terms on a pre-specified feedback rule” and that “Most modern central bank staffs model their policymaker bosses in exactly the same way.” These claims are valid enough in themselves. But they point, not to the policy preferences of the economists in question, but merely to the fact that in formal economic models every aspect of economic reality that’s represented at all is represented by one or more equations.

In the Kydland-Prescott model, for example, a discretionary monetary policy is represented by a desired future rate of inflation, where that rate depends in turn on current levels of various “state” variables; the rate is, to employ the phrase Kocherlakota himself employs in describing rule-based policy, “a fixed function of some publicly observable information.” Discretion consists, not of the absence of a policy function, but in the fact that an optimal policy is chosen in each period. (The rule for which Kydland and Prescott argue consists, in contrast, of having policymakers pick a low inflation rate and commit to stick to it come what may.) This example alone should suffice to make it perfectly clear, if it isn’t so already, that representing monetary policy with a formula, and hence with what might be regarded as a monetary rule of sorts, is hardly the same thing as favoring either the particular rule the formula represents, or monetary rules generally.

Inputs aren’t Injunctions

Suppose that we nevertheless allow that most monetary economists and policy makers favor rules. Doing so certainly makes Kocherlakota’s claim that the Fed has been rule-bound all along appear more plausible. But it hardly suffices to establish that claim’s truth. How, then, does Kocherlakota do that? He does it, or attempts to do it, by misrepresenting the part that the Taylor Rule plays in the Fed’s deliberations, and by artful equivocation.

The misrepresentation consists of Kocherlakota’s confounding a mere input into the Fed’s post-1993 policymaking with a rule that the Fed was bound to obey. Starting not long after 1993, when Taylor published his now famous paper showing that, over the course of the preceding decade or so, the Fed behaved as if it had been following a simple feedback rule, the Fed began to actually employ versions of what had by then come to be known as the Taylor Rule to inform its policy decisions. In particular, board staff began supplying the FOMC with baseline inflation and unemployment rate forecasts based on the assumption that the Fed adhered to a Taylor Rule.
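For readers who haven’t seen it, the rule at the center of this story is a simple formula. The sketch below uses Taylor’s original 1993 coefficients and his 2 percent values for the equilibrium real rate and the inflation target; it illustrates the rule itself, not the particular specification the Fed staff built into its projections.

    def taylor_rule_rate(inflation, output_gap, real_rate=2.0, inflation_target=2.0):
        """Federal funds rate (percent) implied by Taylor's original 1993 rule.

        inflation  -- current inflation rate, in percent
        output_gap -- percent deviation of real GDP from its potential level
        """
        return (real_rate + inflation
                + 0.5 * (inflation - inflation_target)
                + 0.5 * output_gap)

    # Example: 3 percent inflation with output 1 percent above potential
    # implies a funds rate of about 6 percent under the rule.
    print(taylor_rule_rate(inflation=3.0, output_gap=1.0))  # 6.0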

It is to these forecasts or “projections” that Kocherlakota refers in claiming both that the Fed was “unwilling to deviate greatly from the recommendations of the Taylor Rule” and that its poor handling of the crisis and recovery were instances of the failure of that rule. As Taylor explains (for I can’t do any better), Kocherlakota’s proof consists of nothing save

an informal and judgmental comparison of the Fed staff’s model simulations and a survey of future interest rate predictions of FOMC members at two points in time (2009 and 2010). He observes that the Fed staff’s model simulations for future years were based on a Taylor rule, and FOMC participants were asked, “Does your view of the appropriate path for monetary policy [or interest rates in 2009] differ materially from that [or the interest rate in 2009] assumed by the staff.” However, a majority (20 out of 35) of the answers were “yes,” which hardly sounds like the Fed was following the Taylor rule. Moreover, these are future estimates of decisions not actual decisions, and the actual decisions turned out much different from forecast.

As for equivocation, Kocherlakota begins his paper by referring to Kydland and Prescott’s finding that (in Kocherlakota’s words) “to require monetary policymakers to follow a pre-determined rule” would enhance welfare (my emphasis). He thus understands a “monetary rule” to be, not merely a convenient rule-of-thumb, but a formula that must be followed, which is only proper, since that is the understanding that has been shared by all proponents of rules both before and since Kydland and Prescott’s famous contribution. But when it comes to establishing that the FOMC has been committed to the Taylor Rule all along, he speaks, not of the FOMC’s having had no choice but to adhere to that rule, but of its “unwillingness to deviate from” it, of its understanding that the rule is “a useful” or “key” “guide to policy,” and of its “reliance” upon it.

The plain truth is that the FOMC’s members have long been entirely free to make any decisions they like, including decisions that deviate substantially from the Taylor Rule owing to their consideration of “non-rulable information” — Kocherlakota’s term for the sort of information that formal rules can’t take into account. To the extent that they so deviated (and John Taylor himself insists that they deviated a great deal), they faced no sanctions of any kind — not even such desultory sanctions as the FORM Act would impose, were it to become law. What’s more, Kocherlakota himself understands that they were free to deviate as much as they liked, for he goes on to answer in the affirmative the question, “Could the FOMC Have Done Anything Differently?” What Kocherlakota apparently fails to appreciate is that an FOMC that could have done things differently is ipso facto one that was not genuinely “rule-bound.”

Theory and Practice

In light of all this, what merit is there to Kocherlakota’s formal demonstration, near the end of his paper, of the superiority of discretion over rules? Not much. For once one recognizes that, if the FOMC allowed itself to be guided by the Taylor Rule, it did so voluntarily, then one must conclude that its conduct was that of an essentially discretionary policy regime. It follows that, if Kocherlakota’s formal model of discretionary policy were reliable, it would predict that a discretionary Fed confronted by the same “environment” faced by the actual Fed would do just what the actual Fed did, including (perhaps) following a faulty monetary rule, rather than something wiser.

Suppose, on the other hand, that Kocherlakota’s model of discretion did predict that a legally discretionary FOMC might slavishly follow a severely flawed rule. What policy lesson could one draw from such a model, other than the lesson that unlimited monetary discretion is a bad thing, and that the only way out is to impose upon the FOMC a different and better monetary rule than the one the FOMC would voluntarily adopt were it left to its own devices?

To state the point differently, there are not two but three conceivable FOMCs to be considered in properly assessing the merits of discretion. There is, first of all, the actual FOMC which, according to Kocherlakota, followed a (bad) rule, though it did so of its own volition. Then there’s Kocherlakota’s ultra-discretionary FOMC, which uses discretion, not the way the FOMC actually used it, but to do just the right thing, or at least something a lot better than what the actual FOMC did. Finally, there is a genuinely rule-bound FOMC, where the rule may differ from one that the FOMC might voluntarily follow if it could. The third possibility is one that Kocherlakota altogether ignores. That matters, because even if Kocherlakota’s ultra-discretionary Fed is the best of the three, that fact would matter only if he told us how to make an already legally discretionary FOMC do what his ultra discretionary FOMC does. Since he does nothing of the sort, his ultra-discretionary FOMC is a mere chimera.

If, on the other hand, we can identify a rule that does better than the FOMC’s favorite rule, supposing that it really has one, then we could really improve things by forcing the FOMC to follow that rule. Imaginary discretion beats both a bad monetary rule and actual discretion that depends on such a rule; but a better rule beats imaginary discretion, because a better rule is not merely something one can imagine, but a real policy alternative.

Kydland, Prescott, and Those Dead Guys

Finally, a word or two concerning Kocherlakota’s scholarship. Of the many general arguments favoring monetary rules over monetary discretion, he refers only to that of Kydland and Prescott, in which the authorities are modeled as being both equipped with all the pertinent information needed to wield discretion responsibly, and free from any inclination to abuse their powers. What’s remarkable about Kydland and Prescott is, not that by making these assumptions they were more faithful to reality than past advocates of monetary rules, who based their arguments on appeals to limited information (and monetary authorities’ limited forecasting powers especially) and the potential for abuse, but that despite assuming away the problems past rule advocates had emphasized, they were still able to make a powerful case for monetary rules!

A compelling case for discretion must, on the other hand, answer not only Kydland and Prescott’s argument, but also the less subtle but no less important arguments of Henry Simons, Milton Friedman, and Jacob Viner, among others. Despite his conclusion that “there are good reasons to believe that societies will achieve better outcomes if central banks are given complete discretion to pursue well-specified goals,” Kocherlakota never really does this. Instead, his demonstrations make only very limited allowances for those central-banker infirmities that caused early exponents of rules to plead for them in the first place. In particular, he allows that central bankers may suffer from an “inflation bias.” But he does not allow for the many other political as well as cognitive biases to which central bankers may be subject. More importantly, he does not allow for the very real possibilities that central bankers might respond to “non-rulable” information inappropriately, or that such information might be inaccurate or otherwise misleading.*

More egregious still is Kocherlakota’s failure to refer to any work by John Taylor save his 1993 paper. Since Kocherlakota comes within an ace of blaming Taylor for the fact that the U.S. economy has gone to seed, you’d think that he would at least acknowledge Taylor’s own rather different opinion on the matter. Instead he leaves his readers with the impression that Taylor himself believes that his rule remains the centerpiece of a “broad consensus” in which the Fed itself takes part. As Taylor points to some evidence to the contrary in his own reply to Kocherlakota, I’ll simply observe that, if Taylor believed that the Fed stuck to his rule in the years surrounding the subprime debacle, he wouldn’t have called his book on the Fed’s role in that debacle Getting Off Track.

In short, Kocherlakota’s attempt to treat the Fed’s failures as proof of the desirability of monetary discretion is as unsuccessful as it is bold. He might, after all, have spared himself the effort, had he only kept in mind an advantage of discretion that even its most determined opponents aren’t likely to deny, to wit: that it’s the bigger part of valor.

____________________

*Even so, Kocherlakota’s formal demonstration still favors a rule over discretion in the event that “the bias of the central bank exceeds the standard deviation of the central bank’s non-rulable information.”

[Cross-posted from Alt-M.org]

When writing a few days ago about the newly updated numbers from Economic Freedom of the World, I mentioned in passing that New Zealand deserves praise “for big reforms in the right direction.”

And when I say big reforms, this isn’t exaggeration or puffery.

Back in 1975, New Zealand’s score from EFW was only 5.60. To put that in perspective, Greece’s score today is 6.93 and France’s is 7.30. In other words, New Zealand was a statist basket case 40 years ago, with a degree of economic liberty akin to where Ethiopia is today and below the scores we now see in economically unfree nations such as Ukraine and Pakistan.

But then policy began to move in the right direction; between 1985 and 1995 especially, the country became a Mecca for market-oriented reforms. The net result is that New Zealand’s score dramatically improved and it is now comfortably ensconced in the top-5 for economic freedom, usually trailing only Hong Kong and Singapore.

To appreciate what’s happened in New Zealand, let’s look at excerpts from a 2004 speech by Maurice McTigue, who served in the New Zealand parliament and held several ministerial positions.

He starts with a description of the dire situation that existed prior to the big wave of reform.

New Zealand’s per capita income in the period prior to the late 1950s was right around number three in the world, behind the United States and Canada. But by 1984, its per capita income had sunk to 27th in the world, alongside Portugal and Turkey. Not only that, but our unemployment rate was 11.6 percent, we’d had 23 successive years of deficits (sometimes ranging as high as 40 percent of GDP), our debt had grown to 65 percent of GDP, and our credit ratings were continually being downgraded. Government spending was a full 44 percent of GDP, investment capital was exiting in huge quantities, and government controls and micromanagement were pervasive at every level of the economy. We had foreign exchange controls that meant I couldn’t buy a subscription to The Economist magazine without the permission of the Minister of Finance. I couldn’t buy shares in a foreign company without surrendering my citizenship. There were price controls on all goods and services, on all shops and on all service industries. There were wage controls and wage freezes. I couldn’t pay my employees more—or pay them bonuses—if I wanted to. There were import controls on the goods that I could bring into the country. There were massive levels of subsidies on industries in order to keep them viable. Young people were leaving in droves.

Maurice then discusses the various market-oriented reforms that took place, including spending restraint.

What’s especially impressive is that New Zealand dramatically shrank government bureaucracies.

When we started this process with the Department of Transportation, it had 5,600 employees. When we finished, it had 53. When we started with the Forest Service, it had 17,000 employees. When we finished, it had 17. When we applied it to the Ministry of Works, it had 28,000 employees. I used to be Minister of Works, and ended up being the only employee… if you say to me, “But you killed all those jobs!”—well, that’s just not true. The government stopped employing people in those jobs, but the need for the jobs didn’t disappear. I visited some of the forestry workers some months after they’d lost their government jobs, and they were quite happy. They told me that they were now earning about three times what they used to earn—on top of which, they were surprised to learn that they could do about 60 percent more than they used to!

And there was lots of privatization.

[W]e sold off telecommunications, airlines, irrigation schemes, computing services, government printing offices, insurance companies, banks, securities, mortgages, railways, bus services, hotels, shipping lines, agricultural advisory services, etc. In the main, when we sold those things off, their productivity went up and the cost of their services went down, translating into major gains for the economy. Furthermore, we decided that other agencies should be run as profit-making and tax-paying enterprises by government. For instance, the air traffic control system was made into a stand-alone company, given instructions that it had to make an acceptable rate of return and pay taxes, and told that it couldn’t get any investment capital from its owner (the government). We did that with about 35 agencies. Together, these used to cost us about one billion dollars per year; now they produced about one billion dollars per year in revenues and taxes.

Equally impressive, New Zealand got rid of all farm subsidies… and got excellent results.

[A]s we took government support away from industry, it was widely predicted that there would be a massive exodus of people. But that didn’t happen. To give you one example, we lost only about three-quarters of one percent of the farming enterprises—and these were people who shouldn’t have been farming in the first place. In addition, some predicted a major move towards corporate as opposed to family farming. But we’ve seen exactly the reverse. Corporate farming moved out and family farming expanded.

Maurice also has a great segment on education reform, which included school choice.

But since I’m a fiscal policy wonk, I want to highlight this excerpt on the tax reforms.

We lowered the high income tax rate from 66 to 33 percent, and set that flat rate for high-income earners. In addition, we brought the low end down from 38 to 19 percent, which became the flat rate for low-income earners. We then set a consumption tax rate of 10 percent and eliminated all other taxes—capital gains taxes, property taxes, etc. We carefully designed this system to produce exactly the same revenue as we were getting before and presented it to the public as a zero sum game. But what actually happened was that we received 20 percent more revenue than before. Why? We hadn’t allowed for the increase in voluntary compliance.

And I assume revenue also climbed because of Laffer Curve-type economic feedback. When more people hold jobs and earn higher incomes, the government gets a slice of that additional income.

Let’s wrap this up with a look at what New Zealand has done to constrain the burden of government spending. If you review my table of Golden Rule success stories, you’ll see that the nation got great results with a five-year spending freeze in the early 1990s. Government shrank substantially as a share of GDP.

Then, for many years, the spending burden was relatively stable as a share of economic output, before then climbing when the recession hit at the end of the last decade.

But look at what’s happened since then. The New Zealand government has imposed genuine spending restraint, with outlays climbing by an average of 1.88 percent annually according to IMF data. And because that complies with my Golden Rule (meaning that government spending is growing slower than the private sector), the net result according to OECD data is that the burden of government spending is shrinking relative to the size of the economy’s productive sector.
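Here is a minimal sketch of the arithmetic behind that claim, using the 1.88 percent outlay growth cited above and a purely hypothetical 4 percent rate of nominal GDP growth (the starting spending share is also assumed for illustration):

    # If nominal outlays grow 1.88 percent a year while nominal GDP grows
    # faster (say, 4 percent), the spending share of GDP must shrink.
    spending, gdp = 40.0, 100.0      # assumed starting point: spending = 40% of GDP
    for year in range(1, 11):
        spending *= 1.0188           # outlay growth reported in the IMF data
        gdp *= 1.04                  # assumed nominal GDP growth
        print(f"Year {year}: spending/GDP = {spending / gdp:.1%}")
    # After a decade the ratio falls from 40 percent to roughly 32.6 percent.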

P.S. For what it’s worth, the OECD and IMF use different methodologies when calculating the size of government in New Zealand (the IMF says the overall burden of spending is much smaller, closer to 30 percent of GDP). But regardless of which set of numbers is used, the trend line is still positive.

P.P.S. Speaking of statistical quirks, some readers have noticed that there are two sets of data in Economic Freedom of the World, so there are slightly different country scores when looking at chain-weighted data. There’s a boring methodological reason for this, but it doesn’t have any measurable impact when looking at trends for individual nations such as New Zealand.

P.P.P.S. Since the Kiwis in New Zealand are big rugby rivals with their cousins in Australia, one hopes New Zealand’s high score for economic freedom (3rd place) will motivate the Aussies (10th place) to engage in another wave of reform. Australia has some good polices, such as a private Social Security system, but it would become much more competitive if it lowered its punitive top income tax rate (nearly 50 percent!).

Today, legislative efforts began in eleven cities aimed at requiring police departments to be more transparent about the surveillance technology they use. The bills will also reportedly propose increased community control over the use of surveillance tools. These efforts, spearheaded by the ACLU and other civil liberties organizations, are important at a time when surveillance technology is improving and is sometimes used without the knowledge or approval of local officials or the public.

Many readers will be familiar with CCTV cameras and wiretap technology, which police use to investigate crimes and gather evidence. Yet there is a wide range of surveillance tools that are less well-known and will become more intrusive as technology advances.

Facial recognition software is already used by some police departments. As this technology improves it will be easier for police to identify citizens, especially if it is used in conjunction with body cameras. But our faces are not our only biometric identifiers. Technology in the near future will make it easier to identify us by analyzing our gait, voice, irises, and ears.

This is concerning given that (thanks to the state of legal doctrine) police are not carrying out a Fourth Amendment search when they analyze your features with biometric tools. As Duke law professor Nita Farahany explained to the Senate Judiciary Subcommittee on Privacy, Technology and the Law in 2012:

If the police use facial recognition technology to scan an individual’s face while in a public place, and that individual is not detained or touched as he is scanned, then no Fourth Amendment search has occurred. Neither his person nor his effects have been disturbed, and he lacks any legal source to support a reasonable expectation that his facial features will be hidden from government observation. He has chosen to present his face to the world, and he must expect that the world, including the police, may be watching.

Even features below your skin may soon be used by police to identify you with ease. British and American Intelligence officials reportedly used vein recognition analysis while seeking to identify “Jihadi John,” a British terrorist responsible for a string of beheadings in Syria.

Potential terrorism and national security concerns are often cited to justify the secrecy surrounding the “Stingray.” Stingrays, which work by mimicking cell-towers, collect identifying data from cellphones within range. This data allows investigators to track and identify targets. Although dozens of law enforcement agencies have used Stingrays, their use is shrouded in secrecy thanks in part to FBI non-disclosure agreements. The Federal Communications Commission (which oversees radio-emitting tools) has granted the FBI authority over the regulation of Stingrays. The FBI’s non-disclosure agreements are so secretive that they can require prosecutors to drop charges rather than publicly reveal that a Stingray has been used.

As my colleague Adam Bates has explained, Stingrays are almost certainly overwhelmingly used for routine investigations that have nothing to do with terrorism:

Despite repeated references to “terrorists” and “national security” as a means for maintaining secrecy about Stingray use, the data that has been released detailing the purposes of actual Stingray investigations - such as this breakdown from the Tallahassee Police Department that contains not a single terrorism reference - suggests that Stingrays are used virtually entirely for routine law enforcement investigations.

Cell tower simulators can be mounted on airplanes, but advances in drone technology mean that flying surveillance tools above cities won’t always require manned aircraft. The prospect of cell tower simulators mounted on huge solar-powered drones capable of staying aloft for months is worrying enough, but as technology improves drones will be getting smaller as well as bigger. Drones the size of small birds have already been developed and used for military operations, and we should expect similar drones to be routinely used by domestic law enforcement in the not-too-distant future.

As I pointed out yesterday, persistent surveillance technology, which provides users with highly detailed views of areas the size of an entire town, can be mounted on drones and has already been used by the military. Less invasive but nonetheless worrying persistent aerial surveillance technology has been used in Baltimore. Many Baltimore officials were not told about the surveillance, which began in January and is funded by a billionaire couple.

This level of secrecy surrounding the use and funding of surveillance is one of the issues that activists are hoping to address in their campaign. According to Community Control Over Police Surveillance’s guiding principles, “Surveillance technologies should not be funded, acquired or used without the knowledge of the public and the approval of their elected representatives on the city council.”

Even staying indoors will not necessarily keep you safe from secretive police snooping. Last year it was revealed that the New York Police Department had been using x-ray vans capable of seeing through the walls of buildings and vehicles. City officials refused to answer basic questions about the vans related to funding and judicial authorization.

Law enforcement agencies across the country will continue to take advantage of surveillance technology as it improves. Lawmakers can ensure that police departments are transparent about the kind of surveillance tools they’re using and how these tools are paid for. This is especially important given the rate of technological change and the history of police secrecy.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our “Current Wisdom.”

In our continuing examination of U.S. flood events, largely prompted by the big flood in Louisiana last month and the inevitable (and unjustified) global-warming-did-it stories that followed, we highlight a just-published paper by a research team led by Dr. Stacey Archfield of the U.S. Geological Survey examining trends in flood characteristics across the U.S. over the past 70 years.

Previous studies we’ve highlighted have shown that a) there is no general increase in the magnitude of heavy rainfall events across the U.S., and thus, b) unsurprisingly, “no evidence was found for changes in extreme precipitation attributable to climate change in the available observed record.”  But since heavy rainfall is not always synonymous with floods, the new Archfield paper provides further perspective.

The authors investigated changes in flood frequency, duration, magnitude and volume at 345 stream gauges spread across the country. They also looked to see if there were any regional consistencies in the changes and whether or not any of the observed changes could be linked to large-scale climate indices, like El Niño.

What they found could best be described as a “negative” result—basically, few departures from the a priori expectation (often called the null hypothesis) that there are no coherent changes in flood characteristics occurring across the U.S. Here’s their summary of their research findings:

Trends in the peak magnitude, frequency, duration and volume of frequent floods (floods occurring at an average of two events per year relative to a base period) across the United States show large changes; however, few trends are found to be statistically significant. The multidimensional behavior of flood change across the United States can be described by four distinct groups, with streamgages either experiencing: 1) minimal change, 2) increasing frequency, 3) decreasing frequency, or 4) increases in all flood properties. Yet, group membership shows only weak geographic cohesion. Lack of geographic cohesion is further demonstrated by weak correlations between the temporal patterns of flood change and large-scale climate indices. These findings reveal a complex, fragmented pattern of flood change that, therefore, clouds the ability to make meaningful generalizations about flood change across the United States.

The authors added:

Observed changes in short term precipitation intensity from previous research and the anticipated changes in flood frequency and magnitude expected due to enhanced greenhouse forcing are not generally evident at this time over large portions of the United States for several different measures of flood flows.

“Negative” results of this kind are a refreshing change from the “positive” results—results which find something “interesting” to the researchers, the journal publishers, or the funding agencies—that have come to dominate the scientific literature, not just on climate change, but in general. The danger in “positive” results is that they can ingrain falsehoods both in the knowledge base of science itself and in the minds of the general public. We’ve discussed how the appetite for producing “interesting” results—which in the case of climate change means results that indicate the human impact on weather events/climate is large, unequivocal, and negative—leads to climate alarm becoming “a self promulgating collective belief.”

What is needed to break this positive feedback loop, aka availability cascade, are researchers like Archfield et al. who aren’t afraid to follow through and write up experiments that don’t find something “interesting,” together with a media that’s not afraid to report them. In today’s world dominated by climate hype, it is the non-interesting results that are, in fact, the most interesting. But bear in mind that the interesting aspect of them stems not so much from the results themselves being rare as from the rarity of the reporting of and reporting on such results.

Reference:

Archfield, S. A., et al., 2016. Fragmented patterns of flood change across the United States. Geophysical Research Letters, doi: 10.1002/2016GL070590.

Yesterday, police in Oklahoma released aerial and dash camera footage of an unarmed man named Terence Crutcher being shot by an officer as he stood beside his SUV with his hands in the air. Tulsa Police Chief Chuck Jordan described the footage as “very difficult to watch,” and the officer who shot Crutcher is on administrative leave. The aerial footage of the shooting ought to remind us how important transparency policy is in the age of the “Pre-Search” and the role persistent aerial surveillance may come to play in police misconduct investigations.

Reporting from earlier this year revealed that police in Baltimore have been testing persistent aerial surveillance technology, described by its developer as “Google Earth with TiVo,” which allows users to keep around 30 square miles under surveillance. The technology, developed by Persistent Surveillance Systems (PSS), has helped Baltimore police investigate thefts and shootings. But the trial was conducted in secret, without the knowledge of key Baltimore city officials, and was financed by a billionaire couple.

Shortly after news of the persistent surveillance in Baltimore was reported, I and others noted that it should cause concern. Citizens engaged in lawful behavior deserve to know if their movements are being filmed and tracked by police for hours at a time. Yet, as disturbing as the secretive persistent surveillance in Baltimore is, technology already exists that is far more intrusive.

PSS’ surveillance allows analysts to track individuals, but not to see any of their identifying features. On the massive image put together by PSS, one individual takes up a single pixel. ARGUS-IS, a persistent surveillance system designed for the military and mounted on a drone or aircraft at 20,000 feet, allows users to see 6-inch details in a 1.8 billion pixel image covering a 10 square mile area. When ARGUS-IS technology is incorporated into Gorgon Stare, another surveillance system, the coverage area expands to about 39 square miles. A video highlighting ARGUS-IS’ capabilities is below:

It’s currently not feasible for most American law enforcement agencies to use Gorgon Stare and other military surveillance equipment given the cost. However, domestic law enforcement has not been shy about using military equipment, and technological advances suggest that we should be asking “when” questions, not “if” questions, when discussing police using persistent aerial surveillance tools that capture highly detailed images.
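Those ARGUS-IS numbers are roughly self-consistent. Here is a back-of-the-envelope check, assuming a square coverage area and square pixels (the real sensor is a mosaic of many cameras, so this is only an approximation):

    # Rough check of the ARGUS-IS figures quoted above.
    SQ_METERS_PER_SQ_MILE = 2_589_988
    area_m2 = 10 * SQ_METERS_PER_SQ_MILE     # "10 square mile area"
    pixels = 1.8e9                           # "1.8 billion pixel image"
    pixel_side_m = (area_m2 / pixels) ** 0.5
    print(f"{pixel_side_m * 39.37:.1f} inches per pixel")
    # Prints roughly 4.7 inches per pixel, consistent with the claim
    # that 6-inch details are visible from 20,000 feet.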

Baltimore police don’t seem concerned about the privacy worries associated with persistent surveillance. From Vice:

For its part, the police department denies that officers have done anything wrong, or that the planes even amount to a form of surveillance. TJ Smith, media relations chief for the Baltimore Police Department, told VICE the aerial program “doesn’t infringe on privacy rights” because it captures images available in public spaces.

Baltimore police aren’t alone. Dayton, Ohio Police Chief Richard Biehl said while discussing PSS surveillance, “I want them to be worried that we’re watching, […] I want them to be worried that they never know when we’re overhead.”

I’ve written before about how aerial surveillance Supreme Court cases from the 1980s grant police a great deal of freedom when it comes to snooping on citizens from the sky. Lawmakers can provide increased privacy and restrict the kind of persistent aerial surveillance that has been taking place in Baltimore. However, if lawmakers want to allow for constant “eyes in the sky” they should at least take steps to ensure that persistent surveillance equipment is used to increase police accountability and transparency, not just to aid criminal investigations. As body cameras have shown, police tools can help law enforcement and criminal justice reform advocates, but only if the right policies are in place.

News stories are now reporting that the Minnesota stabber Dahir Adan entered the United States as a Somali refugee when he was 2 years old.  Ahmad Khan Rahami, the suspected bomber in New York and New Jersey, entered as an Afghan asylum-seeker with his parents when he was 7 years old.  The asylum and refugee systems are the bedrocks of the humanitarian immigration system and they are under intense scrutiny already because of fears over Syrian refugees.    

The vetting procedure for refugees, especially Syrians, is necessarily intense because they are overseas while they are being processed. The security protocols have been updated and expanded for them. This security screening should be intense. The process for vetting asylum-seekers, who show up at American ports of entry and ask for asylum based on numerous criteria, is different. Regardless, no vetting system will detect child asylum-seekers or child refugees who will grow up to become terrorists, any more than a screening program for U.S.-born children could identify which of them will grow up to become terrorists.

Adan and Rahami didn’t manage to murder anyone due to their incompetence, poor planning, potential mental health issues, luck, armed Americans, and the quick responses by law enforcement. Regardless, some may want to stop all refugees and asylum seekers unless they are 100 percent guaranteed not to be terrorists or to ever become terrorists. Others are more explicit in their calls for a moratorium on all immigration due to terrorism. These folks should know that the precautionary principle is an inappropriate standard for virtually every area of public policy, even refugee screening.

Even so, these systems are surprisingly safe.  According to a new Cato paper, from 1975 to the end of 2015, America allowed in just over 700,000 asylum-seekers and 3.25 million refugees.  Four of those asylum-seekers became terrorists and killed four people in attacks on U.S. soil.  Twenty of the 3.25 million refugees became terrorists and they killed three Americans on U.S. soil.  Neither figure includes refugees or asylum-seekers who travelled overseas to become terrorists abroad as I was solely focused on terrorists targeting the homeland.

The chance of being murdered in a terrorist attack committed by an asylum-seeker was one in 2.7 billion a year. The chance of being murdered in a terrorist attack committed by a refugee was one in 3.4 billion a year. These recent attacks in New York, New Jersey, and Minnesota will make the refugee and asylum programs look more dangerous by increasing the number of terrorists who entered under them. Fortunately, the attackers didn’t kill anybody, so the chance of dying in a terrorist attack committed by immigrants who entered in these categories won’t increase – although that is small comfort to the victims who were wounded.
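
To see how an annualized “one in N” figure like these is constructed, here is a minimal sketch. The death counts and the 1975–2015 window come from the paragraph above; the average U.S. population over that period (roughly 265 million) is an assumption supplied for illustration, so the output only approximates the published figures.

    # Rough reconstruction of an annualized "1 in N" risk figure.
    # Death counts and the 41-year window (1975 through 2015) come from the
    # text above; the average U.S. population over that period (~265 million)
    # is an assumed round number, so the results are approximate.

    YEARS = 41
    AVG_US_POPULATION = 265_000_000  # assumption for illustration

    def one_in_n_per_year(deaths: int) -> float:
        """Annual chance of being killed in such an attack, expressed as 1 in N."""
        deaths_per_year = deaths / YEARS
        return AVG_US_POPULATION / deaths_per_year

    print(f"Asylum-seeker attacks: 1 in {one_in_n_per_year(4):,.0f} per year")
    print(f"Refugee attacks:       1 in {one_in_n_per_year(3):,.0f} per year")
    # Roughly 1 in 2.7 billion and 1 in 3.6 billion -- in the same ballpark as
    # the one-in-2.7-billion and one-in-3.4-billion figures cited above.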

The terrorism risk posed by refugees and asylum-seekers could skyrocket in the future and justify significant changes to these humanitarian immigration programs, including more intense screening or other actions. The recent attacks in Minnesota, New York, and New Jersey, however, do not justify such drastic changes.

It won’t surprise anyone who follows data security to know that this past summer saw a hack of databases containing Louisiana driver information. A hacker going by the ironic handle “NSA” offered the data for sale on a “dark web” marketplace.

Over 290,000 residents of Louisiana were apparently affected by the data breach. The information stolen was typical of that held by motor vehicle bureaus: first name, middle name, last name, date of birth, driver’s license number, state in which the driver’s license was issued, address, phone number, email address, a record of driving offenses and infractions, and the fines paid to settle tickets and other penalties.

This leak highlights the risks of state participation in the REAL ID Act. One of the problems with linking together the databases of every state to create a national ID is that the system will only be as secure as the state with the weakest security.

REAL ID mandates that states require drivers to present multiple documents for proof of identity, proof of legal presence in the United States, and proof of their Social Security number. The information from these documents and digital copies of the documents themselves are to be stored in state-run databases just like the one that was hacked in Louisiana.

For the tiniest increment in national security—inconveniencing any foreign terrorist who might use a driver’s license in the U.S.—REAL ID increases the risk of wholesale data breaches and wide-scale identity fraud. It’s not a good trade-off.

A National Bureau of Economic Research working paper by David Autor, David Dorn and Gordon Hanson, titled “The China Shock: Learning from Labor Market Adjustment to Large Changes in Trade,” has created Piketty-like buzz in U.S. trade policy circles this year.  Among the paper’s findings is that the growth of imports from China between 1999 and 2011 caused a U.S. employment decline of 2.4 million workers, and that wages and employment prospects for those who lost jobs remained depressed for many years after the initial effect. 

While commentators on the left have trumpeted these findings as some long-awaited refutation of Adam Smith and David Ricardo, the authors have distanced themselves from those conclusions, portraying their analysis as an indictment of a previously prevailing economic consensus that the costs of labor market adjustment to increased trade would be relatively subdued (although I’m skeptical that such a consensus ever existed). But in a year when trade has been scapegoated for nearly everything perceived to be wrong in society, the release of this paper no doubt reinforced fears – and fueled demagogic rants – about trade and globalization being scourges to contain, and even eradicate.

Last week, Alan Reynolds explained why we should take Autor et al.’s job-loss figures with a pinch of salt, but there is an even more fundamental point to make here. That is: Trade has one role to perform – to grow the economic pie. Trade fulfills that role by allowing us to specialize. By expanding the size of markets to enable more refined specialization and economies of scale, trade enables us to produce and, thus, consume more. Nothing more is required of trade. Nothing!

Still, politicians, media, and other commentators blame trade for an allegedly unfair distribution of that pie and for the persistence of frictions in domestic labor markets. But reducing those frictions and managing distribution of the larger economic pie are not matters for trade policy.  They are matters for domestic policy. Trade does its job. Policymakers must do their jobs, too.

Trade is disruptive, no doubt. When consumers and businesses enjoy the freedom to purchase goods and industrial inputs from a greater number of suppliers, those suppliers are kept on their toes. They must be responsive to their customers’ needs and, if they fail, they inevitably contract or perish. Yes, trade renders domestic production of certain products (and the jobs that go with those activities) relatively inefficient and, ultimately, unviable. Unfortunately, people are just as quick to observe this trade-induced destruction as they are to overlook the creation of new domestic industries, firms, and products that emerge elsewhere in the economy as a result of this process. In other words, the losses attributable to trade’s destruction are seen, the gains from trade’s creation are invisible, and popular discord is the inevitable outcome.

The adoption of new technology disrupts the status quo, as well – and to a much greater extent than trade does. Technological progress accounts for far more job displacement. Yet we don’t hear calls for taxing or otherwise impeding innovation. You know all those apps on your mobile phone – the flashlight, map, camera, clock, and just about every other icon on your screen? They’ve made hundreds of thousands of manufacturing jobs redundant. But as part of the same process, we got Uber, Airbnb, Amazon, the app-development industry itself, and all the value added and jobs that come with those disruptive technologies.

Trade and technology (as well as changing consumer tastes, demand and supply shocks, etc.) are catalysts of both destruction and creation. In 2014, the U.S. economy shed 55.1 million jobs. That’s a lot of destruction. But in the same year, the economy added 57.9 million jobs – a net increase of 2.8 million jobs.

Overcoming scarcity is a fundamental objective of economics. Making more with less (fewer inputs) is something we celebrate – we call it increasing productivity. It is the wellspring of greater wealth and higher living standards. Imagine a widget factory where 10 workers make $1000 worth of widgets in a day. Then management purchases a new productivity enhancing machine that enables 5 workers to produce $1000 worth of widgets in a day. Output per worker has just doubled. But in order for the economy to benefit from that labor productivity increase, the skills of the 5 workers no longer needed on the widget production line need to be redeployed elsewhere in the economy.  New technology, like trade, frees up resources to be put to productive use in other firms, industries, or sectors.
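
For the record, the arithmetic in the widget example works out as follows (a toy sketch; all numbers are the hypothetical ones above):

    # Toy restatement of the widget-factory arithmetic above.
    daily_output_dollars = 1_000
    workers_before, workers_after = 10, 5

    per_worker_before = daily_output_dollars / workers_before  # $100 per worker
    per_worker_after = daily_output_dollars / workers_after    # $200 per worker

    print(per_worker_after / per_worker_before)  # 2.0 -> output per worker doubled
    print(workers_before - workers_after)        # 5 workers freed for other uses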

Whether and how we (as a society; as an economy) are mindful of this process of labor market adjustment are questions relevant to our own well-being and are important matters of public policy. What policies might reduce labor market frictions? What options are available to expedite adjustment for those who lose their jobs?  Have policymakers done enough to remove administrative and legal impediments to labor mobility? Have policymakers done enough to make their jurisdictions attractive places for investment? Are state-level policymakers aware that our federalist system of government provides an abundance of opportunity to identify and replicate best practices?

U.S. labor market frictions are to some extent a consequence of a mismatch between the supply and demand for certain labor skills. Apprenticeship programs and other private sector initiatives to hire people and train them for the next generation of jobs can help here. But the brunt of the blame for sluggish labor market adjustment can be found in the collective residue of bad policies piled atop bad policies. Reforming a corporate tax system that currently discourages repatriation of an estimated $2 trillion of profits parked in U.S. corporate coffers abroad would induce domestic investment and job creation. Curbing excessive and superfluous regulations that raise the costs of establishing and operating businesses without any marginal improvements in social, safety, environmental, or health outcomes would help. Permanently eliminating import duties on intermediate goods to reduce production costs and make U.S.-based businesses more globally competitive would attract investment and spur production and job creation. Eliminating occupational licensing practices would bring competition and innovation to inefficient industries. Adopting best practices by replicating the policies of states that have been more successful at attracting investment and creating jobs (and avoiding the policies of states that lag in these metrics) could also contribute to reducing labor market adjustment costs.

But we should keep in mind that there are no circumstances under which curtailing the growth of the pie — curbing trade — can be considered a legitimate aim of public policy. The problem to solve is not trade. The problem is domestic policy that impedes adjustment to the positive changes trade delivers.

CNN contributor Van Jones has an op-ed in the San Francisco Chronicle in which he worries that the Trans Pacific Partnership (TPP) will undermine a California program to promote solar energy:

Because TPP would threaten a successful California rebate program for green technologies that are made in-state, the deal could result in the elimination of good-paying green jobs in fields like solar and wind manufacturing and energy efficiency. Green jobs employ all kinds of people — truck drivers, welders, secretaries, scientists — all across the state. These jobs can pull people out of poverty while protecting the planet.

I have some good news for him: That California rebate program probably already violates WTO rules, and, in fact, is one of eight U.S. state renewable energy programs that were just challenged by India in a formal WTO complaint. As a result, the TPP is not particularly important on this issue.  The WTO already has it covered.

I also have some even better news for him: These kinds of programs do not create jobs, and are bad for the environment as well, so we should be happy to see them go (either eliminating them on our own, or in response to findings of violation under international trade agreements).

What exactly is wrong with these programs?  Think about the impact of having eight U.S. states (that’s how many were mentioned in India’s complaint), and countless localities around the world, encouraging local production of solar panels or other forms of energy. The end result of such policies is clear: Lots of small, inefficient producers, leading to more expensive energy. That doesn’t sound very green.

As for the jobs argument, advocates of policies that discriminate in favor of local companies should also factor into their calculations the jobs lost when the rest of the world adopts similar policies. These programs are not secret, and once someone starts doing it, the practice proliferates. Even if the California policies led to the purchase of locally made goods, when other governments do the same thing it reduces the sales of California made goods to other jurisdictions.  The end result, therefore, is not additional jobs in California, but rather products that are more expensive and less efficiently made.

As widely reported, the soft employment data for August and declines in August retail sales and industrial production (manufacturing IP was also down) have reduced market odds on a Fed rate hike at its September 20–21 meeting. According to the CME FedWatch Tool, based on trading in federal funds futures, the probability of a rate hike tomorrow is only 0.12. The same CME tool gives a probability of 0.46 that the Fed will stand pat through December. Now what? I wish I knew. Here is how I think about the question.

First, it now appears that the Fed will go into its December meeting, as it did last year, with forward guidance on the table for a federal funds rate increase. The FOMC might, of course, alter its 2016 forward guidance at its September meeting. If the Committee reduces its guidance to indicate a fed funds range 25 basis points higher than now, but below prior guidance, will that create a strengthened implied “promise” to act in December? That would double down on its current problem with forward guidance. Will the FOMC hike even if employment data through November remain soft? Or, suppose employment growth resumes; will the market take seriously that the FOMC would consider a 50 bps hike in December as implied by current forward guidance?

Second, what are Janet Yellen’s incentives? A year from now, looking back, is the Fed likely to be in a better position and her reputation enhanced if the Fed has raised the federal funds target rate in 2016 and it turns out to be premature, or if the Fed has held steady when it would have been better to have tightened in 2016? Given the data in hand as I write, it seems to me that waiting makes more sense. Yes, unemployment is below 5 percent and recent employment growth has been solid, but softening. However, there is little sign of rising inflation. On conventional measures, there is still slack in the labor market; for example, the labor-force participation rate is still well below prior levels. And don’t forget that in 1999 unemployment fell to almost 4 percent.

Third, if the Fed gets behind by not moving in 2016, how hard will it be to catch up? How much difference can it make if the Fed moves in early 2017 rather than in 2016? Only an old-fashioned fine-tuner can believe it makes much difference.

We can replay this same argument at every future FOMC meeting. What must happen to create a compelling case for the Fed to move? My interpretation of the rate increase last December is that it had less to do with compelling new information than with the fact that the Fed had long promised to move in 2015. That says much more about the wisdom of forward guidance than about sensible monetary policy.

Here is a suggestion for the FOMC, which seems so obvious that I assume the Committee must already be considering it. The FOMC should recast its forward guidance away from the calendar. At its September meeting, the guidance should apply to end of third quarter 2017, 2018 and 2019 rather than end of those calendar years. At each meeting, the guidance would then apply to 4 quarters ahead, 8 quarters ahead and 12 quarters ahead. With this approach, the Committee would never again face an apparent calendar deadline to act.
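
To make the rolling-horizon idea concrete, here is a minimal sketch of how the guidance dates would roll forward from a given meeting. The quarter arithmetic is purely illustrative and assumes nothing about actual FOMC conventions.

    # Sketch of rolling quarters-ahead forward guidance, as suggested above.
    # The quarter arithmetic is illustrative only.
    from datetime import date

    def add_quarters(d: date, quarters: int) -> date:
        """Return the first day of the last month of the quarter `quarters` ahead."""
        q_index = d.year * 4 + (d.month - 1) // 3 + quarters
        year, q = divmod(q_index, 4)
        return date(year, q * 3 + 3, 1)

    meeting = date(2016, 9, 21)
    for horizon in (4, 8, 12):
        target = add_quarters(meeting, horizon)
        print(f"{horizon:>2} quarters ahead -> end of Q{(target.month - 1) // 3 + 1} {target.year}")

    # Prints end of Q3 2017, Q3 2018, and Q3 2019 for a September 2016 meeting.
    # Under calendar-year guidance the horizons would instead stay pinned to
    # year-end 2016, 2017, and 2018 regardless of when the meeting occurs.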

Seems obvious to me, and very simple. Yes, it perhaps guts forward guidance, and that would be a good thing. The mantra should be “data dependence, not date dependence.”

The Spin Cycle is a recurring feature based upon just how much the latest weather or climate story, policy pronouncement, or simply poobah blather spins the truth. Statements are given a rating of between 1 and 5 spin cycles, with fewer cycles meaning less spin. For a more in-depth description, visit the inaugural edition.

In mid-August a slow moving unnamed tropical system dumped copious amounts of precipitation in the Baton Rouge region of Louisiana. Reports were of some locations receiving over 30 inches of rain during the event. Louisiana’s governor John Bel Edwards called the resultant floods “historic” and “unprecedented.”

Some elements in the media were quick to link in human-caused climate change (just as they are to seemingly every extreme weather event). The New York Times, for example, ran a piece titled “Flooding in the South Looks a Lot Like Climate Change.”

We were equally quick to point out that there was no need to invoke global warming, given that the central Gulf Coast is prime country for big rain events and that similar, and even larger, rainfall totals have been racked up there during times when there were far fewer greenhouse gases in the atmosphere—like in 1979, when 45 inches of precipitation fell over Alvin, TX, from the slow passage of tropical storm Claudette, or in 1940, when 37.5 inches fell on Miller Island, LA, from another stalled unnamed tropical system.

But we suspected that this wouldn’t be the end of it, and we were right.

All the while, an “international partnership” funded in part by the U.S. government (through grants to climate change cheerleader Climate Central), called World Weather Attribution (an “international effort designed to sharpen and accelerate the scientific community’s ability to analyze and communicate the possible influence of climate change on extreme-weather events such as storms, floods, heat waves and droughts”), was fervently working to formally (i.e., through a scientific journal publication) “attribute” the Louisiana rains to climate change.

The results of their efforts were made public a couple of weeks ago in parallel with the submission (we’ll note: not acceptance) of their article to the journal Hydrology and Earth System Science Discussions.

Their “attribution” can, well, be attributed to two factors. First, their finding that there has been a large increase in the observed probability of extreme rainfall along the central Gulf Coast—an increase that they claim can be directly related to the rise in the global (!) average temperature. And second, their finding that basically the single (!) climate model they examined also projects an increase in the probability of heavy rainfall in the region as a result of human-induced climate changes. Add the two together, throw in a splashy press release from a well-funded climate change propaganda machine, and headlines like the AP’s “Global warming increased odds for Louisiana downpour” are the result.

As you have probably guessed since you are reading this under our “Spin Cycle” tag, a closer look finds some major shortcomings to this conclusion.

For example, big rains are part of the region’s history—and most (but not all) result from meandering tropical weather systems whose progress has been slowed by mid-latitude circulation features. In most cases, the intensity of the tropical system itself (as measured by central pressure or maximum wind speed) is not all that great; rather, the abundant feed of moisture from the Gulf of Mexico and the slow progress of the storm combine to produce some eye-popping, or rather boot-soaking, precipitation totals. Here is a table of the top 10 rainfall event totals from the passage of tropical systems through the contiguous U.S. since 1921 (note that all are in the Gulf Coast region). Bear in mind that the further you go back in time, the sparser the observed record becomes (which means an increased chance that the highest rainfall amounts are missed). The August 2016 Louisiana event cracks the top 10 as number 10. A truly impressive event—but hardly atypical during the past 100 years.

[Table: Top 10 rainfall event totals from tropical systems in the contiguous U.S. since 1921—all in the Gulf Coast region.]

As the table shows, big events occurred throughout the record. But due to the rare nature of the events as well as the spotty (and changing) observational coverage, doing a formal statistical analysis of frequency changes over time is very challenging. One way to approach it is to use only the stations with the longest period of record—this suffers from missing the biggest totals from the biggest events, but at least it provides some consistency in observational coverage. Using the same set of long-term stations analyzed by the World Weather Attribution group, we plotted the annual maximum precipitation in the station group as a function of time (rather than global average temperature). Figure 1 is our result. We’ll point out that there is not a statistically significant change over time—in other words, the intensity of the most extreme precipitation event each year has not systematically changed in a robust way since 1930. It’s a hard sell to link this non-change to human-caused global warming.

Figure 1. Annual maximum 3-day rainfall total for stations with at least 80 years of record in the region 29-31N, 95-85W.
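
For readers who want to run this kind of check themselves, here is a minimal sketch of one way to compute the series in Figure 1 and test it for a trend. It assumes daily precipitation for the long-record stations has already been assembled into a pandas DataFrame (e.g., from GHCN-Daily); the placeholder names and the ordinary-least-squares trend test are our illustrative choices, not necessarily the exact method used in the analysis above.

    # Minimal sketch: annual maximum 3-day precipitation and a linear trend test.
    # Assumes `daily` is a pandas DataFrame with a DatetimeIndex and one column
    # per station of daily precipitation totals (names here are placeholders).
    import pandas as pd
    from scipy import stats

    def annual_max_3day(daily: pd.DataFrame) -> pd.Series:
        """Largest 3-day running total across all stations, for each year."""
        rolling_3day = daily.rolling(window=3).sum()   # 3-day totals per station
        region_max = rolling_3day.max(axis=1)          # max across stations each day
        return region_max.groupby(region_max.index.year).max()  # annual maxima

    def trend_test(annual_max: pd.Series):
        """Ordinary least-squares trend of the annual maxima against year."""
        result = stats.linregress(annual_max.index.values, annual_max.values)
        return result.slope, result.pvalue

    # Example usage (once `daily` has been built from station records):
    # annual = annual_max_3day(daily)
    # slope, p = trend_test(annual)
    # A p-value well above 0.05 would indicate no statistically significant
    # trend over time, which is the result reported for Figure 1.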

Admittedly, there is a positive correlation in these data with the global average surface temperature, but correlation does not imply causation. There is a world of distance between local weather phenomena and global average temperature. In the central Gulf Coast, influential denizens of the climate space, as we’ve discussed, are tropical cyclones—events whose details (frequency, intensity, speed, track, etc.) are highly variable from year to year (decade to decade, century to century) for reasons related to many facets of natural variability. How the complex interplay of these natural influencers may change in a climate warmed by human greenhouse gas emissions is far from certain and can barely even be speculated upon. For example, the El Niño/La Niña cycle in the central Pacific has been shown to influence Gulf Coast tropical cyclone events, yet the future characteristics of this important factor vary considerably from climate model to climate model, and confidence in climate model expectations of future impacts is low according to the U.N. Intergovernmental Panel on Climate Change (IPCC).

Which means that using a single climate model family in an “attribution” study of extreme Gulf Coast rainfall events is a recipe for distortion—at best, a too limited analysis; at worst, a misrepresentation of the bigger picture.

So, instead of the widely advertised combination in which climate models and observations are in strong agreement as to the role of global warming, what we really have is a situation in which the observational analysis and the model analysis are both extremely limited and possibly (probably) unrepresentative of the actual state of affairs.

Therefore, for their overly optimistic view of the validity, applicability, and robustness of their findings that global warming has increased the frequency of extreme precipitation events in central Louisiana, we rate Climate Central’s World Weather Attribution’s degree of spin as “Slightly Soiled” and award them two Spin Cycles.


Slightly Soiled. Over-the-top rhetoric. An example is the common meme that some obnoxious weather element is new, thanks to anthropogenic global warming, when it’s in fact as old as the earth. Consider the president’s science advisor John Holdren’s claim that the “polar vortex,” a circumpolar westerly wind that separates polar cold from tropical warmth, is a man-made phenomenon. It waves and wiggles all over the place, sometimes over your head, thanks to the fact that the atmosphere behaves like a fluid, complete with waves, eddies, and stalls. It’s been around since the earth first acquired an atmosphere and rotation, somewhere around the beginning of the Book of Genesis. Two spin cycles.

Washington Post columnist and former Bush 43 speechwriter Michael Gerson has not always been charitable toward libertarians. He has been pretty good on Donald Trump and ObamaCare, though, and today he ties the two together:

Only 18 percent of Americans believe the Affordable Care Act has helped their families…A higher proportion of Americans believe the federal government was behind the 9/11 attacks than believe it has helped them through Obamacare…

Trump calls attention to these failures, while offering (as usual) an apparently random collection of half-baked policies and baseless pledges (“everybody’s got to be covered”) as an alternative. There is no reason to trust Trump on the health issue; but there is plenty of reason to distrust Democratic leadership. No issue — none — has gone further to convey the impression of public incompetence that feeds Trumpism.

Read the whole thing.

In a new report, scholars from the Urban Institute claim ObamaCare premiums “are 10 percent below average employer premiums nationally.” There is variation among states. The authors report ObamaCare premiums are actually higher in 12 states, by as much as 68 percent. 

At Forbes.com, I explain that the Urban scholars are not making the “apples to apples” comparison they claim to be:

The Urban Institute study instead engages in what my Cato Institute colleague Arnold Kling calls a game of “hide the premium.” As ACA architect Jonathan Gruber explained, “This bill was written in a tortured way” to create a “lack of transparency” because “if…you made explicit that healthy people pay in and sick people get money, it would not have passed.” When it did pass, it was due to what Gruber called the “huge political advantage” that comes from hiding how much voters are paying, as well as “the stupidity of the American voter.”

That lack of transparency has allowed supporters to claim the ACA is providing coverage to millions who are so sick that insurance companies previously wouldn’t cover them, while simultaneously claiming Exchange coverage is no more expensive than individual-market coverage prior to the ACA or than employer-sponsored coverage. When we incorporate the full premium for Exchange plans, the smoke clears and we see Exchange coverage is indeed more expensive than employer-sponsored coverage. There ain’t no such thing as a free lunch.

If you think this is fun, just imagine the shell games we could play with a public option.

Read the whole thing.
