Cato Op-Eds

Individual Liberty, Free Markets, and Peace

New data on worker pay in the government and private sector has been released by the Bureau of Economic Analysis. There is good news: the pace of federal pay increases slowed during 2015, while the pace of private-sector pay increases picked up. 

After increasing 3.9 percent in 2014, average compensation for federal government civilian workers increased 1.9 percent in 2015. Meanwhile, average private-sector compensation increased 1.4 percent in 2014, but then sped up to a 3.8 percent increase in 2015.

Private workers thus closed the pay gap a little with federal workers in 2015, but there is a lot of catching up to do. In 2015 federal workers had average compensation (wages plus benefits) of $123,160, which was 76 percent higher than the private-sector average of $69,901. This essay examines the data in more detail.
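
For readers who want to check the arithmetic, here is a quick back-of-the-envelope calculation, a sketch using only the BEA averages quoted above:

```python
# Quick check of the pay-gap arithmetic, using the BEA averages quoted above.
federal_avg = 123_160   # average federal civilian compensation, 2015
private_avg = 69_901    # average private-sector compensation, 2015

gap = federal_avg / private_avg - 1
print(f"Federal premium: {gap:.0%}")   # roughly 76%
```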

Average federal compensation grew much faster than average private-sector compensation during the 1990s and 2000s. But the figure shows that federal increases have roughly equaled private-sector increases since 2010. President Obama modestly reined in federal pay increases after the spendthrift Bush years. Will the next president continue the restraint?


For background on federal pay issues and the BEA data, see here.


Donald Trump Jr. tweeted out this meme yesterday:

Social media immediately took up arms to attack him.  I think the Skittles meme is actually a valuable and useful way to understand the foreign-born terrorist threat, but the size of the bowl is way too small.  This is the proper Skittles analogy:

Imagine a bowl full of 3.25 million Skittles that has been accumulated from 1975 to the end of 2015.  We know that 20 of the Skittles in that bowl intended to do harm but only three of those 20 are actually fatal.  That means that one in 1.08 million of them is deadly.  Do you eat from the bowl without quaking in your boots?  I would.

Perhaps future Skittles added into the bowl will be deadlier than previous Skittles, but the difference would have to be great before the risks become worrisome, as I write about here.  The Trump Jr. terrorism-Skittles meme is useful for understanding terrorism risk – it just requires a picture of a bowl large enough to fit about 7,200 pounds of Skittles.
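
A rough calculation along these lines is sketched below. The 3.25 million count and the three deadly Skittles are the figures used above; the roughly one-gram weight per Skittle is my own assumption, used only to translate the count into pounds:

```python
# Scaling the Skittles analogy. The 3.25 million and 3 come from the
# paragraphs above; ~1 gram per Skittle is an assumed weight used only to
# convert the count into pounds.
skittles = 3_250_000     # foreign-born entrants, 1975 through 2015, as Skittles
deadly = 3               # Skittles that turned out to be deadly

print(round(skittles / deadly / 1e6, 2))   # ~1.08, i.e., one in 1.08 million

grams_per_skittle = 1.0                    # assumption
pounds = skittles * grams_per_skittle / 453.6
print(round(pounds, -2))                   # about 7,200 pounds
```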

It’s high time that I got ‘round to the subject of “monetary control,” meaning the various procedures and devices the Fed and other central banks employ in their attempts to regulate the overall availability of liquid assets, and through it the general course of spending, prices, and employment, in the economies they oversee.

In addressing this important subject, I’m especially anxious to disabuse my readers of the popular, but mistaken, belief — and it is popular, not only among non-experts, but also among economists — that monetary control is mainly, if not entirely, a matter of central banks’ “setting” one or more interest rates.  As I hope to show, although there is a grain of truth to this perspective, a grain is all  the truth there is to it. The deeper truth is that “monetary control” is fundamentally about controlling the quantity of (hang on to your hats) … money! In particular, it is about altering the supply of (and, in recent years, the demand for) “base” money, meaning (once again) the sum of outstanding Federal Reserve notes and depository institutions’ deposit balances at the Fed.

Although radical changes to the Fed’s monetary control procedures since the recent crisis don’t alter this fundamental truth about monetary control, they do make it impractical to address the Fed’s control procedures both before and since the crisis within the space of a single blog entry.  Instead, I plan to limit myself here to describing monetary control as the Fed exercised it in the years leading to the crisis.  I’ll then devote a separate post to describing how the Fed’s methods of monetary control have changed since then, and why the changes matter.

The Mechanics of “Old-Fashioned” Monetary Control

In those good-old pre-crisis times, the Fed’s chief monetary control challenge was one of adjusting the available quantity of base money, and of banks’ deposit balances at the Fed especially, sufficiently to sustain or sponsor general levels of lending and spending consistent with its ultimate employment and inflation objectives. If, for example, the FOMC determined that the Fed had to encourage lending and spending beyond already projected levels if it was to avoid a decline in inflation, a rise in unemployment, or a combination of both, it would proceed to increase depository institutions’ reserve balances, with the intent of encouraging those institutions to put their new reserves to work by lending (or otherwise investing) them.  Although the lending of unwanted reserves doesn’t reduce the total amount of reserves available to the banking system, it does lead to a buildup of bank deposits as those unwanted reserves get passed around from bank to bank, hot-potato fashion.  As deposits expand, so do banks’ reserve needs, owing partly (in the U.S.) to the presence of minimum legal reserve requirements. Excess reserves therefore decline. Once there are no longer any excess reserves, or rather once there is no excess beyond what banks choose to retain for their own prudential reasons, lending and deposit creation stop.

Such is the usual working-out of the much-disparaged, but nevertheless real, reserve-deposit multiplier. As I explained in the last post in this series, although the multiplier isn’t constant — and although it can under certain circumstances decline dramatically, even assuming values less than one — these possibilities don’t suffice to deprive the general notion of its usefulness, no matter how often some authorities claim otherwise.
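
To make the hot-potato process concrete, here is a minimal numerical sketch of deposit expansion. The 10 percent reserve requirement and the assumption that banks hold no excess reserves are illustrative simplifications, not actual Fed parameters:

```python
# Minimal sketch of the reserve-deposit multiplier. The 10 percent reserve
# requirement and the zero excess-reserve assumption are illustrative
# simplifications, not actual Fed parameters.
reserve_ratio = 0.10
new_reserves = 100.0        # the Fed injects $100 of fresh reserves

total_new_deposits = 0.0
lendable = new_reserves
while lendable > 0.01:                   # iterate the hot-potato process
    total_new_deposits += lendable       # a deposit lands at some bank
    required = reserve_ratio * lendable  # that bank must hold this much back
    lendable -= required                 # the rest is re-lent and re-deposited

print(round(total_new_deposits))         # ~1000, i.e., new_reserves / reserve_ratio
```

With these assumptions, a $100 injection of reserves ends up supporting roughly $1,000 of deposits, a multiplier of 1/0.10; letting banks hold excess reserves shrinks that multiplier, just as the preceding paragraph notes.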

Regardless of other possibilities, for present purposes we can take the existence of a multiplier, in the strict sense of the term, and of some official estimate of the value of this multiplier, for granted. The question then is, how, given such an estimate, does (or did) the Fed determine just how much base money it needed to create, or perhaps destroy, to keep overall credit conditions roughly consistent with its ultimate macroeconomic goals? It’s here that the “setting” of interest rates, or rather, of one particular interest rate, comes into play.

The particular rate in question is the “federal funds rate,” so called because it is the rate depository institutions charge one another for overnight loans of “federal funds,” which is just another name for deposit balances kept at the Fed. Why overnight loans? In the course of a business day, a bank’s customers make and receive all sorts of payments, mainly to and from customers of other banks. These days, most of these payments are handled electronically, and so consist of electronic debits from and credits to banks’ reserve accounts at the Fed. Although the Fed allows banks to overdraw their accounts during the day, it requires them to end each day with balances sufficient to meet their reserve requirements, or else pay a penalty. So banks that end up short of reserves borrow “fed funds” overnight, at the “federal funds rate,” from others that have more than they require.

Notice that the “federal funds rate” I just described is a private-market rate, the level of which is determined by the forces of reserve (or “federal funds”) supply and demand. What the Fed “sets” is not the actual federal funds rate, but the “target” federal funds rate. That is, it determines and then announces a desired  federal funds rate, to which it aspires to make the actual fed funds rate conform using its various monetary control devices.

Importantly, the target federal funds rate (or target ffr, for short) is only a means toward an end, and not a monetary policy end in itself. The Fed sets a target ffr, and then tries to hit that target, not because it regards some particular fed funds rate as “better” in itself than other rates, but because it believes that, by supplying banks with reserve balances consistent with that rate, it will also provide them with a quantity of reserves consistent with achieving its ultimate macroeconomic objectives. The fed funds rate is, in monetary economists’ jargon, merely a policy “instrument,” and not a policy “objective.” Were a chosen rate target to prove incompatible with achieving the Fed’s declared inflation or employment objectives, the target, rather than those objectives, would have to be abandoned in favor of a more suitable one. (I am, of course, describing the theory, and not necessarily Fed practice.)

But why target the federal funds rate? The basic idea here is that changes in depository institutions’ demand for reserve balances are a rough indicator of changes in the overall demand for money balances and, perhaps, for liquid assets generally. So, other things equal, an increase in the ffr not itself inspired by any change in Federal Reserve policy can be taken to reflect an increased demand for liquidity, which, unless the Fed does something, will lead to some decline in spending, inflation, and (eventually) employment. Rather than wait to see whether these things transpire, an ffr-targeting Fed would respond by increasing the available quantity of federal funds just enough to offset the increased demand for them. If the fed funds rate is indeed a good instrument, and the Fed has chosen the appropriate target rate, then the Fed’s actions will allow it to do a better job achieving its ultimate goals than it could if it merely kept an eye on those goal variables themselves.

Open Market Operations

To increase or reduce the available supply of federal funds, the Fed increases or reduces its open-market purchases of Treasury securities. (Bear in mind, again, that I’m here describing standard Fed operating procedures before the recent crisis.) Such “open-market operations” are conducted by the New York Fed, and directed by the manager of that bank’s System Open Market Account (SOMA). The SOMA manager arranges frequent (usually daily) auctions by which it either adds to or reduces its holdings of Treasury securities, depending on the FOMC’s chosen federal funds rate target and its understanding of how the demand for reserve balances is likely to evolve in the near future. The other auction participants consist of a score or so of designated large banks and non-bank broker-dealers known as “primary dealers.” When the Fed purchases securities, it pays for them by crediting dealers’ Fed deposit balances (if the dealers are themselves banks) or by crediting the balances of the dealers’ banks, thereby increasing the aggregate supply of federal funds by the amount of the purchase. When it sells securities, it debits dealer and dealer-affiliated bank reserve balances by the amounts dealers have offered to pay for them, reducing total system reserves by that amount.

Because keeping the actual fed funds rate near its target often means adjusting the supply of federal funds to meet temporary rather than persistent changes in the demand for them, the Fed undertakes both “permanent” and “temporary” open-market operations, where the former involve “outright” security purchases or (more rarely) sales, and the latter involve purchases or sales accompanied by “repurchase agreements” or “repos.” (For convenience’s sake, the term “repo” is in practice used to describe a complete sale and repurchase transaction.) For example, the Fed may purchase securities from dealers on the condition that they agree to repurchase those securities a day later, thereby increasing the supply of reserves for a single night only. (The opposite operation, where the Fed sells securities with the understanding that it will buy them back the next day, is called an “overnight reverse repo.”) Practically speaking, repos are collateralized loans in all but name: the securities being purchased are the loan collateral, and the difference between their purchase and repurchase prices constitutes the interest on the loan, which, expressed in annual percentage terms, is known as the “repo rate.”
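
As an illustration of how a repo rate is backed out of the two prices, here is a small worked example. The dollar amounts and the 360-day money-market day-count convention are assumptions chosen for the sketch, not figures from the post:

```python
# Illustrative overnight repo-rate calculation. The prices and the 360-day
# money-market convention are assumed for this example.
purchase_price   = 10_000_000.00   # Fed pays the dealer for the securities today
repurchase_price = 10_000_250.00   # dealer buys them back tomorrow
days = 1

interest = repurchase_price - purchase_price
repo_rate = (interest / purchase_price) * (360 / days)
print(f"Implied repo rate: {repo_rate:.2%}")   # 0.90% annualized
```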

The obvious advantage of repos, and of shorter-term repos especially, is that, because they are self-reversing, a central bank that relies extensively on them can for the most part avoid resorting to open-market sales when it wishes to reduce the supply of federal funds. Instead, it merely has to refrain from “rolling over” or otherwise replacing some of its maturing repos.

Sustainable and Unsustainable Interest Rate Targets

Having described the basic mechanics of open-market operations, and how such operations can allow the Fed to keep the federal funds rate on target, I don’t wish to give the impression that achieving that goal is easy. On the contrary: it’s often difficult, and sometimes downright impossible!

For one thing, just what scale, schedule, or types of open-market operations will serve best to help the Fed achieve its target is never certain. Instead, considerable guesswork is involved, including (as I’ve mentioned) guesswork concerning impending changes in depository institutions’ demand for liquidity. Ordinarily such changes are small and fairly predictable; but occasionally, and especially when a crisis strikes, they are both unexpected and large.

But that’s the least of it. The main challenge the Fed faces consists of picking the right funds rate target in the first place. For if it picks the wrong one, its attempts to make the actual funds rate stay on target are bound to fail. To put the matter more precisely, given some ultimate inflation target, there is at any time a unique federal funds rate consistent with that target. Call that the “equilibrium” federal funds rate.** If the Fed sets its ffr target below the equilibrium rate, it will find itself engaging in endless open-market purchases so long as it insists on cleaving to that target.

That’s the case because pushing the ffr below its equilibrium value means, not just accommodating changes in banks’ reserve needs, thereby preserving desirable levels of lending and spending, but supplying them with reserves beyond those needs. Consequently the banks, in ridding themselves of the unwanted reserves, will cause purchasing of all sorts, or “aggregate demand,” to increase beyond levels consistent with the Fed’s objectives. Since the demand for loans of all kinds, including that for overnight loans of federal funds, itself tends to increase along with the demand for other things, the funds rate will once again tend to rise above target, instigating another round of open-market purchases, and so on. Eventually one of two things must happen: either the Fed, realizing that sticking to its chosen rate target will cause it to overshoot its long-run inflation goal, will revise that target upwards, or the Fed, in insisting on trying to do the impossible, will end up causing run-away inflation.  Persistent attempts to push the fed funds rate below its equilibrium value will, in other words, backfire: eventually interest rates, instead of falling, will end up increasing in response to an increasing inflation rate.  A Fed attempting to target an above-equilibrium ffr will find itself facing the opposite predicament: unless it adjusts its target downwards, a deflationary crisis, involving falling rather than rising interest rates, will unfold.

The public would be wise to keep these possibilities in mind whenever it’s tempted to complain that the Fed is “setting” interest rates “too high” or “too low.”  The public may be right, in the sense that the FOMC needs to adjust the fed funds target if it’s to keep inflation under control.  On the other hand, if what the public is really asking is that the Fed try to force rates above or below their equilibrium values, it should consider itself lucky if the FOMC refuses to listen.

A Steamship Analogy

I hope I’ve already said enough to drive home the fact that there was, prior to the crisis at least, only a grain of truth to the common belief that the Fed “sets” interest rates. It is, of course, true that the Fed can influence the prevailing level of interest rates through its ability to alter both the actual and the expected rate of inflation. But apart from that, and allowing that the Fed has long-run inflation goals that it’s determined to achieve, it’s more correct to say that the Fed’s monetary policy actions are themselves “set” or dictated to it by market-determined equilibrium rates of interest, than that the Fed “sets” interest rates.

But in case the point still isn’t clear, an analogy may perhaps help. Suppose that the captain of a steamship wishes to maintain a sea speed of 19 knots. To do so, he (I said it was a steamship, didn’t I?) sends instructions, using the ship’s engine-order telegraph, to the engineer, signaling “full ahead,” “half ahead,” “slow ahead,” “dead slow ahead,” “stand by,” “dead slow astern,” “slow astern,” and so on, depending on how fast, and in what direction, he wants the propellers to turn. The engine room in turn acknowledges the order, and then conveys it to the boiler room, where the firemen are responsible for getting steam up, or letting it down, by stoking more or less coal into the boilers. But engine speed is only part of the equation: there’s also the current to consider, variations in which require compensating changes in engine speed if the desired sea speed is to be maintained.

Now for the analogy: the captain of our steamship is like the FOMC; its engineer is like the manager of the New York Fed’s System Open Market Account, and the ship’s telegraph is like…a telephone.  The instructions “full ahead,” “half ahead,” “stand by,” “half astern,” and so forth, are the counterparts of such FOMC rate-target adjustments as “lower by 50 basis points,” “lower by 25 basis points,” “stand pat,” “raise by 25 basis points,” etc. Coal being stoked into the ship’s boilers is like fed funds being auctioned off. Changes in the current are the counterpart of changes in the demand for federal funds that occur independently of changes in Fed policy.  Finally, the engine speed consistent at any moment with the desired ship’s speed of 19 knots is analogous to the “equilibrium” federal funds rate.  Just as a responsible ship’s captain must ultimately let the sea itself determine the commands he sends below, so too must a responsible FOMC allow market forces to dictate how it “sets” its fed funds target.


Such was monetary control before the crisis. Since then, much has changed, at least superficially; so we must revisit the subject with those changes in mind. But first, I must explain how these changes came about, which means saying something about how the S.S. Fed managed to steam its way straight into a disaster.


**I resist calling it the “natural” rate, because that term is mostly associated with the work of the great Swedish economist Knut Wicksell, who meant by it the rate of interest consistent with a constant price level or zero inflation, rather than with any constant but positive inflation rate. The difference between my “equilibrium” rate and Wicksell’s “natural” rate is simply policymakers’ preferred long-run inflation rate target. Note also that I say nothing here about the Fed’s employment goals.  That’s because, to the extent that such goals are defined precisely rather than vaguely, they may also be incompatible, not only with the Fed’s long-run inflation objectives, but with the avoidance of accelerating inflation or deflation.


This week in New York, President Obama is hosting a summit on refugees with other governments, with the goal of doubling refugee resettlement internationally. The summit will come a week after the president announced that the United States will plan to accept 110,000 refugees in fiscal year (FY) 2017, which begins in October. While this is 25,000 more than this year’s target, it still represents a much smaller share of the internationally displaced population than the country has historically taken.

If the next administration follows through on the president’s plans, 2017 would see the largest number of refugees accepted since 1994, when the U.S. refugee program welcomed nearly 112,000 refugees. Yet, due to the scale of the current refugee crisis, the number represents a much smaller portion of persons displaced from their homes by violence and persecution around the world than in 1994.

As Figure 1 shows, the U.S. program took almost half a percent of the displaced population, as estimated by the United Nations, in 1994. Next year, it will take just 0.17 percent. The average since the Refugee Act was passed in 1980 has been 0.48 percent. There was a huge drop-off after 9/11, as the Bush administration attempted to update its vetting procedures, and although the rate rebounded slightly, it has since continued to decline.

Figure 1: Percentage of the U.N. High Commissioner for Refugees Population of Concern Admitted Under the U.S. Refugee Program (FY 1985-2017)

Sources: United Nations High Commissioner for Refugees, U.S. Department of State. *The 2017 figure uses the 2016 UN estimate and assumes that the U.S. will reach its 2017 goal.

But as the Obama and Bush administrations slowly ramped up the refugee program after 2001, the increases could not keep up with the number of newly displaced persons. Today, the number of displaced persons has reached its highest absolute level since World War II. As Figure 2 shows, the U.S. refugee program has almost returned to its early 1990s peak in absolute terms, but all of the increases in the program since 2001 have been matched by even greater increases in the number of displaced persons.  

Figure 2: Admissions Under the U.S. Refugee Program and U.N. High Commissioner for Refugees Population of Concern (FY 1985-2017)

Sources: See Figure 1

The increase in the refugee target for 2017 is still an improvement, but if it is to be anything other than a completely arbitrary figure, it should be based primarily on the international need for resettlement. The Refugee Act of 1980 states that the purpose of the program is to implement “the historic policy of the United States to respond to the urgent needs of persons subject to persecution.” In other words, the target should be a calculation based first on the need for resettlement.

If the U.S. program accepted refugees at the same rate as it has historically since the Refugee Act was passed in 1980, it would set a target of 300,000 for 2017. If it accepted them at the average rate from 1990 to 2016, the target would be 200,000. This is the amount that Refugee Council USA—the advocacy coalition for the nonprofits that resettle all refugees to the United States—has urged the United States to accept.
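
The arithmetic behind these targets can be checked roughly as follows. The ~65 million population-of-concern figure below is not quoted in this post; it is the approximate base implied by the 0.17 percent share cited above:

```python
# Back-of-the-envelope check of the refugee targets discussed above. The
# ~65 million UNHCR population of concern is the approximate base implied by
# the 0.17 percent figure cited earlier, not a number quoted in this post.
population_of_concern = 65_000_000
us_target_2017 = 110_000

print(f"2017 share: {us_target_2017 / population_of_concern:.2%}")   # ~0.17%

historical_share = 0.0048   # 0.48% average acceptance rate since 1980
print(f"Target at historical share: {historical_share * population_of_concern:,.0f}")
# ~312,000, in line with the roughly 300,000 target mentioned above
```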

The decline in the acceptance rate for the U.S. refugee program also highlights the inflexibility of a refugee target controlled entirely by the U.S. government. White House spokesman Josh Earnest said that the president would have set a higher target, but because accepting refugees is “not cheap” and congressional appropriators were not on board with the plan, he could not.

As I’ve argued before, the United States could accept more refugees using private money and private sponsors without needing Congress’s sign-off. The United States used to have a private refugee program in the 1980s, and for decades before the Refugee Act of 1980, the United States resettled tens of thousands of refugees with private money.

Canada currently has a private sponsorship program that is carrying a disproportionate share of the weight of its large refugee resettlement program, with better results than the government-funded program. Other countries are following Canada’s example; it’s time the United States did so as well.

There should be no limits on American charity. If American citizens want to invite people fleeing violence and persecution abroad to come into their homes, the government should not stand in their way. In the face of this historic crisis, the United States should allow Americans to lead a historic response that is consistent with our past.

On September 13, the University of California at Berkeley announced that it may have to take down online lecture and course content that it has offered free to the public: 

Despite the absence of clear regulatory guidance, we have attempted to maximize the accessibility of free, online content that we have made available to the public. Nevertheless, the Department of Justice has recently asserted that the University is in violation of the Americans with Disabilities Act because, in its view, not all of the free course and lecture content UC Berkeley makes available on certain online platforms is fully accessible to individuals with hearing, visual, or manual disabilities.

That Berkeley is not just imagining these legal dangers is illustrated by this clip from Tamar Lewin of the New York Times from February of last year: “Harvard and M.I.T. Are Sued Over Lack of Closed Captions.” 

I’ve been warning about this, to no apparent avail, for a long time. In a July post in this space, I noted the tag-team alliance of the U.S. Department of Justice, disabled-rights groups, and fee-seeking private lawyers in gearing up web-accessibility doctrine: when extreme positions are too politically unpalatable for DoJ to endorse directly, it supports private rights groups in their demands, and when a demand is too unpopular or impractical even for the rights groups, there’s nothing to stop the freelance lawyers from taking it up. 

Neither political party when in office has been willing to challenge the ADA, so there is every reason to believe this will continue. If you appreciate high-level course content from leading universities, but are not a paying enrolled student, you’d better enjoy it now while you can.   

From my new policy analysis (joint with Angela Dills and Sietse Goffard) on state marijuana legalizations:

In November 2012 voters in the states of Colorado and Washington approved ballot initiatives that legalized marijuana for recreational use. Two years later, Alaska and Oregon followed suit. As many as 11 other states may consider similar measures in November 2016, through either ballot initiative or legislative action. Supporters and opponents of such initiatives make numerous claims about state-level marijuana legalization.

Advocates think legalization reduces crime, raises tax revenue, lowers criminal justice expenditures, improves public health, bolsters traffic safety, and stimulates the economy. Critics argue that legalization spurs marijuana and other drug or alcohol use, increases crime, diminishes traffic safety, harms public health, and lowers teen educational achievement. Systematic evaluation of these claims, however, has been largely absent.

This paper assesses recent marijuana legalizations and related policies in Colorado, Washington, Oregon, and Alaska.

Our conclusion is that state marijuana legalizations have had minimal effect on marijuana use and related outcomes. We cannot rule out small effects of legalization, and insufficient time has elapsed since the four initial legalizations to allow strong inference. On the basis of available data, however, we find little support for the stronger claims made by either opponents or advocates of legalization. The absence of significant adverse consequences is especially striking given the sometimes dire predictions made by legalization opponents.

 Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

For more than two weeks Hurricane Hermine (including its pre-hurricane and post-hurricane life) was prominent in the daily news cycle.  It threatened, at one time or another, destruction along U.S. coastlines from the southern tip of Florida westward to New Orleans and northward to Cape Cod.  Hurricane/global warming stories, relegated to the hell of the formerly attractive by the record-long absence of a major hurricane strike on U.S. shores, were being spiffed up and readied for publication just as soon as disaster would strike.  But, alas, Hermine didn’t cooperate, arguably generating more bluster in the press than on the ground, although some very exposed stretches of North Florida did incur some damage.

Like Jessie, Woody and Stinky Pete in Toy Story 2, the hurricane/global warming stories have been put back in their boxes (if only they could be sent to a museum!).  

But, they didn’t have to be. There was much that could have been written speculating on the role of global warming in Hermine’s evolution—but it’s just not politically correct.

With a bit of thought-provocation provided by newly-minted Cato Adjunct Scholar Dr. Ryan Maue—one of the best and brightest minds in the  world on issues of tropical cyclone/climate interactions (and other extreme weather types)—we’ll review Hermine’s life history and consider what factors “consistent with” human-caused climate change may have shaped its outcome.

We look forward to having Ryan’s more formal input into our future climate change discussions, but didn’t want to pass up an opportunity to work some of his thoughts into our Hermine recap—for who knows, we may have to wait another 10 years for a Florida hurricane landfall!

Hermine was probably the most hyped category 1 hurricane in history—thanks to a large media constituency hungry for weather disaster stories to link with climate change peril. 

Widespread access to weather forecast models and a thirst to be first with the story of impending global-warming-fueled doom led to wild speculation in the media (both new and traditional) about “worst case scenarios” of a “major hurricane” landfall nearly a full week before the nascent swirl in the tropical atmosphere was officially designated a tropical cyclone by the National Hurricane Center. As “Invest 99L” (as it was originally designated) trudged westward through the tropical Atlantic environs during the week of August 22nd, it was struggling to maintain its life, much less its worst case aspirations. Pretty much every day you could find reports of a potentially dire impact from the Florida East Coast all the way through waterlogged Louisiana.

To hear it told, the combination of unusually warm water, the lack of recent hurricane landfalls and rising sea levels set the stage for a disaster. 

Invest 99L ultimately did survive an unfavorable stretch of environmental conditions in the Florida Straits and eventually, in the Gulf of Mexico, grew into Hurricane Hermine, becoming the first hurricane in over 10 years to make landfall in Florida when it came ashore near St. Marks in the early morning hours of Friday, September 2nd. It was a category 1 hurricane at landfall and caused some coastal flooding along Florida’s Gulf Coast and knocked out power to more than 100,000 homes and businesses in the Tallahassee area.  In reality, no hurricane-force sustained winds were measured onshore.

As Hermine traversed the Southeastern U.S. from Florida’s Big Bend region to North Carolina’s Outer Banks, the Labor Day forecast for the Jersey Shore became ominous, as Hermine’s post-tropical personality was expected to lash the Mid-Atlantic coast with near-hurricane force winds for several days, flooding low-lying areas and severely eroding beaches with each tidal cycle.  Many Labor Day holiday plans were cancelled.

As it turned out, Hermine’s post-tropical remnants travelled farther offshore than originally anticipated and never quite attained its projected strength—a combination which resulted in much less onshore impact than was being advertised, with much grumbling from those who felt they had cancelled their vacation plans for nothing.

In the end, Hermine’s hype was much worse than its bite. But all the while, if folks really wanted to write stories about how storm behavior is “consistent with” factors related to global warming, they most certainly could. 

For example, regarding Hermine, Ryan sent out this string of post-event tweets:


As is well known, we have gone much longer than we ever have without a Category 3 (major) or higher hurricane strike.  The last was nearly 11 years ago, when Wilma struck south Florida on October 24, 2005.  With regard to this and the reduced frequency of U.S. strikes in general, hurricane researcher Chunzai Wang from NOAA’s Atlantic Oceanographic and Meteorological Laboratory gave an informative presentation about a year ago, titled “Impacts of the Atlantic Warm Pool on Atlantic Hurricanes.” It included this bullet point:

● A large (small) [Atlantic Warm Pool—a condition enhanced by global warming] is unfavorable (favorable) for hurricanes to make landfall in the southeast United States.  This is consistent with that no hurricanes made landfall in the southeast U.S. during the past 10 years, or hurricanes moved northward such as Hurricane Sandy in 2012. [emphasis added]

In an article we wrote a few years back titled “Global Savings: Billion-Dollar Weather Events Averted by Global Warming,” we listed several other examples of elements “consistent with” climate change that may inhibit Atlantic tropical cyclone development and avert, or mitigate, disaster—these include increased vertical wind shear, changes in atmospheric steering currents, and Saharan dust injections.

But, you have probably never read any stories elsewhere that human-caused climate changes may be acting to lessen the menace of Atlantic hurricane strikes on the U.S. Instead you’ve no doubt read that “dumb luck” is the reason why hurricanes have been eluding our shores and why, according to the Washington Post, our decade-plus major hurricane drought is “terrifying” (in part, because of climate change).

Little wonder why.

You Ought to Have a Look is a regular feature from the Center for the Study of Science.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

There was an interesting stream of articles this week that, when strung together, provides a pretty good idea as to how the scientific literature on climate change can (and has) become biased in a hurry.

First up, consider this provocative article by Vladimir Jankovic and David Schultz of the University of Manchester, titled “Atmosfear: Communicating the effects of climate change on extreme weather.” They formalize the idea that climate change communication has become dominated by trying to scare folks into acceptance (and thus compliance with action). The abstract is compelling:

The potential and serious effects of anthropogenic climate change are often communicated through the soundbite that anthropogenic climate change will produce more extreme weather. This soundbite has become popular with scientists and the media to get the public and governments to act against further increases in global temperature and their associated effects through the communication of scary scenarios, what we term “atmosfear.” Underlying atmosfear’s appeal, however, are four premises. First, atmosfear reduces the complexity of climate change to an identifiable target in the form of anthropogenically forced weather extremes. Second, anthropogenically driven weather extremes mandate a responsibility to act to protect the planet and society from harmful and increased risk. Third, achieving these ethical goals is predicated on emissions policies. Fourth, the end-result of these policies—a non-anthropogenic climate—is assumed to be more benign than an anthropogenically influenced one. Atmosfear oversimplifies and misstates the true state of the science and policy concerns in three ways. First, weather extremes are only one of the predicted effects of climate change and are best addressed by measures other than emission policies. Second, a pre-industrial climate may remain a policy goal, but is unachievable in reality. Third, the damages caused by any anthropogenically driven extremes may be overshadowed by the damages caused by increased exposure and vulnerability to the future risk. In reality, recent increases in damages and losses due to extreme weather events are due to societal factors. Thus, invoking atmosfear through such approaches as attribution science is not an effective means of either stimulating or legitimizing climate policies.

With a dominant atmosphere of atmosfear running through climate science, it’s pretty easy to see how the scientific literature (which is contributed to and gatekept by the scientific establishment) rapidly becomes overrun with pro-establishment articles—a phenomenon that is the subject of a paper by a team led by Silas Nissen of the Danish Niels Bohr Institute. In their paper “Publication bias and the canonization of false facts” they describe how the preferential publication of “positive” results (i.e., those which favor a particular outcome) leads to a biased literature and, as a result, a misled public. From their abstract:

In our model, publication bias—in which positive results are published preferentially over negative ones—influences the distribution of published results. We find that when readers do not know the degree of publication bias and thus cannot condition on it, false claims often can be canonized as facts. Unless a sufficient fraction of negative results are published, the scientific process will do a poor job at discriminating false from true claims. This problem is exacerbated when scientists engage in p-hacking, data dredging, and other behaviors that increase the rate at which false positives are published…To the degree that the model accurately represents current scholarly practice, there will be serious concern about the validity of purported facts in some areas of scientific research.

Nissen and colleagues go on to conclude:

In the model of scientific inquiry that we have developed here, publication bias creates serious problems. While true claims will seldom be rejected, publication bias has the potential to cause many false claims to be mistakenly canonized as facts. This can be avoided only if a substantial fraction of negative results are published. But at present, publication bias appears to be strong, given that only a small fraction of the published scientific literature presents negative results. Presumably many negative results are going unreported. While this problem has been noted before, we do not know of any previous formal analysis of its consequences regarding the establishment of scientific facts.
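
To see how strong the effect can be, consider a toy simulation in the spirit of (though much simpler than) the Nissen et al. model. The false-positive rate, statistical power, and publication probabilities below are illustrative assumptions, not parameters from their paper:

```python
import random

# Toy illustration of publication bias: positive results are far more likely
# to be published than negative ones. All parameters are assumptions chosen
# for illustration, not values from Nissen et al.
random.seed(1)

ALPHA, POWER = 0.05, 0.80            # false-positive rate; true-positive rate
PUB_POS, PUB_NEG = 0.90, 0.02        # publication probability for +/- results

def share_positive_in_print(claim_is_true, n_studies=100_000):
    """Fraction of *published* studies on a claim that report a positive result."""
    pos = neg = 0
    for _ in range(n_studies):
        positive = random.random() < (POWER if claim_is_true else ALPHA)
        if random.random() < (PUB_POS if positive else PUB_NEG):
            pos, neg = pos + int(positive), neg + int(not positive)
    return pos / (pos + neg)

print(f"true claim : {share_positive_in_print(True):.0%} of published results positive")
print(f"false claim: {share_positive_in_print(False):.0%} of published results positive")
# Even the false claim looks mostly 'confirmed' in print (~70% positive here),
# which is the canonization problem the paper describes.
```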

And once the “facts” are ingrained, they set up a positive feedback loop as they get repeatedly “reviewed” in an increasingly popular pastime (enjoyed by national and international institutions alike): producing assessment reports of the scientific literature, oftentimes as the foundation and justification for policymaking. A prime example is the demonstrably atrocious “National Assessments” of climate change in the U.S., published by—who else—the federal government to support—what else—its climate change policies.  In his new paper, “The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses,” Stanford’s John Ioannidis describes the proliferation of systematic review papers and why this is a terrible development for science and science-based policy. In an interview with Retraction Watch, Ioannidis discusses his findings. Here’s a taste:

Retraction Watch: You say that the numbers of systematic reviews and meta-analyses have reached “epidemic proportions,” and that there is currently a “massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses.” Indeed, you note the number of each has risen more than 2500% since 1991, often with more than 20 meta-analyses on the same topic. Why the massive increase, and why is it a problem?

John Ioannidis: The increase is a consequence of the higher prestige that systematic reviews and meta-analyses have acquired over the years, since they are (justifiably) considered to represent the highest level of evidence. Many scientists now want to do them, leading journals want to publish them, and sponsors and other conflicted stakeholders want to exploit them to promote their products, beliefs, and agendas. Systematic reviews and meta-analyses that are carefully done and that are done by players who do not have conflicts and pre-determined agendas are not a problem, quite the opposite. The problem is that most of them are not carefully done and/or are done with pre-determined agendas on what to find and report.

Ioannidis concludes “Few systematic reviews and meta-analyses are both non-misleading and useful.”

Together, the chain of events described above leads to what’s been called an “availability cascade”—a self-promulgating process of collective belief. As we’ve highlighted on previous occasions, availability cascades lead to nowhere good in short order. The first step in combatting them is to recognize that we are caught up in one. The above papers help to illuminate this, and you really ought to have a look!



Ioannidis, J., 2016. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Quarterly, 94, 485–514. DOI: 10.1111/1468-0009.12210.

Jankovic, V., and D. Schultz, 2016. Atmosfear: Communicating the effects of climate change on extreme weather. Weather, Climate, and Society.

Nissen, S., et al., 2016. Publication bias and the canonization of false facts.

On Monday, I argued that a new report by the Center for Immigration Studies (CIS) entitled “Immigrants Replace Low-Skill Natives in the Workforce” provided no evidence that immigrants are causing low-skilled natives to quit working. In fact, the trends point toward immigration pushing employed natives up the skills ladder. In his response yesterday, the author Jason Richwine either ignores my points or backtracks the claims in his original report.

Here are six examples:

1. In his paper, Mr. Richwine writes that “an increasing number of the least-skilled Americans [are] leaving the workforce” (my emphasis). I pointed out that this statement is not true, that the number of high school dropouts not working has actually declined since 1995. But in his response, he drops the “increasing,” altering his claim to say that “low-skilled Americans have been dropping out of the labor force even as low-skill immigrants have been finding plenty of work.” This altered claim is true. Some low-skilled Americans have dropped out of the labor force during this time, just not more of them, which is the implication.

2. In his paper, Mr. Richwine concludes that the share of male high school dropouts who are not working has increased. I pointed out that the share has increased only because natives are getting educated, not because the number of dropouts not working has increased. In response, Mr. Richwine tries to redeem his analysis by claiming that the fact that more Americans are graduating high school is “meaningless.” But in his paper, Mr. Richwine thought the distinction between high school graduates and dropouts was very important. As he explained:

In other words, high school graduates are not the same as high school dropouts. 

He was right the first time. Even George Borjas, the restrictionist Harvard professor, separates high school dropouts from graduates when studying the effect of immigration because they perform different tasks in the economy. The result is that they have very different outcomes. Indeed, the labor force participation rate of native-born Americans in their prime is consistently 15 to 20 percentage points above that of high school dropouts. While Mr. Richwine is free to think that there is no distinction between high school dropouts and high school graduates, the labor market disagrees.

Moreover, the skills upgrading during this time went beyond simply more natives graduating high school. In fact, the only skill categories that have seen a growth in the number of natives from 1995 to 2014 were those with some college education or a college degree. The number of natives with a high school degree or less decreased by roughly 10 million, while the number of natives who have received some college education or a college degree grew by roughly 10 million. This means that natives who are graduating high school are more often going on to get further education. 

3. In his paper, Mr. Richwine was seeking to explain an interesting trend—that the overall labor force participation rate for prime age native-born Americans has gone down—and argues that more native-born high school dropouts not working explains much of this trend. I pointed out that this group doesn’t explain any of the trend, since there are fewer native-born dropouts not working today than in 1995. But now he argues that there is no distinction between high school dropouts and high school graduates. So, combining these two groups, how much of the decrease in the labor force participation rate for prime age natives over the last two decades can be explained by more lower-skilled workers abandoning work? Once again, the answer is zero—or actually less than zero, since the number has declined.

Net Growth in the Number of Prime Age Natives Not in the Labor Force from 1995 to 2014

Source: Census Bureau, Current Population Survey, March Supplement

4. In his paper, Mr. Richwine claimed that low-skilled immigration was indirectly leading fewer low-skilled natives to work. I pointed out that actually all of the increase in the number of natives not working came from higher skilled categories. Now in his response to me, Mr. Richwine claims that this fact is also irrelevant because “no matter how many stories we tell about movement between categories, the picture [overall] is getting worse rather than better.” Actually, it is relevant. His entire paper was built around the supposed relationship between low-skilled immigration and low-skilled natives not working. Now he says it doesn’t matter which natives stopped working.

5. After Mr. Richwine finished backtracking, he doubled down on the worst point in his original paper, asking, “Why should we assume that natives can increase their skills in response to immigration?” First, the simple fact is that natives have upgraded their education and skills in recent years as immigrants have entered lower-skilled fields. Lower-skilled Americans have repeatedly shown that they can increase their skills. Second, as I pointed out, there is reason to believe that this relationship is causal because immigration raises the relative wage of higher-skilled workers. Incentives matter, and increasing the rewards for graduating high school or college has incentivized lower educated Americans to climb the skills ladder.

Still, Mr. Richwine argues that because “not everyone can become a skilled worker,” we should not “bring in more immigrants.” In this view, it doesn’t matter how many Americans benefit from immigration. Indeed, even if immigration helped those at the bottom much more than it hurt them by pushing them up the skills ladder, it shouldn’t be allowed unless every single high school dropout who isn’t even looking for work can—not only not be hurt—but benefit. What a strange argument.

6.  Mr. Richwine explicitly concedes that his analysis provides no evidence whatsoever that immigrants are a threat to the employment prospects of natives. But he nonetheless offers the strange theory that if there were no immigrants, policymakers would act to help those low-skilled natives. While he offers no evidence for this theory either, it does appear to be the case that government becomes more active in labor market regulation and welfare state intrusions during periods of low immigration. Former Center for Immigration Studies board member Vernon M. Briggs Jr. even wrote that one cost of immigration liberalization is that it would have prevented Congress from passing left wing worker, family, and welfare legislation.

If immigration is what is standing in the way of such legislation, then that is decidedly a good thing as well. These interventions have almost always done more harm than good. Indeed, as I showed in a recent post, a major reason for the dramatic improvement in immigrant labor market outcomes was the 1996 welfare reform that restricted their access to benefits. When government ignored them, they thrived. If the presence of immigrants encourages policymakers to do the same for natives (which they did to some extent in 1996), that is just another benefit of immigration.

So in the end, Mr. Richwine is left with this argument: It doesn’t matter if immigrants don’t harm natives. It doesn’t matter if immigrants help natives overall. It doesn’t matter if immigrants help the worst-off natives without hurting any of them. Only if pro-immigration proponents can prove that immigrants not only don’t hurt but actively help all of the poorest educated natives who aren’t even looking for work should the United States allow any immigrants to come in. This is an argument of someone who could never—even theoretically—change his mind.

Tomorrow, September 17, is Constitution Day in America, celebrating the day 229 years ago when the Framers of the Constitution finished their work in Philadelphia over that long hot summer and then sent the document they’d just completed out to the states for ratification. Reflecting a vision of liberty through limited government that the Founders had first set forth 11 years earlier in the Declaration of Independence, “We the People,” through that compact, sought “to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity.”

The political and legal history that has followed has been far from perfect, of course. Has any nation’s been otherwise? But in the great sweep of human affairs, we’ve done pretty well living under this, the world’s oldest written constitution still in effect. Much that has happened to and under the Constitution during the ensuing years has been good, such as the ratification of the Civil War Amendments, which incorporated at last the grand principles of the Declaration of Independence; much has not been so good, such as the major reinterpretations of the document, without amendment, that took place during the New Deal, giving us the modern executive state that undermines the Founders’ idea of liberty through limited government.

Here at the Cato Institute we’ve long celebrated this day as we did yesterday with our 15th annual Constitution Day symposium. Those who missed the event, which concluded with Arizona Supreme Court Justice Clint Bolick’s B. Kenneth Simon Lecture in Constitutional Thought, will be able to see it in a couple of days at Cato’s events archives. At the symposium, as we do every year, we released our 15th annual Cato Supreme Court Review, which is already online, along with the volumes from previous years. As the Founders and Framers understood, liberty under constitutionally limited government will endure only as long as it remains alive in the hearts and minds of the people. That’s why we mark this day each year.

Michael Lind, a co-founder of left-leaning New America, is urging the federal government to create universal mobility accounts that would give everyone an income tax credit, or, if they owe no taxes, a direct subsidy to cover the costs of driving. He argues that social mobility depends on personal mobility, and personal mobility depends on access to a car, so therefore everyone should have one.

This is an interesting departure from the usual progressive argument that cars are evil and we should help the poor by spending more on transit. Lind responds to this view saying that transit and transit-oriented developments “can help only at the margins.” He applauds programs that help low-income people acquire inexpensive, used automobiles, but–again–thinks they are not enough.

Lind is virtually arguing that automobile ownership is a human right that should be denied to no one because of poverty. While I agree that auto ownership can do a lot more to help people out of poverty than more transit subsidies, claiming that cars are a human right goes a little too far.

Lind may not realize that there just aren’t that many poor people who don’t have cars anymore. According to the 2015 American Community Survey, 91.1 percent of American households have at least one vehicle and 95.5 percent of American workers live in a household with at least one vehicle. A lot of the households with no vehicles could afford to own one but choose not to, so the number of poor people without vehicles is very small. Of course, some two-worker families may have only one vehicle, but older used cars are affordable enough that the cost hardly seems a barrier to anyone with a job.

One implication is that adding a mobility credit to an already complicated tax system means that the vast majority of the credit will go to high-income people and people who already have cars. If only 2 to 3 percent (or even more) of people are both in poverty and lack access to cars, then why should everyone else also get a tax credit? Lind argues against means testing, saying that middle- and upper-income voters resent such programs, but non-means-tested programs always end up favoring the rich, if only because many poor people won’t file tax returns and so won’t know to ask for their credit or subsidy.

Lind makes no attempt to estimate how much the program will cost, but to make a difference to low-income people I suspect it would have to provide a credit of at least $500 per person per year. With 150 million workers (there were 146 million in 2015, but this program is supposed to increase employment), that’s $75 billion a year, or about 5 percent of personal income tax revenues.
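
Spelled out, the cost arithmetic looks roughly like this; the ~$1.5 trillion income-tax base is the level implied by the 5 percent comparison above, not a figure given in the post:

```python
# Rough cost arithmetic for the proposed mobility credit, using the figures
# in the paragraph above. The ~$1.5 trillion in personal income tax receipts
# is the base implied by the 5 percent comparison, not a number from the post.
credit_per_worker = 500
workers = 150_000_000

annual_cost = credit_per_worker * workers
print(f"${annual_cost / 1e9:.0f} billion per year")                      # $75 billion

income_tax_receipts = 1.5e12
print(f"{annual_cost / income_tax_receipts:.0%} of income tax revenue")  # ~5%
```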

Which raises the question of how to pay for the program. Giving people either tax breaks or direct subsidies means either borrowing more money or raising other taxes. Whether borrowed or taxed, someone is eventually going to have to pay for it, and that means middle- to high-income people. This pretty much destroys the argument that they will support this policy so long as they get some of the benefits.

More important questions are whether this program would even make a real difference and if there are other programs that could do better for less cost. Lind offers no evidence of the former, but I can think of a lot of ways to help low-income workers that would both cost less and not imply that mobility is some kind of a human right.

Most important would be to remove the barriers to mobility that have been erected by other left-leaning groups, namely the smart-growth crowd and other urban planners. Too much city planning for the past several decades has been based on increasing traffic congestion to try to force people out of their cars. Yet congestion is most likely to harm low-income people, because their jobs are less likely to allow flex time, working at home, or other escapes from traffic available to knowledge workers. Low-income people also have less choice about home locations, especially in regions that have made housing expensive, meaning their commutes can be longer than those of higher-income workers.

Relieving traffic congestion in every major urban area in the country would cost a lot less than $75 billion a year. This doesn’t necessarily mean building new roads. Instead, start by removing the so-called traffic calming devices that actually take more lives by delaying emergency service vehicles than they save by reducing auto-pedestrian accidents. Then add dynamic traffic signal controls that have been proven to cost-effectively save people’s time, energy, and clean air. Converting HOV lanes to HOT lanes greatly reduces congestion and actually produces revenue. At relatively little cost, steps like these would remove many of the barriers to automobility for low-income families.

I’m not going to do more than mention the effects of increased auto ownership on urban sprawl, energy consumption, pollution, and greenhouse gas emissions other than to say that low-income people tend to drive cars that are least safe, least energy efficient, and most polluting. I think these problems are exaggerated by the left, but it is ironic that Lind wants to expand auto ownership when most other progressives want to restrict automobile use.

Ultimately, however, I have to reject any implication that mobility is some kind of human right. Human rights include the right not to be bothered by government for doing or believing things that don’t harm other people: free speech, freedom of the press, and freedom of religion. Human rights do not include the right to expect other people to pay for things that you would like but can’t or don’t want to work for.

Rather than invent new human rights, people who are concerned about poverty should first ask what kind of barriers government creates that prevent social mobility. Those barriers should all be removed before any thought is given to taxing some people in order to give money or resources to others.

A major Wall Street Journal article claims, “A group of economists that includes Messrs. Hanson and Autor estimates that Chinese competition was responsible for 2.4 million jobs lost in the U.S. between 1999 and 2011.” In a recent interview with the Minneapolis Fed, however, David Autor said, “That 2 million number is something of an upper bound, as we stress.” The central estimate was a 10% job loss, which works out to 1.2 million jobs in 2011, rather than 2.4 million. Since 2011, however, the U.S. added 600,000 manufacturing jobs, while imports from China rose by 21%, so both the job loss estimate and its alleged link to trade (rather than recession) need a second look.

“The China Shock,” by David Autor, David Dorn, and Gordon Hanson, examined the effect of manufactured imports from one country (China) on local U.S. labor markets. That is interesting and useful as far as it goes. But a microeconomic model designed for local “commuting zones” cannot properly be extended to the entire national economy without employing a macroeconomic model.

For one thing, the authors look at only one side of trade (imports) and at only two countries. They ignore rising U.S. exports to China, including soaring U.S. service exports. They are at best discussing one side of bilateral trade. And they fail to consider the spillover effects of China’s soaring imports from other countries (such as Australia, Hong Kong, and Canada), which were then able to use the extra income to buy more U.S. exports.

Autor, Dorn, and Hanson offer a rough estimate that, “had import competition not grown after 1999,” there would have been 10% more U.S. manufacturing jobs in 2011. In that hypothetical “if-then” sense, they suggest that “direct import competition [could] amount to 10 percent of the realized job loss” from 1999 to 2011.

Any comparison of jobs in 1999 and 2011 should have raised more alarm bells than it has. After all, 1999 was the peak of a long tech boom, while manufacturing jobs in 2011 were still at the worst level of the Great Recession.  

From 1999 to 2011, manufacturing jobs fell by 5.6 million, but between recessions (December 2001 to December 2007) manufacturing jobs fell by just 2 million. The alleged job loss would look much smaller if it had been measured between 1999 and 2007 or extended beyond 2011 to 2015-16. In the Hickory, NC, area that the Wall Street Journal focused on, for example, unemployment was down to 4.8% in July (lower than in 2007), even though manufacturing accounted for only a fourth of all jobs.

We can easily erase half of the alleged job loss from China imports by simply updating the figures to 2015.

There were 11.7 million manufacturing jobs in 2011, so a 10% increase would have raised that to 12.9 million – suggesting a hypothetical job loss of 1.2 million, rather than 2.4 million. 

From 2011 to 2015, U.S. imports from China rose 21%, but U.S. manufacturing jobs were up to 12.3 million. Half the job loss Autor, Dorn and Hanson attributed to imports from China vanished from 2011 to 2015, and that certainly was not because we imported less from China.
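To make the arithmetic in the preceding paragraphs explicit, here is a minimal sketch using only the figures cited above: 11.7 million manufacturing jobs in 2011, a hypothetical 10 percent counterfactual increase, and 12.3 million jobs in 2015.

# Sketch of the job-loss arithmetic discussed above, using figures cited in the text.
jobs_2011 = 11.7e6                      # manufacturing jobs in 2011
counterfactual_2011 = jobs_2011 * 1.10  # ~12.9 million, had import competition not grown
implied_loss_2011 = counterfactual_2011 - jobs_2011
print(f"Implied job loss through 2011: {implied_loss_2011 / 1e6:.1f} million")  # ~1.2 million

jobs_2015 = 12.3e6                      # manufacturing jobs in 2015
remaining_gap_2015 = counterfactual_2011 - jobs_2015
print(f"Remaining gap in 2015: {remaining_gap_2015 / 1e6:.1f} million")         # ~0.6 million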

Written with Christopher E. Whyte of George Mason University

What would it mean if a country couldn’t keep any secrets? 

The question may not be as outlandish as it seems. The hacks of the National Security Agency and the Democratic National Committee represent only the most recent signposts in our evolution toward a post-secrecy society. The ability of governments, companies, and individuals to keep information secret has been evaporating in lockstep with the evolution of digital technologies. For individuals, of course, this development raises serious questions about government surveillance and people’s right to privacy. But more broadly, the inability of governments to keep secrets foreshadows a potential sea change in international politics.

To be sure, the U.S. government still maintains many secrets, but today it seems accurate to describe them as “fragile secrets.” The NSA hack is not the first breach of American computer networks, of course, but the nature of the hack reveals just how illusory our ability to keep secrets has become. The Snowden affair made clear that the best defense isn’t proof against insider threats. The Shadow Brokers hack – against the NSA’s own top hacker group – has now shown that the best defense isn’t proof against outsider threats either. Even if the Shadow Brokers hack is a fabrication and the information was taken from the NSA in other ways – a traditional human intelligence operation, for instance, where a man with a USB drive managed to download some files – it seems clear that we’re in an era of informational vulnerability.

And what is true for the federal government is even more clearly true for private organizations like the Democratic National Committee. The theft and release of the DNC’s email traffic – likely carried out by Russian government hackers – illustrates that it’s not just official government information at risk. Past years have made it clear that civil society organizations – both venerable (political parties, interest groups, etc.) and questionable (the Church of Scientology, for instance, was the target of a range of disruptive attacks in 2008-‘09) – are as often the targets of digital intrusion as are government institutions.

At this point, it seems fair to think that there is no government or politically-relevant information that couldn’t, at some point, find its way into the hands of a hacker. From there, it is just a short hop into the public domain.

This is not to say, of course, that everything will wind up being made public. There is clearly too much information to wade through, most of it of little interest or value to anyone. And a great deal of information is so time sensitive that its value will evaporate long before anyone can steal it. But even there, as we know from the exposure of the NSA’s global surveillance programs, under the right circumstances hackers can monitor all manner of communications at the highest levels of government in real time. 

Nonetheless, as the frequency of broken secrets has risen, it has become clear that we have entered a new phase of “revelation warfare,” one in which revealed truths join disinformation and propaganda in the foreign policy toolkit. Disinformation, of course, remains an important weapon in the struggle for hearts and minds – witness Russia’s recent attempts to obfuscate the debate over NATO membership in Sweden, for example. But in many cases the truth may turn out to be even more powerful. Nations using disinformation campaigns always risk getting caught, thereby tarnishing their messages. Once a secret truth is revealed, however, it doesn’t matter whether the thief gets caught: the truth is out there.

The implications of this new era are profound. 

Leaks have traditionally been an insider’s threat of last resort, but the ability to steal information from a distance clearly provides exponentially greater opportunity for foreign nations to influence the political process here at home. Whether they come in the form of secrets about candidates’ personal lives, the shady dealings of political organizations, or deception by government officials, strategic revelations have the potential to destroy candidates, change the course of elections, and obstruct the making and implementation of public policy.

More fundamentally, too much transparency could seriously threaten the ability of governments to do things that require a certain amount of secrecy and discretion. Though most government operations have no special need for classified information or secrets, the conduct of foreign policy certainly does. Intelligence gathering, diplomacy, and the conduct of war all promise to become much more difficult if secrets become too hard to keep.

Over time, we may be in for an extended “Watergate effect,” in which a steady stream of hacked secrets leads to one ugly revelation after another, further eroding public confidence in our political institutions. On the other hand, a “sunlight effect” is also possible. The explosive potential of having one’s secrets revealed might just convince bureaucrats and politicians that the wisest course of action is to have nothing to hide.

However things turn out, it is clear that we need to get used to living in an era of increasingly fragile secrets.

Donald Trump recently unveiled a new child care plan whereby the government would force employers to give time off to new mothers in exchange for some shuffling of the tax code. Mothers do tend to benefit from such schemes, but they also end up paying for their time off in other, indirect ways, such as lower wages. Forcing employers to pay their female employees to take time off decreases the labor demand for child-bearing-age women and increases their supply, thus lowering their wages. Economist Larry Summers, former Director of the National Economic Council during President Obama’s first term, wrote a fantastic paper explaining this effect.

Many firms have maternity leave policies that balance an implicit decrease in wages or compensation for working-age mothers with time off to care for a newborn. The important point about these firm-specific policies is that they are flexible. Some women want a lot of time off and are less sensitive to the impact on their careers, while others want to return to work immediately. A one-size-fits-all government policy would remove this flexibility.

Regardless of the merits or demerits of Trump’s plan, economists Patricia Cortes and Jose Tessada discovered an easier and cheaper way to help women transition from being workers to being mothers who work: allow more low-skilled immigration.  In a 2008 paper, they found:

Exploiting cross-city variation in immigrant concentration, we find that low-skilled immigration increases average hours of market work and the probability of working long hours of women at the top quartile of the wage distribution.  Consistently, we find that women in this group decrease the time they spend in household work and increase expenditures on housekeeping services.

The effect wasn’t huge, but skilled women did spend less time on housework and more time working at their jobs.

Younger women with more education and young children would be the biggest beneficiaries of an expansion of childcare services provided by low-skilled immigrants. There are about 5.4 million working-age women with a college degree or higher who also have at least one child under the age of 8 (Table 1). Almost 78 percent of them are employed, 2 percent are unemployed and looking for work, and 21 percent are not in the labor force.

Table 1: Age of Youngest Child by Mother’s Employment Status, College and Above Educated, Native-Born Mothers, Ages 18-64 [table omitted]

     Source: March CPS, 2014.

As the youngest child ages, the percentage of women not in the labor force also shrinks, probably because less labor at home is required to take care of them (Figure 1).  Cheaper childcare provided by low-skilled immigrants can increase the rate at which these skilled female American workers return to the job after having a child. 

Figure 1: Age of Youngest Child by Percent of Mothers Not in Labor Force, College and Above Educated, Native-Born Mothers, Ages 18-64 [figure omitted]

     Source: March CPS, 2014.

Trump’s immigration policy doesn’t allow him to propose helping mothers in this cheaper way: allowing more lower-skilled immigrants to help with childcare conflicts with his position on cutting LEGAL immigration. Liberalizing low-skilled immigration would give more options to working mothers, boost opportunity for immigrants, increase the take-home pay of working mothers (through complementarities), allow more Americans with higher skills to enter the workforce, and slightly liberalize international labor markets. It’s a win-win for everybody except the current government-protected, highly regulated, heavily licensed, and very expensive day care industry.

It’s hard being a new mother.  My wife’s ability to deliver, feed, and care for our three-month-old son with only a short break from her job is impressive.  Instead of lowering my wife’s wages indirectly with an ill-conceived family leave policy, it would be better to expand her ability to hire immigrants to help out at home.  More choices, not mandated leave, will lead to better outcomes.  If only Trump’s immigration platform didn’t make this solution impossible.    

Last week’s nuclear test by North Korea generated a wealth of commentary and analysis about the future of security on the Korean peninsula. The United States and South Korea quickly responded to the test. On September 13, the United States flew two B-1 bombers over South Korea in a show of force, reminiscent of the bomber flights conducted in March 2013 after North Korea’s third nuclear test. South Korea’s response didn’t feature any displays of military force, but in many respects it was more dangerous because of its implications for crisis stability.

Two days after the nuclear test, South Korea’s Yonhap News Agency reported on the Korea Massive Punishment & Retaliation (KMPR) operational concept. According to the news report, “[KMPR] is intended to launch pre-emptive bombing attacks on North Korean leader Kim Jong-un and the country’s military leadership if signs of their impending use of nuclear weapons are detected or in the event of a war.” The strikes would likely be conducted using conventionally-armed ballistic and cruise missiles. Jeffrey Lewis of the Middlebury Institute of International Studies summed up the concept, “[South Korea’s] goal is to kill [Kim] so that he can’t shove his fat little finger on the proverbial button.”

A preemptive decapitation strike against North Korean political leadership is a bad idea for a host of reasons.

First, it will be very difficult for South Korea to know with certainty that a nuclear attack is imminent. Based on the press release accompanying the latest nuclear test, North Korea says it will mount nuclear warheads on Hwasong missile units. All of the missiles in the Hwasong family are liquid fueled, meaning they take time to fuel before they can be launched, but recent test footage indicates that they can be fired from stretches of highway. This presents a very complicated detection problem for South Korea. Moreover, even if Seoul can detect missiles moving into position, those missiles can be armed with either conventional or nuclear warheads, further contributing to detection difficulties.

Second, decapitation strikes are very difficult to pull off. In the opening hours of the 2003 invasion of Iraq, the United States tried to kill Saddam Hussein with cruise missiles and bombs from aircraft to no avail. Furthermore, even if the KMPR plan was implemented successfully and Kim Jong-un was killed before he could order a strike, there is no telling what happens next. Kim could simply devolve launch authority to missile units already in the field, telling them to go through with an attack even if he is killed. Or, without its dictator, the North could descend into chaos as various groups and individuals vie for control. Such a scenario is rife with uncertainty, and if armed conflict with South Korea broke out in the wake of a preemptive decapitation it could be more difficult to bring the conflict to a close.

Finally, the United States faces great danger under the KMPR if Washington cannot rein in its alliance partner. The KMPR is based on preemption, acting before North Korea has a chance to use its nuclear weapons. This means that Seoul would go on the offensive, and would likely do so with little warning to preserve the element of surprise. Admittedly, we do not know all the details of the KMPR, but if the United States is not consulted before South Korea launches its preemptive strike, then Washington risks being drawn into a war not of its choosing. Such a risk exists in any alliance relationship, and while the language of the U.S.-South Korea defense treaty is explicitly defensive, if war broke out in the aftermath of a South Korean preemptive attack there would be strong political pressure for the United States to help its ally.

North Korea must be deterred from ever using its nuclear weapons, but South Korea’s preemptive decapitation plan is fraught with problems that contribute to instability. The idea of killing Kim Jong-un before he can use nuclear weapons may be politically attractive, but operationally the gambit is unlikely to succeed. The United States should make it clear to Seoul that going through with the KMPR would be disastrous and not in Seoul’s best interests. 

According to the Washington Post,

The Canadian government has quietly approved new drug regulations that will permit doctors to prescribe pharmaceutical-grade heroin to treat severe addicts who have not responded to more conventional approaches.

In an ideal world, heroin and other opioids would be fully legal, but medicalization at least helps users avoid street heroin that is adulterated or much stronger than advertised (e.g., because it’s laced with fentanyl, an even more potent opioid). Those adulterated, stronger-than-advertised doses, not heroin per se, are the main cause of adverse health effects.


In April, the Trudeau government announced plans to legalize the sale of marijuana by next year, and it has appointed a task force to determine how marijuana will be regulated, sold and taxed.

Maybe the U.S. can learn something about drug policy from its neighbor to the north.

Global economic freedom improved slightly to 6.85 on a scale of 0 to 10 according to the new Economic Freedom of the World report, co-published in the United States today by the Fraser Institute and Cato. The United States still ranks relatively low on economic freedom, and is below Chile (13), the United Kingdom (10) and Canada (5). The top four countries, in order, are Hong Kong, Singapore, New Zealand and Switzerland.

The long-term U.S. decline beginning in 2000 is the most pronounced among major advanced economies. It mirrors a pattern among OECD (Organization for Economic Cooperation and Development) nations of steady increases in economic freedom in the decades leading into the 2000s, a fall in freedom especially in response to the global economic crisis, and a subsequent slight increase from the low. (See the graph below.) Neither the United States nor the average OECD country has recovered the high levels of economic freedom that they reached last decade.

Given that economic freedom is strongly related to prosperity and progress across the whole range of human well-being, the relatively low levels in the advanced countries are worrisome. Economic freedom in the top-ranked countries affects not only those nations but also exerts great influence on the global economy and human well-being around the world. A chapter by Dean Stansel and Meg Tuszynski documents the decline in the U.S. level of economic freedom and how it tracks the decline in the health of the U.S. economy since 2000. From 1980 to 2000, U.S. GDP grew 3.4% per year on average, compared to 1.8% thereafter. Stansel and Tuszynski describe how specific policies have weakened property rights and every major category of economic freedom studied in the index.

The rise in global economic freedom in the past 35 years has brought huge improvements to humanity. Authors James Gwartney, Robert Lawson and Josh Hall document the convergence in levels of economic freedom between developing and developed economies. Poor countries are catching up to rich countries in terms of economic freedom. This helps explain the unprecedented gains in global poverty reduction during this period (see the graph below).

Economic freedom is unambiguously good for the poor. It reduces poverty and increases the income of the poorest. As Professor Gwartney and his co-authors point out: “the average income of the poorest 10% in the most economically free nations is twice the average per-capita income in the least free nations.”

The report also includes chapters on the long-term decline of freedom in Venezuela, the increase of economic freedom in Ireland, and the effect of gender disparity on economic freedom. Read the whole report here.

My Cato trade policy colleagues and I recently released a Working Paper analyzing the Trans-Pacific Partnership (TPP). We find that the agreement is “net liberalizing,” and that despite its various flaws, the agreement will improve people’s lives and should be ratified. Some aspects of the agreement were obviously good (like lower tariffs) and others were easy to condemn (like labor regulations); but for many of the TPP’s 30 chapters, our opinion is more ambivalent. 

The TPP’s chapter on “state-owned enterprises” (SOEs) is one of those. The TPP’s SOE rules are good rules, but they’re not nearly as ambitious as we would like. We gave the chapter a minimally positive grade of 6 out of 10. Here’s some of what we had to say in our report:

Concerns about the role of SOEs have grown in recent years because SOEs that had previously operated almost exclusively within their own territories are increasingly engaged in international trade of goods and services or acting as investors in foreign markets. The chapter represents the first ever attempt to discipline SOEs as a distinct category through trade rules.

The provisions include three broad obligations meant to reduce the discretion of governments to use state-ownership as a tool for trade protectionism. These obligations are that SOEs and designated monopolies must operate according to commercial considerations only, must not give or receive subsidies in a way that harms foreign trade, and must not discriminate against foreign suppliers.

While the privatization of public assets is an important part of economic liberalization, the State-Owned Enterprises chapter does not attempt to eliminate or prevent government ownership. Instead, it is intended to reduce the economic distortions caused by direct government ownership of prominent firms. The chapter’s three main obligations make an excellent contribution to international economic law. The rules are simple and well-tailored to address the problem of protectionism conducted through management of SOEs and the granting of special privileges to them….

On the down side, the chapter provides for numerous exceptions to the basic rules, which will have the effect of diminishing the impact of these otherwise market-oriented disciplines. Each party maintains detailed lists of exemptions, allowing the bulk of existing SOEs to continue operating without paying much heed to the SOE disciplines. Most members have carved out specific SOEs from the chapter’s disciplines, particularly in the energy and finance sectors. The extent of these exemptions significantly limits the chapter’s practical impact. Moreover, by requiring a controlling interest, the definition of an SOE leaves out a great number of enterprises that receive special treatment by the state due to inappropriate government involvement through ownership.

It’s good that the TPP has rules for SOEs and that the rules in the TPP are good ones, but the agreement isn’t going to do much, if anything, to reform existing SOEs in the 12 member countries. 

Even so, the TPP’s SOE rules are not a total wash. They have real value in the long run, because they will provide a blueprint for future negotiations involving China.

Any trade negotiations between the United States and China, whether over China’s entry into the TPP or the formation of a larger agreement including both economies, will surely be contentious. The U.S. business lobby has a lot of complaints about various Chinese policies that disadvantage foreign companies, and SOEs are near the top of the list. At the same time, the Chinese government continues to rely on state ownership in key sectors, especially banking and finance, not simply as a tool for protectionism but also as a way to manage its economy more broadly. It will be reluctant to give up that control in a trade agreement.

Ultimately, the greatest benefit from SOE liberalization will be enjoyed by the Chinese people, who currently suffer from rampant cronyism, mismanagement, and misallocation of investment in SOE-dominated industries.

It’s impossible to know at this point precisely how future U.S.–China trade negotiations will go or how much influence the TPP’s SOE rules will have on them. But those rules do provide an encouraging step in the right direction. Despite its practical weaknesses, the SOE chapter is a positive component of the TPP and is one of many reasons why advocates of free markets should support the agreement.

Yesterday, Cato published my policy analysis, “Terrorism and Immigration: A Risk Analysis,” in which I attempt, among other things, to quantify the terrorist threat from immigrants by visa category.

One of the best questions I received about it came from Daniel Griswold, Senior Research Fellow and Co-Director of the Program on the American Economy and Globalization at the Mercatus Center. Full disclosure: Dan used to run Cato’s immigration and trade department, and he has been a mentor to me. Dan asked how many of the ten illegal immigrant terrorists I identified had crossed the Mexican border.

I didn’t have a good answer for Dan yesterday but now I do. 

Of the ten terrorists who entered the country illegally, three did so across the border with Mexico. Shain Duka, Britan Duka, and Eljvir Duka are ethnic Albanians from Macedonia who illegally crossed the border with Mexico as children with their parents in 1984. They were three conspirators in the incompetently planned Fort Dix plot that was foiled by the FBI in 2007, long after they became adults. They became terrorists at some point after immigrating here illegally. Nobody was killed in their failed attack.

Gazi Ibrahim Abu Mezer, Ahmed Ressam, and Ahmed Ajaj entered illegally or tried to do so along the Canadian border. Ajaj participated in the 1993 World Trade Center bombing, so I counted him as responsible for one murder in a terrorist attack. Abdel Hakim Tizegha and Abdelghani Meskini both entered illegally as stowaways on a ship from Algeria. Shahawar Matin Siraj and Patrick Abraham entered as illegal immigrants but it’s unclear where or how they did so.

Based on this history, it’s fair to say that the risk of terrorists crossing the Southwest border illegally is minuscule.

Beginning in 2009, developers in Seattle became leaders in micro-housing. As the name suggests, micro-housing consists of tiny studio apartments or small rooms in dorm-like living quarters. These diminutive homes come in at around 150–220 sq. ft. each and usually aren’t accompanied by a lot of frills. Precisely because of their size and modesty, this option provides a cost-effective alternative to the conventional, expensive, downtown Seattle apartment model.

Unfortunately, in the years following its creation, micro-housing development has all but disappeared. It isn’t that Seattle prohibited micro-housing outright. Instead, micro-housing’s gradual demise was death by a thousand cuts, with an avalanche of incremental zoning regulation finally doing it in for good. Design review requirements, floor space requirements, amenity requirements, and location prohibitions are just a few of the Seattle Planning Commission’s assorted weapons of choice.

As a result of the exacting new regulations placed on tiny homes, Seattle has lost an estimated 800 units of low-cost housing per year. While this free-market (and free-to-the-taxpayer) solution faltered, Seattle poured millions of taxpayer dollars into various housing initiatives that subsidize housing supply or housing demand.

Sadly, Seattle’s story is anything but unusual. For nearly a hundred years, the unintended consequences of well-meaning zoning regulations have played out in counterproductive ways time and time again. Curiously, in government circles zoning’s myriad failures are met with calls for more regulations and more restrictions (no doubt with more unintended consequences) to patch over the failures of past regulations.

In pursuit of the next great fix, cities try desperately to mend the damage that they’ve already done. Euphemistically-titled initiatives like “inclusionary zoning” (because who doesn’t want to be included?) force housing developers to produce low-cost apartments in luxury apartment buildings, thereby increasing the price of rent for everyone else. Meanwhile, “housing stabilization policies” (because who doesn’t want housing stabilized?) prohibit landlords from evicting tenants that don’t pay their rent, thereby increasing the difficulty low-income individuals face in getting approved for an apartment in the first place.

The thought seems to be: yes, zoning regulations of the past have systematically jacked up housing prices, produced racial and class segregation both intentionally and unintentionally, reduced economic opportunity, and limited private property rights, but what else could go wrong?

Perhaps government planners could also determine how to restrict children’s access to good schools or safe neighborhoods. Actually, zoning regulations already do that, too.

Given the recent failures of zoning policies, it seems prudent for government planners to begin exercising a bit of humility, rather than simply proposing the same old shtick with a contemporary twist.

After all, they say that the definition of insanity is doing the same thing over and over and expecting different results.