Policy Institutes

Yesterday, Cato published my policy analysis entitled “Terrorism and Immigration: A Risk Analysis,” in which I attempt, among other things, to quantify the terrorist threat from immigrants by visa category.

One of the best questions I received about it came from Daniel Griswold, the Senior Research Fellow and Co-Director of the Program on the American Economy and Globalization at the Mercatus Center. Full disclosure: Dan used to run Cato’s immigration and trade department and he’s been a mentor to me. Dan asked how many of the ten illegal immigrant terrorists I identified had crossed the Mexican border.

I didn’t have a good answer for Dan yesterday but now I do. 

Of the ten terrorists who entered the country illegally, three did so across the border with Mexico. Shain Duka, Britan Duka, and Eljvir Duka are ethnic Albanians from Macedonia who illegally crossed the border with Mexico as children with their parents in 1984. They were three conspirators in the incompetently planned Fort Dix plot that was foiled by the FBI in 2007, long after they became adults. They became terrorists at some point after immigrating here illegally. Nobody was killed in their failed attack.

Gazi Ibrahim Abu Mezer, Ahmed Ressam, and Ahmed Ajaj entered illegally or tried to do so along the Canadian border. Ajaj participated in the 1993 World Trade Center bombing, so I counted him as responsible for one murder in a terrorist attack. Abdel Hakim Tizegha and Abdelghani Meskini both entered illegally as stowaways on a ship from Algeria. Shahawar Matin Siraj and Patrick Abraham entered as illegal immigrants, but it’s unclear where or how they did so.

Based on this history, it’s fair to say that the risk of terrorists crossing the Southwest border illegally is minuscule.

Beginning in 2009, developers in Seattle became leaders in micro-housing. As the name suggests, micro-housing consists of tiny studio apartments or small rooms in dorm-like living quarters. These diminutive homes come in at around 150–220 sq. ft. each and usually aren’t accompanied by a lot of frills. Precisely because of their size and modesty, this option provides a cost-effective alternative to the conventional, expensive, downtown Seattle apartment model.

Unfortunately, in the years following its creation, micro-housing development has all but disappeared. It isn’t that Seattle prohibited micro-housing outright. Instead, micro-housing’s gradual demise was death by a thousand cuts, with layer upon layer of incremental zoning regulation finally doing it in for good. Design review requirements, floor space requirements, amenity requirements, and location prohibitions constitute just a few of the Seattle Planning Commission’s assorted weapons of choice.

As a result of the exacting new regulations placed on tiny homes, Seattle lost an estimated 800 units of low-cost housing per year. While this free-market (and free-to-the-taxpayer) solution faltered, Seattle poured millions of taxpayer dollars into various housing initiatives that subsidize housing supply or housing demand.

Sadly, Seattle’s story is anything but unusual. For nearly a century, the unintended consequences of well-meaning zoning regulations have played out in counterproductive ways time and time again. Curiously, in government circles zoning’s myriad failures are met with calls for more regulations and more restrictions—no doubt with more unintended consequences—to patch over past regulations gone wrong.

In pursuit of the next great fix, cities try desperately to mend the damage that they’ve already done. Euphemistically titled initiatives like “inclusionary zoning” (because who doesn’t want to be included?) force housing developers to produce low-cost apartments in luxury apartment buildings, thereby increasing the price of rent for everyone else. Meanwhile, “housing stabilization policies” (because who doesn’t want housing stabilized?) prohibit landlords from evicting tenants who don’t pay their rent, thereby increasing the difficulty low-income individuals face in getting approved for an apartment in the first place.

The thought seems to be that even though zoning regulations of the past have systematically jacked up housing prices, intentionally and unintentionally produced racial and class segregation, and simultaneously reduced economic opportunities and limited private property rights, what else could go wrong?

Perhaps government planners could also determine how to restrict children’s access to good schools or safe neighborhoods. Actually, zoning regulations already do that, too.

Given the recent failures of zoning policies, it seems prudent for government planners to begin exercising a bit of humility, rather than simply proposing the same old shtick with a contemporary twist.

After all, they say that the definition of insanity is doing the same thing over and over and expecting different results.

Terrorism, I have argued previously, has hijacked much of the American foreign policy debate. Regardless of whether we are discussing Iraq, Iran, Libya, Russia, or nuclear weapons, it seems we are really talking about terrorism. But although it feels like we talk about terrorism nonstop these days, we actually talk about it a lot less than we did right after 9/11.

As Figure One shows, the news media’s attention to terrorism declined steadily through 2012. The Syrian civil war and then the emergence of the Islamic State reversed the trend. But even so, there were almost 40% fewer news stories mentioning the words “terror” or “terrorism” in 2015 than at the peak in 2002.


Of course, the one time every year when we can guarantee seeing plenty of news about terrorism is around the anniversary of the 9/11 attacks. Figure Two compares the daily average coverage of terrorism each year to the number of stories published on September 11. We can think of the ratio of the coverage on 9/11 to the daily average as the “anniversary attention effect.” The biggest anniversary effect came in 2011, on the 10th anniversary of the attacks, when the major U.S. newspapers printed almost six times more articles mentioning terrorism than the daily average. The smallest anniversary effect came in 2015, when it boosted coverage of terrorism by only about a third. Though the trend is a bit noisy, over time it is clear that the anniversary effect is shrinking. From 2002 through 2006, anniversary coverage was an average of 2.37 times higher than the daily average, but over the last five years, from 2012–2016, anniversary coverage has averaged just 1.88 times higher.
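
To make the “anniversary attention effect” concrete, here is a minimal sketch of the calculation: the ratio of stories printed on September 11 to the daily average for that year. The story counts in the example are hypothetical placeholders, not the newspaper data behind the figures.

```python
# Hypothetical illustration of the "anniversary attention effect":
# the ratio of terrorism stories printed on September 11 to the
# daily average for that year. The counts below are made up.

def anniversary_effect(stories_on_sept_11, total_stories_in_year, days_in_year=365):
    daily_average = total_stories_in_year / days_in_year
    return stories_on_sept_11 / daily_average

# 600 stories on 9/11 against 36,500 stories for the year (a daily
# average of 100) yields an effect of 6.0 -- six times a typical day.
print(anniversary_effect(600, 36_500))  # 6.0
```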

The shrinking anniversary attention effect suggests that the resonance of 9/11 may be waning as the attacks recede into history. Of course, we should not be too hasty to conclude that public fears of terrorism are also fading. As John Mueller has written here (and in greater detail here), Americans have harbored a healthy level of fear of future terrorist attacks ever since 9/11. But given how hyperbolic and utterly divorced from reality much of the terrorism rhetoric has been this election cycle, we can only hope that 9/11 is beginning to lose some of its symbolic power. Though it is important to honor friends and family we have lost to terrorism, we cannot let emotion dictate foreign policy. 

James Madison presciently warned “it will be of little avail to the people that the laws are made by men of their own choice if the laws be so voluminous that they cannot be read, or so incoherent that they cannot be understood.” Sadly, however, Madison’s admonishment has fallen on deaf ears when it comes to modern statutes and regulations—which in some cases are so numerous and complex that they cannot be deciphered by trained attorneys, much less the general public.

What’s worse, federal prosecutors have seized on the opportunity these vague statutes provide, and they now have the ability to prosecute almost anyone for anything. One protection against these incoherent laws and regulations, however, is that in most criminal cases the prosecution must show that a defendant had a certain degree of criminal intent (mens rea) in order to prove a violation. But for this protection to be effective, the courts must properly instruct the jury on the level of intent required by the statute.

In United States v. Clay, the district court—as well as a panel of judges on the Eleventh Circuit Court of Appeals—failed in this respect. In 2002, the Florida legislature enacted the “80/20 Statute,” which requires certain medical providers receiving state Medicaid funds to spend 80 percent of those funds on “the provision of behavioral health care services” or refund the difference to Florida’s Agency for Health Care Administration (AHCA). The statute, however, was ambiguous as to how the expenditures were to be calculated and did not set out any clear guidelines. Despite this ambiguity, in 2011 federal prosecutors indicted Mr. Clay and others for healthcare fraud and for making false statements, based on their alleged failure to properly calculate and report their expenditures to the AHCA. The defendants were prosecuted under a federal fraud statute that requires the government to prove that the defendants “knew” the reports were false. The judge, however, instructed the jury that it could convict if the defendants either knew the submissions were “untrue” or acted “with deliberate indifference as to the truth,” which is certainly not the same as the “knowledge” required by the statute. The district court allowed this jury instruction despite a 2011 Supreme Court decision holding that “deliberate indifference” cannot substitute for a statutory knowledge requirement, and a three-judge panel of the Eleventh Circuit upheld the district court’s instruction.

The Cato Institute has joined with the National Association of Criminal Defense Lawyers, the Washington Legal Foundation, the Reason Foundation, and twelve criminal and business law professors in asking the full Eleventh Circuit to rehear the case and vacate the panel’s opinion. The district court’s jury instruction was a clear departure from Supreme Court precedent and, if upheld, would weaken one of the fundamental checks on vague statutes and over-zealous prosecutors—the requirement that the government prove a defendant knew he was committing a crime.

Arlen and Cindy Foster are farmers in Miner County, South Dakota. Arlen’s grandfather bought the land over a century ago, and the family has been working it ever since. In 1936, Arlen’s father planted a tree belt on the south end of the farm as a conservation measure. As the weather warms, the snow around the tree belt melts and the water flows into a shallow circular depression, called a “prairie pothole” (circled in blue on the lower right-hand part of the picture). Unfortunately for the Fosters, the federal government has declared that the shallow depression is a protected wetland, and thus denied them the productive use of that portion of their land.

Department of Agriculture regulations define what qualifies as a wetland, but remain vague on some of the details. The regulations say that, if a parcel’s wetland status can’t be determined due to alteration of the vegetation (such as through filling or tilling the land), a similar parcel from the “local area” will be chosen to act as a proxy. “Local area” is never defined, but a 2010 internal field circular refers agency officials to an Army Corps of Engineers manual that uses the parallel language “adjacent vegetation.” Here, the agency interpreted “local area” to refer to an area of almost 11,000 square miles and then selected a proxy site some 33 miles from the Fosters’ farm. That proxy site supports wetland vegetation, so the Fosters’ land was also declared a protected wetland.


The Fosters appealed that determination all the way to the Eighth Circuit Court of Appeals, which blindly deferred to the agency’s strained and unnatural interpretation of “local area,” upholding the determination as a reasonable reading of the regulations’ requirements. Now the Fosters are seeking Supreme Court review, and the Cato Institute has filed an amicus brief in support of their petition for certiorari. The Eighth Circuit relied on the 1997 Supreme Court case Auer v. Robbins, which held that courts should give broad deference to an agency’s interpretation of its own regulations. While Auer’s holding has repeatedly been called into question by both the Supreme Court and various lower courts, the Eighth Circuit’s decision goes beyond Auer’s already shaky foundations. The decision actually afforded the agency “second-level” Auer deference, deferring to an interpretation of a vaguely written agency circular that interprets a vague regulation that in turn interprets a vague statute–all to get to a definition of “local area” that is nothing close to a natural and reasonable interpretation of that term.

We argue that Auer should not be extended in this way for several reasons. Since major policy decisions are being made in internal documents with no notice to the public, ordinary people are denied fair warning of what the law requires of them. And since these interpretive decisions are not binding on the agency itself, it is free to change them at any time, again without any notice to the public. Second-level Auer deference also undermines the rule of lenity—a traditional rule of interpretation stating that ambiguity in criminal statutes must be resolved in favor of the defendant—even more than first-level Auer deference already does. It effectively allows agencies to create new crimes (again without notice to the public) by doing something as minor as reinterpreting a footnote in a memo. Cato urges the Supreme Court to take the case so that it may rein in the expansion of Auer deference and make it clear to administrative agencies that they cannot avoid judicial review by refusing to promulgate clear, unambiguous regulations.

Cato published a paper of mine today entitled “Terrorism and Immigration: A Risk Analysis.”  I began this paper shortly after the San Bernardino terrorist attack in December of last year, when it became clear that few had attempted a terrorism risk analysis of immigration in general, let alone one focused on individual visa categories.  There were few studies on the immigration status of terrorists, and the vast majority of them were qualitative rather than quantitative.  Inspired by the brilliant work of John Mueller and Mark Stewart, I decided to produce my own.

From 1975 through the end of 2015, 154 foreign-born terrorists murdered 3,024 people on U.S. soil.  During that same time period, over 1.14 billion foreigners entered the United States legally or illegally.  About 7.4 million foreigners entered the United States for each one who ended up being a terrorist.  Startlingly, 98.6 percent of those 3,024 victims were murdered on 9/11 (I did not count the terrorists as victims, obviously).  However, not every terrorist is successful.  Only 40 of those 154 foreign-born terrorists actually ended up killing anyone on U.S. soil.

Immigrants frequently enter the United States on one visa and adjust their status to another.  Many tourists and other non-immigrants enter legally and then fall out of status, becoming illegal immigrants.  I focused on the visas foreigners used to enter the United States because the initial security screenings are performed when foreigners apply for those visas.

Table 1, copied from my paper, shows the chance of being killed in a terrorist attack committed by a foreigner on U.S. soil, by visa category.  Only three people have been killed on U.S. soil in terrorist attacks committed by refugees – a one in 3.64 billion chance per year of dying in an attack by a refugee.  If future refugees are 100 times as likely to kill Americans as past ones, all else being equal, then the chance of being killed in an attack caused by them will be one in 36 million a year.  That’s a level of risk we can live with.

Table 1


Source: “Terrorism and Immigration: A Risk Analysis.”
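
As a rough check on the arithmetic behind these odds (a sketch, not the paper’s methodology: the average U.S. population figure below is an approximation I am supplying for illustration), the annual odds are simply the average resident population divided by deaths per year:

```python
# Back-of-the-envelope annual odds of being killed: average resident
# population divided by deaths per year over the 1975-2015 period.
# The average population figure is an assumption for illustration.

YEARS = 41                  # 1975 through 2015
AVG_US_POPULATION = 266e6   # assumed average over the period

def annual_odds(deaths):
    """Return N in the 'one in N per year' figure for a given death toll."""
    return AVG_US_POPULATION / (deaths / YEARS)

print(f"Refugee-committed attacks: 1 in {annual_odds(3):,.0f}")       # ~3.6 billion
print(f"All foreign-born attacks: 1 in {annual_odds(3024):,.0f}")     # ~3.6 million
print(f"Entries per foreign-born terrorist: {1.14e9 / 154:,.0f}")     # ~7.4 million
```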

I chose to begin my analysis in 1975 for three main reasons.  First, I wanted to make sure to include many refugees because of the current public fear over Syrians.  The waves of Cuban and Vietnamese refugees during the 1970s provided a large pool of people in that category.  Second, I had to go back to the late 1970s to find refugees who actually killed people on U.S. soil in terrorist attacks.  Although some refugees since then have attempted terrorist attacks, none has successfully killed anyone.  Third, I wanted to see if there was a different result before and after the modern refugee screening system was created in 1980.  The timing of that immigration reform coincides with the end of deadly refugee terrorist attacks; the sample is small (three victims before 1980 and none afterwards), but it speaks volumes.

In any project of this size, many findings and facts get left on the cutting room floor.  Here are some:

  • The chance of being murdered by a non-terrorist is one in 14,275 a year, compared to one in 3,609,709 a year for all foreign-born terrorist attacks.
  • The chance of being murdered on U.S. soil by any terrorist, native or foreigner, was one in 3.2 million a year.
  • The chance of being murdered in a terrorist attack committed by a foreigner on U.S. soil after 9/11 was one in 177.1 million a year.
  • For every foreign-born terrorist who actually killed somebody on U.S. soil in an attack, over 28 million foreigners entered the United States.
  • 9/11 is a tremendous outlier in terms of deadliness – about an order of magnitude deadlier than the second-deadliest terror attack in world history.  Excluding 9/11 from this analysis helps us understand what most terrorist attacks in the past and the future are going to be like.  Doing so reveals that 91 percent of the deaths caused by all terrorists on U.S. soil, native or foreign-born, were committed by natives or those with unknown nationalities (usually because their identities were never uncovered), while 9 percent were committed by foreigners.

And it came to pass in those days, that there went out a decree that all the world should be taxed.

And lo, the ubiquity of taxation made it possible for the Treasury Department to identify all the same-sex marriages in the land by zip code and present the data in tables and a map.

And in all the land only a few paranoids worried about the implications for privacy and freedom, of gay people and others, of a government that knows everything about you.

The massive decline in the U.S. poverty rate reported today by the Census (it fell from 14.8% of all families below the poverty line to just 13.5%, the largest drop since the 1960s) may have come as a surprise to many economists and political commentators, but it should not have. The one thing we have learned from the last three business cycles is that the poor benefit greatly from sustained economic growth.

When a recession occurs the unemployment rate can rise quickly, but it usually takes a long time before it returns to its pre-recession level, no matter how aggressive our infrastructure spending may be. What eventually helps low-income workers is an economy where labor–skilled and unskilled–becomes difficult to find. When that happens, companies bid against one another to find workers or else become creative, perhaps by investing in labor-saving equipment or taking a chance on workers who haven’t been in the labor market for a while and don’t have the most sterling resumes.

When the unemployment rate approached 4.5% in the late 1990s, poverty rates also declined significantly, as wages all across the income distribution grew steadily. Productivity grew smartly as well during this time. The facile explanation for this is that businesses finally managed to take advantage of IT innovations, but the companies that were using IT to boost productivity were also the ones that hired lots of low-income workers (e.g., big-box stores like Wal-Mart and Target), and they had every incentive to figure out how to do more with fewer workers, who were becoming more expensive. In Chicago the grocery chain Dominick’s sought out people living in government housing projects and spent significant resources training them to work in its stores, with surprising success. In Peoria another grocery chain, Kroger, worked with a local social service organization to train and employ young adults with Down syndrome in its stores, also with a great deal of success. With luck, more firms will soon need to get creative on employment.

Today’s numbers reflect the fact that strong and sustained economic growth, not redistribution, is the best way to help low-income Americans. There’s a lot that the next president and Congress can do on that front–in the last year alone the Department of Labor has imposed regulations that will cost businesses tens of billions of dollars a year to implement, and the FCC is going to throttle investment in high-speed internet for the now-inviolate right of a Netflix customer not to have to wait three minutes for his movie to load.

The lesson macroeconomists painfully learned in the 1970s was that they’re no good at forecasting the ebbs and flows of the business cycle and that we’re better off concentrating our efforts on the things that can boost productivity and long-run growth. Today, however, that lesson has been all but ignored as we debate whether society would survive a quarter-point rise in the discount rate and how much of a free lunch new infrastructure spending would be.

A better lesson for politicians would be that 3% growth is 50% more than 2% growth and that it’s worth contemplating how to reach that copasetic rate once again. It should be a lesson for the rest of us as well.

The Trans-Pacific Partnership trade agreement between the United States and 11 other countries was reached late last year, signed by the parties earlier this year, and now awaits ratification by the various governments. In terms of the value of trade and share of global output accounted for by the 12 member countries, the TPP is the largest U.S. trade agreement to date.

In the United States, the TPP has been controversial from the outset, drawing criticism from the usual suspects – labor unions, environmental groups, and sundry groups of anti-globalization crusaders – but also from free traders concerned that the deal may be laden with corporate welfare and other illiberal provisions that might lead to the circumvention or subversion of domestic sovereignty and democratic accountability.

As free traders who recognize that these kinds of agreements tend to deliver managed trade liberalization (which usually includes some baked-in protectionism), rather than free trade, my colleagues and I at the Herbert A. Stiefel Center for Trade Policy Studies set out to perform a comprehensive assessment of the TPP’s 30 chapters with the goal of answering this question: Should Free Traders Support the Trans-Pacific Partnership?

Yesterday, Cato released our findings in this paper, which presents a chapter-by-chapter analysis of the TPP, including summaries, assessments, scores on a scale of 0 (protectionist) to 10 (free trade), and scoring rationales. Of the 22 chapters analyzed, we found 15 to be liberalizing (scores above 5), 5 to be protectionist (scores below 5), and 2 to be neutral (scores of 5). Considered as a whole, the terms of the TPP are net liberalizing – it would, on balance, increase our economic freedoms.

Accordingly, my trade colleagues and I hope it will be ratified and implemented as soon as possible.

Drug policy watchers learned earlier this month that the latest substance to earn Schedule I status is the obscure plant called kratom. So what’s Schedule I? By the letter of the law, Schedule I of the Controlled Substances Act contains “drugs, substances, or chemicals” that meet the following criteria:

The drug or other substance has a high potential for abuse.
The drug or other substance has no currently accepted medical use in treatment in the United States.
There is a lack of accepted safety for use of the drug or other substance under medical supervision.

In this post, I’m not going to consider the penalties that apply to the use, possession, or sale of Schedule I substances. I’m just going to look at the criteria for inclusion. While they may appear plausible, these criteria are preposterous and completely indefensible as applied.

The most important unwritten fact about Schedule I is that all three of its criteria are terms of political art. Neither science nor the plain meanings of the words have much to do with what Schedule I really includes.

We can see this first in how Schedule I fails to include many substances that clearly belong there. These substances easily meet all three criteria. Yet they are in no danger whatsoever of being scheduled. It literally will never happen.

Solvent inhalants, such as toluene, have a high potential for abuse, have no accepted medical uses, and cannot be used safely even with close medical supervision. The same is true of obsolete anesthetics like diethyl ether and chloroform. Toluene, ether, and chloroform are all dangerous when used as drugs. Overdosing on each is relatively easy, they bring serious health risks at any level of use, and they have no valid medical uses today.

None, of course, will ever be scheduled, because each is also an essential industrial chemical. That they happen to be abusable as drugs is a fact that a crime-based drug policy can’t easily accommodate. And so that fact is simply ignored.

The substances included on Schedule I are an odd lot as well. Some clearly meet the criteria, but many do not.

Why, for example, is fenethylline Schedule I, while amphetamine is in the less restrictive Schedule II? On ingestion, fenethylline breaks down into two other compounds: theophylline – a caffeine-like molecule found in chocolate – and amphetamine.

People commonly use amphetamine under medical supervision in the United States; the popular ADHD drug Adderall is simply a mixture of various forms of amphetamine. Theophylline has also seen use by physicians for care of various respiratory issues. And people still use fenethylline under medical supervision in other countries. In the published literature, fenethylline is described as having a “lower abuse potential and little actual abuse compared to amphetamine.” (Emphasis added.) To say that fenethylline has “no accepted medical use in the United States” is, quite literally, to suggest that medical science changes when you cross the border.

Fenethylline isn’t unique. Schedule I contains many drugs quite like it, molecules that bear a close but not exact resemblance to familiar and widely used medical drugs. Many of these are prodrugs – substances that break down in the body to become familiar, medically useful molecules like morphine or amphetamine. Others, like dimethylamphetamine, are held by the medical literature to be safer than their less strictly regulated chemical cousins.

This is not to say that fenethylline, dimethylamphetamine, or amphetamine itself is risk-free. No drug is. But one could hardly find a less rational set of classifications than this one, in which drugs are scheduled more severely if and when they are less risky.

Or consider psilocybin. Psilocybin flunks the first criterion for Schedule I because it is in fact fairly difficult to abuse. Psilocybin binges don’t generally happen because even a single dose creates a swift and strong tolerance response: A second dose, or an added dose of any other traditional psychedelic, usually does little or nothing, and doses after that will likely be inert until several days have elapsed.

A user may have a regrettable or upsetting psilocybin experience, and many do. But users can’t have a binge, and deaths and serious illnesses are exceedingly rare. Psilocybin isn’t an entirely risk-free drug – again, no drug is risk-free – but it’s clearly not in the same league as cocaine (Schedule II) or even ketamine (Schedule III). Going by the letter of the law, psilocybin’s place on Schedule I is inexplicable.

Still more inexplicable is cannabis, which has a relatively low potential for abuse, many important medical uses, and such a favorable safety profile that a life-threatening overdose is impossible. Too much cannabis can be deeply psychologically unpleasant, but it can’t be fatal.

As you all know, cannabis is Schedule I.

This has led Americans, long the world’s most inventive people, to invent and ingest dozens of substitutes. Each of these so-called cannabimimetics became a recreational drug almost solely because a safe, well-studied, and well-tolerated recreational drug – cannabis – just happened to be illegal. Now there are dozens of cannabimimetics, all with somewhat different dosages, effects, and safety profiles. Much remains unknown about them, unlike the relatively well-studied compounds found in cannabis.

A similar process has taken place with the traditional psychedelics, generating a bewildering array of new psychoactive substances, each of which has a dosage, effect constellation, and risk profile that is relatively unknown when compared to, say, psilocybin or mescaline. It might even be said that Schedule I itself is the single largest cause of Schedule I drugs. In all, the mimetics are an area of comparative ignorance. Many of these new drugs may even deserve a bad reputation, if not a state-enforced ban. But, at least for a time, all of them were technically legal (at least, if we ignore the Federal Analogue Act, which is an entirely different mess of its own). If cannabis or psilocybin were legal instead, few would likely bother with the mimetics outside a laboratory setting.

Yet many of these mimetics could also be medically interesting, much like cannabis itself. We just don’t know yet, and we are a lot less likely ever to find out because it’s difficult to do research with Schedule I drugs.

To sum up, the list of drugs on Schedule I both over-includes and under-includes. I suspect that the list does not exist to fulfill the criteria. Rather, the criteria exist to make Congressional and DEA determinations look scientific, even when they clearly are not. They would appear to have no other function.

With that in mind, let’s take a closer look at kratom.

As Jacob Sullum notes, the DEA has simply defined all use of kratom as abuse. Of course, then, the potential for abuse is (nominally) high. But it begs the question that science could and should have answered: What exactly is kratom’s abuse potential? Thanks to kratom’s new Schedule I status, U.S. researchers are in no position to question the DEA anytime soon.

This is typical of how drug scheduling works; to some extent the law creates its own medical facts by foreclosing research avenues that might otherwise be explored. But it can only do this by stunting our knowledge and perhaps delaying the development of useful new medicines.

What’s true of abuse potential is also true of “accepted medical use.” It too is an obfuscation; the DEA, and not doctors, determines what counts as accepted. But as Jeffrey Miron noted, kratom users report that it can relieve the symptoms of opiate addiction and help addicts kick the habit. Are they right? More clinical study might help, and we can be pretty sure that we’re not getting it now.

Finally, “lack of accepted safety for use” is – you guessed it – yet another determination made by a certain department in the executive branch. Not that it would change their minds, but Jacob Sullum correctly notes that kratom is relatively safe when compared to many other drugs, particularly recreational opiates like heroin. In particular, while overdose on kratom is certainly possible, no fatal overdoses have ever been recorded. That is not to say a fatal overdose is impossible, of course, but when compared to heroin – or many other drugs – “no recorded fatal overdoses” is a pretty good track record.

In short: Schedule I is not a set of scientific criteria, rationally applied to the world of drugs. Rather, it’s a science-y looking smokescreen, one that allows the DEA to do virtually whatever it feels like – which is often completely indefensible.


“There is now a consensus that the United States should substantially raise its level of infrastructure investment,” writes former treasury secretary Lawrence Summers in the Washington Post. Correction: There is now a consensus among two presidential candidates that the United States should increase infrastructure spending. That’s far from a broad consensus.

“America’s infrastructure crisis is really a maintenance crisis,” says the left-leaning CityLab. The “infrastructure crisis is about socialism,” says the conservative Heritage Foundation. My colleague Chris Edwards says, “There is no widespread crisis of crumbling infrastructure.” “The infrastructure crisis … isn’t,” the Reason Foundation agrees.

As left-leaning Charles Marohn points out, the idea that there is an infrastructure crisis is promoted by an “infrastructure cult” led by the American Society of Civil Engineers. As John Oliver noted, relying on them to decide whether there is enough infrastructure spending is like asking a golden retriever if enough tennis balls are being thrown.

In general, most infrastructure funded out of user fees is in good shape. Highways and bridges, for example, are largely funded out of user fees, and the number of bridges that are structurally deficient has declined by more than 52 percent since 1992. The average roughness of highway pavements has also declined for every class of road.

Some infrastructure, such as rail transit, is crumbling. The infrastructure in the worst condition is infrastructure that is heavily subsidized, because politicians would rather build new projects than maintain old ones. That suggests the U.S. government should spend less, not more, on new infrastructure. It also suggests that we should stop building rail transit lines we can’t afford to maintain and maybe start thinking about scrapping some of the rail systems we have.

Aside from the question of whether our infrastructure is crumbling or not, the more important assumption underlying Summers’ article is that infrastructure spending always produces huge economic benefits. Based on a claim that infrastructure spending will produce a 20 percent rate of return, Summers says that financing it through debt is “entirely reasonable.” Yet such a rate of return is a pure fantasy, especially if it is government that decides where to spend the money. Few private investments produce such a high rate of return, and private investors are much more careful about where their money goes.

For every government project that succeeds, a dozen fail. Funded by the state of New York, the Erie Canal was a great success, but attempts to imitate that success by Ohio, Indiana, and Pennsylvania put those states into virtual bankruptcy.

The 1850 land grants to the Illinois Central Railroad paid off, at least for Illinois, but similar subsidies to the First Transcontinental Railroad turned into the biggest political corruption scandal of the nineteenth century. The Union Pacific was forced to reorganize within four years of its completion, and it went bankrupt again two decades later. The similarly subsidized Northern Pacific was forced to reorganize just a year after its completion in 1883 and, like the Union Pacific, would go bankrupt again in 1893.

The Interstate Highway System was a great success, but a lot of transportation projects built since then have been pure money pits. It’s hard to argue that any of the infrastructure spending that came out of the American Recovery and Reinvestment Act did anything to actually stimulate the economy.

Think the Atlanta streetcar, whose ridership dropped 48 percent as soon as the city started charging a fare, generates economic development? Only in a fantasy world. Japan has been trying to stimulate its way out of its economic doldrums with infrastructure spending since 1990. It hasn’t worked yet.

In the Baptists and bootleggers political model, Keynesians such as Summers are the Baptists who promise redemption from increased government spending while the civil engineers, and the companies that employ them, are the bootleggers who expect to profit from that spending. Neither should be trusted, especially considering how poorly stimulus spending has worked to date.

Making infrastructure spending a priority would simply lead to more grandiose projects, few of which would produce any economic or social returns. In all probability, these projects would not be accompanied by funding for maintenance of either existing or new infrastructure, with the result that more infrastructure spending would simply lead to more crumbling infrastructure.

Almost as an aside, Summers adds that, “if there is a desire to generate revenue to finance infrastructure investments, the best approaches would involve user fees.” That’s stating the obvious, but the unobvious part is, if we agree user fees are a good idea, why should the federal government get involved at all? The answer, of course, is that politicians would rather get credit for giving people infrastructure that they don’t have to pay for than rely on user fees, and the controversies they create, to fund them.

Instead of an infrastructure crisis, what we really have is a crisis over who gets to decide where to spend money on infrastructure. If we leave infrastructure to the private market, we will get the infrastructure we need when we need it and it will tend to be well maintained as long as we need it. If we let government decide, we will get too much of some kinds of infrastructure we don’t need, not enough of other kinds of infrastructure we do need, and inadequate maintenance of both.

The Third Circuit heard oral argument last week on whether an individual can be forced to decrypt a drive containing incriminating information. The Fifth Amendment prohibits any person from being “compelled in any criminal case to be a witness against himself.” The Third Circuit will hopefully recognize that being forced to decrypt information is just the kind of testimonial act that the Fifth Amendment prohibits.

In a forced-decryption case there are two kinds of subpoenas that could be issued. The first compels the individual to turn over the encryption key or password. This isn’t the kind of subpoena at issue in the Third Circuit case, but it is useful for seeing why this approach is also not allowed. The other kind of subpoena demands the production of the documents themselves.

With a direct subpoena for the password, the password itself isn’t incriminating, but the Supreme Court has held that the Fifth Amendment also prevents compelling incriminating “information directly or indirectly derived from such testimony.” The Supreme Court “particularly emphasized the critical importance of protection against a future prosecution ‘based on knowledge and sources of information obtained from the compelled testimony.’” While the password itself isn’t incriminating, it clearly provides the lead necessary to get incriminating information from the encrypted drives. In a close analogy, the Supreme Court has also made clear that a person cannot be compelled to disclose the combination to a safe.

The second type of subpoena, and the one in this case, seeks only the production of the documents supposedly encrypted on the hard drive. In this case, the order was to “produce” the whole hard drive in an “unencrypted state.” The production of documents is not usually considered testimonial (and therefore not protected by the Fifth Amendment) if the documents’ existence, location, and authenticity are a “foregone conclusion.” When those facts are a foregone conclusion, the defendant’s act of turning over the documents (which implicitly shows his knowledge of their existence, location, and authenticity) gives the government no new information.

The real problem with this second type of subpoena is that there is a real question of whether the subpoenaed documents actually exist, even if their encrypted form is on the hard drive. In the traditional safe analogy this isn’t a problem: we know the documents really exist inside the safe, if only we could get at them. And so compelling the individual who can open the safe to do so and hand over the documents isn’t testimonial (as long as he is not required to tell the government the combination). But in the case of encrypted documents, no plaintext or unencrypted documents existed at all when the subpoena was issued.

Now, the potential defendant could use his password to decrypt the documents, but that act of decryption is itself the testimonial act. Imagine that the government, unable to find a victim’s body, ordered a murder suspect to “produce a document with the location of the body.” Creating a document that doesn’t already exist is testimonial and cannot be compelled under the Fifth Amendment. An encrypted drive is like a piece of paper that the government cannot make sense of. Ordering the individual to use the personal knowledge in his mind (the password) to transform that document into one the government can read is testimonial because it creates something that did not already exist in that form. Forced decryption should not be allowed for the same reason. Hopefully the Third Circuit in United States v. Apple MacPro Computer will recognize this.

Immigrants from India waiting to receive residency in the United States may die before they receive their green cards. The line is disproportionately long for Indians because the law discriminates against immigrants from populous countries, skewing the immigration flow to the benefit of immigrants from countries with fewer people. This policy—a compromise that resolved a long-dead immigration dispute—is senseless and economically damaging.

In the 1920s, Congress imposed the first-ever quota on immigration, but rather than adopting just a worldwide limit, it also distributed the numbers among countries in order to give preference to immigrants from “white” countries. In 1965, Congress replaced this system with one that allowed immigrants from any country to receive up to seven percent of the green cards issued each year. This was an improvement, but it is an anachronism today, and it is causing its own pointless discrimination.

The per-country limits treat each nation equally, but not each immigrant equally. China receives the same treatment as Estonia, but immigrants from Estonia who apply today could receive their visas this year, while immigrants from China who apply today could have to wait a generation. It is equality in theory and inequality in practice. It is arbitrary and unfair.
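
To see why a uniform per-country cap treats countries equally but immigrants unequally, consider a minimal sketch with entirely hypothetical numbers (actual visa totals and backlogs differ):

```python
# Hypothetical illustration of a per-country cap: if no country may use
# more than 7% of the visas issued each year, applicants from a
# high-demand country wait far longer than applicants from a low-demand
# one, even though the rule on paper is identical for both.

ANNUAL_VISAS = 100_000
PER_COUNTRY_MAX = int(0.07 * ANNUAL_VISAS)   # 7,000 visas per country per year

def years_to_clear(backlog):
    """Years needed to work through a country's backlog at the capped rate."""
    return backlog / PER_COUNTRY_MAX

print(years_to_clear(3_500))     # small country, 3,500 applicants -> 0.5 years
print(years_to_clear(210_000))   # large country, 210,000 applicants -> 30.0 years
```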

Immigrants should be treated as individuals, not as national representatives. As I have written before, no one actually knows for sure the waits for legal immigrants, but Stuart Anderson of the National Foundation for American Policy has conservatively estimated decades-long waits for certain immigrants from China, India, Mexico, and the Philippines.

The entire system is an absurd relic of a bygone era. It was a compromise that enabled Congress to overcome its prior racial bias, but the explanation made sense in 1965, not today. Nation-based quotas are governmental discrimination that is every bit as useless—if not as malicious—as racial discrimination.

The per-country limits make employers think twice about hiring the best person for the job due to the disparate waits. This means lost productivity for the United States and a less competitive economy. It can separate families for such a long period of time that would-be legal immigrants attempt illegal entry rather than wait decades for a legal visa. 

Shockingly, some opponents of legal immigration would keep this system. Jessica Vaughan of the Center for Immigration Studies told Congress in 2012 not to fix the law, in the hope that “maybe the green card delays will dampen some of the enthusiasm for overused guestworker [sic] categories,” which immigrants often use to come here initially before applying for a green card. In other words, she would keep the system broken enough that skilled people don’t even bother trying to come to the United States, letting other countries benefit from their talents.

In 2011, the House of Representatives overwhelmingly passed (389-15) a bill, the Fairness for High-Skilled Immigrants Act, that would have doubled the limits to 15 percent for family-sponsored immigrants and eliminated the limits entirely for employer-sponsored immigrants. While it failed to receive a vote in the Senate amid wrangling on unrelated issues, there is little doubt that its current version (H.R. 213), with nearly 100 cosponsors—half of whom are Democrats—would pass if it came up for a vote today.

Congress is currently considering a bill to reform one high-skilled visa category, the EB-5 investor visa, which has a high likelihood of becoming law in some form. Proponents of ending the per-country limits have an opportunity to attach their fix to this bill. If they do, and Congress passes it, it would put to rest nearly a century of discriminatory immigration policy.

Tomorrow the House Financial Services Committee moves to “mark up” (amend and vote on) the Financial Choice Act, introduced by Committee Chair Jeb Hensarling.  The Choice Act represents the most comprehensive changes to financial services regulation since the passage of Dodd-Frank in 2010.  Unlike Dodd-Frank, however, the Choice Act moves our system in the direction of more stability and fewer bailouts.

At the heart of the Choice Act is an attempt to improve financial stability by increasing bank capital, while improving the functioning of our financial system by reducing compliance costs and over-reliance on regulatory discretion.  While I would have chosen a different level of capital, the Choice Act gets at the fundamental flaw in our current financial system: government guarantees punish banks for holding high levels of capital, which, unfortunately, leads to excessive leverage and widespread insolvencies whenever asset values (such as house prices) decline.  Massive leverage still characterizes our banking system, despite the “reforms” in Dodd-Frank.
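
The leverage point is easiest to see with a stylized balance sheet (a sketch with made-up numbers, not an analysis of any actual bank): the thinner the capital cushion, the smaller the decline in asset values needed to wipe it out.

```python
# Stylized example: a bank's equity is assets minus debt. With a thin
# capital cushion, a modest fall in asset values (e.g., the house prices
# backing its loans) is enough to leave the bank insolvent.

def equity_after_decline(assets, capital_ratio, decline):
    """Equity remaining after asset values fall by `decline` (a fraction)."""
    debt = assets * (1 - capital_ratio)
    return assets * (1 - decline) - debt

# $100 of assets funded with 4% capital vs. 15% capital, after a 10% decline
print(equity_after_decline(100, 0.04, 0.10))   # -6.0 -> insolvent
print(equity_after_decline(100, 0.15, 0.10))   #  5.0 -> still solvent
```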

The Choice Act also includes important, if modest, improvements in Federal Reserve oversight (see Title VII).  Perhaps no contributor to the housing boom and bust has been as ignored by Congress as the Fed’s reckless monetary policy in the mid-2000s.  Years of negative real interest rates (essentially paying people to borrow) drove a boom in our property markets.  The eminent economist John Taylor has written extensively and persuasively on this topic, yet it remained ignored by legislators prior to Hensarling’s efforts.  Such reforms come too late to unwind the Fed’s current distortionary policies, but they may prove helpful in moderating future booms and busts.

Despite its daunting 500+ pages, the Choice Act is still best viewed as a modest step in the right direction.  Considerably more needs to be done to bring market discipline and accountability to our financial system.  But at least the Choice Act moves us in the right direction, and for that the bill merits applause and consideration.


The Center for Immigration Studies (CIS) released a report by Jason Richwine last week entitled “Immigrants Replace Low-Skill Natives in the Workforce.” The Cato Institute has previously pointed out the inaccuracies, methodological tricks, and disingenuous framing that have plagued CIS’s reports on numerous occasions, but this latest report performs poorly even relative to those prior attempts. More importantly, its underlying numbers actually buttress the case for expanding legal immigration.

The report’s central finding is that the share of native-born high school dropouts in their prime who are not working has grown at the same time as the population of similarly educated immigrants. While Mr. Richwine explicitly states that this finding “does not necessarily imply that immigrants push out natives from the workforce,” he goes on to imply exactly that throughout the report, blaming immigrants for “causing economic and social distress.” 

First of all, “distress” would imply that more prime-age, lesser-skilled natives are out of work—i.e., unemployed or out of the labor force—now than prior to the wave of immigration in the 1990s. But this is incorrect. The number of such workers in their prime (ages 25 to 54) actually declined by 25 percent from 1995 to 2014, according to Census data. For the last decade, the number has remained roughly constant. Richwine is just wrong to state that “an increasing number of the least-skilled Americans [are] leaving the workforce.” (Note that while the CIS report focuses on native men, the trends in all of the following figures run in the same direction regardless of sex.)

Figure 1: Prime-Age Native-Born High School Dropouts Unemployed or Not in the Labor Force (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

Since the number of lesser-skilled native workers who are not working has not grown, all of the increase in the number of prime-age native workers who are not working has come from graduates of high school and college. As Figure 2 shows, the share of not-working prime-age natives who are high school dropouts declined substantially from 1995 to 2014.

Figure 2: Natives Unemployed or Out of the Labor Force—Number and Share Who Are High School Dropouts, Number Who Are High School Graduates  (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

Mr. Richwine meticulously avoids absolute numbers in his report, focusing instead on the share of lesser-skilled natives who are not working. But the decline in the absolute number of high school dropouts explains all of the increase in the share who are not working. There is still roughly the same small number of people at the bottom who have dropped out of high school and the workforce. But because so many other natives upgraded their skills, these troubled people are a greater share of natives in their skill demographic, while being a smaller share of natives overall.
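
A stylized example (with made-up counts, not the CPS figures behind the charts) shows how the share of dropouts who are not working can rise even when the number not working stays flat:

```python
# Hypothetical illustration: the count of non-working dropouts is
# unchanged, but because many other natives finished high school, the
# *share* of remaining dropouts who are not working rises anyway.

not_working_dropouts = 2_000_000      # held constant in both years

total_dropouts_then = 10_000_000
total_dropouts_now = 5_000_000        # many natives upgraded their skills

print(not_working_dropouts / total_dropouts_then)  # 0.2 -> 20% not working
print(not_working_dropouts / total_dropouts_now)   # 0.4 -> 40% not working
```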

As immigrants are entering the lower rungs of the economic ladder, natives are leaving those rungs in great numbers. Immigrants have partially filled in the gaps that natives have left, but on net, there has actually been less competition for jobs from new low-skilled workers. An increase in low-skilled labor supply simply does not explain any of the trends in low-skilled employment because there has been no such increase. The basic premise of the CIS report is wrong.

Figure 3: Prime-Age High School Dropouts by Nativity and Employment Status (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

From this perspective, we see that the collapse in the number of native-born high school dropouts is a good thing because it represents an exodus of working Americans to higher education and better employment opportunities. A much larger share of employed natives is acquiring skills and moving up the economic ladder. Perhaps this is the most important point: the share of prime-age, native-born Americans who have dropped out of high school is falling fast—by 50 percent from 1995 to 2014.

Figure 4: Share of Prime-Age Natives Without a High School Degree (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

As immigrant workers have entered the United States, natives have become more educated and skilled. There are good reasons to believe that this relationship is causal, as lesser-skilled immigration boosts wages for higher-skilled workers. Having immigrant workers to do these lower-skilled jobs frees natives to pursue higher quality employment. Mr. Richwine calls it “naïve” to think that immigration can “lift all boats” by encouraging natives to get educated, but whether it will lift all boats or not, it has lifted more boats than not. This skill-upgrading in response to immigration is not a new phenomenon. As I’ve written before:

In fact, immigration may have caused America’s “high school movement” – the increase in high school enrollment from 12 percent in 1910 to 50 percent in 1930. In a detailed 2002 study of the period for the International Monetary Fund, Rodney Ramcharan concluded, for instance, that “the massive immigration of unskilled labor in the late 19th and early 20th century triggered the U.S. high school movement” by raising “the private return to education and engendered schooling investment.”

As economists Francesco D’Amuri and Giovanni Peri have found, “immigrants often supply manual skills, leaving native workers to take up jobs that require more complex skills – even boosting demand for them. Immigrants replace ‘tasks’, not workers.” This, in turn, results in higher wages for natives. CIS’s report—while disingenuously framed—provides no evidence to contradict this finding.

Mr. Richwine suggests that the United States should radically transform its labor markets in order to accommodate a shrinking sliver of its population—those prime-age high school dropouts who aren’t working. Even if this proposal did benefit them, it would make no sense to hurt the 99 percent to attempt to help the one percent. There are other options to help the one percent of natives who, for whatever reasons, cannot hold a job or complete government-provided high school.

Washington Post fact checker Glenn Kessler gives a maximum Four Pinocchios to the claim, which has gotten a lot of circulation on social media, that Hillary Clinton was fired during the Watergate inquiry. He makes a detailed case that there is no evidence for such a firing. However, along the way he does note some unflattering aspects of her tenure there:

In neither of his books does Zeifman say he fired Clinton. But in 2008, a reporter named Dan Calabrese wrote an article that claimed that “when the investigation was over, Zeifman fired Hillary from the committee staff and refused to give her a letter of recommendation.” The article quoted Zeifman as saying: “She was a liar. She was an unethical, dishonest lawyer. She conspired to violate the Constitution, the rules of the House, the rules of the committee and the rules of confidentiality.”…

In 1999, nine years before the Calabrese interview, Zeifman told the Scripps-Howard news agency: “If I had the power to fire her, I would have fired her.” In a 2008 interview on “The Neal Boortz Show,” Zeifman was asked directly whether he fired her. His answer: “Well, let me put it this way. I terminated her, along with some other staff members who were — we no longer needed, and advised her that I would not — could not recommend her for any further positions.”

So it’s pretty clear that Jerry Zeifman, chief counsel of the House Judiciary Committee during the Watergate inquiry, had a low opinion of the young Yale Law graduate Hillary Rodham. But because she reported to the chief counsel of the impeachment inquiry, who was hired separately by the committee and did not report to Zeifman, Zeifman had no authority over her. He simply didn’t hire her for the permanent committee staff after the impeachment inquiry ended.

Kessler also notes that Clinton failed the D.C. bar exam in that period. She never retook the exam (passing the Arkansas exam instead) and concealed her failure even from her closest friends until her 2003 autobiography.

And then there’s this:

Zeifman’s specific beef with Clinton is rather obscure. It mostly concerns his dislike of a brief that she wrote under Doar’s direction to advance a position advocated by Rodino — which would have denied Nixon the right to counsel as the committee investigated whether to recommend impeachment. 

That brief may get some attention during the next few years, should any members of the Clinton administration become the subject of an impeachment inquiry. Also in Sunday’s Post, George Will cites James Madison’s view that the power to impeach is “indispensable” to control of executive abuse of power. 

Teladoc, Inc. is a health services company that provides access to state-licensed physicians through telecommunications technology, usually for a fraction of the cost of a visit to a physician’s office or urgent care center. Teladoc sued the Texas Medical Board—composed mostly of practicing physicians—because the board took steps to protect the interests of traditional physicians by imposing licensing rules, such as requiring an in-person examination of patients before telephonic treatment is permitted.

Because the board isn’t supervised by the Texas legislature, executive, or judiciary, Teladoc argues that the board’s self-dealing violates federal antitrust laws—and the federal district court agreed. The Texas Medical Board has now appealed to the U.S. Court of Appeals for the Fifth Circuit, where Cato filed an amicus brief urging the court to affirm the lower-court ruling and protect the fundamental right to earn a living.

Our brief argues that the Supreme Court has consistently held that the right to earn a living without unreasonable government interference is guaranteed by the Constitution, and that this protection dates back much earlier, to Magna Carta and the common law. Indeed, the right to earn a living is central to a person’s life and ability to pursue happiness. As Frederick Douglass wrote in his autobiography, “To understand the emotion which swelled in my heart as I clasped this money, realizing that I had no master who could take it from me—that it was mine—that my hands were my own, and could earn more of the precious coin—one must have been in some sense himself a slave… . I was not only a freeman but a free-working man.”

Licensing laws, which can be valid when they protect a legitimate public interest, are nonetheless a tool of the state often employed by private market participants to restrict competition. By creating barriers to entry, existing firms and practitioners mobilize the state to wield monopoly power. The result is higher prices and fewer choices for consumers, and diminished opportunities for entrepreneurs and workers.

While it may be appropriate to create a regulatory body exempt from antitrust laws to achieve a specialized purpose, it’s inappropriate to let the private actors who populate a licensing board claim such state-action immunity unless they are actively supervised by state officials. Without active supervision, private parties may wield state regulatory power purely for their own self-interest.

The Supreme Court has said that this active supervision standard is “flexible and context-dependent,” N.C. State Bd. of Dental Exam’rs v. FTC (2015), but not flimsy and porous. Moreover, there are other ways for states to obtain the specialized knowledge of professionals without creating regulatory bodies that rubber-stamp the assertions of active practitioners.

Teladoc offers an innovative service that makes obtaining healthcare easier and more affordable. The Fifth Circuit should protect its right to do so, and the right of all persons to pursue a trade or career without onerous government-backed constraints instituted by private actors.

Frédéric Bastiat, the great 19th-century French economist (yes, such creatures used to exist), famously observed that a good economist always considers both the “seen” and “unseen” consequences of any action.

A sloppy economist looks at the recipients of government programs and declares that the economy will be stimulated by this additional money that is easily seen, whereas a good economist recognizes that the government can’t redistribute money without doing unseen damage by first taxing or borrowing it from the private sector.

A sloppy economist looks at bailouts and declares that the economy will be stronger because the inefficient firms that stay in business are easily seen, whereas a good economist recognizes that such policies impose considerable unseen damage by promoting moral hazard and undermining the efficient allocation of labor and capital.

We now have another example to add to our list. Many European nations have “social protection” laws that are designed to shield people from the supposed harshness of capitalism. And part of this approach is so-called Employment Protection Legislation, which ostensibly protects workers by, for instance, making layoffs very difficult.

The people who don’t get laid off are seen, but what about the unseen consequences of such laws?

Well, an academic study from three French economists has some sobering findings for those who think regulation and “social protection” are good for workers.

…this study proposes an econometric investigation of the effects of the OECD Employment Protection Legislation (EPL) indicator… The originality of our paper is to study the effects of labour market regulations on capital intensity, capital quality and the share of employment by skill level using a symmetric approach for each factor using a single original large database: a country-industry panel dataset of 14 OECD countries, 18 manufacturing and market service industries, over the 20 years from 1988 to 2007.

One thing that emerges from the study is that the United States has historically taken an appropriately laissez-faire approach to EPL (which is also evident from the World Bank’s Doing Business data).

Here’s a chart showing the US compared to some other major developed economies.

It’s good to see, by the way, that Denmark, Finland, and the Netherlands engaged in some meaningful reform between 1994 and 2006.

But let’s get back to our main topic. What actually happens when nations have high or low levels of Employment Protection Legislation?

According to the research of the French economists, high levels of rules and regulations cause employers to substitute capital for labor, with low-skilled workers suffering the most.

Our main estimation results show an EPL effect: i) positive for non-ICT physical capital intensity and the share of high-skilled employment; ii) non-significant for ICT capital intensity; and (iii) negative for R&D capital intensity and the share of low-skilled employment. These results suggest that an increase in EPL would be considered by firms to be a rise in the cost of labour, with a physical capital to labour substitution impact in favour of more non-sophisticated technologies and would be particularly detrimental to unskilled workers. Moreover, it confirms that R&D activities require labour flexibility. According to simulations based on these results, structural reforms that lowered EPL to the “lightest practice”, i.e. to the US EPL level, would have a favourable impact on R&D capital intensity and would be helpful for unskilled employment (30% and 10% increases on average, respectively). …The adoption of this US EPL level would require very largescale labour market structural reforms in some countries, such as France and Italy. So this simulation cannot be considered politically and socially realistic in a short time. But considering the favourable impact of labour market reforms on productivity and growth. …It appears that labour regulations are particularly detrimental to low-skilled employment, which is an interesting paradox as one of the main goals of labour regulations is to protect low-skilled workers. These regulations seem to frighten employers, who see them as a labour cost increase with consequently a negative impact on low-skilled employment.

There’s a lot of jargon in the above passage for those who haven’t studied economics, but the key takeaway is that employment for low-skilled workers would jump by 10 percent if other nations reduced labor-market regulations to American levels.

Though, as the authors point out, that won’t happen anytime soon in nations such as France and Italy.

Now let’s review an IMF study that looks at what happened when Germany substantially deregulated labor markets last decade.

After a decade of high unemployment and weak growth leading up to the turn of the 21st century, Germany embarked on a significant labor market overhaul. The reforms, collectively known as the Hartz reforms, were put in place in three steps between January 2003 and January 2005. They eased regulation on temporary work agencies, relaxed firing restrictions, restructured the federal employment agency, and reshaped unemployment insurance to significantly reduce benefits for the long-term unemployed and tighten job search obligations.

And when the authors say that long-term unemployment benefits were “significantly” reduced, they weren’t exaggerating.

Here’s a chart from the study showing the huge cut in subsidies for long-run joblessness.

So what were the results of the German reforms?

To put it mildly, they were a huge success.

…the unemployment rate declined steadily from a peak of almost 11 percent in 2005 to five percent at the end of 2014, the lowest level since reunification. In contrast, following the Great Recession other advanced economies — particularly in the euro area — experienced a marked and persistent increase in unemployment. The strong labor market helped Germany consolidate its public finances, as lower outlays on unemployment benefits resulted in lower spending while stronger taxes and social security contribution pushed up revenues.

Gee, what a shocker. When the government stopped being as generous to people for being unemployed, fewer people chose to be unemployed.

Which is exactly what happened in the United States when Congress finally stopped extending unemployment benefits.

It’s also worth noting that this was a period of good fiscal policy in Germany, with the burden of government spending rising by only 0.18 percent annually between 2003 and 2007.

But the main lesson of all this research is that, whatever the noble motives of the politicians who adopt “social protection” legislation, in the real world there’s nothing “social” about laws and regulations that discourage employers from hiring people or discourage people from finding jobs.

P.S. Another example of “seen” vs “unseen” is how supposedly pro-feminist policies actually undermine economic opportunity for women.

A big story to come out of the last G-20 summit was that the Russians and Saudis were talking oil (read: an oil cooperation agreement). With that, everyone asked, again, where are oil prices headed? To answer that question, one has to have a model – a way of thinking about the problem. In this case, my starting point is Roy W. Jastram’s classic study, The Golden Constant: The English and American Experience 1560-2007. In that work, Jastram finds that gold maintains its purchasing power over long periods of time, with the prices of other commodities adapting to the price of gold. 

Taking a lead from Jastram, let’s use the price of gold as a long-term benchmark for the price of oil. The idea is that, if the price of oil changes dramatically, the oil-gold price ratio will change and move away from its long-term value. Forces will then be set in motion to shift the supply of and demand for oil. In consequence, the price of oil will change and the long-term oil-gold price ratio will be reestablished. Via this process, the ratio reverts to its long-term value, with changes in the price of oil doing most of the work.

For example, if the price of oil slumps, the oil-gold price ratio will collapse. In consequence, exploration for and development of oil reserves will become less attractive and marginal production will become uneconomic. In addition to the forces squeezing the supply side of the market, low prices will give the demand side a boost. These supply-demand dynamics will, over time, move oil prices and the oil-gold price ratio up. This is what’s behind the old adage, there is nothing like low prices to cure low prices.

We begin our analysis of the current situation by calculating the oil-gold price ratios for each month. For example, as of September 5th, oil was trading at $46.97/bbl and gold was at $1323.50/oz. So, the oil-gold price ratio was 0.035. In June 2014, when oil was at its highs, trading at $107.26/bbl and gold was at $1314.82/oz, the oil-gold price ratio was 0.082. 
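For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the two ratios quoted above; the prices are the ones given in the text, and the function name is purely illustrative.

```python
# Oil-gold price ratio: WTI price per barrel divided by gold price per ounce.
def oil_gold_ratio(oil_usd_per_bbl: float, gold_usd_per_oz: float) -> float:
    return oil_usd_per_bbl / gold_usd_per_oz

# September 5, 2016: $46.97/bbl oil, $1,323.50/oz gold
print(round(oil_gold_ratio(46.97, 1323.50), 3))   # -> 0.035

# June 2014 highs: $107.26/bbl oil, $1,314.82/oz gold
print(round(oil_gold_ratio(107.26, 1314.82), 3))  # -> 0.082
```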

We can calculate these ratios over time. Those ratios are presented in the accompanying chart, starting in 1973 (the post-Bretton Woods period).  

Two things stand out in the histogram. First, the recent oil price collapse was extreme: the February 2016 oil-gold price ratio sits far out in the left tail, with less than one percent of the distribution to its left. Second, the ratio is slowly reverting to the mean, with the September 2016 ratio approaching 0.04.

But how long will it take for the ratio to revert to its mean? My calculations (based on post-1973 data) indicate that a 50 percent reversion of the ratio will occur in 13.7 months. This translates into a price per barrel of WTI of $60 by March 2017 – almost exactly hitting OPEC’s sweet spot. It is worth noting that, like Jastram, I find that oil prices revert to the long-run price of gold, rather than the price of gold reverting to the price of oil. So the oil-gold price ratio reverts to its mean via changes in the price of oil.
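To make the projection arithmetic concrete, here is a minimal sketch of one way such numbers can be generated, assuming simple exponential mean reversion of the ratio. The 13.7-month half-life and the September 2016 prices come from the text; the long-run mean ratio of roughly 0.073 is an assumed value (the exact post-1973 estimate isn’t given here), and the gold price is held fixed, so the output only approximates the figures reported in this post.

```python
# Hypothetical mean-reversion projection for the oil-gold price ratio.
# Assumptions not taken from the text: a long-run mean ratio of ~0.073 and a
# gold price held constant at its September 2016 level.
MEAN_RATIO = 0.073          # assumed post-1973 long-run mean of the ratio
HALF_LIFE_MONTHS = 13.7     # 50 percent reversion, as cited above
GOLD_USD_PER_OZ = 1323.50   # September 2016 gold price
START_RATIO = 46.97 / GOLD_USD_PER_OZ  # ~0.0355, September 2016

def projected_ratio(months_ahead: float) -> float:
    """Ratio after exponentially closing part of the gap to the long-run mean."""
    gap_remaining = 0.5 ** (months_ahead / HALF_LIFE_MONTHS)
    return MEAN_RATIO + (START_RATIO - MEAN_RATIO) * gap_remaining

def implied_wti_price(months_ahead: float) -> float:
    """Implied WTI price, holding the gold price fixed."""
    return projected_ratio(months_ahead) * GOLD_USD_PER_OZ

print(round(implied_wti_price(6), 2))   # ~March 2017: about $60/bbl
print(round(implied_wti_price(16), 2))  # ~January 2018: roughly $75/bbl under these assumptions
```

Under these assumed parameters the six-month projection lands near the $60 figure above; the 16-month number overshoots the $70.06 estimate discussed below, mainly because the assumed mean ratio and the fixed gold price will differ from the inputs actually used.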

The accompanying chart shows the price projection based on the oil-gold price ratio model, along with the historical course of prices. Prices are doing just what the golden constant predicts: they are moving up. That said, there remains a significant gap between the January 2018 WTI futures price, which stands at $51.50/bbl, and the $70.06/bbl implied by the oil-gold price ratio model. Best to be long oil.

As a young professional woman myself, I’ve lately grown fatigued by the media’s ongoing portrayal of women as victims of circumstance. Media messaging on one topic in particular – the gender pay gap – is especially discouraging because it rests on flimsy facts. Although it requires a voyage outside my usual topical expertise, setting the record straight seems worthwhile enough to justify the trip.

Let’s begin with the numbers. Hillary Clinton and others allege that women get paid 76 cents for every dollar a man gets paid – an alarming workplace injustice, if it’s true.

The 76-cent figure is based on a comparison of median wages for American men and women. Unfortunately, comparing men’s and women’s wages this way is misleading, because men and women make different career choices that affect their wages: 1) men and women work in different industries with varying levels of profitability, and 2) men and women on average make different family, career, and lifestyle trade-offs.

For example, BLS statistics show that only 35% of professionals in securities, commodities, funds, trusts, and other financial investments are women, as are just 25% of those in architecture, engineering, and computer systems design. On the other hand, women dominate social assistance, at 85%, and education, holding 75% of jobs in elementary and secondary schools.

An August 2016 National Bureau of Economic Research study, Does Rosie Like Riveting? Male and Female Occupational Choices, suggests that this industry segregation may be neither structural nor coincidental, but a matter of preference. According to the authors of the study, women may select different jobs than men because they “may care more about job content, and this is a possible factor preventing them from entering some male dominated professions.”

Another uncomfortable truth for the 76-cent crowd: women are considerably more likely to take on caregiving responsibilities within their families, and those roles demand career trade-offs. Sheryl Sandberg’s Lean In reports that 43% of highly qualified women with children leave their careers or off-ramp for a period of time. And a recent Harvard Business Review report describes women as being more likely than men to make decisions “to accommodate family responsibilities, such as limiting (work-related) travel, choosing a more flexible job, slowing down the pace of one’s career, making a lateral move, leaving a job, or declining to work toward a promotion.”

It’s fair to assume that such interruptions impact long-term wages substantially. In fact, when researchers try to control for these differences, the wage gap virtually disappears. A recent Glassdoor study that made an honest attempt to get beyond the superficial numbers showed that after controlling for age, education, years of experience, job title, employer, and location, the gender pay gap fell from nearly twenty-five cents on the dollar to around five cents on the dollar. In other words, women are making 95 cents for every dollar men are making, once you compare men and women with similar educational, experiential, and professional characteristics.

It’s worth noting that the Glassdoor study could only control for observable differences between professional men and women. Other, more nuanced but documented differences – such as spending fewer hours on paid work per week – likely explain some of the remaining five-cent differential.

Now, don’t misunderstand. Certainly somewhere a degenerate, sexist hiring manager exists – someone who thinks to himself: you’re a woman, so you deserve a pay cut. But rather than being the rule, this seems to be the exception. In fact, the data seem to indicate that the decisions that affect wages stem more from cultural and societal expectations. A recent study shows that a full two-thirds of Harvard-educated Millennial men expect their partners to handle the majority of child care. It’s possible that women would make different, more lucrative career decisions given different social or cultural expectations.

Or maybe they wouldn’t. But in the meantime, Hillary’s “equal pay for equal work” rallying cry is irresponsible because it perpetuates a workplace myth. By painting women as victims of workplace discrimination when, by and large, they’re not, it holds my sex psychologically hostage, stripping us of the very confidence we need to succeed. It also unhelpfully directs our focus away from the real barriers to long-term earning power – social and cultural pressures – and toward an office witch hunt.

And that’s why, on the gender pay gap, I’m not with her.
