Feed aggregator

Cato published a paper of mine today entitled “Terrorism and Immigration: A Risk Analysis.”  I began this paper shortly after the San Bernardino terrorist attack in December last year when it became clear that few had attempted a terrorism risk analysis of immigration in general, let alone focusing on individual visa categories.  There were few studies on the immigration status of terrorists and the vast majority of them were qualitative rather than quantitative.  Inspired by the brilliant work of John Mueller and Mark Stewart, I decided to make my own.  

From 1975 through the end of 2015, 154 foreign-born terrorists murdered 3,024 people on U.S. soil.  During that same time period, over 1.14 billion foreigners entered the United States legally or illegally.  About 7.4 million foreigners entered the United States for each one who ended up being a terrorist.  Startlingly, 98.6 percent of those 3,024 victims were murdered on 9/11 (I did not count the terrorists as victims, obviously).  However, not every terrorist is successful.  Only 40 of those 154 foreign-born terrorists actually killed anyone on U.S. soil.

Immigrants frequently enter the United States on one visa and adjust their status to another.  Many tourists and other non-immigrants frequently enter legally and then fall out of status and become illegal immigrants.  I focused on the visas foreigners used to enter the United States because applications for that visa are when security screenings are initially performed. 

Table 1, copied from my paper, shows the chance of being killed in a terrorist attack on U.S. soil committed by foreigners, by visa category.  Only three people have been killed on U.S. soil in terrorist attacks by refugees – a one in 3.64 billion chance a year of dying in an attack by a refugee.  If future refugees are 100 times as likely to kill Americans as past ones, all else being equal, then the chance of being killed in an attack caused by them will be one in 36 million a year.  That’s a level of risk we can live with.
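The annualized odds here are simple arithmetic: deaths divided by years in the period, divided by the population exposed. A quick sketch, assuming an average U.S. population of roughly 266 million over 1975–2015 (my round number for illustration; the paper’s exact denominator may differ):

```python
# Annualized odds of being killed by a refugee terrorist on U.S. soil.
# The ~266 million average U.S. population is an illustrative round
# number, not a figure taken from the paper.
deaths = 3                 # victims of refugee terrorists, 1975-2015
years = 41                 # 1975 through the end of 2015
avg_population = 266_000_000

annual_risk = (deaths / years) / avg_population
print(f"1 in {1 / annual_risk:,.0f} per year")            # ~1 in 3.6 billion

# The hypothetical where future refugees are 100 times as deadly:
print(f"1 in {1 / (annual_risk * 100):,.0f} per year")    # ~1 in 36 million
```

The same deaths-per-year-per-resident arithmetic underlies the other odds quoted in this post.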

Table 1


Source: “Terrorism and Immigration: A Risk Analysis.”

I chose to begin my analysis in 1975 for three main reasons.  First, I wanted to make sure to include many refugees because of the current public fear over Syrians.  The waves of Cuban and Vietnamese refugees during the 1970s provided a large pool of people in that category.  Second, I had to go back to the late 1970s to find refugees who actually killed people on U.S. soil in terrorist attacks.  Although some refugees since then have attempted terrorist attacks, none has successfully killed anyone.  Third, I wanted to see if there was a different result before and after the modern refugee screening system was created in 1980.  The timing of that immigration reform coincides with the end of successful refugee terrorist attacks, and while the sample is small, three victims prior to 1980 and none afterwards speaks volumes.

In any project of this size, many findings and facts get left on the editing floor.  Here are some:

  • The chance of being murdered by a non-terrorist is one in 14,275 a year compared to one in 3,609,709 a year for all foreign-born terrorist attacks.
  • The chance of being murdered on U.S. soil by any terrorist, native or foreigner, was one in 3.2 million a year.
  • The chance of being murdered in a terrorist attack on U.S. soil committed by a foreigner after 9/11 was one in 177.1 million a year.
  • For every successful foreign-born terrorist who actually killed somebody on U.S. soil in an attack, over 28 million foreigners entered the United States.
  • 9/11 is a tremendous outlier in terms of deadliness – about an order of magnitude deadlier than the second-deadliest terror attack in world history.  Excluding 9/11 from this analysis helps us understand what most terrorist attacks in the past and the future are going to be like.  Doing that reveals that 91 percent of the deaths caused by all terrorists on U.S. soil, native or foreign-born, were committed by natives or those with unknown nationalities (usually because their identities were never uncovered) while 9 percent were committed by foreigners.  

And it came to pass in those days, that there went out a decree that all the world should be taxed.

And lo, the ubiquity of taxation made it possible for the Treasury Department to identify all the same-sex marriages in the land by zip code and present the data in tables and a map.

And in all the land only a few paranoids worried about the implications for privacy and freedom, of gay people and others, of a government that knows everything about you.

The massive decline in the U.S. poverty rate reported today by the Census (it fell from 14.8% of all families below the poverty line to just 13.5%, the largest drop since the 1960s) may have come as a surprise to many economists and political commentators, but it should not have. The one thing we have learned from the last three business cycles is that the poor benefit greatly from sustained economic growth.

When a recession occurs the unemployment rate can rise quickly, but it usually takes a long time before it returns to its pre-recession level, no matter how aggressive our infrastructure spending may be. What eventually helps low-income workers is an economy where labor, skilled and unskilled, becomes difficult to find. When that happens, companies bid against one another for workers or else get creative, perhaps by investing in labor-saving equipment or by taking a chance on workers who haven’t been in the labor market for a while and don’t have the most sterling resumes.

When the unemployment rate approached 4.5% in the late 1990s, poverty rates also declined significantly, as wages all across the income distribution grew steadily. Productivity grew smartly as well during this time. While the facile explanation was that businesses finally managed to take advantage of IT innovations, the companies using IT to boost productivity were also the ones that hired lots of low-income workers (i.e., big-box stores like Wal-Mart and Target), and they had every incentive to figure out how to do more with fewer workers, who were becoming more expensive. In Chicago the grocery chain Dominick’s sought out people living in government housing projects and spent significant resources training them, with surprising success. In Peoria another grocery chain, Kroger, worked with a local social service organization to train and employ young adults with Down syndrome in its stores, also with a great deal of success. With luck, more firms will soon need to get creative on employment.

Today’s numbers reflect the fact that strong and sustained economic growth, not redistribution, is the best way to help low-income Americans. There’s a lot the next president and Congress can do on that front: in the last year the Department of Labor alone has imposed regulations that will cost businesses tens of billions of dollars a year to implement, and the FCC is going to throttle investment in high-speed internet for the now-inviolate right of a Netflix customer not to have to wait three minutes for his movie to load.

The lesson macroeconomists painfully learned in the 1970s was that they’re no good at forecasting the ebbs and flows of the business cycle, and that we’re better off concentrating our efforts on the things that can boost productivity and long-run growth. Today, however, that lesson has been all but ignored as we debate whether society would survive a quarter-point rise in the discount rate and how much of a free lunch new infrastructure spending would be.

A better lesson for politicians would be that 3% growth is 50% more than 2% growth, and that it’s worth contemplating how to reach that copacetic rate once again. It should be a lesson for the rest of us as well.
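And that gap compounds. A minimal sketch of where steady 2% versus 3% annual growth leads (the starting index of 100 and the 30-year horizon are arbitrary choices of mine, made only for illustration):

```python
# Compounding 2% vs. 3% annual growth.  The starting index of 100 and
# the 30-year horizon are arbitrary, chosen only for illustration.
start, years = 100.0, 30

gdp_at_2 = start * 1.02 ** years
gdp_at_3 = start * 1.03 ** years

print(f"After {years} years at 2%: {gdp_at_2:.0f}")   # ~181
print(f"After {years} years at 3%: {gdp_at_3:.0f}")   # ~243
print(f"The 3% economy ends up {gdp_at_3 / gdp_at_2 - 1:.0%} larger")
```

A one-point difference in the growth rate leaves the faster economy roughly a third larger after a single generation.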

The Trans-Pacific Partnership trade agreement between the United States and 11 other countries was reached late last year, signed by the parties earlier this year, and now awaits ratification by the various governments. In terms of the value of trade and share of global output accounted for by the 12 member countries, the TPP is the largest U.S. trade agreement to date.

In the United States, the TPP has been controversial from the outset, drawing criticism from the usual suspects – labor unions, environmental groups, and sundry groups of anti-globalization crusaders – but also from free traders concerned that the deal may be laden with corporate welfare and other illiberal provisions that might lead to the circumvention or subversion of domestic sovereignty and democratic accountability.

As free traders who recognize that these kinds of agreements tend to deliver managed trade liberalization (which usually includes some baked-in protectionism), rather than free trade, my colleagues and I at the Herbert A. Stiefel Center for Trade Policy Studies set out to perform a comprehensive assessment of the TPP’s 30 chapters with the goal of answering this question: Should Free Traders Support the Trans-Pacific Partnership?

Yesterday, Cato released our findings in this paper, which presents a chapter-by-chapter analysis of the TPP, including summaries, assessments, scores on a scale of 0 (protectionist) to 10 (free trade), and scoring rationales. Of the 22 chapters analyzed, we found 15 to be liberalizing (scores above 5), 5 to be protectionist (scores below 5), and 2 to be neutral (scores of 5). Considered as a whole, the terms of the TPP are net liberalizing – it would, on net, increase our economic freedoms.

Accordingly, my trade colleagues and I hope it will be ratified and implemented as soon as possible.

Drug policy watchers learned earlier this month that the latest substance to earn Schedule I status is the obscure plant called kratom. So what’s Schedule I? By the letter of the law, Schedule I of the Controlled Substances Act contains “drugs, substances, or chemicals” that meet the following criteria:

The drug or other substance has a high potential for abuse.
The drug or other substance has no currently accepted medical use in treatment in the United States.
There is a lack of accepted safety for use of the drug or other substance under medical supervision.

In this post, I’m not going to consider the penalties that apply to the use, possession, or sale of Schedule I substances. I’m just going to look at the criteria for inclusion. While they may appear plausible, these criteria are preposterous and completely indefensible as applied.

The most important unwritten fact about Schedule I is that all three of its criteria are terms of political art. Neither science nor the plain meanings of the words have much to do with what Schedule I really includes.

We can see this first in how Schedule I fails to include many substances that clearly belong there. These substances easily meet all three criteria. Yet they are in no danger whatsoever of being scheduled. It literally will never happen.

Solvent inhalants, such as toluene, have a high potential for abuse, have no accepted medical uses, and cannot be used safely even with close medical supervision. The same is true of obsolete anesthetics like diethyl ether and chloroform. Toluene, ether, and chloroform are all dangerous when used as drugs. Overdosing on each is relatively easy, they bring serious health risks at any level of use, and they have no valid medical uses today.

None, of course, will ever be scheduled, because each is also an essential industrial chemical. That they happen to be abusable as drugs is a fact that a crime-based drug policy can’t easily accommodate. And so that fact is simply ignored.

The substances included on Schedule I are an odd lot as well. Some clearly meet the criteria, but many do not.

Why, for example, is fenethylline Schedule I, while amphetamine is in the less restrictive Schedule II? On ingestion, fenethylline breaks down into two other compounds: theophylline – a caffeine-like molecule found in chocolate – and amphetamine.

People commonly use amphetamine under medical supervision in the United States; the popular ADHD drug Adderall is simply a mixture of various forms of amphetamine. Theophylline has also seen use by physicians for care of various respiratory issues. And people still use fenethylline under medical supervision in other countries. In the published literature, fenethylline is described as having a “lower abuse potential and little actual abuse compared to amphetamine.” (Emphasis added.) To say that fenethylline has “no accepted medical use in the United States” is, quite literally, to suggest that medical science changes when you cross the border.

Fenethylline isn’t unique. Schedule I contains many drugs quite like it, molecules that bear a close but not exact resemblance to familiar and widely used medical drugs. Many of these are prodrugs – substances that break down in the body to become familiar, medically useful molecules like morphine or amphetamine. Others, like dimethylamphetamine, are held by the medical literature to be safer than their less strictly regulated chemical cousins.

This is not to say that fenethylline, dimethylamphetamine, or amphetamine itself is risk-free. No drug is. But one could hardly find a less rational set of classifications than this one, in which drugs are scheduled more severely if and when they are less risky.

Or consider psilocybin. Psilocybin flunks the first criterion for Schedule I because it is in fact fairly difficult to abuse. Psilocybin binges don’t generally happen because even a single dose creates a swift and strong tolerance response: A second dose, or an added dose of any other traditional psychedelic, usually does little or nothing, and doses after that will likely be inert until several days have elapsed.

A user may have a regrettable or upsetting psilocybin experience, and many do. But users can’t have a binge, and deaths and serious illnesses are exceedingly rare. Psilocybin isn’t an entirely risk-free drug – again, no drug is risk-free – but it’s clearly not in the same league as cocaine (Schedule II) or even ketamine (Schedule III). Going by the letter of the law, psilocybin’s place on Schedule I is inexplicable.

Still more inexplicable is cannabis, which has a relatively low potential for abuse, many important medical uses, and such a favorable safety profile that a life-threatening overdose is impossible. Too much cannabis can be deeply psychologically unpleasant, but it can’t be fatal.

As you all know, cannabis is Schedule I.

This has brought Americans, long the world’s most inventive people, to invent and ingest dozens of substitutes. Each of these so-called cannabimimetics became a recreational drug almost solely because a safe, well-studied, and well-tolerated recreational drug – cannabis – just happened to be illegal. Now there are dozens of cannabimimetics, all with somewhat different dosages, effects, and safety profiles. Much remains unknown about them, unlike the relatively well-studied compounds found in cannabis.

A similar process has taken place with the traditional psychedelics, generating a bewildering array of new psychoactive substances, each of which has a dosage, effect constellation, and risk profile that is relatively unknown when compared to, say, psilocybin or mescaline. It might even be said that Schedule I itself is the single largest cause of Schedule I drugs. In all, the mimetics are an area of comparative ignorance. Many of these new drugs may even deserve a bad reputation, if not a state-enforced ban. But, at least for a time, all of them were technically legal (at least, if we ignore the Federal Analogue Act, which is an entirely different mess of its own). If cannabis or psilocybin were legal instead, few would likely bother with the mimetics outside a laboratory setting.

Yet many of these mimetics could also be medically interesting, much like cannabis itself. We just don’t know yet, and we are a lot less likely ever to find out because it’s difficult to do research with Schedule I drugs.

To sum up, the list of drugs on Schedule I both over-includes and under-includes. I suspect that the list does not exist to fulfill the criteria. Rather, the criteria exist to make Congressional and DEA determinations look scientific, even when they clearly are not. They would appear to have no other function.


With that in mind, let’s take a closer look at kratom.

As Jacob Sullum notes, the DEA has simply defined all use of kratom as abuse. Of course, then, the potential for abuse is (nominally) high. But that definition assumes the answer to a question that science could and should have answered: What exactly is kratom’s abuse potential? Thanks to kratom’s new Schedule I status, U.S. researchers are in no position to question the DEA anytime soon.

This is typical of how drug scheduling works; to some extent the law creates its own medical facts by foreclosing research avenues that might otherwise be explored. But it can only do this by stunting our knowledge and perhaps delaying the development of useful new medicines.

What’s true of abuse potential is also true of “accepted medical use.” It too is an obfuscation; the DEA, and not doctors, determines what counts as accepted. But as Jeffrey Miron noted, kratom users report that it can relieve the symptoms of opiate addiction and help addicts kick the habit. Are they right? More clinical study might help, and we can be pretty sure that we’re not getting it now.

Finally, “lack of accepted safety for use” is – you guessed it – yet another determination made by a certain department in the executive branch. Not that it would change the DEA’s mind, but Jacob Sullum correctly notes that kratom is relatively safe when compared to many other drugs, particularly recreational opiates like heroin. While an overdose on kratom is certainly possible, no fatal overdose has ever been recorded. Compared to heroin – or many other drugs – “no recorded fatal overdoses” is a pretty good track record.

In short: Schedule I is not a set of scientific criteria, rationally applied to the world of drugs. Rather, it’s a science-y looking smokescreen, one that allows the DEA to do virtually whatever it feels like – which is often completely indefensible.

Image by Uomo vitruviano (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons.

“There is now a consensus that the United States should substantially raise its level of infrastructure investment,” writes former treasury secretary Lawrence Summers in the Washington Post. Correction: There is now a consensus among two presidential candidates that the United States should increase infrastructure spending. That’s far from a broad consensus.

“America’s infrastructure crisis is really a maintenance crisis,” says the left-leaning CityLab. The “infrastructure crisis is about socialism,” says the conservative Heritage Foundation. My colleague Chris Edwards says, “There is no widespread crisis of crumbling infrastructure.” “The infrastructure crisis … isn’t,” the Reason Foundation agrees.

As left-leaning Charles Marohn points out, the idea that there is an infrastructure crisis is promoted by an “infrastructure cult” led by the American Society of Civil Engineers. As John Oliver noted, relying on them to decide whether there is enough infrastructure spending is like asking a golden retriever if enough tennis balls are being thrown.

In general, most infrastructure funded out of user fees is in good shape. Highways and bridges, for example, are largely funded out of user fees, and the number of bridges that are structurally deficient has declined by more than 52 percent since 1992. The average roughness of highway pavements has also declined for every class of road.

Some infrastructure, such as rail transit, is crumbling. The infrastructure in the worst condition is infrastructure that is heavily subsidized, because politicians would rather build new projects than maintain old ones. That suggests the U.S. government should spend less, not more, on new infrastructure. It also suggests that we should stop building rail transit lines we can’t afford to maintain and maybe start thinking about scrapping some of the rail systems we have.

Aside from the question of whether our infrastructure is crumbling or not, the more important assumption underlying Summers’ article is that infrastructure spending always produces huge economic benefits. Based on a claim that infrastructure spending will produce a 20 percent rate of return, Summers says that financing it through debt is “entirely reasonable.” Yet such a rate of return is a pure fantasy, especially if it is government that decides where to spend the money. Few private investments produce such a high rate of return, and private investors are much more careful about where their money goes.

For every government project that succeeds, a dozen fail. Funded by the state of New York, the Erie Canal was a great success, but attempts to imitate that success by Ohio, Indiana, and Pennsylvania put those states into virtual bankruptcy.

The 1850 land grants to the Illinois Central Railroad paid off, at least for Illinois, but similar subsidies to the First Transcontinental Railroad turned into the biggest political corruption scandal of the nineteenth century. The Union Pacific was forced to reorganize within four years of its completion, and it went bankrupt again two decades later. The similarly subsidized Northern Pacific was forced to reorganize just a year after its completion in 1883 and, like the Union Pacific, would go bankrupt again in 1893.

The Interstate Highway System was a great success, but a lot of transportation projects built since then have been pure money pits. It’s hard to argue that any of the infrastructure spending that came out of the American Recovery and Reinvestment Act did anything to actually stimulate the economy.

Think the Atlanta streetcar, whose ridership dropped 48 percent as soon as they started charging a fare, generates economic development? Only in a fantasy world. Japan has used infrastructure spending to stimulate its way out of its economic doldrums since 1990. It hasn’t worked yet.

In the Baptists and bootleggers political model, Keynesians such as Summers are the Baptists who promise redemption from increased government spending while the civil engineers, and the companies that employ them, are the bootleggers who expect to profit from that spending. Neither should be trusted, especially considering how poorly stimulus spending has worked to date.

Making infrastructure spending a priority would simply lead to more grandiose projects, few of which will produce any economic or social returns. In all probability, these projects will not be accompanied by funding for maintenance of either existing or new infrastructure, with the result that more infrastructure spending will simply lead to more crumbling infrastructure.

Almost as an aside, Summers adds that “if there is a desire to generate revenue to finance infrastructure investments, the best approaches would involve user fees.” That’s stating the obvious, but the unobvious part is: if we agree user fees are a good idea, why should the federal government get involved at all? The answer, of course, is that politicians would rather get credit for giving people infrastructure that they don’t have to pay for than rely on user fees, and the controversies they create, to fund them.

Instead of an infrastructure crisis, what we really have is a crisis over who gets to decide where to spend money on infrastructure. If we leave infrastructure to the private market, we will get the infrastructure we need when we need it and it will tend to be well maintained as long as we need it. If we let government decide, we will get too much of some kinds of infrastructure we don’t need, not enough of other kinds of infrastructure we do need, and inadequate maintenance of both.

The Third Circuit heard oral argument last week on whether an individual can be forced to decrypt a drive containing incriminating information. The Fifth Amendment prohibits any person from being “compelled in any criminal case to be a witness against himself.” The Third Circuit will hopefully recognize that forced decryption is just the kind of testimonial act that the Fifth Amendment prohibits.

In a forced decryption case there are two kinds of subpoenas that could be issued. The first compels the individual to turn over the encryption key or password. That isn’t the kind of subpoena at issue in the Third Circuit case, but it is useful for seeing why this approach is also not allowed. The second kind of subpoena demands production of the documents themselves.

With a direct subpoena for the password, the password itself isn’t incriminating, but the Supreme Court has held that the Fifth Amendment also prevents compelling incriminating “information directly or indirectly derived from such testimony.” The Supreme Court “particularly emphasized the critical importance of protection against a future prosecution ‘based on knowledge and sources of information obtained from the compelled testimony.’” While the password itself isn’t incriminating, it clearly provides the lead necessary to get incriminating information from the encrypted drives. In a close analogy, the Supreme Court has also indicated that compelling a person to disclose the combination to a safe is prohibited.

The second type of subpoena, and the one in this case, seeks only the production of the documents supposedly encrypted on the hard drive. Here, the order was to “produce” the whole hard drive in an “unencrypted state.” The production of documents is not usually considered testimonial (and therefore not protected by the Fifth Amendment) if the documents’ existence, location, and authenticity are a “foregone conclusion.” When these are a foregone conclusion, the defendant’s testimonial act of turning over the documents (which shows his knowledge of their existence, location, and authenticity) gives the government no new information.

The real problem with this second type of subpoena is that there is a real question of whether the subpoenaed documents actually exist, even if they are encrypted on the hard drive. In the traditional safe analogy this isn’t a problem: we know the documents really exist inside the safe, if only we could get at them. So compelling the individual who can open the safe to do so and hand over the documents isn’t testimonial (as long as he is not required to tell the government the combination). But in the case of encrypted documents, no plaintext or unencrypted documents existed at all when the subpoena was issued.

Now the potential defendant could use his password to decrypt the documents, but this act of decryption is itself the testimonial act. Imagine the government, unable to find a murder victim’s body, subpoenaed the suspect to “produce a document with the location of the body.” Creating a document that doesn’t already exist is testimonial and cannot be compelled under the Fifth Amendment. An encrypted drive is like a piece of paper that the government cannot make sense of. Ordering the individual to use the personal knowledge in his mind (the password) to transform that document into one that makes sense to the government is testimonial, because it creates something that did not already exist in that form using the knowledge in his mind. Forced decryption should not be allowed for the same reason. Hopefully the Third Circuit in United States v. Apple Macpro Computer will recognize this.

Immigrants from India waiting to receive residency in the United States may die before they receive their green cards. The line is disproportionately long for Indians because the law discriminates against immigrants from populous countries, skewing the immigration flow to the benefit of immigrants from countries with fewer people. This policy—a compromise that resolved a long-dead immigration dispute—is senseless and economically damaging.

In the 1920s, Congress imposed the first-ever quota on immigration, but rather than just a worldwide limit, it also distributed the numbers between countries in order to give preference to immigrants from “white” countries. In 1965, Congress replaced this system with one that allowed immigrants from any country to receive up to seven percent of the green cards issued each year. This was an improvement, but it is an anachronism today, and it is causing its own pointless discrimination.

The per-country limits treat each nation equally, but not each immigrant equally. China receives the same treatment as Estonia, but immigrants from Estonia who apply today could receive their visas this year, while immigrants from China who apply today could have to wait a generation. It is equality in theory and inequality in practice. It is arbitrary and unfair.
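The mechanics are easy to see with a toy calculation. The 7 percent per-country cap is in the law; the 140,000 annual total and the pending-applicant counts below are my own round numbers for illustration, not actual backlog figures:

```python
# Toy model of per-country caps: the same cap yields wildly different
# waits.  The 7% cap is statutory; the 140,000 annual total and the
# backlog counts are hypothetical round numbers for illustration.
annual_total = 140_000
per_country_cap = int(annual_total * 0.07)       # 9,800 per country per year

backlogs = {"India": 300_000, "Estonia": 500}    # hypothetical queues

for country, pending in backlogs.items():
    print(f"{country}: about {pending / per_country_cap:.0f} years' wait")
```

Under the same cap, the hypothetical Indian queue takes roughly three decades to clear while the Estonian one clears almost immediately, which is the "equality in theory, inequality in practice" problem in miniature.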

Immigrants should be treated as individuals, not as national representatives. As I have written before, no one actually knows for sure the waits for legal immigrants, but Stuart Anderson of the National Foundation for American Policy has conservatively estimated decades-long waits for certain immigrants from China, India, Mexico, and the Philippines.

The entire system is an absurd relic of a bygone era. It was a compromise that enabled Congress to overcome its prior racial bias, but that rationale made sense in 1965, not today. Nation-based quotas are governmental discrimination that is every bit as useless—if not as malicious—as racial discrimination.

The per-country limits make employers think twice about hiring the best person for the job due to the disparate waits. This means lost productivity for the United States and a less competitive economy. It can separate families for such a long period of time that would-be legal immigrants attempt illegal entry rather than wait decades for a legal visa. 

Shockingly, some opponents of legal immigration would keep this system. Jessica Vaughan of the Center for Immigration Studies told Congress in 2012 not to fix the law on the hope that “maybe the green card delays will dampen some of the enthusiasm for overused guestworker [sic] categories,” which immigrants often use to come here initially before applying for a green card. In other words, she would keep the system so broken that skilled people don’t even bother trying to come to the United States, letting other countries benefit from their talents.

In 2011, the House overwhelmingly passed (389-15) a bill, the Fairness for High-Skilled Immigrants Act, that raised the limits to 15 percent for family-sponsored immigrants and eliminated the limits entirely for employer-sponsored immigrants. While it failed to receive a vote in the Senate amid wrangling on unrelated issues, there is little doubt its current version (H.R. 213), with nearly 100 cosponsors—half of whom are Democrats—would pass if it came up for a vote today.

Congress is currently considering a bill to reform one high-skilled visa category, the EB-5 investor visa, which has a high likelihood of becoming law in some form. Proponents of ending the per-country limits have an opportunity to attach their fix to this bill. If they do, and Congress passes it, it would put to rest nearly a century of discriminatory immigration policy.

Tomorrow the House Financial Services Committee moves to “mark-up” (amend and vote on) the Financial Choice Act, introduced by Committee Chair Jeb Hensarling.  The Choice Act represents the most comprehensive changes to financial services regulation since the passage of Dodd-Frank in 2010.  Unlike Dodd-Frank, however, the Choice Act moves our system in the direction of more stability and fewer bailouts.

At the heart of the Choice Act is an attempt to improve financial stability by increasing bank capital, while improving the functioning of our financial system by reducing compliance costs and over-reliance on regulatory discretion.  While I would have chosen a different level of capital, the Choice Act gets at the fundamental flaw in our current financial system: government guarantees punish banks for holding high levels of capital, which unfortunately leads to excessive leverage and widespread insolvencies whenever asset values (such as houses) decline.  Massive leverage still characterizes our banking system, despite the “reforms” in Dodd-Frank.

The Choice Act also includes important, even if modest, improvements in Federal Reserve oversight (see Title VII).  There was perhaps no contributor to the housing boom and bust that has been as ignored by Congress as the Fed’s reckless monetary policies in the mid-2000s.  Years of negative real rates (essentially paying people to borrow) drove a boom in our property markets.  The eminent economist John Taylor has written extensively and persuasively on this topic, yet it remained ignored by legislators prior to Hensarling’s efforts.  Such reforms are too late to unwind the Fed’s current distortionary policies, but they may prove helpful in moderating future booms and busts.

Despite its daunting 500+ pages, the Choice Act is still best viewed as a modest step in the right direction.  Considerably more needs to be done to bring market discipline and accountability to our financial system.  But at least the Choice Act moves us in the right direction; for that, the bill merits applause and consideration.


The Center for Immigration Studies (CIS) released a report by Jason Richwine last week entitled “Immigrants Replace Low-Skill Natives in the Workforce.” The Cato Institute has previously pointed out the inaccuracies, methodological tricks, and disingenuous framing that have plagued CIS’s reports on numerous occasions, but this latest report performs poorly even relative to those prior attempts. More importantly, its underlying numbers actually buttress the case for expanding legal immigration.

The report’s central finding is that the share of native-born high school dropouts in their prime who are not working has grown at the same time as the population of similarly educated immigrants. While Mr. Richwine explicitly states that this finding “does not necessarily imply that immigrants push out natives from the workforce,” he goes on to imply exactly that throughout the report, blaming immigrants for “causing economic and social distress.” 

First of all, “distress” would imply that more prime-age, lesser-skilled natives are out of work—i.e., unemployed or out of the labor force—now than before the wave of immigration in the 1990s. But this is incorrect. The number of such workers in their prime (ages 25 to 54) actually declined by 25 percent from 1995 to 2014, according to Census data. For the last decade, the number has remained roughly constant. Richwine is simply wrong to state that “an increasing number of the least-skilled Americans [are] leaving the workforce.” (Note that while the CIS report focuses on native men, the trends in all of the following figures run in the same direction regardless of sex.)

Figure 1: Prime-Age Native-Born High School Dropouts Unemployed or Not in the Labor Force (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

Since the number of lesser-skilled native workers who are not working has not grown, all of the increase in the number of prime-age native workers who are not working has come from graduates of high school and college. As Figure 2 shows, the share of not-working prime-age natives who are high school dropouts declined substantially from 1995 to 2014.

Figure 2: Natives Unemployed or Out of the Labor Force—Number and Share Who Are High School Dropouts, Number Who Are High School Graduates  (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

Mr. Richwine meticulously avoids absolute numbers in his report, focusing instead on the share of lesser-skilled natives who are not working. But the decline in the absolute number of high school dropouts explains all of the increase in the share who are not working. There are still the same small number of people at the bottom who have dropped out of high school and the workforce. But because so many other natives upgraded their skills, these troubled people are a greater share of natives in their skill demographic, while being a smaller share of natives overall.
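The composition arithmetic at work here can be seen with toy numbers (assumed purely for illustration; these are not the CPS figures):

```python
# Assumed toy figures: the count of prime-age native dropouts who are not
# working stays fixed while the overall dropout population shrinks by half.
not_working = 1.0                           # millions, held constant
dropouts_1995, dropouts_2014 = 10.0, 5.0    # millions of dropouts overall

share_1995 = not_working / dropouts_1995
share_2014 = not_working / dropouts_2014
print(share_1995, share_2014)  # 0.1 0.2: the share doubles with no rise in the count
```

A share-based framing reports the second number as a worsening trend even though the absolute number of not-working dropouts never changed; only the denominator shrank.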

As immigrants are entering the lower rungs of the economic ladder, natives are leaving those rungs in great numbers. Immigrants have partially filled in the gaps that natives have left, but on net, there has actually been less competition for jobs by new low-skilled workers. An increase in low-skilled labor supply simply does not explain any of the trends in low-skilled employment because there has been no such increase. The basic premise of the CIS report is wrong.

Figure 3: Prime-Age High School Dropouts by Nativity and Employment Status (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

From this perspective, we see that the collapse in the number of native-born high school dropouts is a good thing because it represents an exodus of working Americans to higher education and better employment opportunities. A much larger share of employed natives is acquiring skills and moving up the economic ladder. Perhaps this is the most important point: the share of prime-age, native-born Americans who have dropped out of high school is falling fast—by 50 percent from 1995 to 2014.

Figure 4: Share of Prime-Age Natives Without a High School Degree (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

As immigrant workers have entered the United States, natives have become more educated and skilled. There are good reasons to believe that this relationship is causal, as lesser-skilled immigration boosts wages for higher-skilled workers. Having immigrant workers to do these lower-skilled jobs frees natives to pursue higher quality employment. Mr. Richwine calls it “naïve” to think that immigration can “lift all boats” by encouraging natives to get educated, but whether it will lift all boats or not, it has lifted more boats than not. This skill-upgrading in response to immigration is not a new phenomenon. As I’ve written before:

In fact, immigration may have caused America’s “high school movement” – the increase in high school enrollment from 12 percent in 1910 to 50 percent in 1930. In a detailed 2002 study of the period for the International Monetary Fund, Rodney Ramcharan concluded, for instance, that “the massive immigration of unskilled labor in the late 19th and early 20th century triggered the U.S. high school movement” by raising “the private return to education and engendered schooling investment.”

As economists Francesco D’Amuri and Giovanni Peri have found, “immigrants often supply manual skills, leaving native workers to take up jobs that require more complex skills – even boosting demand for them. Immigrants replace ‘tasks’, not workers.” This, in turn, results in higher wages for natives. CIS’s report—while disingenuously framed—provides no evidence to contradict this finding.

Mr. Richwine suggests that the United States should radically transform its labor markets in order to accommodate a shrinking sliver of its population—those prime-age high school dropouts who aren’t working. Even if this proposal did benefit them, it would make no sense to hurt the 99 percent to attempt to help the one percent. There are other options to help the one percent of natives who, for whatever reasons, cannot hold a job or complete government-provided high school.

Washington Post fact checker Glenn Kessler gives a maximum Four Pinocchios to the claim that Hillary Clinton was fired during the Watergate inquiry, which has gotten a lot of circulation on social media. He makes a detailed case that there is no evidence for such a firing. However, along the way he does note some unflattering aspects of her tenure there:

In neither of his books does Zeifman say he fired Clinton. But in 2008, a reporter named Dan Calabrese wrote an article that claimed that “when the investigation was over, Zeifman fired Hillary from the committee staff and refused to give her a letter of recommendation.” The article quoted Zeifman as saying: “She was a liar. She was an unethical, dishonest lawyer. She conspired to violate the Constitution, the rules of the House, the rules of the committee and the rules of confidentiality.”…

In 1999, nine years before the Calabrese interview, Zeifman told the Scripps-Howard news agency: “If I had the power to fire her, I would have fired her.” In a 2008 interview on “The Neal Boortz Show,” Zeifman was asked directly whether he fired her. His answer: “Well, let me put it this way. I terminated her, along with some other staff members who were — we no longer needed, and advised her that I would not — could not recommend her for any further positions.”

So it’s pretty clear that Jerry Zeifman, chief counsel of the House Judiciary Committee during the Watergate inquiry, had a low opinion of the young Yale Law graduate Hillary Rodham. But because she reported to the chief counsel of the impeachment inquiry, who was hired separately by the committee and did not report to Zeifman, Zeifman had no authority over her. He simply didn’t hire her for the permanent committee staff after the impeachment inquiry ended.

Kessler also notes that Clinton failed the D.C. bar exam in that period. She never retook the exam (passing the Arkansas exam instead) and concealed her failure even from her closest friends until her 2003 autobiography.

And then there’s this:

Zeifman’s specific beef with Clinton is rather obscure. It mostly concerns his dislike of a brief that she wrote under Doar’s direction to advance a position advocated by Rodino — which would have denied Nixon the right to counsel as the committee investigated whether to recommend impeachment. 

That brief may get some attention during the next few years, should any members of the Clinton administration become the subject of an impeachment inquiry. Also in Sunday’s Post, George Will cites James Madison’s view that the power to impeach is “indispensable” to control of executive abuse of power. 

Teladoc, Inc. is a health services company that provides access to state-licensed physicians through telecommunications technology, usually for a fraction of the cost of a visit to a physician’s office or urgent care center. Teladoc sued the Texas Medical Board—composed mostly of practicing physicians—because the board took steps to protect the interests of traditional physicians by imposing licensing rules such as requiring the in-person examination of patients before telephonic treatment is permitted.

Because the board isn’t supervised by the Texas legislature, executive, or judiciary, Teladoc argues that its self-dealing violates federal antitrust laws—and the federal district court agreed. The Texas Medical Board has now appealed to the U.S. Court of Appeals for the Fifth Circuit, where Cato filed an amicus brief urging the court to affirm the lower-court ruling and protect the fundamental right to earn a living.

Our brief argues that the Supreme Court has consistently held that the right to earn a living without unreasonable government interference is guaranteed by the Constitution, and that this protection dates back much earlier, to Magna Carta and the common law. Indeed, the right to earn a living is central to a person’s life and ability to pursue happiness. As Frederick Douglass wrote in his autobiography, “To understand the emotion which swelled in my heart as I clasped this money, realizing that I had no master who could take it from me—that it was mine—that my hands were my own, and could earn more of the precious coin—one must have been in some sense himself a slave… . I was not only a freeman but a free-working man.”

Licensing laws, which can be valid if protecting a legitimate public interest, are a tool of the state often employed by private market participants to restrict competition. By creating barriers to entry, existing firms or practitioners mobilize the state to wield monopoly power. This results in higher prices and fewer choices for consumers and diminished opportunities for entrepreneurs and workers.

While it may be appropriate to create a regulatory body exempt from antitrust laws to achieve a specialized purpose, it’s inappropriate to grant the private actors populating a licensing board limitless ability to claim such state-action immunity unless they are appropriately supervised by state officials. Without active supervision, private parties may wield state regulatory power purely for their own self-interest.

The Supreme Court has said that this active-supervision standard is “flexible and context-dependent,” N.C. State Bd. of Dental Exam’rs v. FTC (2015), but not flimsy and porous. Moreover, there are other ways for states to obtain the specialized knowledge of professionals without creating regulatory bodies that rubber-stamp the assertions of active practitioners.

Teladoc offers an innovative service that makes obtaining healthcare easier and more affordable. The Fifth Circuit should protect its right to do so and the right of all persons to pursue a trade or career without onerous government-backed constraints instituted by private actors. 

Frederic Bastiat, the great French economist (yes, such creatures used to exist) from the 1800s, famously observed that a good economist always considers both the “seen” and “unseen” consequences of any action.

A sloppy economist looks at the recipients of government programs and declares that the economy will be stimulated by this additional money that is easily seen, whereas a good economist recognizes that the government can’t redistribute money without doing unseen damage by first taxing or borrowing it from the private sector.

A sloppy economist looks at bailouts and declares that the economy will be stronger because the inefficient firms that stay in business are easily seen, whereas a good economist recognizes that such policies impose considerable unseen damage by promoting moral hazard and undermining the efficient allocation of labor and capital.

We now have another example to add to our list. Many European nations have “social protection” laws that are designed to shield people from the supposed harshness of capitalism. And part of this approach is so-called Employment Protection Legislation, which ostensibly protects workers by, for instance, making layoffs very difficult.

The people who don’t get laid off are seen, but what about the unseen consequences of such laws?

Well, an academic study from three French economists has some sobering findings for those who think regulation and “social protection” are good for workers.

…this study proposes an econometric investigation of the effects of the OECD Employment Protection Legislation (EPL) indicator… The originality of our paper is to study the effects of labour market regulations on capital intensity, capital quality and the share of employment by skill level using a symmetric approach for each factor using a single original large database: a country-industry panel dataset of 14 OECD countries, 18 manufacturing and market service industries, over the 20 years from 1988 to 2007.

One of the findings from the study is that EPL is an area where the United States has historically taken an appropriately laissez-faire approach (which is also evident from the World Bank’s data in the Doing Business Index).

Here’s a chart showing the US compared to some other major developed economies.

It’s good to see, by the way, that Denmark, Finland, and the Netherlands engaged in some meaningful reform between 1994 and 2006.

But let’s get back to our main topic. What actually happens when nations have high or low levels of Employment Protection Legislation?

According to the research of the French economists, high levels of rules and regulations cause employers to substitute capital for labor, with low-skilled workers suffering the most.

Our main estimation results show an EPL effect: i) positive for non-ICT physical capital intensity and the share of high-skilled employment; ii) non-significant for ICT capital intensity; and (iii) negative for R&D capital intensity and the share of low-skilled employment. These results suggest that an increase in EPL would be considered by firms to be a rise in the cost of labour, with a physical capital to labour substitution impact in favour of more non-sophisticated technologies and would be particularly detrimental to unskilled workers. Moreover, it confirms that R&D activities require labour flexibility. According to simulations based on these results, structural reforms that lowered EPL to the “lightest practice”, i.e. to the US EPL level, would have a favourable impact on R&D capital intensity and would be helpful for unskilled employment (30% and 10% increases on average, respectively). …The adoption of this US EPL level would require very largescale labour market structural reforms in some countries, such as France and Italy. So this simulation cannot be considered politically and socially realistic in a short time. But considering the favourable impact of labour market reforms on productivity and growth. …It appears that labour regulations are particularly detrimental to low-skilled employment, which is an interesting paradox as one of the main goals of labour regulations is to protect low-skilled workers. These regulations seem to frighten employers, who see them as a labour cost increase with consequently a negative impact on low-skilled employment.

There’s a lot of jargon in the above passage for those who haven’t studied economics, but the key takeaway is that employment for low-skilled workers would jump by 10 percent if other nations reduced labor-market regulations to American levels.

Though, as the authors point out, that won’t happen anytime soon in nations such as France and Italy.

Now let’s review an IMF study that looks at what happened when Germany substantially deregulated labor markets last decade.

After a decade of high unemployment and weak growth leading up to the turn of the 21st century, Germany embarked on a significant labor market overhaul. The reforms, collectively known as the Hartz reforms, were put in place in three steps between January 2003 and January 2005. They eased regulation on temporary work agencies, relaxed firing restrictions, restructured the federal employment agency, and reshaped unemployment insurance to significantly reduce benefits for the long-term unemployed and tighten job search obligations.

And when the authors say that long-term unemployment benefits were “significantly” reduced, they weren’t exaggerating.

Here’s a chart from the study showing the huge cut in subsidies for long-run joblessness.

So what were the results of the German reforms?

To put it mildly, they were a huge success.

…the unemployment rate declined steadily from a peak of almost 11 percent in 2005 to five percent at the end of 2014, the lowest level since reunification. In contrast, following the Great Recession other advanced economies — particularly in the euro area — experienced a marked and persistent increase in unemployment. The strong labor market helped Germany consolidate its public finances, as lower outlays on unemployment benefits resulted in lower spending while stronger taxes and social security contribution pushed up revenues.

Gee, what a shocker. When the government stopped being as generous to people for being unemployed, fewer people chose to be unemployed.

Which is exactly what happened in the United States when Congress finally stopped extending unemployment benefits.

It’s also worth noting that this was a period of good fiscal policy in Germany, with the burden of spending rising by only 0.18 percent annually between 2003 and 2007.

But the main lesson of all this research is that, whatever the noble motives politicians may have when they adopt “social protection” legislation, in the real world there’s nothing “social” about laws and regulations that either discourage employers from hiring people or discourage people from finding jobs.

P.S. Another example of “seen” vs “unseen” is how supposedly pro-feminist policies actually undermine economic opportunity for women.

A big story to come out of the last G-20 summit was that the Russians and Saudis were talking oil (read: an oil cooperation agreement). With that, everyone asked, again, where are oil prices headed? To answer that question, one has to have a model – a way of thinking about the problem. In this case, my starting point is Roy W. Jastram’s classic study, The Golden Constant: The English and American Experience 1560-2007. In that work, Jastram finds that gold maintains its purchasing power over long periods of time, with the prices of other commodities adapting to the price of gold. 

Taking a lead from Jastram, let’s use the price of gold as a long-term benchmark for the price of oil. The idea being that, if the price of oil changes dramatically, the oil-gold price ratio will change and move away from its long-term value. Forces will then be set in motion to shift supply of and demand for oil.  In consequence, the price of oil will change and the long-term oil-gold price ratio will be reestablished. Via this process, the oil-gold ratio will revert, with changes in the price of oil doing most of the work.

For example, if the price of oil slumps, the oil-gold price ratio will collapse. In consequence, exploration for and development of oil reserves will become less attractive and marginal production will become uneconomic. In addition to the forces squeezing the supply side of the market, low prices will give the demand side a boost. These supply-demand dynamics will, over time, move oil prices and the oil-gold price ratio up. This is what’s behind the old adage, there is nothing like low prices to cure low prices.

We begin our analysis of the current situation by calculating the oil-gold price ratios for each month. For example, as of September 5th, oil was trading at $46.97/bbl and gold was at $1323.50/oz. So, the oil-gold price ratio was 0.035. In June 2014, when oil was at its highs, trading at $107.26/bbl and gold was at $1314.82/oz, the oil-gold price ratio was 0.082. 
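The ratio calculation for those two dates is simple arithmetic; a minimal sketch, using the prices quoted above:

```python
# Oil-gold price ratio: WTI price per barrel divided by gold price per ounce.
def oil_gold_ratio(oil_usd_per_bbl: float, gold_usd_per_oz: float) -> float:
    return oil_usd_per_bbl / gold_usd_per_oz

# September 5, 2016 prices quoted above
print(round(oil_gold_ratio(46.97, 1323.50), 3))   # 0.035
# June 2014 highs
print(round(oil_gold_ratio(107.26, 1314.82), 3))  # 0.082
```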

We can calculate these ratios over time. Those ratios are presented in the accompanying chart, starting in 1973 (the post-Bretton Woods period).  

Two things stand out in the histogram. First, the recent oil price collapse was extreme: the February 2016 oil-gold price ratio lies far to the left of the distribution, with less than one percent of the observations to its left. Second, the ratio is slowly reverting to the mean, with the September 2016 ratio approaching 0.04.

But, how long will it take for the ratio to mean revert? My calculations (based on post-1973 data) are that a 50 percent reversion of the ratio will occur in 13.7 months. This translates into a price per barrel of WTI of $60 by March 2017 – almost exactly hitting OPEC’s sweet spot. It is worth noting that, like Jastram, I find that oil prices have reverted to the long-run price of gold, rather than the price of gold reverting to that of oil. So, the oil-gold price ratio reverts to its mean via changes in the price of oil.
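The half-life arithmetic can be sketched as follows. Note that the starting ratio, long-run mean, and gold price below are assumed values chosen only to illustrate how a 13.7-month half-life produces a roughly $60/bbl projection; they are not the paper’s fitted estimates:

```python
# Illustrative mean-reversion sketch. All parameter values are assumptions,
# not the model's actual fitted inputs.
def projected_ratio(start: float, mean: float, months: float, half_life: float) -> float:
    # Exponential reversion: the gap to the mean halves every `half_life` months.
    return mean + (start - mean) * 0.5 ** (months / half_life)

start_ratio = 0.025   # assumed February 2016 trough ratio
mean_ratio = 0.071    # assumed post-1973 long-run average ratio
half_life = 13.7      # months for a 50 percent reversion, per the text
gold_price = 1250.0   # assumed gold price at the horizon, $/oz

ratio = projected_ratio(start_ratio, mean_ratio, 13.7, half_life)
print(round(ratio * gold_price, 2))  # implied WTI price: 60.0 ($/bbl)
```

At exactly one half-life, the gap to the mean is cut in half, so the implied oil price is the gold price times the midpoint of the starting and mean ratios.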

The accompanying chart shows the price projection based on the oil-gold price ratio model. It also shows the historical course of prices. They are doing just what the golden constant predicts: oil prices are moving up. That said, there remains a significant gap between the January 2018 futures price of WTI, which stands at $51.50/bbl, and the implied price estimate of $70.06/bbl generated by the oil-gold ratio model. Best to be long oil.

As a young professional woman myself, lately I’ve grown fatigued by the media’s ongoing portrayal of women as victims of circumstance. Media messaging on one topic in particular – the gender pay gap – is especially discouraging because it’s assembled on the basis of flimsy facts. Although it requires a voyage outside my usual topical expertise, setting the record straight seems worthwhile enough to justify the trip.

Let’s begin with the numbers. Hillary Clinton and others allege that women get paid 76 cents for every dollar a man gets paid – an alarming workplace injustice, if it’s true.

The 76-cent figure is based on a comparison of median domestic wages for men and women. Unfortunately, comparing men’s and women’s wages this way is misleading, because men and women make different career choices that impact their wages: 1) men and women work in different industries with varying levels of profitability, and 2) men and women on average make different family, career, and lifestyle trade-offs.

For example, BLS statistics show that only 35% of professionals involved in securities, commodities, funds, trusts, and other financial investments and 25% of professionals involved in architecture, engineering, and computer systems design are women. On the other hand, women dominate the field of social assistance, at 85%, and education, with females holding 75% of jobs in elementary and secondary schools.

An August 2016 National Bureau of Economic Research study, Does Rosie Like Riveting? Male and Female Occupational Choices, suggests that industry segregation may not be structural or even coincidental. According to the authors of the study, women may select different jobs than men because they “may care more about job content, and this is a possible factor preventing them from entering some male dominated professions.”

Another uncomfortable truth for the 76-cent crowd: women are considerably more likely to absorb more care-taker responsibilities within their families, and these roles demand associated career trade-offs. Sheryl Sandberg’s Lean In describes 43% of highly-qualified women with children as leaving their careers or off-ramping for a period of time. And a recent Harvard Business Review report describes women as being more likely than men to make decisions “to accommodate family responsibilities, such as limiting (work-related) travel, choosing a more flexible job, slowing down the pace of one’s career, making a lateral move, leaving a job, or declining to work toward a promotion.”

It’s fair to assume that such interruptions impact long-term wages substantially. In fact, when researchers try to control for these differences, the wage gap virtually disappears. A recent Glassdoor study that made an honest attempt to get beyond the superficial numbers showed that after controlling for age, education, years of experience, job title, employer, and location, the gender pay gap fell from nearly twenty-five cents on the dollar to around five cents on the dollar. In other words, women are making 95 cents for every dollar men are making, once you compare men and women with similar educational, experiential, and professional characteristics.
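A toy calculation (the wages and occupation shares below are assumptions for illustration, not Glassdoor’s data) shows how occupational sorting alone can make a raw wage gap look far larger than the within-job gap:

```python
# Assumed toy wages: within each field women earn 95% of men's pay,
# but women are concentrated in the lower-paying field.
pay_men = {"finance": 100.0, "education": 50.0}
pay_women = {field: p * 0.95 for field, p in pay_men.items()}

share_men = {"finance": 0.8, "education": 0.2}     # where men work
share_women = {"finance": 0.2, "education": 0.8}   # where women work

avg_men = sum(share_men[f] * pay_men[f] for f in pay_men)
avg_women = sum(share_women[f] * pay_women[f] for f in pay_men)

print(round(avg_women / avg_men, 2))  # raw ratio: 0.63, a "63-cent" gap
# Controlling for field, the ratio is 0.95 in both occupations.
```

The raw comparison mixes the within-job ratio with the composition of who works where; controlling for occupation strips out the second effect, which is the kind of adjustment the Glassdoor study performs.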

It’s worth noting that the Glassdoor study could only control for obvious differences between professional men and women. It’s likely that other, more nuanced but documented differences, such as spending fewer hours per week on paid work, would explain some of the remaining five-cent pay differential.

Now, don’t misunderstand. Certainly somewhere a degenerate, sexist hiring manager exists, someone who thinks to himself: you’re a woman, so you deserve a pay cut. But rather than being the rule, this seems to be the exception. In fact, the data seem to indicate that the decisions that impact wages are more likely due to cultural and societal expectations. A recent study shows that a full two-thirds of Harvard-educated Millennial-generation men expect their partners to handle the majority of child care. It’s possible that women would make different, more lucrative career decisions given different social or cultural expectations.

Or maybe they wouldn’t. But in the meantime, Hillary’s “equal pay for equal work” rallying cry is irresponsible, in that it perpetuates a workplace myth: by painting women as victims of workplace discrimination, when they’re not, it holds my sex psychologically hostage by stripping us of the very confidence we need to succeed. It also unhelpfully directs our focus away from dealing with the real barrier to long-term earning power – social and cultural pressures – in favor of an office witch hunt.

And that’s why, on the gender pay gap, I’m not with her.

When it was first released back in April, a “discussion draft” of the Compliance With Court Orders Act sponsored by Sens. Dianne Feinstein (D-CA) and Richard Burr (R-NC) met with near universal derision from privacy advocates and security experts. (Your humble author was among the critics.) In the wake of that chilly reception, press reports were declaring the bill effectively dead just weeks later, even as law enforcement and intelligence officials insisted they would continue pressing for a solution to the putative “going dark” problem that encryption creates for government eavesdroppers.  Feinstein and Burr, however, appear not to have given up on their baby: Their offices have been circulating a revised draft, which I’ve recently gotten hold of.

To protect my source’s anonymity, I won’t post the document itself, but it’s easy enough to summarize. The 2.0 version is mostly identical to the original version, with four main changes:

(1) Narrower scope

The original draft required a “covered entity” to render encrypted data “intelligible” to government agents bearing a court order if the data had been rendered unintelligible “by a feature, product, or service owned, controlled, created, or provided, by the covered entity or by a third party on behalf of the covered entity.” The new version deletes “owned,” “created,” and “provided”—so the primary mandate now applies only to a person or company that “controls” the encryption process.

(2)  Limitation to law enforcement

The revised version eliminates section (B) under the bill’s definition of “court order,” which obligated recipients to comply with decryption orders issued for investigations related to “foreign intelligence, espionage, and terrorism.”  The bill is now strictly about law enforcement  investigations into a variety of serious crimes, including federal drug crimes and their state equivalents.

(3) Exclusion of critical infrastructure

A new subsection in the definition of the “covered entities” to whom the bill applies specifically excludes “critical infrastructure,” adopting the definition of that term from 42 USC §5195c.

(4) Limitation on “technical assistance” obligations

The phrase “reasonable efforts” has been added to the definition of the “technical assistance” recipients can be required to provide. The original draft’s obligation to provide whatever technical assistance is needed to isolate requested data, decrypt it, and deliver it to law enforcement is replaced by an obligation to make “reasonable efforts” to do these things.

Those changes aside, it’s the same bill—and still includes the problematic mandate that distributors of software licenses, like app stores, ensure that the software they distribute is “capable of complying” with the law. (As I’ve argued previously, it is very hard to imagine how open-source code repositories like Github could effectively satisfy this requirement.) So what do these changes amount to?  Let’s take them in order.

The first change is on its face the most significant one by a wide margin, but it’s also the one I’m least confident I understand clearly.  If we interpret “control” of an encryption process in the ordinary-language sense—and in particular as something conceptually distinct from “ownership,” “provision,” or “creation”—then the law becomes radically narrower in scope, but also fails to cover most of the types of cases that are cited in discussions of the “going dark” problem.  When a user employs a device or application to encrypt data with a user-generated key, that process is not normally under the “control” of the entity that “created” the hardware or software in any intuitive sense.  On the other hand, when a company is in direct control of an encryption process—as when a cloud provider applies its own encryption to data uploaded by a user—then it would typically (though by no means necessarily) retain both the ability to decrypt and an obligation to do so under existing law.  So what’s going on here?

One obvious possibility, assuming that narrow reading of “controlled,” is that the revised bill is very specifically targeting companies like Apple that are seeking to combine the strong security of end-to-end encryption with the convenience of cloud services. At the recent Black Hat security conference, Apple introduced its “Cloud Key Vault” system. The critical innovation there was finding a way to let users back up and synchronize across devices some of their most sensitive data—the passwords and authentication tokens that safeguard all their other sensitive data—without giving Apple itself access to the information. The details are complex, but the basic idea, oversimplifying quite a bit, is that Apple’s backup systems will act like a giant iPhone: User data is protected with a combination of the user’s password and a strong encryption key that’s physically locked into a hardware module and can’t be easily extracted. Like the iPhone, it will defend against “brute force” attacks to guess the user passcode component of the decryption key by limiting the number of permissible guesses. The critical difference is that Apple has essentially destroyed its own ability to change or eliminate that guess limit.
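The guess-limit idea can be sketched in a few lines of code. This is a drastic simplification for illustration only—the class and parameter names are my own invention, not Apple’s actual protocol: the wrapping key is derived from the user’s passcode plus a secret that never leaves the hardware module, and the module refuses any further guesses after a fixed number of failures.

```python
import hashlib
import os
import secrets


class GuessLimitedVault:
    """Illustrative sketch (not Apple's actual design): data is wrapped
    by a key derived from the user's passcode plus a secret locked in a
    hardware module, which enforces a hard cap on passcode guesses."""

    MAX_GUESSES = 10

    def __init__(self, passcode: str):
        self._hardware_secret = secrets.token_bytes(32)  # never leaves the module
        self._salt = os.urandom(16)
        self._guesses = 0
        self._check = self._derive(passcode)  # stored verifier for the demo

    def _derive(self, passcode: str) -> bytes:
        # Key derivation mixes the passcode with the hardware secret, so
        # an attacker who steals only the cloud backup can't brute-force
        # the passcode offline.
        return hashlib.pbkdf2_hmac(
            "sha256", passcode.encode() + self._hardware_secret,
            self._salt, 100_000)

    def unlock(self, passcode: str) -> bool:
        # The module itself enforces the guess limit; once it's hit,
        # the wrapped data is unrecoverable even to the vendor.
        if self._guesses >= self.MAX_GUESSES:
            raise RuntimeError("guess limit reached; data unrecoverable")
        self._guesses += 1
        return secrets.compare_digest(self._derive(passcode), self._check)
```

The point of the design is the last step: because the limit is enforced in hardware the vendor cannot be compelled to raise it after the fact.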

This may not sound like a big deal, but it addresses one of the big barriers to more widespread adoption of strong end-to-end encryption: convenience.  The encrypted messaging app Signal, for example, provides robust cryptographic security with a conspicuous downside: It’s tethered to a single device that holds a user’s cryptographic keys. That’s because any process that involves exporting those keys so they can be synced across multiple devices—especially if they’re being exported into “the cloud”—represents an obvious and huge weak point in the security of the system as a whole. The user wants to be able to access their cloud-stored keys from a new device, but if those keys are only protected by a weak human-memorable password, they’re highly vulnerable to brute force attacks by anyone who can obtain them from the cloud server. That may be an acceptable risk for someone who’s backing up their Facebook password, but not so much for, say, authentication tokens used to control employee access to major corporate networks—the sort of stuff that’s likely to be a target for corporate espionage or foreign intelligence services. Over the medium to long term, our overall cybersecurity is going to depend crucially on making security convenient and simple for ordinary users accustomed to seamlessly switching between many devices.  So we should hope and expect to see solutions like Apple’s more widely adopted.

For intelligence and law enforcement, of course, better security is a mixed blessing.  For the time being, as my co-authors and I noted in the Berkman Center report Don’t Panic, the “going dark” problem is substantially mitigated by the fact that users like to back stuff up, they like the convenience of syncing across devices—and so however unbreakable the disk encryption on a user’s device might be, a lot of useful data is still going to be obtainable from those cloud servers.  They’ve got to be nervous about the prospect of a world where all that cloud data is effectively off the table, because it becomes practical to encrypt it with key material that’s securely syncable across devices but still inaccessible, even to an adversary who can run brute force attacks, without the user’s password. 

If this interpretation of the bill’s intent is right, it’s particularly politically canny.  You propose to saddle every developer with a backdoor mandate, or break the mechanism everyone’s Web browser uses to make a secure connection, and you can expect a whole lot of pushback from both the tech community and the Internet citizenry.  Tell people you’re going to mess with technology their security already depends upon—take away something they have now—and folks get upset. But, thanks to a well-known form of cognitive bias called “loss aversion,” they get a whole lot less upset if you prevent them from getting a benefit (here, a security improvement) most aren’t yet using. And that will be true even if, in the neverending cybersecurity arms race, it’s an improvement that’s going to be necessary over the long run even to preserve current levels of overall security against increasingly sophisticated attacks.

That strikes me, at least for now, as the most plausible read on the “controlled by” language. But another possibility (entirely compatible with the first) is that courts and law enforcement will construe “controlled by” more broadly than I am. If the FBI gives Apple custody of an iPhone, which is running gatekeeper software that Apple can modify, does it become a technology “controlled by” Apple at the time the request is made, even if it wasn’t under their control at the time the data was encrypted?  If the developer of an encrypted messaging app—which, let’s assume, technically retains ownership of the software while “licensing” it to the end user—pushes out regular automated updates and runs a directory server that mediates connections between users, is there some sense in which the entire process is “controlled by” them even if the key generation and encryption runs on the user’s device?  My instinct is “no,” but I can imagine a smart lawyer persuading a magistrate judge the answer is “yes.” One final note here: It’s a huge question mark in my mind how the mandate on app stores to ensure compliance interacts with the narrowed scope. Can they now permit un-backdoored applications as long as the encryption process isn’t “controlled by” the software developers? How do they figure out when that’s the case in advance of litigation?

Let’s move on to the other changes, which mercifully we can deal with a lot more briefly.  The exclusion of intelligence investigations from the scope of the bill seems particularly odd given that the bill’s sponsors are, after all, members of their respective chambers’ intelligence committees, with the intelligence angle providing the main jurisdictional hook for them to be taking point on the issue at all.  But it makes a bit more sense if you think of it as a kind of strategic concession in a recurring jurisdictional turf war with the judiciary committees.  The sponsors would effectively be saying: “Move our bill, and we’ll write it in a way that makes it clear you’ve got primary jurisdiction.”  Two other alternatives: The intelligence agencies, which have both intelligence gathering and cybersecurity assurance responsibilities, have generally been a lot more lukewarm than law enforcement about the prospect of legislation mandating backdoors, so this may be a way of reducing their role in the debate over the bill.  Or it may be that, given the vast amount of collection intelligence agencies engage in compared with domestic law enforcement—remember, there are nearly 95,000 foreign “targets” of electronic surveillance just under §702 of the FISA Amendments Act—technology companies are a lot more skittish about being inundated with decryption and “technical assistance” requests from those agencies, while the larger ones, at least, might expect the compliance burden to be more manageable if the obligation extends only to law enforcement.

I don’t have much insight into the critical infrastructure carve-out; if I had to guess, I’d hazard that some security experts were particularly worried about the security implications of mandating backdoors in software used in especially vital systems at the highest risk of coming under attack by state-level adversaries.  That’s an even bigger concern when you recall that the United States is contemplating bilateral agreements that would let foreign governments directly serve warrants on technology companies.  We may have a “special relationship” with the British, but perhaps not so special that we want them to have a backdoor into our electrical grid.  One huge and (I would have thought) obvious wrinkle here: Telecommunications systems are a canonical example of “critical infrastructure,” which seems like a pretty big potential loophole.

The final change is the easiest to understand: Tech companies don’t want to be saddled with an unlimited set of obligations, and they sure don’t want to be strictly liable to a court for an outcome they can’t possibly guarantee is achievable in every instance. With that added limitation, however, it becomes less obvious whether a company is subject to sanction if they’ve designed their products so that a successful attack always requires unreasonable effort. “We’ll happily provide the required technical assistance,” they might say, “as soon as the FBI can think up an attack that requires only reasonable effort on our part.” It’d be a little cheeky, but they might well be able to sell that to a court as technically compliant depending on the facts in a particular case.

So those are my first pass thoughts. Short version: Potentially a good deal narrower than the original version of the bill, and therefore not subject to all the same objections that one met with. Still a pretty bad idea. This debate clearly isn’t going anywhere, however, and the latest iteration of the backdoor bill is unlikely to be the last we’ll see.

Iceland will hold early elections in October following the resignation of former Prime Minister Gunnlaugsson. One aggregation of polls has the upstart Pirate Party in the lead by four percentage points, and the party may be in prime position to form Iceland’s next government. They have an eclectic suite of policies in their party platform, some of them interesting and not all of them desirable. In a narrow sense, their elevation could lead to the development of a basic income experiment due to the shortcomings they perceive in Iceland’s current welfare system. Another pilot program for a basic income could help find more answers to the many questions that still surround the idea.

Last year the party’s MPs introduced a proposal calling for the government to form a working group to investigate the feasibility of shifting to a basic income that would “replace, or at least simplify” their current system. As with most discussions about the desirability of such a shift, the details are incredibly important, and to a large extent these proposals cannot be evaluated until more elements of the plan are decided.

If this is an unconditional income that is grafted onto the current framework, it would likely end up being unaffordable without addressing the work disincentives and other problems currently in place. If, however, it replaces the patchwork of programs with a simplified benefit going directly to people instead of being transmitted through a series of in-kind or specified programs, it could potentially be an improvement over the status quo.

One thing is certain: the current system can deter work for low-income households in those programs and ultimately make it harder for them to prosper. A single parent with two children who transitions from inactivity to a full-time job paying two-thirds of the country’s average wage faces an effective tax rate of 73 percent; she would lose almost three quarters of each dollar of earnings to lower benefits and higher taxes. This makes work a much less attractive option. Nor is the problem confined to moving from inactivity to work; low-wage workers face a similar trap. A single parent with two children faces a 54 percent effective rate when moving from a low-wage job paying one-third of the average wage to one paying two-thirds. This trap has gotten worse recently: the rate is up significantly from 46 percent in 2002. Moving to a form of basic income could reduce these work disincentives in the right framework, but much depends on the details of the plan and how it is implemented.
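The arithmetic behind these effective rates is simple to sketch. The figures below are invented purely for illustration—they are not Iceland’s actual tax or benefit schedule; only the resulting 73 percent rate comes from the text:

```python
def participation_tax_rate(gross_earnings, extra_taxes, benefits_lost):
    """Share of new gross earnings clawed back through higher taxes
    and withdrawn benefits when someone moves into work."""
    return (extra_taxes + benefits_lost) / gross_earnings


# Hypothetical numbers chosen to reproduce the 73 percent rate cited:
# a job grossing 30,000 triggers 9,000 in new taxes and withdraws
# 12,900 in benefits, so only 8,100 of the 30,000 is actually kept.
rate = participation_tax_rate(30_000, 9_000, 12_900)
print(rate)  # 0.73
```

A flat universal benefit that is not withdrawn as earnings rise would lower the `benefits_lost` term toward zero, which is the mechanical sense in which a basic income can reduce these disincentives.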

Last year’s proposal cited the experience in Finland, where researchers recently announced they will be moving forward with a limited trial in which a randomly selected group of two to three thousand already-unemployed people will begin to receive a basic income of about $600 each month for a period of two years. The new cash grant will replace their existing benefits, and researchers will assess the impact of the change on poverty, employment rates, and bureaucracy. The Finnish experiment will be more limited in scope than some initial reports suggested, so a potentially more expansive experiment in Iceland could help test other aspects, such as the community-wide effects when a meaningful portion of residents are in the new regime instead of the existing framework.

Even something along these lines will leave some fundamental questions surrounding a basic income unanswered: concerns about affordability, sustainability and work impact for a program that is permanent instead of limited to a two year period. Scaling up an unconditional benefit to the entire country would present funding concerns not present with a limited pilot program. A later generation that grew up with a basic income framework in existence could have significantly different responses in terms of work effort than one that shifted after they were already well into their working lives.

Getting more information about how these other models compare to current welfare systems along these metrics is crucially important for countries considering reform. The many flaws of the current system are well-known and past work at Cato has delved into them at length. It is not yet clear whether these new systems will fall prey to some of the same problems as the old framework when they are implemented, or even be practically feasible. Until we know much more about the ramifications of such a shift, it is much too early to consider large-scale adoption of anything along these lines. These experiments and demonstrations are necessary because they will help us get more data to try to find answers to some of these questions. We may be getting another test case in Iceland soon.

Pinal County, Arizona was in danger of being the latest place where ObamaCare caused insurance markets to collapse. As of last month, every private health insurance company now selling ObamaCare coverage in the county announced it would no longer do so in 2017. Had that scenario come to pass, it would have tossed nearly 10,000 residents out of their Exchange plans and left them to buy ObamaCare coverage outside of the Exchange, with no taxpayer subsidies to make the coverage “affordable.” If they didn’t buy that unaffordable coverage, ObamaCare would still subject them to penalties, at least until the Secretary of Health and Human Services intervened.

It appears that Pinal County has avoided that fate. Blue Cross Blue Shield of Arizona has announced that, despite reservations, it will sell ObamaCare coverage in Pinal County next year. Pinal County now joins 13 other Arizona counties, one third of counties nationwide, and seven states that will have only one carrier in the Exchange.

The Wall Street Journal reports, “The insurer’s agreement represents a victory for federal and state officials, who have been pushing to resolve the situation in the weeks since it emerged.” The Journal neglects to mention this “victory” for the government comes at the expense of Exchange enrollees and taxpayers. The Associated Press reports Blue Cross Blue Shield of Arizona will be increasing premiums on Exchange enrollees by a whopping 51 percent. Imagine your annual ObamaCare premium is $14,000. Now imagine it rising by $7,000. Some victory.

If you live in Arizona, you may not have to imagine. Blue Cross Blue Shield of Arizona will be the only carrier selling on the Exchange in 13 of 14 counties in the state, and has requested what Charles Gaba describes as “a whopping 51.2% average [premium increase] statewide.” Remember when President Obama excoriated insurers for “unacceptable” premium increases as high as 39 percent? Remember when he promised ObamaCare would make such exorbitant premium increases a thing of the past?

Now that private insurance executives have pulled ObamaCare’s cookies out of the fire, in Arizona and everywhere the Exchanges are down to a single carrier, the industry appears to have the ObamaCare ideologues over a barrel. Blue Cross Blue Shield of Arizona, which has lost close to $250 million in the Exchange, no doubt has a wish-list of regulatory and legislative changes that would bail them out, er, “stabilize the market.” The Obama administration and state regulators appear ready to hand insurance companies the keys to the treasury.

ObamaCare supporters signal their willingness to bail out private insurance companies – and their disdain for taxpayers and Exchange enrollees – every time they suggest these premium hikes are no big deal because the government subsidizes premiums. Not only do government subsidies not reduce the cost of ObamaCare coverage – they simply shift the cost from enrollees to taxpayers – half of ObamaCare enrollees don’t even get subsidies. Robert Laszewski writes, “many millions of people…have no choice but to take the full whack from these rate increases if they want to stay covered.”

Those who are ideologically committed to ObamaCare, or utterly dependent on it, can rejoice that it didn’t collapse today. Those who seek a sustainable and secure system of subsidies for the sick aren’t celebrating. The fact that ObamaCare has sunk so low, and may sink lower still, means the case for repeal is stronger than ever.

Based on the title of this column, you may think I’m going to write about oppressive IRS behavior or punitive tax policy.

Those are good guesses, but today’s “brutal tax beating” is about what happens when a left-leaning journalist writes a sophomoric column about tax policy and then gets corrected by an expert from the Tax Foundation.

The topic is the tax treatment of executive compensation, which is somewhat of a mess because part of Bill Clinton’s 1993 tax hike was a provision to bar companies from deducting executive compensation above $1 million when compiling their tax returns (which meant, for all intents and purposes, an additional back-door 35-percent tax penalty on salaries paid to CEO types). But to minimize the damaging impact of this discriminatory penalty, particularly on start-up firms, this extra tax didn’t apply to performance-based compensation such as stock options.
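The back-of-the-envelope arithmetic of that penalty is worth making concrete. This is a sketch using the pre-2018 35 percent top corporate rate; the $5 million salary figure is hypothetical:

```python
DEDUCTIBLE_CAP = 1_000_000  # cap on deductible non-performance CEO salary


def extra_corporate_tax(cash_salary):
    """Extra corporate tax (in whole dollars) caused by losing the
    deduction on straight salary above $1 million; performance-based
    pay such as stock options was exempt from the cap."""
    disallowed = max(0, cash_salary - DEDUCTIBLE_CAP)
    return disallowed * 35 // 100  # 35 percent of the disallowed portion


# A hypothetical $5 million straight salary: $4 million of it is
# non-deductible, costing the firm an extra $1.4 million in tax.
print(extra_corporate_tax(5_000_000))  # 1400000
```

That extra corporate-level tax is what the post means by a “back-door 35-percent tax penalty,” and it comes on top of the individual income tax the executive pays on the same salary.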

In a good and simple tax system, which taxes income only one time (including business income), the entire provision would be repealed.

But when Alvin Chang, a graphics reporter from Vox, wrote a column on this topic, he made the remarkable claim that somehow taxpayers are subsidizing big banks because the aforementioned penalty does not apply to performance-based compensation.

…the government doesn’t tax performance-based pay for…any…top bank executive in America. Unlike regular salaries — where the government takes out taxes to pay for Medicare, Social Security, and all other sorts of things — US tax code lets banks deduct the big bonuses they give to their executives. … The solution most Americans want is to either heavily tax CEO pay over a certain amount, or to set a strict cap on how much CEOs can make, relative to their workers. As long as this loophole is open, though, it makes sense for banks to continue paying executives these huge sums. …for now, taxpayers are still ponying up to help make wealthy bankers even wealthier, because the US tax code encourages it.

Since Mr. Chang is a graphics reporter, you won’t be surprised that he included several images to augment his argument.

Here’s one making the case that companies should pay a 35 percent tax on performance-based pay for CEO types. Keep in mind, as you peruse this image, that recipients of performance-based pay have to declare that income on their 1040s and pay 39.6 percent individual income tax.

And here’s Chang’s look at how much money the IRS could have collected from big banks in recent years if the anti-CEO tax penalty was extended to performance-based pay.

When I look at these images, my gut reaction is to be offended that Chang equates “taxpayers” with the federal government.

So I would change the caption of the first image so it ended, “…this pile would be diverted from shareholders to politicians.”

And the caption in the second image would read, “This is the amount it saved taxpayers.”

But Chang’s argument is also flawed for much deeper reasons. Scott Greenberg of the Tax Foundation debunks his entire column. Not just debunks. Eviscerates. Destroys.

Here are some of the highlights.

…the article contains several factual errors and misleading claims about how CEOs are taxed in America. The article begins by making an incorrect claim: that the federal government does not tax performance-based CEO pay… This is simply untrue. Under the U.S. tax code, households are generally required to pay individual income taxes on the value of the stock options and bonuses that they receive…up to 39.6% on the performance-based pay… The article continues with another false assertion…it claims that CEO performance-based pay is not subject to the same Social Security and Medicare payroll taxes as “regular salaries.” In fact, all employee compensation, including CEO pay, is subject to Medicare payroll taxes, and high-income individuals actually pay a higher Medicare payroll tax rate than most other employees. …it claims that U.S. businesses are allowed to deduct CEO pay but are not allowed to deduct “regular salaries.” This is patently incorrect. Under the U.S. tax code, businesses are allowed to deduct virtually all compensation to employees. In fact, the only major exception to this rule is that businesses are only allowed to deduct $1 million in non-performance-based salaries to CEOs. This means that the U.S. tax code gives the same, if not worse, treatment to CEO compensation as “regular salaries.”

Scott also addresses the silly assertion that deductions for CEO compensation are some sort of subsidy.

You probably wouldn’t claim that taxpayers are subsidizing the restaurant worker’s salary, because the deduction for employee compensation is a regular, structural feature of the tax code. In general, businesses in the U.S. are taxed on their revenues minus their expenses, and the salary paid to the worker is a business expense like any other. The same argument applies for CEO compensation. When a business pays a CEO $155 million, it has increased its expenses and decreased its profits. The normal logic of U.S. tax law dictates that the business be allowed to deduct the CEO’s compensation from its taxable income. Then, the CEO is required to pay individual income taxes on the compensation.

The bottom line, as Scott points out, is that Bill Clinton’s provision means that CEO pay is penalized rather than subsidized.

…wages and salaries of CEOs are penalized relative to the wages and salaries of regular employees, while performance-based compensation is taxed in the same manner as regular wages and salaries. In sum, it is simply wrong to say that the federal tax code subsidizes CEO pay.

Game, set, and match. Mr. Chang should stick to graphics rather than tax policy.

And policy makers should resist tax policies based on envy and resentment, since the net result is a tax code that is needlessly complex and pointlessly destructive.

One final moral to the story: If there’s ever a tax fight between Vox and the Tax Foundation, always bet on the latter.

A preliminary draft paper on transparency that Cass Sunstein posted last month inspired Vox’s Matthew Yglesias to editorialize “Against Transparency” this week. Both are ruminations that shouldn’t be dealt with too formally, and in that spirit I’ll say that my personal hierarchy of needs doesn’t entirely overlap with Yglesias’.

In defense of selective government opacity, he says: “We need to let public officials talk to each other — and to their professional contacts outside the government — in ways that are both honest and technologically modern.”

Speak for yourself, buddy! The status quo in government management may need that, but that status quo is no need of mine.

A pithy, persuasive response to Yglesias came from the AP’s Ted Bridis, who pointed out via Twitter the value of recorded telephone calls for unearthing official malfeasance. Recordings reveal, for example, that in 2014 U.S. government officials agreed to restrict more than 37 square miles of airspace surrounding Ferguson, Missouri, in response to local officials’ desire to keep news helicopters from viewing the protests there. Technological change might counsel putting more of public officials’ communications “on the record,” not less.

It’s wise of Sunstein to share his piece in draft—in its “pre-decisional” phase, if you will—because his attempt to categorize information about government decision-making as “inputs” and “outputs” loses its intuitiveness as you go along. Data collected by the government is an output, but when it’s used for deciding how to regulate, it’s an input, etc. These distinctions would be hard to internalize and administer, certainly at the scale of a U.S. federal government, and would collapse when administered by government officials on their own behalf.

I think it’s right that there are some categories of work done in government executive offices and agencies that should be left as the business of the parties themselves. But I’m more attracted to reversing the presumption of automatic publication only in the narrow instances when national security, the privacy of private individuals as such, or the need for candid communication with high executive officials requires it. (That’s hardly a perfect standard. This blog post is a rumination.)

Few would make the connection, but Dave Weigel supplies what’s really missing from the discussion. In “Why You Should Stop Blaming the Candidates if you Don’t Know ‘What They Stand For’,” he says: “I’m sorry, but this one falls on the voters. It is generally as easy to learn where the candidates stand on all but the most obscure issues as it is to find, say, a recipe for low-calorie overnight oats.”

The “outputs” are there, but the public is failing to use them. The problem is the public. If voters aren’t using even published and publicized campaign information, why would they do any good with governance inputs? Especially in light of the interests of regulators, with whom Sunstein and Yglesias quite strongly sympathize, transparency isn’t all that special. Let’s just tap the brakes in this one area, giving regulators more room for private deliberations.

Massive public dissatisfaction with the status quo tells us there must be something wrong with that argument.

The glib, libertarian answer is that you could shrink the size and scope of government, particularly the U.S. federal government, and grow the public’s relative capacity to oversee it. And while I think that’s true, there’s a more subtle approach that may appeal to an Yglesias or a Sunstein. That’s to recognize that the public today has diminished capacity for government oversight, having lived for many decades without transparency as to the inputs or outputs of government.

In my paper, Publication Practices for Transparent Government, I wrote of the social capital that must grow up once governments begin publishing information well.

[T]ransparency is not an automatic or instant result of following these good practices, and it is not just the form and formats of data. It turns on the capacity of the society to interact with the data and make use of it. American society will take some time to make use of more transparent data once better practices are in place. There are already thriving communities of researchers, journalists, and software developers using unofficial repositories of government data. If they can do good work with incomplete and imperfect data, they will do even better work with rich, complete data issued promptly by authoritative sources. When fully transparent data comes online, though, researchers will have to learn about these data sources and begin using them. Government transparency and advocacy websites will have to do the same. Government entities themselves will discover new ways to coordinate and organize based on good data-publication practices. Reporters will learn new sources and new habits.

Put aside mere “inputs.” The government today doesn’t publish good data that reflects its deliberations, management, and results. Take one of the most important areas: the law itself. The bills in Congress are published in unnecessarily opaque fashion (our efforts with the Deepbills project notwithstanding). Federal statutory law in the form of the U.S. Code only recently began to see publication in a basic computer-usable fashion. Regulations are available in XML, but regulatory processes have only been superficially updated, essentially by combining existing processes on a single web site. And sub-regulatory law—guidance documents, advisories, and such—those are a transparency travesty if not a systematic, widely accepted Due Process violation.

Authoritative data that indicates what the organizational units of the federal government are—a machine readable government organization chart—does not exist, even after the Obama Administration’s promise last fall to produce one in “months.” And implementation of the DATA Act, while proceeding, may yet run into enough roadblocks to fail at giving the public a consistent, reliable account of federal spending.

The transparency movement is a reform movement based on a vision of modern government. Sunstein’s and Yglesias’ modest arguments against transparency don’t appear to recognize the government’s long, systematic exclusion of the public from its processes or the resulting atrophy of Americans’ civic muscles. Thus, they too easily conclude that giving transparency to internal matters, or “inputs,” is not necessary because it’s not useful.

Transparency is a legacy issue for President Obama, who campaigned in 2008 on promises of true reform. Time has not run out for the president’s pro-transparency reform.