
The Third Circuit last week heard oral argument on whether an individual can be forced to decrypt a drive containing incriminating information. The Fifth Amendment prohibits any person from being “compelled in any criminal case to be a witness against himself.” The Third Circuit will hopefully recognize that forced decryption is exactly the kind of testimonial act that the Fifth Amendment prohibits.

In a forced decryption case, there are two kinds of subpoenas the government could issue. The first compels the individual to turn over the encryption key or password. This isn’t the kind of subpoena at issue in the Third Circuit case, but it is useful for seeing why this approach is also impermissible. The second compels the individual to produce the documents themselves.

With a direct subpoena for the password, the password itself isn’t incriminating, but the Supreme Court has held that the Fifth Amendment also prevents compelling incriminating “information directly or indirectly derived from such testimony.” The Supreme Court “particularly emphasized the critical importance of protection against a future prosecution ‘based on knowledge and sources of information obtained from the compelled testimony.’” While the password itself isn’t incriminating, it clearly provides the lead necessary to get incriminating information from the encrypted drives. In a close analogy, the Supreme Court has also made clear that the government cannot compel a person to disclose the combination to a safe.

The second type of subpoena, and the one in this case, seeks only the production of the documents supposedly encrypted on the hard drive. Here, the order was to “produce” the whole hard drive in an “unencrypted state.” The production of documents is not usually considered testimonial (and therefore not protected by the Fifth Amendment) if the documents’ existence, location, and authenticity are a “foregone conclusion.” When that is so, the defendant’s testimonial acts in turning over the documents (showing his own knowledge of their existence, location, and authenticity) give the government no new information.

The real problem with this second type of subpoena is that there is a real question whether the subpoenaed documents exist at all, even if encrypted versions sit on the hard drive. In the traditional safe analogy this isn’t a problem: we know the documents really exist inside the safe, if only we could get at them. So compelling the individual who can open the safe to do so and hand over the documents isn’t testimonial (as long as he is not required to tell the government the combination). But in the case of encrypted documents, no plaintext or unencrypted documents exist at all when the subpoena is issued.

Now, the potential defendant could use his password to decrypt the documents, but that act of decryption is itself the testimonial act. Imagine the government, unable to find a murder victim’s body, ordering the suspect to “produce a document with the location of the body.” Creating a document that does not already exist is testimonial and cannot be compelled under the Fifth Amendment. An encrypted drive is like a piece of paper that the government cannot make sense of. Ordering the individual to use the personal knowledge in his mind (the password) to transform that document into one that makes sense to the government is testimonial, because it creates something that did not already exist in that form using the knowledge in his mind. Forced decryption should not be allowed for the same reason. Hopefully the Third Circuit in United States v. Apple MacPro Computer will recognize this.

Immigrants from India waiting to receive residency in the United States may die before they receive their green cards. The line is disproportionately long for Indians because the law discriminates against immigrants from populous countries, skewing the immigration flow to the benefit of immigrants from countries with fewer people. This policy—a compromise that resolved a long-dead immigration dispute—is senseless and economically damaging.

In the 1920s, Congress imposed the first-ever quota on immigration, but rather than setting just a worldwide limit, it also distributed the numbers among countries in order to give preference to immigrants from “white” countries. In 1965, Congress replaced this system with one that allowed immigrants from any country to receive up to seven percent of the green cards issued each year. This was an improvement, but it is an anachronism today, and it is causing its own pointless discrimination.

The per-country limits treat each nation equally, but not each immigrant equally. China receives the same treatment as Estonia, but immigrants from Estonia who apply today could receive their visas this year, while immigrants from China who apply today could have to wait a generation. It is equality in theory and inequality in practice. It is arbitrary and unfair.

Immigrants should be treated as individuals, not as national representatives. As I have written before, no one actually knows for sure the waits for legal immigrants, but Stuart Anderson of the National Foundation for American Policy has conservatively estimated decades-long waits for certain immigrants from China, India, Mexico, and the Philippines.

The entire system is an absurd relic of a bygone era. It was a compromise that enabled Congress to overcome its prior racial bias, but that rationale made sense in 1965, not today. Nation-based quotas are governmental discrimination that is every bit as useless—if not as malicious—as racial discrimination.

The per-country limits make employers think twice about hiring the best person for the job due to the disparate waits. This means lost productivity for the United States and a less competitive economy. It can separate families for such a long period of time that would-be legal immigrants attempt illegal entry rather than wait decades for a legal visa. 

Shockingly, some opponents of legal immigration would keep this system. Jessica Vaughan of the Center for Immigration Studies told Congress in 2012 not to fix the law, in the hope that “maybe the green card delays will dampen some of the enthusiasm for overused guestworker [sic] categories,” which immigrants often use to come here initially before applying for a green card. In other words, she would keep the system so broken that skilled people don’t even bother trying to come to the United States, letting other countries benefit from their talents.

In 2011, the House overwhelmingly passed (389-15) a bill, the Fairness for High-Skilled Immigrants Act, that doubled the limits to 15 percent for family-sponsored immigrants and eliminated them entirely for employer-sponsored immigrants. While it failed to receive a vote in the Senate amid wrangling on unrelated issues, there is little doubt that its current version (H.R. 213), with nearly 100 cosponsors—half of whom are Democrats—would pass if it came up for a vote today.

Congress is currently considering a bill to reform one high-skilled visa category, the EB-5 investor visa, which has a high likelihood of becoming law in some form. Proponents of ending the per-country limits have an opportunity to attach their fix to this bill. If they do, and Congress passes it, it would put to rest nearly a century of discriminatory immigration policy.

Tomorrow the House Financial Services Committee moves to “mark-up” (amend and vote on) the Financial Choice Act, introduced by Committee Chair Jeb Hensarling.  The Choice Act represents the most comprehensive changes to financial services regulation since the passage of Dodd-Frank in 2010.  Unlike Dodd-Frank, however, the Choice Act moves our system in the direction of more stability and fewer bailouts.

At the heart of the Choice Act is an attempt to improve financial stability by increasing bank capital, while improving the functioning of our financial system by reducing compliance costs and over-reliance on regulatory discretion.  While I would have chosen a different level of capital, the Choice Act gets at the fundamental flaw in our current financial system: government guarantees punish banks for holding high levels of capital, which, unfortunately, leads to excessive leverage and widespread insolvencies whenever asset values (such as houses) decline.  Massive leverage still characterizes our banking system, despite the “reforms” in Dodd-Frank.

The Choice Act also includes important, if modest, improvements in Federal Reserve oversight (see Title VII).  Perhaps no contributor to the housing boom and bust has been as ignored by Congress as the Fed’s reckless monetary policies in the mid-2000s.  Years of negative real rates (essentially paying people to borrow) drove a boom in our property markets.  The eminent economist John Taylor has written extensively and persuasively on this topic, yet it remained ignored by legislators prior to Hensarling’s efforts.  Such reforms come too late to unwind the Fed’s current distortionary policies, but they may prove helpful in moderating future booms and busts.

Despite its daunting 500+ pages, the Choice Act is still best viewed as a modest step in the right direction.  Considerably more needs to be done to bring market discipline and accountability to our financial system.  But at least the Choice Act moves us in the right direction; for that, the bill merits applause and consideration.


The Center for Immigration Studies (CIS) released a report by Jason Richwine last week entitled “Immigrants Replace Low-Skill Natives in the Workforce.” The Cato Institute has previously pointed out the inaccuracies, methodological tricks, and disingenuous framing that have plagued CIS’s reports on numerous occasions, but this latest report performs poorly even relative to those prior attempts. More importantly, its underlying numbers actually buttress the case for expanding legal immigration.

The report’s central finding is that the share of native-born high school dropouts in their prime who are not working has grown at the same time as the population of similarly educated immigrants. While Mr. Richwine explicitly states that this finding “does not necessarily imply that immigrants push out natives from the workforce,” he goes on to imply exactly that throughout the report, blaming immigrants for “causing economic and social distress.” 

First of all, “distress” would imply that more prime-age, lesser-skilled natives are out of work—i.e., unemployed or out of the labor force—now than before the wave of immigration in the 1990s. But this is incorrect. The number of such workers in their prime (ages 25 to 54) actually declined by 25 percent from 1995 to 2014, according to Census data. For the last decade, the number has remained roughly constant. Richwine is just wrong to state that “an increasing number of the least-skilled Americans [are] leaving the workforce.” (Note that while the CIS report focuses on native men, the trends in all of the following figures run in the same direction regardless of sex.)

Figure 1: Prime-Age Native-Born High School Dropouts Unemployed or Not in the Labor Force (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

Since the number of lesser-skilled native workers who are not working has not grown, all of the increase in the number of prime-age native workers who are not working has come from graduates of high school and college. As Figure 2 shows, the share of not-working prime-age natives who are high school dropouts declined substantially from 1995 to 2014.

Figure 2: Natives Unemployed or Out of the Labor Force—Number and Share Who Are High School Dropouts, Number Who Are High School Graduates  (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

Mr. Richwine meticulously avoids absolute numbers in his report, focusing instead on the share of lesser-skilled natives who are not working. But the decline in the absolute number of high school dropouts explains all of the increase in the share who are not working. There is still roughly the same small number of people at the bottom who have dropped out of high school and the workforce. But because so many other natives upgraded their skills, these troubled people are a greater share of natives in their skill demographic, while being a smaller share of natives overall.
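
The composition arithmetic is easy to miss, so here is a minimal sketch with invented round numbers (not the actual CPS figures) showing how a share can double while the underlying count stays flat:

```python
# Toy illustration (invented round numbers, NOT the actual CPS data):
# the share of dropouts who aren't working can double even when the
# count of non-working dropouts is flat, because the denominator --
# the total dropout population -- shrinks as natives upgrade skills.

def not_working_share(not_working: int, total_dropouts: int) -> float:
    return not_working / total_dropouts

share_1995 = not_working_share(2_000_000, 10_000_000)  # 20%
share_2014 = not_working_share(2_000_000, 5_000_000)   # 40%

print(f"1995: {share_1995:.0%} of dropouts not working")
print(f"2014: {share_2014:.0%} of dropouts not working")
# The share doubles; the absolute number in "distress" never grew.
```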

As immigrants are entering the lower rungs of the economic ladder, natives are leaving those rungs in great numbers. Immigrants have only partially filled in the gaps that natives have left, so on net, there has actually been less competition for jobs by new low-skilled workers. An increase in low-skilled labor supply simply does not explain any of the trends in low-skilled employment, because there has been no such increase. The basic premise of the CIS report is wrong.

Figure 3: Prime-Age High School Dropouts by Nativity and Employment Status (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

From this perspective, we see that the collapse in the number of native-born high school dropouts is a good thing because it represents an exodus of working Americans to higher education and better employment opportunities. A much larger share of employed natives is acquiring skills and moving up the economic ladder. Perhaps this is the most important point: the share of prime-age, native-born Americans who have dropped out of high school is falling fast—by 50 percent from 1995 to 2014.

Figure 4: Share of Prime-Age Natives Without a High School Degree (1995-2014)

Source: Census Bureau, Current Population Survey, March Supplement

As immigrant workers have entered the United States, natives have become more educated and skilled. There are good reasons to believe that this relationship is causal, as lesser-skilled immigration boosts wages for higher-skilled workers. Having immigrant workers to do these lower-skilled jobs frees natives to pursue higher quality employment. Mr. Richwine calls it “naïve” to think that immigration can “lift all boats” by encouraging natives to get educated, but whether it will lift all boats or not, it has lifted more boats than not. This skill-upgrading in response to immigration is not a new phenomenon. As I’ve written before:

In fact, immigration may have caused America’s “high school movement” – the increase in high school enrollment from 12 percent in 1910 to 50 percent in 1930. In a detailed 2002 study of the period for the International Monetary Fund, Rodney Ramcharan concluded, for instance, that “the massive immigration of unskilled labor in the late 19th and early 20th century triggered the U.S. high school movement” by raising “the private return to education and engendered schooling investment.”

As economists Francesco D’Amuri and Giovanni Peri have found, “immigrants often supply manual skills, leaving native workers to take up jobs that require more complex skills – even boosting demand for them. Immigrants replace ‘tasks’, not workers.” This, in turn, results in higher wages for natives. CIS’s report—while disingenuously framed—provides no evidence to contradict this finding.

Mr. Richwine suggests that the United States should radically transform its labor markets in order to accommodate a shrinking sliver of its population—those prime-age high school dropouts who aren’t working. Even if this proposal did benefit them, it would make no sense to hurt the 99 percent to attempt to help the one percent. There are other options to help the one percent of natives who, for whatever reasons, cannot hold a job or complete government-provided high school.

Washington Post fact checker Glenn Kessler gives a maximum Four Pinocchios to the claim that Hillary Clinton was fired during the Watergate inquiry, which has gotten a lot of circulation on social media. He makes a detailed case that there is no evidence for such a firing. However, along the way he does note some unflattering aspects of her tenure there:

In neither of his books does Zeifman say he fired Clinton. But in 2008, a reporter named Dan Calabrese wrote an article that claimed that “when the investigation was over, Zeifman fired Hillary from the committee staff and refused to give her a letter of recommendation.” The article quoted Zeifman as saying: “She was a liar. She was an unethical, dishonest lawyer. She conspired to violate the Constitution, the rules of the House, the rules of the committee and the rules of confidentiality.”…

In 1999, nine years before the Calabrese interview, Zeifman told the Scripps-Howard news agency: “If I had the power to fire her, I would have fired her.” In a 2008 interview on “The Neal Boortz Show,” Zeifman was asked directly whether he fired her. His answer: “Well, let me put it this way. I terminated her, along with some other staff members who were — we no longer needed, and advised her that I would not — could not recommend her for any further positions.”

So it’s pretty clear that Jerry Zeifman, chief counsel of the House Judiciary Committee during the Watergate inquiry, had a low opinion of the young Yale Law graduate Hillary Rodham. But because she reported to the chief counsel of the impeachment inquiry, who was hired separately by the committee and did not report to Zeifman, Zeifman had no authority over her. He simply didn’t hire her for the permanent committee staff after the impeachment inquiry ended.

Kessler also notes that Clinton failed the D.C. bar exam in that period. She never retook the exam (passing the Arkansas exam instead) and concealed her 1973 failure even from her closest friends until her autobiography three decades later.

And then there’s this:

Zeifman’s specific beef with Clinton is rather obscure. It mostly concerns his dislike of a brief that she wrote under Doar’s direction to advance a position advocated by Rodino — which would have denied Nixon the right to counsel as the committee investigated whether to recommend impeachment. 

That brief may get some attention during the next few years, should any members of the Clinton administration become the subject of an impeachment inquiry. Also in Sunday’s Post, George Will cites James Madison’s view that the power to impeach is “indispensable” to control of executive abuse of power. 

Teladoc, Inc. is a health services company that provides access to state-licensed physicians through telecommunications technology, usually for a fraction of the cost of a visit to a physician’s office or urgent care center. Teladoc sued the Texas Medical Board—composed mostly of practicing physicians—because the board took steps to protect the interests of traditional physicians by imposing licensing rules such as requiring an in-person examination of patients before telephonic treatment is permitted.

Because the board isn’t supervised by the Texas legislature, executive, or judiciary, Teladoc argues that its self-dealing violates federal antitrust laws—and the federal district court agreed. The Texas Medical Board has now appealed to the U.S. Court of Appeals for the Fifth Circuit, where Cato filed an amicus brief urging the court to affirm the lower-court ruling and protect the fundamental right to earn a living.

Our brief argues that the Supreme Court has consistently held that the right to earn a living without unreasonable government interference is guaranteed by the Constitution, and that this protection dates back much earlier, to Magna Carta and the common law. Indeed, the right to earn a living is central to a person’s life and ability to pursue happiness. As Frederick Douglass wrote in his autobiography, “To understand the emotion which swelled in my heart as I clasped this money, realizing that I had no master who could take it from me—that it was mine—that my hands were my own, and could earn more of the precious coin—one must have been in some sense himself a slave… . I was not only a freeman but a free-working man.”

Licensing laws, which can be valid if they protect a legitimate public interest, are a tool of the state often employed by private market participants to restrict competition. By creating barriers to entry, existing firms or practitioners mobilize the state to wield monopoly power. The result is higher prices and fewer choices for consumers, and diminished opportunities for entrepreneurs and workers.

While it may be appropriate to create a regulatory body exempt from antitrust laws to achieve a specialized purpose, it’s inappropriate to grant the private actors populating a licensing board limitless ability to claim such state-action immunity unless they are appropriately supervised by state officials. Without active supervision, private parties may wield state regulatory power purely for their own self-interest.

The Supreme Court has said that this active-supervision standard is “flexible and context-dependent,” N.C. State Bd. of Dental Exam’rs v. FTC (2015), but not flimsy and porous. Moreover, there are other ways for states to obtain the specialized knowledge of professionals without creating regulatory bodies that rubber-stamp the assertions of active practitioners.

Teladoc offers an innovative service that makes obtaining healthcare easier and more affordable. The Fifth Circuit should protect its right to do so and the right of all persons to pursue a trade or career without onerous government-backed constraints instituted by private actors.

Frederic Bastiat, the great French economist (yes, such creatures used to exist) from the 1800s, famously observed that a good economist always considers both the “seen” and “unseen” consequences of any action.

A sloppy economist looks at the recipients of government programs and declares that the economy will be stimulated by this additional money that is easily seen, whereas a good economist recognizes that the government can’t redistribute money without doing unseen damage by first taxing or borrowing it from the private sector.

A sloppy economist looks at bailouts and declares that the economy will be stronger because the inefficient firms that stay in business are easily seen, whereas a good economist recognizes that such policies impose considerable unseen damage by promoting moral hazard and undermining the efficient allocation of labor and capital.

We now have another example to add to our list. Many European nations have “social protection” laws that are designed to shield people from the supposed harshness of capitalism. And part of this approach is so-called Employment Protection Legislation, which ostensibly protects workers by, for instance, making layoffs very difficult.

The people who don’t get laid off are seen, but what about the unseen consequences of such laws?

Well, an academic study from three French economists has some sobering findings for those who think regulation and “social protection” are good for workers.

…this study proposes an econometric investigation of the effects of the OECD Employment Protection Legislation (EPL) indicator… The originality of our paper is to study the effects of labour market regulations on capital intensity, capital quality and the share of employment by skill level using a symmetric approach for each factor using a single original large database: a country-industry panel dataset of 14 OECD countries, 18 manufacturing and market service industries, over the 20 years from 1988 to 2007.

One of the findings from the study is that “EPL” is an area where the United States has always had an appropriately laissez-faire approach (which is also evident from the World Bank’s data in the Doing Business Index).

Here’s a chart showing the US compared to some other major developed economies.

It’s good to see, by the way, that Denmark, Finland, and the Netherlands engaged in some meaningful reform between 1994 and 2006.

But let’s get back to our main topic. What actually happens when nations have high or low levels of Employment Protection Legislation?

According to the research of the French economists, high levels of rules and regulations cause employers to substitute capital for labor, with low-skilled workers suffering the most.

Our main estimation results show an EPL effect: i) positive for non-ICT physical capital intensity and the share of high-skilled employment; ii) non-significant for ICT capital intensity; and (iii) negative for R&D capital intensity and the share of low-skilled employment. These results suggest that an increase in EPL would be considered by firms to be a rise in the cost of labour, with a physical capital to labour substitution impact in favour of more non-sophisticated technologies and would be particularly detrimental to unskilled workers. Moreover, it confirms that R&D activities require labour flexibility. According to simulations based on these results, structural reforms that lowered EPL to the “lightest practice”, i.e. to the US EPL level, would have a favourable impact on R&D capital intensity and would be helpful for unskilled employment (30% and 10% increases on average, respectively). …The adoption of this US EPL level would require very largescale labour market structural reforms in some countries, such as France and Italy. So this simulation cannot be considered politically and socially realistic in a short time. But considering the favourable impact of labour market reforms on productivity and growth. …It appears that labour regulations are particularly detrimental to low-skilled employment, which is an interesting paradox as one of the main goals of labour regulations is to protect low-skilled workers. These regulations seem to frighten employers, who see them as a labour cost increase with consequently a negative impact on low-skilled employment.

There’s a lot of jargon in the above passage for those who haven’t studied economics, but the key takeaway is that employment for low-skilled workers would jump by 10 percent if other nations reduced labor-market regulations to American levels.

Though, as the authors point out, that won’t happen anytime soon in nations such as France and Italy.

Now let’s review an IMF study that looks at what happened when Germany substantially deregulated labor markets last decade.

After a decade of high unemployment and weak growth leading up to the turn of the 21st century, Germany embarked on a significant labor market overhaul. The reforms, collectively known as the Hartz reforms, were put in place in three steps between January 2003 and January 2005. They eased regulation on temporary work agencies, relaxed firing restrictions, restructured the federal employment agency, and reshaped unemployment insurance to significantly reduce benefits for the long-term unemployed and tighten job search obligations.

And when the authors say that long-term unemployment benefits were “significantly” reduced, they weren’t exaggerating.

Here’s a chart from the study showing the huge cut in subsidies for long-run joblessness.

So what were the results of the German reforms?

To put it mildly, they were a huge success.

…the unemployment rate declined steadily from a peak of almost 11 percent in 2005 to five percent at the end of 2014, the lowest level since reunification. In contrast, following the Great Recession other advanced economies — particularly in the euro area — experienced a marked and persistent increase in unemployment. The strong labor market helped Germany consolidate its public finances, as lower outlays on unemployment benefits resulted in lower spending while stronger taxes and social security contribution pushed up revenues.

Gee, what a shocker. When the government stopped being as generous to people for being unemployed, fewer people chose to be unemployed.

Which is exactly what happened in the United States when Congress finally stopped extending unemployment benefits.

And it’s worth noting that this was also a period of good fiscal policy in Germany, with the burden of spending rising by only 0.18 percent annually between 2003 and 2007.

The main lesson of all this research is that while some politicians probably have noble motives when they adopt “social protection” legislation, in the real world there’s nothing “social” about laws and regulations that either discourage employers from hiring people or discourage people from finding jobs.

P.S. Another example of “seen” vs “unseen” is how supposedly pro-feminist policies actually undermine economic opportunity for women.

A big story to come out of the last G-20 summit was that the Russians and Saudis were talking oil (read: an oil cooperation agreement). With that, everyone asked, again, where are oil prices headed? To answer that question, one has to have a model – a way of thinking about the problem. In this case, my starting point is Roy W. Jastram’s classic study, The Golden Constant: The English and American Experience 1560-2007. In that work, Jastram finds that gold maintains its purchasing power over long periods of time, with the prices of other commodities adapting to the price of gold. 

Taking a lead from Jastram, let’s use the price of gold as a long-term benchmark for the price of oil. The idea is that, if the price of oil changes dramatically, the oil-gold price ratio will change and move away from its long-term value. Forces will then be set in motion to shift the supply of and demand for oil. In consequence, the price of oil will change and the long-term oil-gold price ratio will be reestablished. Via this process, the oil-gold ratio will revert to its mean, with changes in the price of oil doing most of the work.

For example, if the price of oil slumps, the oil-gold price ratio will collapse. In consequence, exploration for and development of oil reserves will become less attractive and marginal production will become uneconomic. In addition to the forces squeezing the supply side of the market, low prices will give the demand side a boost. These supply-demand dynamics will, over time, move oil prices and the oil-gold price ratio up. This is what’s behind the old adage: there is nothing like low prices to cure low prices.

We begin our analysis of the current situation by calculating the oil-gold price ratios for each month. For example, as of September 5th, oil was trading at $46.97/bbl and gold was at $1323.50/oz. So, the oil-gold price ratio was 0.035. In June 2014, when oil was at its highs, trading at $107.26/bbl and gold was at $1314.82/oz, the oil-gold price ratio was 0.082. 

We can calculate these ratios over time. Those ratios are presented in the accompanying chart, starting in 1973 (the post-Bretton Woods period).  

Two things stand out in the histogram. First, the recent oil price collapse was extreme: the February 2016 oil-gold price ratio lies far to the left of the distribution, with less than one percent of the distribution to its left. Second, the ratio is slowly reverting to the mean, with the September 2016 ratio approaching 0.04.

But how long will it take for the ratio to revert to its mean? My calculations (based on post-1973 data) are that a 50 percent reversion of the ratio will occur in 13.7 months. This translates into a WTI price of $60 per barrel by March 2017 – almost exactly hitting OPEC’s sweet spot. It is worth noting that, like Jastram, I find that oil prices revert to the long-run price of gold, rather than the price of gold reverting to that of oil. So the oil-gold price ratio reverts to its mean via changes in the price of oil.
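
For readers who want the mechanics, here is a hedged sketch of the calculation. The long-run mean ratio below is an assumption standing in for the actual post-1973 estimate, so the outputs are illustrative rather than a reproduction of the figures above:

```python
# Sketch of the oil-gold ratio model described above. The long-run
# mean ratio and the 13.7-month half-life are stand-ins for estimates
# from post-1973 monthly data; outputs are illustrative only.

oil, gold = 46.97, 1323.50    # Sept 5, 2016 prices: $/bbl and $/oz
ratio = oil / gold            # ~0.035

LONG_RUN_MEAN = 0.0525        # assumed long-run oil-gold ratio
HALF_LIFE = 13.7              # months for a 50 percent reversion

def projected_ratio(months: float) -> float:
    """Exponential reversion of the ratio toward its long-run mean."""
    decay = 0.5 ** (months / HALF_LIFE)
    return LONG_RUN_MEAN + (ratio - LONG_RUN_MEAN) * decay

# Reversion works through the oil price, so hold gold fixed and
# translate each projected ratio back into an implied WTI price.
for months in (0, 6, 13.7, 24):
    r = projected_ratio(months)
    print(f"month {months:>5}: ratio {r:.4f}, implied WTI ${r * gold:.2f}/bbl")
```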

The accompanying chart shows the price projection based on the oil-gold price ratio model, along with the historical course of prices. Prices are doing just what the golden constant predicts: moving up. That said, there remains a significant gap between the January 2018 futures price of WTI, which stands at $51.50/bbl, and the implied price estimate of $70.06/bbl generated by the oil-gold ratio model. Best to be long oil.

As a young professional woman myself, lately I’ve grown fatigued by the media’s ongoing portrayal of women as victims of circumstance. Media messaging on one topic in particular – the gender pay gap – is especially discouraging because it’s assembled on the basis of flimsy facts. Although it necessitates a voyage outside my traditional topical expertise, setting the record straight seems a sufficiently worthwhile activity to justify the trip.

Let’s begin with the numbers. Hillary Clinton and others allege that women get paid 76 cents for every dollar a man gets paid – an alarming workplace injustice, if it’s true.

The 76-cent figure is based on a comparison of median domestic wages for men and women. Unfortunately, comparing men’s and women’s wages this way is misleading, because men and women make different career choices that impact their wages: 1) men and women work in different industries with varying levels of profitability, and 2) men and women on average make different family, career, and lifestyle trade-offs.

For example, BLS statistics show that only 35% of professionals involved in securities, commodities, funds, trusts, and other financial investments and 25% of professionals involved in architecture, engineering, and computer systems design are women. On the other hand, women dominate the field of social assistance, at 85%, and education, with females holding 75% of jobs in elementary and secondary schools.

An August 2016 National Bureau of Economic Research study, Does Rosie Like Riveting? Male and Female Occupational Choices, suggests that industry segregation may not be structural or even coincidental. According to the authors of the study, women may select different jobs than men because they “may care more about job content, and this is a possible factor preventing them from entering some male dominated professions.”

Another uncomfortable truth for the 76-cent crowd: women are considerably more likely to absorb care-taking responsibilities within their families, and these roles demand associated career trade-offs. Sheryl Sandberg’s Lean In reports that 43% of highly qualified women with children leave their careers or off-ramp for a period of time. And a recent Harvard Business Review report describes women as being more likely than men to make decisions “to accommodate family responsibilities, such as limiting (work-related) travel, choosing a more flexible job, slowing down the pace of one’s career, making a lateral move, leaving a job, or declining to work toward a promotion.”

It’s fair to assume that such interruptions impact long-term wages substantially. In fact, when researchers try to control for these differences, the wage gap virtually disappears. A recent Glassdoor study that made an honest attempt to get beyond the superficial numbers showed that after controlling for age, education, years of experience, job title, employer, and location, the gender pay gap fell from nearly twenty-five cents on the dollar to around five cents on the dollar. In other words, women are making 95 cents for every dollar men are making, once you compare men and women with similar educational, experiential, and professional characteristics.
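
The logic of “controlling for” characteristics can be illustrated with a toy composition example (all numbers invented, and far cruder than an actual regression with controls): when men and women sort into differently paid fields, the pooled gap overstates the within-field gap.

```python
# Toy illustration (all numbers invented) of how controlling for
# occupation can shrink a raw pay gap: men and women here sort into
# differently paid fields, so the pooled averages diverge far more
# than the within-occupation pay ratios do.

jobs = {
    # field:      (avg male pay, avg female pay,
    #              share of all men here, share of all women here)
    "engineering": (100_000, 95_000, 0.8, 0.2),
    "teaching":    ( 50_000, 48_000, 0.2, 0.8),
}

male_avg   = sum(m * m_share for m, f, m_share, f_share in jobs.values())
female_avg = sum(f * f_share for m, f, m_share, f_share in jobs.values())
print(f"raw ratio: women earn {female_avg / male_avg:.0%} of men")  # ~64%

for field, (m, f, *_shares) in jobs.items():
    print(f"{field}: within-field ratio {f / m:.0%}")  # 95% and 96%
```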

It’s worth noting that the Glassdoor study could only control for obvious differences between professional men and women. It’s likely that other, more nuanced but documented differences, like spending fewer hours on paid work per week, would explain some of the remaining five-cent pay differential.

Now, don’t misunderstand. Certainly somewhere a degenerate, sexist hiring manager exists. Someone who thinks to himself: you’re a woman, so you deserve a pay cut. But rather than being the rule, this seems to be the exception. In fact, the data seem to indicate that the decisions that impact wages are more likely due to cultural and societal expectations. A recent study shows that a full two-thirds of Harvard-educated Millennial-generation men expect their partners to handle the majority of child care. It’s possible that women would make different, more lucrative career decisions given different social or cultural expectations.

Or maybe they wouldn’t. But in the meantime, Hillary’s “equal pay for equal work” rallying cry is irresponsible in that it perpetuates a workplace myth: by painting women as victims of workplace discrimination when they’re not, it holds my sex psychologically hostage by stripping us of the very confidence we need to succeed. It also unhelpfully directs our focus away from the real barrier to long-term earning power – social and cultural pressures – in favor of an office witch hunt.

And that’s why, on the gender pay gap, I’m not with her.

When it was first released back in April, a “discussion draft” of the Compliance With Court Orders Act sponsored by Sens. Dianne Feinstein (D-CA) and Richard Burr (R-NC) met with near universal derision from privacy advocates and security experts. (Your humble author was among the critics.) In the wake of that chilly reception, press reports were declaring the bill effectively dead just weeks later, even as law enforcement and intelligence officials insisted they would continue pressing for a solution to the putative “going dark” problem that encryption creates for government eavesdroppers.  Feinstein and Burr, however, appear not to have given up on their baby: Their offices have been circulating a revised draft, which I’ve recently gotten hold of.

To protect my source’s anonymity, I won’t post the document itself, but it’s easy enough to summarize. The 2.0 version is mostly identical to the original version, with four main changes:

(1) Narrower scope

The original draft required a “covered entity” to render encrypted data “intelligible” to government agents bearing a court order if the data had been rendered unintelligible “by a feature, product, or service owned, controlled, created, or provided, by the covered entity or by a third party on behalf of the covered entity.” The new version deletes “owned,” “created,” and “provided”—so the primary mandate now applies only to a person or company that “controls” the encryption process.

(2) Limitation to law enforcement

The revised version eliminates section (B) under the bill’s definition of “court order,” which obligated recipients to comply with decryption orders issued in investigations related to “foreign intelligence, espionage, and terrorism.” The bill is now strictly about law enforcement investigations into a variety of serious crimes, including federal drug crimes and their state equivalents.

(3) Exclusion of critical infrastructure

A new subsection in the definition of the “covered entities” to whom the bill applies specifically excludes “critical infrastructure,” adopting the definition of that term from 42 USC §5195c.

(4) Limitation on “technical assistance” obligations

The phrase “reasonable efforts” has been added to the definition of the “technical assistance” recipients can be required to provide. The original draft’s obligation to provide whatever technical assistance is needed to isolate requested data, decrypt it, and deliver it to law enforcement is replaced by an obligation to make “reasonable efforts” to do these things.

Those changes aside, it’s the same bill—and it still includes the problematic mandate that distributors of software licenses, like app stores, ensure that the software they distribute is “capable of complying” with the law. (As I’ve argued previously, it is very hard to imagine how open-source code repositories like GitHub could effectively satisfy this requirement.) So what do these changes amount to? Let’s take them in order.

The first change is on its face the most significant one by a wide margin, but it’s also the one I’m least confident I understand clearly. If we interpret “control” of an encryption process in the ordinary-language sense—and in particular as something conceptually distinct from “ownership,” “provision,” or “creation”—then the law becomes radically narrower in scope, but it also fails to cover most of the types of cases cited in discussions of the “going dark” problem. When a user employs a device or application to encrypt data with a user-generated key, that process is not normally under the “control” of the entity that “created” the hardware or software in any intuitive sense. On the other hand, when a company is in direct control of an encryption process—as when a cloud provider applies its own encryption to data uploaded by a user—it would typically (though by no means necessarily) retain both the ability to decrypt and an obligation to do so under existing law. So what’s going on here?

One obvious possibility, assuming that narrow reading of “controlled,” is that the revised bill is very specifically targeting companies like Apple that are seeking to combine the strong security of end-to-end encryption with the convenience of cloud services. At the recent Black Hat security conference, Apple introduced its “Cloud Key Vault” system. The critical innovation there was finding a way to let users back up and synchronize across devices some of their most sensitive data—the passwords and authentication tokens that safeguard all their other sensitive data—without giving Apple itself access to the information. The details are complex, but the basic idea, oversimplifying quite a bit, is that Apple’s backup systems will act like a giant iPhone: user data is protected with a combination of the user’s password and a strong encryption key that’s physically locked into a hardware module and can’t be easily extracted. Like the iPhone, it will defend against “brute force” attacks on the user-passcode component of the decryption key by limiting the number of permissible guesses. The critical difference is that Apple has essentially destroyed its own ability to change or eliminate that guess limit.
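
To make the architecture concrete, here is a toy sketch of the general idea of a guess-limited key vault. This is emphatically not Apple’s actual protocol (which relies on hardware security modules and far more cryptographic machinery); the class, the guess limit, and the key-derivation choices are all illustrative assumptions:

```python
# Toy sketch of a guess-limited key vault. NOT Apple's actual
# protocol; all names and parameters here are illustrative.
import hashlib, os, secrets

class ToyKeyVault:
    MAX_GUESSES = 10                      # assumed limit, illustrative only

    def __init__(self, passcode: str, key_material: bytes):
        self.salt = os.urandom(16)
        self.hsm_secret = os.urandom(32)  # stands in for the HSM-bound key
        self.guesses = 0
        self._verifier = self._derive(passcode)
        self._key_material = key_material # released only on a correct guess

    def _derive(self, passcode: str) -> bytes:
        # Derivation mixes the passcode with the hardware-bound secret,
        # so guessing only works against the vault itself -- which
        # enforces the limit -- not against an offline copy of the data.
        return hashlib.pbkdf2_hmac(
            "sha256", passcode.encode(), self.salt + self.hsm_secret, 100_000)

    def unlock(self, passcode: str) -> bytes | None:
        if self.guesses >= self.MAX_GUESSES:
            raise RuntimeError("vault locked: guess limit exhausted")
        self.guesses += 1
        if secrets.compare_digest(self._derive(passcode), self._verifier):
            self.guesses = 0
            return self._key_material
        return None

vault = ToyKeyVault("hunter2", key_material=os.urandom(32))
assert vault.unlock("wrong-guess") is None
assert vault.unlock("hunter2") is not None
```

The crucial detail, per the paragraph above, is that in Apple’s design the analogue of MAX_GUESSES is enforced in hardware, and Apple has destroyed its own ability to raise it.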

This may not sound like a big deal, but it addresses one of the big barriers to more widespread adoption of strong end-to-end encryption: convenience. The encrypted messaging app Signal, for example, provides robust cryptographic security with a conspicuous downside: it’s tethered to a single device that holds a user’s cryptographic keys. That’s because any process that involves exporting those keys so they can be synced across multiple devices—especially if they’re being exported into “the cloud”—represents an obvious and huge weak point in the security of the system as a whole. The user wants to be able to access their cloud-stored keys from a new device, but if those keys are protected only by a weak, human-memorable password, they’re highly vulnerable to brute-force attacks by anyone who can obtain them from the cloud server. That may be an acceptable risk for someone who’s backing up their Facebook password, but not so much for, say, authentication tokens used to control employee access to major corporate networks—the sort of stuff that’s likely to be a target for corporate espionage or foreign intelligence services. Over the medium to long term, our overall cybersecurity is going to depend crucially on making security convenient and simple for ordinary users accustomed to seamlessly switching between many devices. So we should hope and expect to see solutions like Apple’s more widely adopted.

For intelligence and law enforcement, of course, better security is a mixed blessing. For the time being, as my co-authors and I noted in the Berkman Center report Don’t Panic, the “going dark” problem is substantially mitigated by the fact that users like to back stuff up and like the convenience of syncing across devices—so however unbreakable the disk encryption on a user’s device might be, a lot of useful data is still going to be obtainable from those cloud servers. Agencies have got to be nervous about the prospect of a world where all that cloud data is effectively off the table because it becomes practical to encrypt it with key material that’s securely syncable across devices but still inaccessible, even to an adversary who can run brute-force attacks, without the user’s password.

If this interpretation of the bill’s intent is right, it’s particularly politically canny.  You propose to saddle every developer with a backdoor mandate, or break the mechanism everyone’s Web browser uses to make a secure connection, and you can expect a whole lot of pushback from both the tech community and the Internet citizenry.  Tell people you’re going to mess with technology their security already depends upon—take away something they have now—and folks get upset. But, thanks to a well-known form of cognitive bias called “loss aversion,” they get a whole lot less upset if you prevent them from getting a benefit (here, a security improvement) most aren’t yet using. And that will be true even if, in the neverending cybersecurity arms race, it’s an improvement that’s going to be necessary over the long run even to preserve current levels of overall security against increasingly sophisticated attacks.

That strikes me, at least for now, as the most plausible read on the “controlled by” language. But another possibility (entirely compatible with the first) is that courts and law enforcement will construe “controlled by” more broadly than I am. If the FBI gives Apple custody of an iPhone, which is running gatekeeper software that Apple can modify, does it become a technology “controlled by” Apple at the time the request is made, even if it wasn’t under their control at the time the data was encrypted?  If the developer of an encrypted messaging app—which, let’s assume, technically retains ownership of the software while “licensing” it to the end user—pushes out regular automated updates and runs a directory server that mediates connections between users, is there some sense in which the entire process is “controlled by” them even if the key generation and encryption runs on the user’s device?  My instinct is “no,” but I can imagine a smart lawyer persuading a magistrate judge the answer is “yes.” One final note here: It’s a huge question mark in my mind how the mandate on app stores to ensure compliance interacts with the narrowed scope. Can they now permit un-backdoored applications as long as the encryption process isn’t “controlled by” the software developers? How do they figure out when that’s the case in advance of litigation?

Let’s move on to the other changes, which mercifully we can deal with a lot more briefly. The exclusion of intelligence investigations from the scope of the bill seems particularly odd given that the bill’s sponsors are, after all, the chair and vice chair of the Senate Intelligence Committee, with the intelligence angle providing the main jurisdictional hook for them to be taking point on the issue at all. But it makes a bit more sense if you think of it as a kind of strategic concession in a recurring jurisdictional turf war with the judiciary committees. The sponsors would effectively be saying: “Move our bill, and we’ll write it in a way that makes it clear you’ve got primary jurisdiction.” Two other alternatives: The intelligence agencies, which have both intelligence-gathering and cybersecurity-assurance responsibilities, have generally been a lot more lukewarm than law enforcement about the prospect of legislation mandating backdoors, so this may be a way of reducing their role in the debate over the bill. Or it may be that, given the vast amount of collection intelligence agencies engage in compared with domestic law enforcement—remember, there are nearly 95,000 foreign “targets” of electronic surveillance just under §702 of the FISA Amendments Act—technology companies are a lot more skittish about being inundated with decryption and “technical assistance” requests from those agencies, while the larger ones, at least, might expect the compliance burden to be more manageable if the obligation extends only to law enforcement.

I don’t have much insight into the critical infrastructure carve-out; if I had to guess, I’d hazard that some security experts were particularly worried about the security implications of mandating backdoors in software used in especially vital systems at the highest risk of coming under attack by state-level adversaries.  That’s an even bigger concern when you recall that the United States is contemplating bilateral agreements that would let foreign governments directly serve warrants on technology companies.  We may have a “special relationship” with the British, but perhaps not so special that we want them to have a backdoor into our electrical grid.  One huge and (I would have thought) obvious wrinkle here: Telecommunications systems are a canonical example of “critical infrastructure,” which seems like a pretty big potential loophole.

The final change is the easiest to understand: Tech companies don’t want to be saddled with an unlimited set of obligations, and they certainly don’t want to be strictly liable to a court for an outcome they can’t possibly guarantee is achievable in every instance. With that added limitation, however, it becomes less obvious whether a company is subject to sanction if it has designed its products so that a successful attack always requires unreasonable effort. “We’ll happily provide the required technical assistance,” it might say, “as soon as the FBI can think up an attack that requires only reasonable effort on our part.” That would be a little cheeky, but a company might well be able to sell it to a court as technically compliant, depending on the facts of a particular case.

So those are my first-pass thoughts. Short version: The new draft is potentially a good deal narrower than the original version of the bill, and therefore not subject to all the same objections the original met with. It is still a pretty bad idea. This debate clearly isn’t going away, however, and the latest iteration of the backdoor bill is unlikely to be the last we’ll see.

Iceland will hold early elections in October following the resignation of former Prime Minister Gunnlaugsson. One aggregation of polls has the upstart Pirate Party in the lead by four percentage points, and the party may be in prime position to form Iceland’s next government. They have an eclectic suite of policies in their party platform, some of them interesting and not all of them desirable. In a narrow sense, their elevation could lead to the development of a basic income experiment due to the shortcomings they perceive in Iceland’s current welfare system. Another pilot program for a basic income could help find more answers to the many questions that still surround the idea.

Last year the party’s MPs introduced a proposal calling for the government to form a working group to investigate the feasibility of shifting to a basic income that would “replace, or at least simplify” their current system. As with most discussions about the desirability of such a shift, the details are incredibly important, and to a large extent these proposals cannot be evaluated until more elements of the plan are decided.

If this is an unconditional income that is grafted onto the current framework, it would likely end up being unaffordable without addressing the work disincentives and other problems currently in place. If, however, it replaces the patchwork of programs with a simplified benefit going directly to people instead of being transmitted through a series of in-kind or specified programs, it could potentially be an improvement over the status quo.

One thing is certain: the current system can deter work for low-income households in those programs and ultimately make it harder for them to prosper. A single parent with two children who transitions from inactivity to a full-time job paying two-thirds of the country’s average wage faces an effective tax rate of 73 percent, meaning she would lose almost three quarters of each dollar of earnings to lower benefits and higher taxes. This makes work a less attractive option. And it’s not just the move from inactivity to work; low-wage workers face a similar trap. A single parent with two children faces a 54 percent rate in moving from a low-wage job paying one-third of the average wage to one paying two-thirds. This trap has gotten worse recently: the rate is up significantly from 46 percent in 2002. Moving to a form of basic income could reduce these work disincentives in the right framework, but much depends on the details of the plan and how it is implemented.
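
To see where a figure like 73 percent comes from, here is a minimal sketch of the participation-tax-rate arithmetic. The split between taxes and withdrawn benefits is invented for illustration; only the 73 percent total comes from the text above:

```python
# How an effective "participation tax rate" is computed: the share of
# new gross earnings clawed back through taxes and withdrawn benefits.
# The 30/43 split below is invented; only the 73% total is from the text.

def participation_tax_rate(gross_gain: float,
                           taxes_paid: float,
                           benefits_withdrawn: float) -> float:
    return (taxes_paid + benefits_withdrawn) / gross_gain

# Take a job paying 100 (in any currency unit): pay 30 in new taxes,
# lose 43 in withdrawn benefits, and keep only 27 of the 100 earned.
rate = participation_tax_rate(100, taxes_paid=30, benefits_withdrawn=43)
print(f"effective rate: {rate:.0%}")    # 73%
```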

Last year’s proposal cited the experience in Finland, where researchers recently announced they will move forward with a limited trial in which a randomly selected group of two to three thousand already-unemployed people will receive a basic income of about $600 each month for a period of two years. The new cash grant will replace their existing benefits, and researchers will assess the impact of the change on poverty, employment rates, and bureaucracy. The Finnish experiment will be more limited in scope than some initial reports suggested, so a potentially more expansive experiment in Iceland could help to test other aspects, like the community-wide effects when a meaningful portion of residents are in the new regime instead of the existing framework.

Even something along these lines will leave some fundamental questions surrounding a basic income unanswered: concerns about affordability, sustainability, and work impact for a program that is permanent instead of limited to a two-year period. Scaling up an unconditional benefit to the entire country would present funding concerns not present with a limited pilot program. And a later generation that grew up with a basic income framework in place could respond significantly differently, in terms of work effort, than one that shifted to it well into its working life.

Getting more information about how these other models compare to current welfare systems along these metrics is crucially important for countries considering reform. The many flaws of the current system are well-known and past work at Cato has delved into them at length. It is not yet clear whether these new systems will fall prey to some of the same problems as the old framework when they are implemented, or even be practically feasible. Until we know much more about the ramifications of such a shift, it is much too early to consider large-scale adoption of anything along these lines. These experiments and demonstrations are necessary because they will help us get more data to try to find answers to some of these questions. We may be getting another test case in Iceland soon.

Pinal County, Arizona was in danger of being the fourth place where ObamaCare caused insurance markets to collapse. As of last month, every private health insurance company selling ObamaCare coverage in the county had announced it would no longer do so in 2017. Had that scenario come to pass, it would have tossed nearly 10,000 residents out of their Exchange plans and left them to buy ObamaCare coverage outside the Exchange, with no taxpayer subsidies to make the coverage “affordable.” If they didn’t buy that unaffordable coverage, ObamaCare would still subject them to penalties, at least until the Secretary of Health and Human Services intervened.

It appears that Pinal County has avoided that fate. Blue Cross Blue Shield of Arizona has announced that, despite reservations, it will sell ObamaCare coverage in Pinal County next year. Pinal County now joins 13 other Arizona counties, one third of counties nationwide, and seven states that will have only one carrier in the Exchange.

The Wall Street Journal reports, “The insurer’s agreement represents a victory for federal and state officials, who have been pushing to resolve the situation in the weeks since it emerged.” The Journal neglects to mention this “victory” for the government comes at the expense of Exchange enrollees and taxpayers. The Associated Press reports Blue Cross Blue Shield of Arizona will be increasing premiums on Exchange enrollees by a whopping 51 percent. Imagine your annual ObamaCare premium is $14,000. Now imagine it rising by $7,000. Some victory.

If you live in Arizona, you may not have to imagine. Blue Cross Blue Shield of Arizona will be the only carrier selling on the Exchange in 13 of 14 counties in the state, and has requested what Charles Gaba describes as “a whopping 51.2% average [premium increase] statewide.” Remember when President Obama excoriated insurers for “unacceptable” premium increases as high as 39 percent? Remember when he promised ObamaCare would make such exorbitant premium increases a thing of the past?

Now that private insurance executives have pulled ObamaCare’s cookies out of the fire, in Arizona and everywhere else the Exchanges are down to a single carrier, the industry appears to have the ObamaCare ideologues over a barrel. Blue Cross Blue Shield of Arizona, which has lost close to $250 million in the Exchange, no doubt has a wish list of regulatory and legislative changes that would bail it out (“stabilize the market,” in the preferred parlance). The Obama administration and state regulators appear ready to hand insurance companies the keys to the treasury.

ObamaCare supporters signal their willingness to bail out private insurance companies, and their disdain for taxpayers and Exchange enrollees, every time they suggest these premium hikes are no big deal because the government subsidizes premiums. Government subsidies do not reduce the cost of ObamaCare coverage; they simply shift it from enrollees to taxpayers. And half of ObamaCare enrollees don’t even get subsidies. Robert Laszewski writes, “many millions of people…have no choice but to take the full whack from these rate increases if they want to stay covered.”

Those who are ideologically committed to ObamaCare, or utterly dependent on it, can rejoice that it didn’t collapse today. Those who seek a sustainable and secure system of subsidies for the sick aren’t celebrating. The fact that ObamaCare has sunk so low, and may sink lower still, means the case for repeal is stronger than ever.

Based on the title of this column, you may think I’m going to write about oppressive IRS behavior or punitive tax policy.

Those are good guesses, but today’s “brutal tax beating” is about what happens when a left-leaning journalist writes a sophomoric column about tax policy and then gets corrected by an expert from the Tax Foundation.

The topic is the tax treatment of executive compensation, which is something of a mess. Part of Bill Clinton’s 1993 tax hike was a provision barring companies from deducting executive compensation above $1 million when compiling their tax returns, which amounted, for all intents and purposes, to an additional back-door 35-percent tax penalty on salaries paid to CEO types. To minimize the damaging impact of this discriminatory penalty, particularly on start-up firms, the extra tax did not apply to performance-based compensation such as stock options.
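
To see why losing the deduction works out to a back-door 35-percent surcharge, here is a minimal sketch of the arithmetic in Python. The 35 percent rate and the $1 million cap come from the provision just described; the $5 million salary is a hypothetical:

```python
# Sketch of the $1 million deduction cap. A deduction shields income
# from the corporate tax, so losing the deduction on pay above
# $1 million costs the firm corporate_rate * excess in extra tax:
# a back-door penalty on that pay.

CORPORATE_RATE = 0.35        # statutory corporate rate of the period
DEDUCTION_CAP = 1_000_000    # the Clinton-era cap on deductible salary

def extra_corporate_tax(salary: float) -> float:
    """Added corporate tax from the non-deductible portion of a salary."""
    excess = max(0.0, salary - DEDUCTION_CAP)
    return CORPORATE_RATE * excess

# A hypothetical $5 million CEO salary: $4 million is non-deductible,
# so the firm owes 0.35 * $4 million = $1.4 million in extra tax.
print(round(extra_corporate_tax(5_000_000)))  # 1400000
```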

In a good and simple tax system, which taxes income only one time (including business income), the entire provision would be repealed.

But when Alvin Chang, a graphics reporter from Vox, wrote a column on this topic, he made the remarkable claim that somehow taxpayers are subsidizing big banks because the aforementioned penalty does not apply to performance-based compensation.

…the government doesn’t tax performance-based pay for…any…top bank executive in America. Unlike regular salaries — where the government takes out taxes to pay for Medicare, Social Security, and all other sorts of things — US tax code lets banks deduct the big bonuses they give to their executives. … The solution most Americans want is to either heavily tax CEO pay over a certain amount, or to set a strict cap on how much CEOs can make, relative to their workers. As long as this loophole is open, though, it makes sense for banks to continue paying executives these huge sums. …for now, taxpayers are still ponying up to help make wealthy bankers even wealthier, because the US tax code encourages it.

Since Mr. Chang is a graphics reporter, you won’t be surprised that he included several images to augment his argument.

Here’s one making the case that companies should pay a 35 percent tax on performance-based pay for CEO types. Keep in mind, as you peruse this image, that recipients of performance-based pay have to declare that income on their 1040s and pay up to 39.6 percent individual income tax on it.

And here’s Chang’s look at how much money the IRS could have collected from big banks in recent years if the anti-CEO tax penalty were extended to performance-based pay.

When I look at these images, my gut reaction is to be offended that Chang equates “taxpayers” with the federal government.

So I would change the caption of the first image so it ended, “…this pile would be diverted from shareholders to politicians.”

And the caption in the second image would read, “This is the amount it saved taxpayers.”

But Chang’s argument is also flawed for much deeper reasons. Scott Greenberg of the Tax Foundation debunks his entire column. Not just debunks. Eviscerates. Destroys.

Here are some of the highlights.

…the article contains several factual errors and misleading claims about how CEOs are taxed in America. The article begins by making an incorrect claim: that the federal government does not tax performance-based CEO pay… This is simply untrue. Under the U.S. tax code, households are generally required to pay individual income taxes on the value of the stock options and bonuses that they receive…up to 39.6% on the performance-based pay… The article continues with another false assertion…it claims that CEO performance-based pay is not subject to the same Social Security and Medicare payroll taxes as “regular salaries.” In fact, all employee compensation, including CEO pay, is subject to Medicare payroll taxes, and high-income individuals actually pay a higher Medicare payroll tax rate than most other employees. …it claims that U.S. businesses are allowed to deduct CEO pay but are not allowed to deduct “regular salaries.” This is patently incorrect. Under the U.S. tax code, businesses are allowed to deduct virtually all compensation to employees. In fact, the only major exception to this rule is that businesses are only allowed to deduct $1 million in non-performance-based salaries to CEOs. This means that the U.S. tax code gives the same, if not worse, treatment to CEO compensation as “regular salaries.”

Scott also addresses the silly assertion that deductions for CEO compensation are some sort of subsidy.

You probably wouldn’t claim that taxpayers are subsidizing the restaurant worker’s salary, because the deduction for employee compensation is a regular, structural feature of the tax code. In general, businesses in the U.S. are taxed on their revenues minus their expenses, and the salary paid to the worker is a business expense like any other. The same argument applies for CEO compensation. When a business pays a CEO $155 million, it has increased its expenses and decreased its profits. The normal logic of U.S. tax law dictates that the business be allowed to deduct the CEO’s compensation from its taxable income. Then, the CEO is required to pay individual income taxes on the compensation.
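
Greenberg’s logic is easy to check with a toy calculation. In the sketch below, the $155 million of CEO pay is the figure he uses; the revenue and expense numbers are hypothetical, and the model deliberately ignores brackets and payroll taxes:

```python
# Toy model of Greenberg's point: a business is taxed on profits
# (revenue minus expenses), and employee compensation, CEO or
# otherwise, is an ordinary expense. The pay is then taxed on the
# CEO's 1040, so the deduction is a structural feature, not a subsidy.

CORPORATE_RATE = 0.35         # statutory corporate rate of the period
TOP_INDIVIDUAL_RATE = 0.396   # top individual income tax rate

revenue = 500_000_000         # hypothetical
other_expenses = 300_000_000  # hypothetical
ceo_pay = 155_000_000         # the figure Greenberg uses

profit = revenue - other_expenses - ceo_pay
corporate_tax = CORPORATE_RATE * max(profit, 0)
ceo_income_tax = TOP_INDIVIDUAL_RATE * ceo_pay  # ignores brackets

print(f"taxable profit: {profit:,}")                  # 45,000,000
print(f"corporate tax: {corporate_tax:,.0f}")         # 15,750,000
print(f"tax on the CEO's pay: {ceo_income_tax:,.0f}") # 61,380,000
```

Deny the deduction and the same $155 million would be taxed twice, once inside the firm and once on the CEO’s return, which is exactly the penalty the 1993 provision imposes on non-performance salaries above $1 million.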

The bottom line, as Scott points out, is that Bill Clinton’s provision means that CEO pay is penalized rather than subsidized.

…wages and salaries of CEOs are penalized relative to the wages and salaries of regular employees, while performance-based compensation is taxed in the same manner as regular wages and salaries. In sum, it is simply wrong to say that the federal tax code subsidizes CEO pay.

Game, set, and match. Mr. Chang should stick to graphics rather than tax policy.

And policymakers should resist tax policies based on envy and resentment, since the net result is a tax code that is needlessly complex and pointlessly destructive.

One final moral to the story: If there’s ever a tax fight between Vox and the Tax Foundation, always bet on the latter.

A preliminary draft paper on transparency that Cass Sunstein posted last month inspired Vox’s Matthew Yglesias to editorialize “Against Transparency” this week. Both are ruminations that shouldn’t be dealt with too formally, and in that spirit I’ll say that my personal hierarchy of needs doesn’t entirely overlap with Yglesias’.

In defense of selective government opacity, he says: “We need to let public officials talk to each other — and to their professional contacts outside the government — in ways that are both honest and technologically modern.”

Speak for yourself, buddy! The status quo in government management may need that, but that status quo is no need of mine.

A pithy, persuasive response to Yglesias came from the AP’s Ted Bridis, who pointed out via Twitter the value of recorded telephone calls for unearthing official malfeasance. Recordings reveal, for example, that in 2014 U.S. government officials agreed to restrict more than 37 square miles of airspace surrounding Ferguson, Missouri, in response to local officials’ desire to keep news helicopters from viewing the protests there. Technological change might counsel putting more of public officials’ communications “on the record,” not less.

It’s wise of Sunstein to share his piece in draft—in its “pre-decisional” phase, if you will—because his attempt to categorize information about government decision-making as “inputs” and “outputs” loses its intuitiveness as you go along. Data collected by the government is an output, but when it’s used for deciding how to regulate, it’s an input, etc. These distinctions would be hard to internalize and administer, certainly at the scale of a U.S. federal government, and would collapse when administered by government officials on their own behalf.

I think it’s right that there are some categories of work done in government executive offices and agencies that should be left as the business of the parties themselves. But I’m more attracted to reversing the presumption of automatic publication only in the narrow instances when national security, the privacy of private individuals as such, or the need for candid communication with high executive officials requires it. (That’s hardly a perfect standard. This blog post is a rumination.)

Few would make the connection, but Dave Weigel supplies what’s really missing from the discussion. In “Why You Should Stop Blaming the Candidates if you Don’t Know ‘What They Stand For’,” he says: “I’m sorry, but this one falls on the voters. It is generally as easy to learn where the candidates stand on all but the most obscure issues as it is to find, say, a recipe for low-calorie overnight oats.”

The “outputs” are there, but the public is failing to use them. The problem, on this view, is the public. If voters aren’t using even published and publicized campaign information, why would they do any good with governance inputs? Especially in light of the interests of regulators, with whom Sunstein and Yglesias quite strongly sympathize, transparency isn’t all that special. Let’s just tap the brakes in this one area and give regulators more room for private deliberation.

Massive public dissatisfaction with the status quo tells us there must be something wrong with that argument.

The glib, libertarian answer is that you could shrink the size and scope of government, particularly the U.S. federal government, and grow the public’s relative capacity to oversee it. And while I think that’s true, there’s a more subtle approach that may appeal to a Yglesias or a Sunstein. That’s to recognize that the public today has a diminished capacity for government oversight, having lived for many decades without transparency as to either the inputs or the outputs of government.

In my paper, Publication Practices for Transparent Government, I wrote of the social capital that must grow up once governments begin publishing information well.

[T]ransparency is not an automatic or instant result of following these good practices, and it is not just the form and formats of data. It turns on the capacity of the society to interact with the data and make use of it. American society will take some time to make use of more transparent data once better practices are in place. There are already thriving communities of researchers, journalists, and software developers using unofficial repositories of government data. If they can do good work with incomplete and imperfect data, they will do even better work with rich, complete data issued promptly by authoritative sources. When fully transparent data comes online, though, researchers will have to learn about these data sources and begin using them. Government transparency and advocacy websites will have to do the same. Government entities themselves will discover new ways to coordinate and organize based on good data-publication practices. Reporters will learn new sources and new habits.

Put aside mere “inputs.” The government today doesn’t publish good data that reflects its deliberations, management, and results. Take one of the most important areas: the law itself. The bills in Congress are published in unnecessarily opaque fashion (our efforts with the Deepbills project notwithstanding). Federal statutory law in the form of the U.S. Code only recently began to see publication in a basic computer-usable fashion. Regulations are available in XML, but regulatory processes have only been superficially updated, essentially by combining existing processes on a single website. And sub-regulatory law—guidance documents, advisories, and the like—is a transparency travesty if not a systematic, widely accepted Due Process violation.

Authoritative data indicating what the organizational units of the federal government are—a machine-readable government organization chart—does not exist, even after the Obama Administration’s promise last fall to produce one in “months.” And implementation of the DATA Act, while proceeding, may yet run into enough roadblocks to fail at giving the public a consistent, reliable account of federal spending.

The transparency movement is a reform movement based on a vision of modern government. Sunstein’s and Yglesias’ modest arguments against transparency don’t appear to recognize the government’s long, systematic exclusion of the public from its processes or the resulting atrophy of Americans’ civic muscles. Thus, they too easily conclude that giving transparency to internal matters, or “inputs,” is not necessary because it’s not useful.

Transparency is a legacy issue for President Obama, who campaigned in 2008 on promises of true reform. Time has not run out for the president’s pro-transparency reform.

Among industrialized countries, the United States has the highest official corporate tax rate and one of the highest effective tax rates. To take advantage of lower taxes in other countries, some U.S. firms elect to sell themselves to smaller foreign firms, a process called “inversion.”

For shareholders of those firms, the tax consequences of inversions are complicated: some are harmed by the move while others benefit. Individual shareholders who own shares in taxable accounts are taxed on the increased value of their shares. As a result, an inversion can produce very different tax outcomes for shareholders who have held the stock for a long time before the inversion than for short-term shareholders (including corporate officers exercising company stock options).

In the summer issue of Regulation, I described a new research paper that investigates 73 inversions that occurred from 1983 to 2014. For those investors who had owned stock for three years, half of the inversions resulted in a negative return. So if many long-term shareholders lose money on inversions, why do they occur?

The answer appears to be that corporate executives gain from inversions even when shareholders lose. The return earned by CEOs of inverted companies differs from the return of ordinary shareholders when the CEOs hold stock options, because inversion does not result in capital gains taxation of exercised options. Inversions can thus be more rewarding for CEOs than for long-term investors. The paper’s authors show that the higher a CEO’s option compensation, the greater the likelihood of an inversion.
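
A stylized calculation illustrates the divergence. The deemed-sale treatment of taxable shareholders and the exemption of exercised options follow the paper’s findings as described above; the 23.8 percent capital gains rate and the share prices are hypothetical:

```python
# Stylized inversion arithmetic. Taxable shareholders are treated as
# having sold their shares at inversion, so embedded gains are taxed
# immediately; exercised options escape that capital gains hit.

CAP_GAINS_RATE = 0.238  # hypothetical long-term capital gains rate

def shareholder_outcome(basis: float, pre: float, post: float) -> float:
    """After-tax wealth change per share for a taxable long-term holder."""
    tax_due_now = CAP_GAINS_RATE * max(0.0, pre - basis)
    return (post - pre) - tax_due_now

# A long-term holder with a $20 basis in a $100 stock that rises 5%
# after inversion gains $5 but owes $19.04 of tax today: a net loss.
print(round(shareholder_outcome(basis=20, pre=100, post=105), 2))  # -14.04

# The CEO's exercised options face no such deemed-sale tax, so the
# same 5% price rise is a pure gain on the option position.
```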

Put simply, the CEO has incentives that are not well aligned with those of long-term shareholders. That likely means current proposals to combat inversions by raising taxes on inverting firms will not have the intended effect: shareholders would be further harmed by the tax penalties, but CEOs would still have incentives to invert.

The authors of the research paper had a recent op-ed in the New York Times about their work. They call for lower corporate tax rates and an end to the rules that were intended to reduce inversions but have only hurt long-term shareholders.

Export-Import Bank supporters are back at it again. According to a document from the Office of Management and Budget, the administration is reportedly asking lawmakers to include a provision restoring the agency’s full lending authority in the continuing resolution that must be passed to keep the government functioning after September 30th. It was just a few weeks before his election in 2008 that Obama said the bank had “become little more than a fund for corporate welfare” and cited it as an example of why he wasn’t someone “who believes we can or should defend every government program just because it’s there.” What a difference eight years can make.

Opponents of the bank cheered last year when Congress let its charter lapse, only for the agency to be revived months later, when a provision attached to the highway bill in December reauthorized it through September 2019.

The agency, which provides financing and loan guarantees for U.S. export transactions, has since been limited in the scope of its lending authority, as the Senate has declined to approve the administration’s nominee to its board of directors. With three of five board seats vacant, quorum rules prevent the bank from approving any transaction over $10 million until at least one of the vacancies is filled.

This latest request from the administration is the culmination of a concerted effort on both sides of the aisle to restore full authority to the agency, without which it cannot approve the larger deals that would benefit the bigger companies that receive so much of the bank’s support. This is hardly a partisan affair, as earlier this year Republican Rep. Charlie Dent introduced an amendment to the State and Foreign Operations Appropriations Bill that would achieve the same objective.

The agency’s supporters suggest that companies are moving some operations abroad for lack of support from the Export-Import Bank, leading to lost jobs. These claims should be viewed with considerable skepticism: the agency’s support primarily introduces distortions that shift jobs toward the industries and firms it subsidizes. Most of the beneficiaries are major corporations that already have access to capital; almost two-thirds of the bank’s financing benefits just 10 large corporations.

According to analysis from Veronique de Rugy, the Export-Import Bank supports only a minuscule share of exporters and small businesses: 0.42 percent of exporters and 0.28 percent of small businesses from FY2009 to FY2014. The vast majority of these companies operate and compete just fine without the Export-Import Bank. In fact, firms not directly supported by the bank are placed at a competitive disadvantage by its interventions.

Despite the bipartisan support from policymakers, voters are less sure about the need for an Export-Import Bank. In a Morning Consult poll from last year, 36 percent said the statement that the agency is a “government handout that benefits only a handful of major American corporations” was closer to their opinion than the statement that it “supports U.S. jobs.”

In that same poll, 73 percent of registered voters responded that they had heard little or nothing at all about the Export-Import Bank, which may be part of the reason that the firms receiving the concentrated benefits of its support are consistently able to overcome opposition from some policymakers and public support that is tepid at best.

The Export-Import Bank is just one aspect of corporate welfare within federal government policy, and one that does not currently place an outsize fiscal burden on taxpayers, but it still distorts business decisions and puts the government in the role of deciding which firms will be winners and losers. Each year, the federal government spends more than $100 billion on various forms of corporate welfare. Given the stubborn persistence of the Export-Import Bank and the broader problem of concentrated benefits and diffuse costs, this state of affairs will unfortunately continue for years to come.

Princeton economist Uwe Reinhardt supports ObamaCare. He also thinks the law’s health-insurance Exchanges are doomed. An exodus of insurers—lots of Exchanges are down to one carrier; Pinal County, Arizona is down to zero carriers—has taken supporters and the media by surprise. It shouldn’t have. Similar laws, and even ObamaCare itself, have caused multiple insurance markets to collapse.

Reinhardt jokes ObamaCare’s Exchanges look like they were designed by “a bunch of Princeton undergrads.” Those Exchanges are now experiencing “a mild version” of “the death spiral that actuaries worry about.” The extreme version has happened before. “We’ve had two actual death spirals: in New Jersey and in New York,” Reinhardt explains. “New Jersey passed a law that had community rating but no mandate, so that market shrank quickly and premiums were off the wall. You look at New York and the same thing happened; they had premiums above $6,000 per month. The death spiral killed those markets.” Community rating is a system of government price controls that supposedly prohibits insurers from discriminating against people with preexisting conditions.

And it’s not just New York and New Jersey where ObamaCare-like laws have caused health insurance markets to collapse. It also happened in Kentucky, New Hampshire, and Washington State.

In fact, the death spiral Reinhardt sees in the Exchanges would be the fourth that ObamaCare itself has caused:

  1. Before they even took effect, ObamaCare’s preexisting conditions provisions began driving insurers out of the market for child-only health insurance. Insurers ultimately exited that market in 39 states, causing the markets in 17 states to collapse.
  2. ObamaCare’s long-term care insurance program – the CLASS Act – failed to launch when the administration could not make it financially sustainable. President Obama and Congress repealed it.
  3. Exchanges effectively collapsed in every U.S. territory, again prior to launch.
  4. Now, a nationwide exodus of insurers has left one third of counties, one in six residents, and seven states with only one carrier. In Pinal County, Arizona, every insurer has exited the Exchange. The exodus goes beyond greedy, for-profit insurers. It includes more than a dozen government-chartered nonprofit “co-op” plans.

Each of these crashes shares the same root cause: ObamaCare’s preexisting-conditions provisions create adverse selection. (Adverse selection is when sick people enroll in a plan and healthy people don’t.) To put it more plainly, ObamaCare required insurers to cover so many people with preexisting conditions that they ultimately could not cover anyone.
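
A toy simulation makes the spiral mechanics concrete. Every number below is invented for illustration: the insurer sets a community-rated premium equal to the pool’s average cost, the healthiest enrollees drop out when the premium far exceeds their expected costs, and the pool sickens each round:

```python
# Toy adverse-selection spiral (illustrative numbers only). A
# community-rated premium equals the average cost of the remaining
# pool; enrollees whose expected costs are far below the premium
# drop coverage, the pool gets sicker, and the premium ratchets up.

# Hypothetical expected annual costs for a ten-person pool.
pool = [500, 1_000, 2_000, 3_000, 4_000,
        6_000, 8_000, 12_000, 20_000, 40_000]
STAY_THRESHOLD = 0.7  # stay only if your cost is >= 70% of the premium

for year in range(1, 6):
    premium = sum(pool) / len(pool)  # community-rated premium
    print(f"year {year}: {len(pool)} enrolled, premium = {premium:,.0f}")
    # The healthiest enrollees exit; adverse selection in action.
    pool = [cost for cost in pool if cost >= STAY_THRESHOLD * premium]
    if not pool:
        print("market has collapsed")
        break

# Output: enrollment falls 10 -> 4 -> 2 -> 1 while the premium climbs
# from 9,650 to 40,000. Mandates and subsidies are meant to keep the
# healthy people in; the cases below show what happened without them.
```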

In the child-only market and the CLASS Act, the preexisting conditions provisions took effect with no mandate to purchase insurance, or premium subsidies, or anything to mitigate the resulting adverse selection.

In the territories, the preexisting conditions provisions were to take effect with no mandate and relatively weak premium subsidies. “These regulations had screwed up territorial insurance markets so badly that health insurance plans bolted; it’s currently impossible to purchase an individual market insurance plan in the Northern Mariana Islands,” wrote Vox’s Sarah Kliff. It got so bad that the Obama administration reinterpreted the law to say its preexisting conditions provisions and other costly regulations don’t apply in the territories.

In Pinal County, ObamaCare had everything. It had the preexisting conditions provisions. It had a mandate. It had premium subsidies. Thanks to the Supreme Court, it even had subsidies and a mandate that the ACA doesn’t authorize (because Arizona didn’t establish its own Exchange). The Exchange still collapsed.

ObamaCare’s authors knew they were playing with fire, but thought their handiwork could contain it. If it can’t, even more Exchanges will collapse, leaving people who had relatively secure coverage before ObamaCare with no coverage at all.

But hey, we’ll always have Massachusetts.

The state of Oregon recently began a pilot program with 1,000 drivers that charges them a fee based on the miles they drive rather than a tax on the gas they buy. Several states are watching Oregon’s experiment closely. This could mark the beginning of a major shift to a much better way of financing our roads.

The states care about Oregon’s experiment because the gas tax is a lousy user fee that doesn’t come close to capturing the true cost a driver imposes on the state when he drives, whether via the wear and tear his vehicle causes to the highway, the congestion his presence on the road exacerbates, or the pollution his car emits. An optimal user fee would attempt to capture each of those, charging based on where a person drives, how much he drives, the amount of congestion on the roads he uses, and his car’s emissions. Oregon’s simple experiment captures none of that—it consists solely of a 1.5-cent-per-mile charge, coupled with a fuel tax credit—but with today’s technology a more advanced system could easily be implemented.
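
To make the fee’s structure concrete, here is a minimal sketch. The 1.5-cent-per-mile rate is the pilot’s actual charge; the 30-cent-per-gallon state gas tax used to compute the credit is an assumption for illustration:

```python
# Sketch of an Oregon-style road usage charge: a flat per-mile fee,
# with a credit for state gas taxes already paid at the pump.

PER_MILE_FEE = 0.015   # $0.015 per mile, per the pilot
STATE_GAS_TAX = 0.30   # $/gallon; assumed here for illustration

def net_road_charge(miles: float, mpg: float) -> float:
    """Per-mile fee minus the gas-tax credit for fuel actually burned."""
    fee = miles * PER_MILE_FEE
    gas_tax_credit = (miles / mpg) * STATE_GAS_TAX
    return fee - gas_tax_credit

# A 25-mpg car driving 12,000 miles: $180 fee, $144 credit, $36 net.
print(round(net_road_charge(12_000, 25), 2))  # 36.0
# A 50-mpg hybrid buys half the taxed fuel, so it pays more net.
print(round(net_road_charge(12_000, 50), 2))  # 108.0
```

Even this crude version captures something the gas tax misses: vehicles that burn little or no taxed fuel still pay for the road wear and congestion they cause.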

The advantage of having a sophisticated user fee for drivers is that it could dramatically lessen congestion on a road: if you charge a high fee when roads get crowded, people will postpone trips, carpool, work at home, or take mass transit. Since the majority of auto pollution comes from cars stalled in traffic, the reduction in smog would be significant. Such a user fee would also help states reduce how much infrastructure they have to build by smoothing out demand.

The complaint against such schemes is that they have the potential to invade privacy—a valid concern, but one that can be addressed with adequate regulation and an open-source software system that anyone can examine to determine whether it is sufficiently secure.

Illinois’s legislature considered such an approach until popular outcry, spurred in part by downstate outrage, led to rapid backpedaling by legislative leaders. That the poorer and more rural residents downstate reflexively objected to Illinois’s mileage-based user fee plan is a pity, because they stand to be the biggest winners from such a change. Switching to a smart per-mile fee would push more of the cost of the state’s roads onto wealthier drivers in suburban Chicago, while reducing the amount that would need to be spent on infrastructure for them. Downstate residents, who rarely find themselves in traffic jams, would see their fees go down. In short, replacing the gas tax with a smart per-mile fee would be a very progressive change, whether in Oregon, Illinois, or any other state.

California took a small step in this direction when it changed auto insurance rules to base rates on miles driven. The politicians survived that step. As long as gas prices remain low, it is a propitious time to aggressively expand efforts to charge people something akin to the true public and private cost of their driving.

The federal government could help the cause by making it easier to toll on federal roads, and congressmen could help by not demagoguing such plans when they arise, as is their wont. My former congressman Ray LaHood committed a political gaffe early in his tenure as secretary of transportation, when he inadvertently told the truth and said per-mile charging was the most logical system that existed, and that we should do what we can to move towards implementing it. President Obama quickly renounced the sentiment and promised that such a thing would never happen on his watch.

Obama’s watch is now almost over. It’s time we embraced a transportation funding system that would more closely resemble an actual market, and deliver markedly better results for drivers and the environment. It would be asking too much for either presidential candidate to embrace such a reform, but the next commander-in-chief could help nudge states in the right direction simply by directing the Department of Transportation to be more helpful in experiments like Oregon’s. The idea of smart, per-mile charging is so intuitively appealing that I suspect most DOT workers are already inclined in that direction. A White House that’s not worried about short-term blowback could allow them to do a lot more than at present to make per-mile fees, instead of gas taxes, a reality.

According to opinion polls, Americans think that the federal government is too large and powerful. Most people do not trust the federal government to handle problems. Only one-third of people think that the government gives competent service, and the public’s “customer satisfaction” with federal services is lower than for virtually all private services. I discussed these sad realities in this study.

NextGov.com reports today on a new customer satisfaction study:

Despite a major push by the Obama administration in recent years, the federal government “still fails at customer experience,” according to Forrester Research’s Customer Experience Index.

The federal government finished dead last among 21 major industries, and had five of the eight worst scores of the 319 brands, leading Forrester to note that government has a “near monopoly on the worst experiences.”

Notably, HealthCare.gov ranked last among all brands … USAJobs.gov, the departments of Education and Veterans Affairs, the Transportation Security Administration, the Internal Revenue Service, Medicaid and the Small Business Administration rated in the bottom 6 percent of all brands.

This was not a small-sample poll. Forrester’s Index was based on perceptions from surveys of 122,500 adult customers.

“For me, the most compelling point is that federal agencies are clustered near the bottom of the index,” Rick Parrish, senior analyst at Forrester, told Nextgov. “So many agencies that have been working hard haven’t shown improvement. You see a lot of action, a lot of arm-waving and noise, but not a lot of progress.” Even the worst brands in the worst industries—TV and internet service providers, and some airlines—generally outperformed federal agencies.

An irony of Big Government is that even as Congress has created hundreds of new programs to supposedly help people, and dishes out more than $2 trillion a year in subsidies, the public has not grown fonder of the government. Instead, people have become more alienated from it, and more disgusted by its poor performance.

For more on government failure, see here.

The outcome was certain the moment federal and state regulators spilled blood in the water and swarmed ITT Technical Institutes, but today it became official: ITT is going out of business. No proven guilt, just accused to death. But we’ve been over all that.

What is worth pointing out now are the alternatives to ITT. I’ve recently seen a couple of stories from Ohio about community colleges offering to take in students stranded by ITT’s demise, and thought it might be worth doing a little comparison between Ohio ITT branches—I mean, former branches—and these would-be rescuers.

Here is some broad info from the federal College Scorecard on Ohio ITT branches, and it is certainly not great: annual after-aid costs ranging from $21,212 to $24,258, graduation rates ranging from “not available” to 52 percent, and a salary after attending of $38,400, which appears to be the figure listed for most ITT campuses nationwide.

How about those community colleges?

I couldn’t find Butler Tech or Great Oaks on the Scorecard, but Cuyahoga Community College has an annual after-aid student cost of just $5,832—enabled by upfront taxpayer subsidies—yet only a 6 percent graduation rate and an annual salary after attending of $27,600. Cincinnati State Technical and Community College has an annual cost of $7,021, a graduation rate of 22 percent, and a salary of $29,700. The community colleges are cheaper than ITT, but their outcomes appear appreciably worse.

The Scorecard, importantly, is a seriously flawed tool, but it comes from the very federal government that has targeted ITT, and it gives the kind of first-blush data that have readily been employed to attack the for-profit sector. What I looked at is also, of course, anecdotal. But what it suggests is that the alternatives to ITT, at least in Ohio, are probably no better than ITT was, and may well be worse. Which supports what you’ve read here many times, and which broader evidence upholds: For-profit colleges are not distinctly terrible. It is the whole, federally distorted system that is a wreck.