Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Newly anointed GOP presidential nominee Donald Trump wasted no time in criticizing the foreign policy legacy of Barack Obama and Hillary Clinton. For decades the GOP has claimed to uniquely represent American military personnel.

Service members aren’t allowed to become publicly involved in partisan politics. However, they do speak indirectly, via polls and contributions.

It turns out that they favor neither Democrats nor Republicans. Rather, in this campaign a plurality is supporting the least militaristic of the candidates, Libertarian Party nominee Gary Johnson.

The LP is a perennial and distant third-place contender. But this election might be different. Johnson has been polling in double digits and could hold the balance of power, especially with the help of military voters. For instance, a July poll found Johnson well ahead of the two major party candidates among active duty personnel.

Almost 39 percent of active duty members backed him. Just 31 percent supported Donald Trump and only 14 percent were for Hillary Clinton. Johnson carried every service except the Navy. He enjoyed the biggest margin in the Marine Corps, 44 percent to 27 percent for Trump.

This isn’t the first time a libertarian led the presidential race among military personnel. Republican Ron Paul, a congressman long known as “Dr. No,” was a consistent outlier on foreign policy. While the other Republicans advocated more intervention and war, Paul highlighted the problems of “blowback”—terrorism as a response to Washington’s persistent willingness to bomb, invade, occupy, and drone other nations.

The conventional wisdom seemed to be that military personnel favored war. Yet, wrote Timothy Egan in the New York Times in 2011, Paul had “more financial support from active duty members of the service than any other politician.” At one point Paul had collected 87 percent of the military contributions for GOP candidates.

As of March 2012, Paul had received more than twice as much in military contributions as Obama, almost ten times as much as Mitt Romney, more than ten times as much as Newt Gingrich, and about 32 times as much as Rick Santorum. The latter three were inveterate war hawks who themselves never served in the military. In contrast, Obama presented himself as a critic of unnecessary war.

Paul even led his Republican competitors among military contractors (though he trailed Obama). Analyst Loren Thompson explained that “Just because people work in the defense industry doesn’t mean that they always vote their economic interests.”

While service personnel are willing to serve in combat, most do not want to do so absent compelling circumstances. And few of the interests involved in Washington’s conflicts can be considered serious, let alone vital. A Marine Corps veteran who supported Paul told Egan that service members “realize they’re being utilized for other purposes—nation building and being world’s policeman—and it’s not what they signed up for.”

As I wrote in Rare: “Despite the support of so many military members, Ron Paul was never able to significantly broaden his appeal. Johnson has a unique opportunity given widespread dislike of his two major opponents.”

Who can keep Americans safe? That obviously is one of the most important questions in this election. Uniformed military personnel are giving a surprising answer.

The so-called Islamic State is losing ground. The liberation of Mosul, Iraq’s third most populous city, may be the Baghdad government’s next objective.

Yet even as the “caliphate” shrinks in the Middle East, Daesh, as the group also is known, is increasing its murderous attacks on Western civilians. Washington’s intervention actually has endangered Americans.

In contrast to al-Qaeda, which always conducted terrorism, ISIS originally focused on creating a caliphate, or quasi-state. Daesh’s territorial designs conflicted with the interests of many nations in the Mideast: Iraq, Syria, Iran, Turkey, Libya, Jordan, Lebanon, and the Gulf kingdoms.

The Obama administration did not intervene out of necessity: ISIS ignored America. Moreover, the movement faced enemies which collectively had a million men under arms; several possessed sophisticated air forces.

Washington’s concern for those being killed by the Islamic State was real, but casualties lagged well behind the number of deaths in other lands routinely ignored by the U.S. The administration seemed most motivated by the sadistic murder of two Americans who had been captured by ISIS in Syria. Although barbaric, these acts did not justify intervention in another Mideast war.

Washington took control of the anti-ISIS campaign but waged a surprisingly lackluster effort. The administration recognized that there was no domestic support for ground troops, so it mixed bombing and drone strikes with support for “moderate” Syrian rebels, who proved to be generally ineffective.

Turkey sought to play the U.S., pushing Washington to oust Syria’s Bashar al-Assad, while tolerating Daesh. The Syrian government attempted to use the specter of the Islamic State to weaken Western support for its overthrow.

Iraq’s Shia-dominated government wanted a bail-out while maintaining sectarian rule. The Sunni Gulf countries expected America to take care of their problems, as usual. Saudi Arabia dragged Washington into a most foolish military diversion, a war with Yemen’s Houthis.

Now ISIS is in retreat—it has lost almost half of the territory it held in Iraq and one-fifth in Syria. But the group remains surprisingly resilient for an increasingly unpopular and only modestly armed group facing a coalition including the U.S., Europe, and most of the Middle East.

Unfortunately, defeat has turned Daesh toward terrorism, including in the West, as was predictable.

After the horrid attacks in Paris late last year, French President Francois Hollande declared that his nation was at war. But it had been bombing ISIS-held territory for 14 months. The only surprise was that it took Daesh so long to retaliate so spectacularly.

Had the U.S. and Europe left the battle to those directly threatened, the latter would have had no choice but to take the lead. And the fight would have been largely contained within the Middle East. The Islamic State would have had to focus on the enemy literally at its gates, rather than its abstract Western enemies afar.

As ISIS recedes, Washington should step back. Unfortunately, the administration’s plan to increase U.S. forces by 560 to about 6,000 for the coming Mosul assault will tie America more closely to the sectarian government in Baghdad and its close partners, the Iran-supported Shia militias that have committed numerous atrocities against civilians.

Washington understandably prefers an unsympathetic Iraqi government to a threatening Islamic State. But Baghdad should be left responsible for its own mistakes and crimes.

Seeking to build up a “moderate” insurgency to battle both Damascus and ISIS has proved to be mostly a fool’s errand. Backing Riyadh in Yemen is a disaster. Saudi Arabia has turned the long-running insurgency into a sectarian conflict, while Yemenis blame America for civilian atrocities committed by the Saudis.

As I wrote in National Interest: “America faces a genuine terrorist threat, though it is largely indigenous, inspired by foreign killers. Alas, the number of terrorists will continue to increase as Washington makes other people’s conflicts its own. The U.S. must learn to focus on its own enemies.”

Housing affordability, an issue that has received considerable attention over the past two decades, shows no signs of meaningful national improvement. This despite the almost $50 billion in taxpayer dollars HUD spends annually on the affordability crisis and related concerns.

So what gives? One likely culprit is the language we use to describe the problem.

Take the word “affordable.” Affordable housing – used in a public policy context – is a misnomer of sorts: affordability implies the ability to pay for something given your budget. But budgets vary considerably between households, and so the definition of affordability varies considerably, too.

There are only two – improbable – ways that any given unit of housing could be affordable to the aggregate U.S. population. One option is that everyone’s incomes are identical. The other is that housing is altogether free.

Although HUD makes attempts at each via its current policy mélange, neither is a practicable objective in a free society. And unless one of those conditions is met, objectively affordable housing cannot exist.

Instead, it is useful to reframe the debate as a need for low-cost housing. Rather than lowering the bar, low-cost housing sets a higher one: housing that is affordable is not necessarily low-cost, but housing that is sufficiently low-cost is nearly always affordable.

Essentially, re-labeling reminds us to solve the real problem by obliging us to ask “how do we make housing cost less?”

Fortunately, low-cost housing can be realized in myriad ways. Many of these market solutions are currently underemphasized by politicians and housing policy specialists. Examples include terminating protectionist housing policies, like those surrounding mortgage interest tax deductions, curbing future reactionary meddling in mortgage underwriting standards, and of course, repealing land use and zoning laws, which are inimical to low-cost housing.

In this way, we diagnose the problem and treat it directly, and avoid a hopeless fixation on high-cost housing’s irresolvable by-products.

Of all the issues energizing environmentalists, hydraulic fracturing, or fracking, of subsurface rocks during the production of oil and gas is near the very top of the list.

In spite of this, several recent government decisions and court rulings have come down on the side of fracking. For example, overshadowed in May by Brexit coverage, elected officials in Yorkshire, England, gave the thumbs up to fracking operations there in an effort to boost natural gas production. Also in May, the Colorado Supreme Court struck down fracking bans passed by the city governments of Fort Collins and Longmont, in a ruling that upheld the state’s authority to regulate fracking and rejected the municipalities’ attempts to ban use of the technology.

In June, a federal judge in Wyoming struck down rules proposed by the Interior Department’s Bureau of Land Management (BLM) to further regulate hydraulic fracturing on federal lands. BLM’s fracking rules were designed to collect additional information that is already gathered by the states, which regulate oil and gas producing areas to assure protection of groundwater supplies. The energy industry argued that these fracking rules were duplicative and expensive to carry out, and that the net result would not increase environmental protection. Worse yet, the BLM and existing state rules in places countermand each other. Consequently, BLM’s fracking rules were challenged in court on the same day they were issued; they were set aside last year by the same federal judge who struck them down for good last month.

The timing and tenor of the judge’s decision have several interesting aspects. From the legal standpoint, the administration does not have the congressional authority to regulate fracking. The judge’s opinion stated “the BLM’s efforts to do so through the fracking rule is in excess of its statutory authority and contrary to law.” He further explained that “Congress’ inability or unwillingness to pass a law desired by the executive branch does not default authority to the executive branch to act independently, regardless of whether hydraulic fracturing is good or bad for the environment for the citizens of the United States.”

The judge’s decision to strike the BLM rules also comports with the major finding in EPA’s June 2015 “Assessment of the Potential Impacts of Hydraulic Fracturing for Oil and Gas on Drinking Water Resources.” In it, the agency’s key finding stated, “we did not find evidence that these mechanisms [fracking] have led to widespread, systemic impacts on drinking water resources in the United States.”

The recent legal decisions striking down rules and bans on fracking are based on fundamental constitutional law; nevertheless, they are occurring even as the environmental hysteria over fracking appears to be increasing. However, the theory of groundwater contamination due to rock fracturing many thousands of feet below the groundwater table has lost much of its credibility, particularly with last year’s publication of the EPA study.

In the end, BLM’s fracking rules sought to create a blanket regulation of a critical proven technology that, when combined with horizontal drilling, provided the foundation of the shale revolution—in turn contributing mightily to the success of the American energy renaissance. In fact, the Energy Information Administration (EIA) in May reported that the U.S. remained the world’s top producer of petroleum and natural gas for 2015, having surpassed the production of Russia in 2012 and Saudi Arabia in 2013, due to surging production brought about by fracking and horizontal drilling.

Finally, forgotten among the energy production statistics is the fact that over the past five years, the increased use of natural gas brought about by the fracking and horizontal drilling of the shale revolution has cut the carbon intensity of new fossil fuel power production by roughly 50 percent. With regard to the EPA, decreased carbon emissions should be as much of a key issue as groundwater safety. As the public and elected officials become better educated on this important topic—so central to our uninterrupted energy supply—it appears increasingly likely that any environmental “lightning” being hurled at hydraulic fracturing technology will be harmlessly directed to ground.

There are hints of possible interest in acquiring nuclear weapons in both South Korea and Japan, especially since the rise of Donald Trump. Such a policy shift would be neither quick nor easy. Yet the presumption that the benefits of nuclear nonproliferation are worth the costs of maintaining a nuclear “umbrella” is outdated.

Since the development of the atomic bomb, America has been committed to waging nuclear war if necessary. Many Americans probably believed that meant only if their own nation’s survival was in doubt. But Washington always has been far more likely to use nukes on behalf of its allies.

The cost of America’s many commitments is high. The U.S. promises to sacrifice thousands if not millions of its own citizens for modest or even minimal interests.

Unfortunately, war is possible. Deterrence often fails, and Washington might have to decide if it will fight an unanticipated nuclear war or back down.

Friendly proliferation might be a better option. I write in Foreign Affairs: “Instead of being in the middle of a Northeast Asia in which only the bad guys—China, North Korea, Russia—had nukes, the U.S. could remain out of the fray. If something went wrong, the tragedy would not automatically include America.”

There obviously remain good reasons for the U.S. to be wary of encouraging proliferation. Yet opening a debate over the issue may be the most effective way to convince China to take more serious action against the DPRK.

Friendly proliferation might be the best of a bad set of options.

Donald Trump is a competitive person. He likes to have bigger things than other people. He says that he has really big hands. His tax cut was larger than those of the other GOP candidates. And now he says that his infrastructure plan will be double the size of Hillary Clinton’s.

The problem is, with the federal government, smaller is almost always better than bigger. That’s the message of this fashionable Cato T-shirt, which, by the way, would go nicely with Donald’s Make America Great Again hat. So for the Trump campaign, I’ll put aside a really big T-shirt for Donald, while a medium would look great on Melania.

With regard to infrastructure, I describe in this essay why the federal government ought to reduce its role. There are few, if any, advantages to federal involvement, and many disadvantages, including bureaucracy, pork-barrel politics, misallocation, cost overruns, and regulation.

State and local governments can raise money for their own highways, bridges, seaports, and airports anytime they want to. Indeed, about half the states have raised their gas taxes to fund transportation infrastructure in just the past four years. There’s no need for more federal taxes or debt, as Trump is proposing.

As a businessman, Trump ought to be thinking about expanding the private role in America’s infrastructure, not trying to one-up Clinton on central control. A trend toward infrastructure privatization has swept the world since the 1980s. To make America great again, we should adopt the best practices from around the world, and that means the efficiency and innovation that come with privatization.

When he thinks about his hometown, does Trump think that New York’s government-owned airports provide Trump-level quality and efficiency? Does he think that the Port Authority of New York and New Jersey is a model of good corporate governance? If Trump is elected president, would he hand over management of his many fine properties to the city government during his stay in D.C.?

Of course he wouldn’t. Trump knows that governments are a complete screw-up when it comes to managing and constructing big projects. Even little projects: he’s the one who took over the city’s failing Wollman Skating Rink in Central Park and turned it into a big success, on-time and on-budget.

So Trump should rethink his big government plan for infrastructure. If elected, he will discover that federal bureaucracies do not operate the way that his well-oiled real estate enterprises do. The reasons are deep-seated, and no amount of big talk from the Oval Office will turn the vastly bloated federal bureaucracy into a success like the Wollman Rink.  

Air temperature and precipitation, in the words of Chattopadhyay and Edwards (2016), are “two of the most important variables in the fields of climate sciences and hydrology.” Understanding how and why they change has long been the subject of research, and reliable detection and characterization of trends in these variables is necessary, especially at the scale of a political decision-making entity such as a state. Chattopadhyay and Edwards evaluated trends in precipitation and air temperature for the Commonwealth of Kentucky in the hopes that their analysis would “serve as a necessary input to forecasting, decision-making and planning processes to mitigate any adverse consequences of changing climate.”

Data used in their study originated from the National Oceanic and Atmospheric Administration and consisted of time series of daily precipitation and maximum and minimum air temperatures for each Kentucky county. The two researchers focused on the 61-year period from 1950 to 2010 to maximize standardization among stations and to ensure acceptable record length. In all, a total of 84 stations met their initial criteria. Next, Chattopadhyay and Edwards subjected the individual station records to a series of statistical analyses to test for homogeneity, which reduced the number of stations analyzed for precipitation and temperature trends to 60 and 42, respectively. Thereafter, these remaining station records were subjected to non-parametric Mann-Kendall testing to assess the presence of significant trends and to the Theil-Sen approach to estimate the magnitude of any linear trends in the time series. What did these procedures reveal?
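
For readers curious about what such a screening involves in practice, the sketch below is a minimal illustration (not the authors’ code) of the two procedures applied to a single annual station series using SciPy. The Mann-Kendall trend test is approximated here by Kendall’s tau computed against time, and the Theil-Sen estimator supplies the slope; the file name, column names, and the 0.05 significance threshold are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code): trend screening of one annual series.
# The file and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("station_annual.csv")   # assumed columns: "year", "precip_mm"
years = df["year"].values
precip = df["precip_mm"].values

# Mann-Kendall-style test: Kendall's tau of the series against time, with a
# two-sided p-value for the null hypothesis of no monotonic trend.
tau, p_value = stats.kendalltau(years, precip)

# Theil-Sen estimator: the median of all pairwise slopes, robust to outliers.
slope, intercept, slope_low, slope_high = stats.theilslopes(precip, years)

verdict = "significant" if p_value < 0.05 else "not significant"
print(f"tau = {tau:.3f}, p = {p_value:.3f} ({verdict}); slope = {slope:.2f} mm/yr")
```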

For precipitation, Chattopadhyay and Edwards report that only two of the 60 stations exhibited a significant trend, leading the two University of Kentucky researchers to state that “the findings clearly indicate that, according to the dataset and methods used in this study, annual rainfall depths in Kentucky generally exhibit no statistically significant trends with respect to time.” With respect to temperature, a similar result was found. Only three of the 42 stations examined had a significant trend. Once again, Chattopadhyay and Edwards conclude that the data analyzed in their study “indicate that, broadly speaking, mean annual temperatures in Kentucky have not demonstrated a statistically significant trend with regard to time.”

Given such findings, it would seem that the vast bulk of anthropogenic CO2 emissions that have been emitted into the atmosphere since 1950 have had little impact on Kentucky temperature and precipitation, because there have been no systematic trends in either variable.

 

Reference

Chattopadhyay, S. and Edwards, D.R. 2016. Long-term trend analysis of precipitation and air temperature for Kentucky, United States. Climate 4: 10; doi:10.3390/cli4010010.

India’s move to replace varying federal, state, and interstate sales taxes with a uniform Value Added Tax (VAT) makes a lot of sense (unlike Brazil, which has high sales taxes and a 19% VAT).

Yet India should take care not to let the 17.3% tax rate creep up, because VAT too is subject to the “Laffer Curve.”

As the graph shows (using 2013 OECD data), the VAT never brings in revenue higher than 10% of GDP, even in countries that mistakenly raised the VAT to 20% or more - 23% in Greece and Ireland, 25% in Scandinavian countries, 26% in Iceland, 27% in Hungary. New Zealand collected 9.7% of GDP with a 15% standard VAT, while countries with much higher VATs brought in less.

No tax fails to inflict economic damage, of course, and that damage holds down the revenue collected from income and payroll taxes. A study by James Alm of Tulane University and Asmaa El-Ganainy of the IMF found that “a one percentage point increase in the VAT rate leads to roughly a one percent reduction in the level of aggregate consumption in the short run and to a somewhat larger reduction in the long run.” Such a reduction in consumer spending means fewer jobs and lower wages and profits, so a higher VAT leads to lower revenue from taxes on personal income and profits – a fact borne out by Japan’s experience.

Our government’s forays into the housing market have been a disaster, to say the least.

The mortgage interest deduction goes solely to the wealthy and costs the government nearly $100 billion a year. For some perspective on how out of whack this subsidy is, the residents of Nancy Pelosi’s pricey San Francisco neighborhood get roughly 100 times the benefit, per household, of the denizens of my middle-class home town in central Illinois.

Of course, our government-sponsored enterprises are an even more ill-conceived subsidy for home buying. Fannie Mae and Freddie Mac ostensibly increase the amount of capital available to finance home buying by purchasing mortgages from banks and other mortgage originators, packaging them into mortgage-backed securities, and then selling them to pensions, hedge funds, and the like. But it is dubious that their existence meaningfully increases home ownership rates: The Census Bureau announced last month that the U.S. homeownership rate was 62.9 percent, its lowest since 1965 and well below most EU countries, virtually none of which has anything akin to either the mortgage interest deduction or government-sponsored enterprises buying up mortgages.

Not only do the MID and the GSEs fail to boost home ownership, but they can also exacerbate broader problems in the housing market, and in financial markets in general. The MID encourages people to purchase as much house as they can possibly afford in order to take full advantage of the tax break, which set up many people for disaster when they lost their jobs in the Great Recession of 2008-2009.

The pressure the federal government put on the GSEs to extend credit to low-income borrowers in order to help boost home ownership among the middle and lower classes ended in tears for millions of Americans as well, as the swings in the housing market destroyed the value in their homes and left them unable to afford to continue living there.

The plunging home prices cratered the portfolios of the GSEs and led the Treasury to use the 2008 Housing and Economic Recovery Act, or HERA, to place them into a conservatorship, with shareholders seeing their stake in the companies slashed to just 20% and the government assuming the rest.

However, the demise of Fannie and Freddie was premature: the reported losses of the GSEs were just temporary, a fact that was clear to many shareholders who held onto their stock or bought into the companies post-crisis. For these people, holding onto the stock appeared to be a good bet, especially once real estate prices returned to their previous levels.

Unfortunately for them, such a bet failed to account for the vagaries of government action. In 2012 the Treasury imposed a third amendment to its agreements with the GSEs that effectively nationalized them and cut the shareholders out of any residual profits. The massive profits generated by Fannie Mae and Freddie Mac–which Treasury officials fully anticipated prior to the takeover–went directly into Treasury’s coffers, helping the Obama Administration claim a victory on the federal budget. The 2012 deficit was “just” $1.1 trillion, or $200 billion less than the previous year, helped by the outsized GSE profits that year.

While the third amendment may have been a short-term political salve, it has created longer-term problems that the next president will have to deal with. The major problem is that because the third amendment “sweeps” the entire net worth of each GSE into Treasury’s coffers each quarter, neither of them has sufficient capital to withstand a sub-par quarter without a draw from Treasury. Should the real estate market so much as hiccup, it is going to mean a loss for one or both, and a new injection of taxpayer dollars from the government could create a political firestorm and a round of finger-pointing that could lead to another short-term fix that makes a further mess of the home lending market.

The plight of the GSEs has not been grist for the masses, but it is nevertheless time we began some sort of dialogue about our nation’s housing policy going forward. Sacrificing $1 trillion of tax revenue and socializing the gains and losses from home mortgages has done nothing to boost home ownership or improve our economy.

Or, to be more precise, worse than nothing: the effective expropriation of the publicly held shares of Fannie and Freddie for short-term political expediency will assuredly have repercussions down the road, as investors become more wary of trusting the government to hold up its side of a deal. The virtual disappearance of private-label mortgage-backed securities is a side-effect of the government’s capriciousness in mortgage markets in recent years.

The optimal policy solution is simple: we should begin by ending the mortgage-interest deduction, which is by far the most regressive blue-state tax break in existence, and follow that by gutting the government’s footprint in the market for mortgage-backed securities.

Accomplishing the latter is tricky even if it were to become politically possible: the problem is that it is hard to conceive of an entity that buys, packages, and resells mortgages operating without some sort of implicit government backstop, even if the government swears up and down it would never bail out such an entity. If such an entity were to grow large enough, its collapse would inevitably pose some sort of systemic risk, and once the market perceived that to be true, that implicit backstop would be duly exploited by both sides of the transaction.

One solution would be to have a number of smaller, competing entities and take steps to prevent any one of them from growing “too” large, a proposal others have already offered. Another step might be to give the private GSE shareholders back their stake in the companies, have the government sell off its share as well, and come up with a way for the federal government to capture the value of its effective MBS guarantee.

Or, if we’re being creative, we can try to come up with a way for the Treasury to effectively tie its hands so as to make it impossible for it to intervene in the market, even if it were to collapse. However, I fear that this is as doable as it would be for the federal government to forswear helping people who build in floodplains when their houses inevitably get damaged by flooding, to paraphrase a famous example by the Nobel prize-winning economists Finn Kydland and Ed Prescott.

The current quasi-nationalization of our mortgage markets and billions of dollars of tax breaks for homeowners has not done one whit for home ownership while creating all sorts of potential problems in the housing market.

There’s always a risk in attempting any sort of major reform; what’s unclear is whether we can possibly do worse than the status quo.

It looks like Deutsche Bank is heading toward failure. Why might we be concerned?

The problem is that Deutsche is too big to fail — more precisely, that the new Basel III bank resolution procedures now in place are unlikely to be adequate if it defaults.

Let’s review recent developments. In June 2013 FDIC Vice Chairman Thomas M. Hoenig lambasted Deutsche in a Reuters interview. “It’s horrible, I mean they’re horribly undercapitalized,” he said. “They have no margin of error.” A little over a year later, it was revealed that the New York Fed had issued a stiff letter to Deutsche’s U.S. arm warning that the bank was suffering from a litany of problems that amounted to a “systemic breakdown” in its risk controls and reporting. Deutsche’s operational problems led it to fail the next CCAR — the Comprehensive Capital Analysis and Review, aka the Fed’s stress tests — in March 2015.

Major senior management changes were made throughout 2015 and Deutsche was retrenching sharply with plans to cut its workforce by 35,000. This retrenchment failed to reassure the markets. Between January 1st and February 9th this year, the bank’s share price fell 41 percent and the prices of Deutsche’s CoCos (or Contingent Convertible bonds) were down to 70 cents on the euro.[1] Co-Chair John Cryan responded with an open letter to reassure employees: “Deutsche Bank remains absolutely rock-solid, given our strong capital and risk position,” he wrote. The situation was sufficiently serious that the German finance minister Wolfgang Schäuble felt obliged to explain that he “had no concerns” about Deutsche. Finance ministers never need to provide reassurances about strong banks.

On February 12th, Deutsche launched an audacious counter-attack: it would buy back $5.4 billion of its own bonds. The prices of its bonds — and especially of its CoCos — rallied and the immediate danger receded.

Fast forward to the day after the June 23rd Brexit vote, when Deutsche’s share price plunged 14 percent. Deutsche then took three further hits at the end of June. First, spreads on its Credit Default Swaps (CDSs) spiked sharply to 230 basis points, up from 95 basis points at the start of the year. These spreads indicate the market’s odds on a default. Second, Deutsche flunked the CCAR again. Third, the latest IMF Country Report stated that “Deutsche Bank appears to be the most important net contributor to systemic risks” in the world financial system and warned the German authorities to urgently (re)examine their bank resolution procedures.

Less than a week later, Deutsche’s CoCos had collapsed again, trading at 75 cents on the euro. The Italian Prime Minister, Matteo Renzi, then put his boot in, suggesting that the difficulties facing Italian banks over their well-publicised bad loans were minuscule compared to the problems that other European banks had with their derivatives. To quote:

If this nonperforming loan problem is worth one, the question of derivatives at other banks, big banks, is worth one hundred. This is the ratio: one to one hundred.

He was referring to the enormous size of Deutsche’s derivatives book.

In this post I take a look at Deutsche’s financial position using information drawn mainly from its last Annual Report. I wish to make two points. The first is that although there are problems with the lack of transparency of Deutsche’s derivatives positions, their size alone is not the concern. Instead, the main concern is the bank’s leverage ratio — the size of its ‘risk cushion’ relative to its exposure or amount at risk — which is low and falling fast.

Deutsche’s derivatives positions

On p. 157 of its 2015 Annual Report: Passion to Perform, Deutsche reports that the total notional amount of its derivatives book as of 31 December 2015 was just over €41.9 trillion, equivalent to about $46 trillion, over twice U.S. GDP. This number is large, but is largely a scare number. What matters is not the size of Deutsche’s notional derivatives book but the size of its derivatives exposure, i.e., how much does Deutsche stand to lose?

There can be little question that this exposure will be nowhere near the notional value and may only be a small fraction of it. One reason is that the notional value of some derivatives – such as some swaps – can bear no relationship to any sensible notion of exposure. A second reason is that many of these derivatives will have offsetting exposures, so that losses on one position will be offset by gains on others.

On the same page, Deutsche reports the net market value of its derivatives book: €18.3 billion, only 0.04 percent of its notional amount. However, this figure is almost certainly an under-estimate of Deutsche’s derivatives exposure.

It is unreliable because many of its derivatives are valued using unreliable methods. Like many banks, Deutsche uses a three-level hierarchy to report the fair values of its assets. The most reliable, Level 1, applies to traded assets and fair-values them at their market prices. Level 2 assets (such as mortgage-backed securities) are not traded on open markets and are fair-valued using models calibrated to observable inputs such as other market prices. The murkiest, Level 3, applies to the most esoteric instruments (such as the more complex/illiquid Credit Default Swaps and Collateralized Debt Obligations) that are fair-valued using models not calibrated to market data – in practice, mark-to-myth. The scope for error and abuse is too obvious to need spelling out.  P. 296 of the Annual Report values its Level 2 assets at €709.1 billion and its Level 3 assets at €31.5 billion, or 1,456 percent and 65 percent respectively of its preferred core capital measure, Tier 1 capital. There is no way for outsiders to check these valuations, leaving analysts with no choice but to work with these numbers while doubtful of their reliability.
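
To put the scale of those model-valued assets in perspective, the percentages just quoted are simply the Level 2 and Level 3 figures divided by Tier 1 capital (€48.7 billion, the figure reported below). A minimal sketch of the arithmetic:

```python
# Scale of Deutsche's model-valued assets relative to Tier 1 capital
# (billions of euros, figures as quoted from the 2015 Annual Report).
tier1_capital = 48.7
level2_assets = 709.1   # fair-valued with models calibrated to observable inputs
level3_assets = 31.5    # fair-valued with models not calibrated to market data

print(f"Level 2 assets / Tier 1 capital: {level2_assets / tier1_capital:.0%}")  # ~1,456 percent
print(f"Level 3 assets / Tier 1 capital: {level3_assets / tier1_capital:.0%}")  # ~65 percent
```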

The €18.3 billion net market value of its derivatives is likely to be an under-estimate because it is based on assumptions (e.g., about hedge effectiveness) and model-based valuations that will be biased on the rosy side. Deutsche’s management will want their reports to impress the analysts and investors on whose confidence they depend.

The International Financial Reporting Standards (IFRS) used by Deutsche also allow considerable scope for creative fiddling, and not just for derivatives: weaknesses include deficiencies in provisions for expected losses and IFRS’s vulnerability to retained earnings manipulation.[2]

Experience confirms that losses on some derivatives positions (e.g., CDSs) can be many multiples of accounting-based or model-based projections of their exposures.[3]

For all these reasons, the true derivatives exposure is likely to be considerably greater – and I would guesstimate many multiples of – any net market value number.[4]

In short, Deutsche’s derivatives exposure is much greater than €18.3 billion but only a small fraction of the ‘headline’ €41.9 trillion scare number.

Bank accounting is the blackest of black holes.

Deutsche’s reported 3.5 percent leverage ratio

Recall that the number to focus on when gauging a bank’s risk exposure is its leverage ratio.[5] Traditionally, the term ‘leverage’ (or sometimes ‘leverage ratio’) was used to describe the ratio of a bank’s total assets to its core capital. However, under the Basel III capital rules, that same term ‘leverage ratio’ is now used to describe the ratio of a bank’s core capital to a new measure known as its leverage exposure. Basel III uses leverage exposure instead of total assets because the former measure takes account of some of the off-balance-sheet risks that the latter fails to include. However, there is usually not much difference between the total asset and leverage exposure numbers in practice. We can therefore think of the Basel III leverage ratio as being (approximately) the inverse of the traditional leverage (ratio) measure.

Armed with these definitions, let’s look at the numbers. On pp. 31, 130 and 137 of its 2015 Annual Report, Deutsche reports that at the end of 2015, its Basel III-defined leverage ratio, the ratio of its Tier 1 capital (€48.7 billion) to its leverage exposure (€1,395 billion) was 3.5 percent.[6] This leverage ratio implies that a loss of only 3.5 percent on its leverage exposure (or approximately, on its total assets) would be enough to wipe out all its Tier 1 capital.
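
The arithmetic behind that figure is straightforward; here is a minimal sketch using the reported numbers, which also shows the (approximate) traditional leverage multiple discussed above:

```python
# Basel III leverage ratio and its (approximate) inverse, the traditional
# leverage multiple (billions of euros, end-2015 figures as quoted above).
tier1_capital = 48.7
leverage_exposure = 1395.0

basel_leverage_ratio = tier1_capital / leverage_exposure   # ~3.5 percent
traditional_leverage = leverage_exposure / tier1_capital   # ~29x

print(f"Basel III leverage ratio: {basel_leverage_ratio:.1%}")
print(f"Traditional leverage multiple: {traditional_leverage:.0f}x")
```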

If you think that 3.5 percent is a low capital buffer, you would be right. Deutsche’s 3.5 percent leverage ratio is also lower than that of any of its competitors and about half that of major U.S. banks.

One can also compare Deutsche’s reported 3.5 percent leverage ratio to regulatory standards. Under the Basel III rules, the absolute minimum required (Tier 1) leverage ratio is 3 percent. Under the U.S. Prompt Corrective Action (PCA) framework, a bank is regarded as ‘well-capitalized’ if it has a leverage ratio of at least 5 percent, ‘adequately capitalized’ if that ratio is at least 4 percent, ‘undercapitalized’ if that ratio is less than 4 percent, ‘significantly undercapitalized’ if that ratio is less than 3 percent, and ‘critically undercapitalized’ if its tangible equity to total assets ratio is less than or equal to 2 percent.[7]
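
As a rough guide to where a given leverage ratio falls among those PCA categories, here is a small illustrative helper. Note that the ‘critically undercapitalized’ category technically uses a different measure (tangible equity to total assets), so it is omitted from the sketch:

```python
# Illustrative sketch of the PCA categories described above, keyed to the
# leverage ratio alone ("critically undercapitalized" uses tangible equity /
# total assets, a different measure, and is therefore not handled here).
def pca_category(leverage_ratio: float) -> str:
    if leverage_ratio >= 0.05:
        return "well-capitalized"
    if leverage_ratio >= 0.04:
        return "adequately capitalized"
    if leverage_ratio >= 0.03:
        return "undercapitalized"
    return "significantly undercapitalized"

print(pca_category(0.035))   # Deutsche's reported 3.5 percent -> "undercapitalized"
```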

The Federal Reserve is in the process of imposing a 5 percent minimum leverage ratio requirement on the 8 U.S. G-SIB (Globally Systemically Important Bank) holding companies and a 6 percent minimum leverage ratio on their federally insured subsidiaries, effective 1 January 2018.[8]

So, Deutsche’s leverage ratio is (a) not much bigger than Basel III’s absolute minimum, (b) low enough to make the bank ‘undercapitalized’ under the PCA framework, and (c) well below the minimum requirements coming through for the big U.S. banks.

The 3.5 percent leverage ratio is also a fraction of the minimum capital standards proposed by experts. On this issue, an important 2010 letter to the Financial Times  by Anat Admati and 19 other renowned experts recommended a minimum leverage ratio of at least 15 percent. Independently, John Allison, Martin Hutchinson, Allan Meltzer, Thomas Mayer and I have also advocated minimum leverage ratios of at least 15 percent, which happens to be close to the average leverage ratio of U.S. banks when the Fed was founded.

There are also reasons to believe that the reported 3.5 percent figure overstates the bank’s ‘true’ leverage ratio. Leaving aside the incentives on the bank’s part to overstate the bank’s financial strength, which I touched upon earlier, the first points to note here are that Deutsche uses Tier 1 capital as the numerator, and that €48.7 billion is a small capital cushion for a systemically important bank.

The numerator in the leverage ratio: core capital

So let’s consider the numerator further, and then the denominator.

You might recall that I described the numerator in the leverage ratio as ‘core’ capital. Now the point of core capital is that it is the ‘fire resistant’ capital that can be counted on to support the bank in the heat of a crisis. The acid test of a core capital instrument is simple: if the bank were to fail tomorrow, would the capital instrument be worth anything? If the answer is Yes, the capital instrument is core; if the answer is No, then it is not.

Examples of capital instruments that would fail this test but are still commonly but incorrectly included in core capital measures are goodwill and Deferred Tax Assets (DTAs), which allow a bank to claim back tax on incurred losses if/when the bank subsequently returns to profitability.

Deutsche also reports (p. 130) a more conservative capital measure, Common Equity Tier 1 (CET1), equal to €44.1 billion. This CET1 measure would have been more appropriate because it excludes softer non-core capital instruments – Additional Tier 1 capital – that are included in the Tier 1 measure.

If one now replaces Tier 1 capital with CET1, one gets a leverage ratio of 44.1/1,395 = 3.16 percent.

Even CET1 overstates the bank’s core capital, however. One reason for this overstatement is that the regulatory definition of CET1 includes a ‘sin bucket’ of up to 15 percent of non-CET1 (i.e., softer) capital instruments, including DTAs, Mortgage Servicing Rights, and the capital instruments of other financial institutions.[9] The consequence is that Basel III-approved CET1 can overstate the ‘true’ CET1 by up to 1/0.85 - 1 = 17.6 percent.

Yet even stripped of its silly sin bucket — which was a concession to banks’ lobbying to weaken capital requirements — a ‘pure’ CET1 measure still overstates core capital. Basel III defines CET1 as (approximately) Tangible Common Equity (TCE) plus retained earnings, accumulated other comprehensive income, and other disclosed reserves.[10] Of these items, only TCE really belongs in a measure of core capital, because the other items (especially retained earnings) are manipulable, i.e., these items are not core capital at all.

And what exactly is Tangible Common Equity? Well, the ‘tangible’ in TCE means that the measure excludes soft capital like goodwill or DTAs, and the ‘common’ in TCE means that it excludes more senior capital instruments like preference shares or hybrid capital instruments such as CoCos.

The importance of TCE as the ultimate core capital measure was highlighted in a 2011 speech by Federal Reserve Governor Daniel Tarullo. When reflecting on the experience of the Global Financial Crisis, Tarullo observed:

It is instructive that during the height of the crisis, counterparties and other market actors looked almost exclusively to the amount of tangible common equity held by financial institutions in evaluating the creditworthiness and overall stability of those institutions and essentially ignored any broader capital measures altogether.[11]

As a consequence, CET1 is itself too broad a capital measure, but we don’t have data on the TCE core capital measure we would really want.

The denominator in the leverage ratio: leverage exposure vs. total assets, both too low

Turning to the denominator, first note that the leverage exposure (€1,395 billion) is less than the reported total assets (€1,629 billion, p. 184). You might recall that the leverage exposure is supposed to take account of the off-balance-sheet risks that the total assets measure ignores, but it doesn’t. Instead, the measure that does take account of (some) off-balance-sheet exposure is less than the total assets measure that does not. If this has you scratching your head, then your brain is working. The leverage exposure is too low.

If one replaces the leverage exposure in the denominator with total assets, one gets a leverage ratio equal to 44.1/1,629 = 2.71 percent, comfortably below Basel III’s 3 percent absolute minimum. But this total assets measure is itself too low, because it ignores the off-balance-sheet risks, which typically dwarf the on-balance-sheet exposures.

In short, the 2.71 percent leverage ratio overstates the ‘true’ leverage ratio because it overstates the numerator and understates the denominator.

Market-value vs. book-value leverage ratios

There are yet still more problems. The 2.71 percent leverage ratio is a book-value estimate. Corresponding to the book-value estimate is the market-value leverage ratio, which is the estimate reflected in Deutsche’s stock price. In the present context, the latter is the better indicator, because it reflects the information available to the market, whereas the book value merely reflects information in the accounts. If there is new information, or if the market does not believe the accounts, then the market value will reflect that market view, but the book value will not.

One can obtain the market value estimate by multiplying the book-value leverage ratio by the bank’s price-to-book ratio, which was 44.4 percent at the end of 2015. Thus, the contemporary market-value estimate of Deutsche’s leverage ratio was 2.71 percent times 44.4 percent = 1.20 percent.

Since then Deutsche’s share price has fallen by almost 43 percent and Deutsche’s latest market-value leverage ratio is now about 0.71 percent.
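
The whole chain of estimates, from the book-value ratio to the latest market-value figure, is again simple arithmetic. A minimal sketch follows; the final number depends on exactly how large the post-2015 price fall is taken to be, hence the small difference from the roughly 0.71 percent quoted above:

```python
# Book-value vs. market-value leverage ratio estimates (sketch of the arithmetic
# above; capital and assets in billions of euros, from the 2015 Annual Report).
cet1_capital = 44.1
total_assets = 1629.0
price_to_book_2015 = 0.444    # price-to-book ratio at end-2015
price_fall_since = 0.43       # approximate share-price decline since end-2015

book_value_ratio = cet1_capital / total_assets                  # ~2.71 percent
market_value_2015 = book_value_ratio * price_to_book_2015       # ~1.20 percent
market_value_now = market_value_2015 * (1 - price_fall_since)   # roughly 0.7 percent

print(f"Book-value leverage ratio (end-2015):   {book_value_ratio:.2%}")
print(f"Market-value leverage ratio (end-2015): {market_value_2015:.2%}")
print(f"Market-value leverage ratio (latest):   {market_value_now:.2%}")
```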

Policy implications

So what’s next for the world’s most systemically dangerous bank?

At the risk of having to eat my words, I can’t see Deutsche continuing to operate for much longer without some intervention: the chronic has become acute. Besides its balance sheet problems, there are its cost of funding, which exceeds its return on assets; its poor risk management; its antiquated legacy IT infrastructure; its inability to manage its own complexity; and its collapsing profits — and the peak pain is still to hit. Deutsche reminds me of nothing more than a boxer on the ropes: one more blow could knock him out.

If I am correct, there are only three policy possibilities: #1 Deutsche will be allowed to fail, #2 it will be bailed in, or #3 it will be bailed out.

We can rule out #1: the German/ECB authorities allowing Deutsche to go into bankruptcy. They would be worried that that would trigger a collapse of the European financial system and they can’t afford to take the risk. Deutsche is too-big-to-fail.

Their preferred option would be #2, a bail-in, the only resolution procedure allowed under EU rules, but this won’t work. Authorities would be afraid to upset bail-in-able investors and there isn’t enough bail-in-able capital anyway.

Which consideration leads to the policy option of last resort — a good-old bad-old taxpayer-financed bail-out. Never mind that EU rules don’t allow it and never mind that we were promised never again.  Never mind, whatever it takes.

______________

[1] CoCo investors feared that their bonds’ triggers would be breached and that their bonds would be converted into equity, which might soon become worthless.

[2] See T. Bush, “UK and Irish Banks Capital Losses – Post Mortem,” Local Authority Pension Fund Forum, 2011, and G. Kerr, “Law of Opposites: Illusory Profits in the Financial Sector,” Adam Smith Institute, 2011.

[3] A. G. Haldane, “Capital Discipline,” speech given to the American Economic Association, Denver, Colorado, January 9, 2011, chart 3.

[4] Indeed, as partial confirmation of this conjecture, on p. 137, the Annual Report gives an estimated ‘total derivatives exposure’ of  €215 billion, nearly 12 times the €18.3 billion net market value of its derivatives book and over 4 times its Tier 1 capital.

[5] In its Annual Report, Deutsche highlights as its ‘headline’ capital ratio the Tier 1 capital ratio, the ratio of Tier 1 capital to Risk-Weighted Assets (RWA): this ratio is 12.3 percent if one uses the ‘fully loaded’ measure, which assumes that CRR/CRD 4, the EU Capital Requirements Regulation and Directive, has been fully implemented. However, the RWA measure is discredited (see here) and these so-called capital ratios are fictitious, not least because they assume that most assets have no risk. To illustrate, the ratio of Deutsche’s RWA to total assets is only 24.4 percent, which suggests that 75.6 percent of its assets are riskless!

[6] These numbers refer to the CRR/CRD 4 ‘fully loaded’ measures.

[7] See here, p. 2.1-8.

[8] See Board of Governors of the Federal Reserve System, “Agencies Adopt Enhanced Supplementary Leverage Ratio Final Rule and Issue Supplementary Leverage Ratio Notice of Proposed Rulemaking,” press release, April 8, 2014.

[9] For more on the sin bucket, see T. F. Huertas, “Safe to Fail: How Resolution Will Revolutionise Banking,” New York: Palgrave, 2014, p. 23; and “Basel III: A global regulatory framework for more resilient banks and banking systems,” (Basel Committee, June 2011), pp. 21-6 and Annex 2.

[10] For a more complete definition of CET1 capital, see Basel Committee on Banking Supervision (BCBS) “Basel III: A global regulatory framework for more resilient banks and banking systems,” (Basel Committee, June 2011), p. 13.

[11] D. K. Tarullo, “The Evolution of Capital Regulation,” speech to the Clearing House Business Meeting and Conference, New York, November 9, 2011.

[Cross-posted from Alt-M.org]

The New York Times, in its infinite wisdom, has figured out how poor states can become rich states: simply put, they need only increase taxes and spending. It recently published a piece entitled “The Path to Prosperity Is Blue,” which suggested that the states that have maintained solid growth over the last three decades largely owe that growth to high state government spending, and that poor states should follow that formula as well.

The statistical derivation of this conclusion comes from the fact that the wealthiest states of the U.S. tend to be blue states, which have higher taxes and spending. By this logic, spending drives growth. 

While there is indeed a relationship between a state’s spending and its GDP, the causality is completely contrary to what the Times portrays. The reality is that states that become prosperous invariably spend more money. Some of that can represent more spending on public goods–Connecticut does seem to have better schools than Mississippi–but far more of it is simply captured by government interests. While California may have created a quality public university system in the 1950s and 1960s with its newfound wealth, the reason its taxes are so high today is that it has a ruinous public pension system it needs to finance. Its high spending isn’t doing its citizenry any good at all.

New York City and California, two high-tax regions, became prosperous in large part because they were (and remain) hubs for immigrants and ambitious, entrepreneurial Americans who helped create the industries that to this day drive the economies of each state. California’s defense and IT industries did benefit from public investment as well, of course, but it was investment from the federal government, and in each case it merely served as a catalyst for the development of industries that went far beyond the government’s initial investment.

To tell Mississippi that it could become prosperous and pull its citizenry out of poverty if it only doubled taxes is an absurd notion that amounts to economic malpractice. What Mississippi has to do is figure out how to attract and retain talented individuals, which is easier said than done. Unfortunately, the Jacksons and Peorias of the world are not lures to the ambitious Indian engineer or Chinese IT professional, who’d rather take their chances in Silicon Valley, Los Angeles, or anywhere else where the quality of life is good and jobs are plentiful.

The lesson to take away from a comparison of the economic status of the fifty states is that agglomeration economies are a vaguely understood but critically important phenomenon, that location matters, and that it is enormously difficult for states to pivot when their main industries falter. None of these factors can be said to be driven by government spending.

An updated body camera scorecard highlights a disturbing state of affairs in body camera policy that lawmakers should strongly resist. A majority of the body camera policies examined by Upturn and the Leadership Conference on Civil and Human Rights received the lowest possible score when it came to officer review of footage and to citizens alleging misconduct having access to footage, meaning that the departments were either silent on those issues or had policies in place that are contrary to the civil rights principles outlined in the scorecard. Such policies do not promote transparency and accountability, and they serve as a reminder that body cameras can only play a valuable role in criminal justice reform if they’re governed by the right policies.

Upturn and the Leadership Conference on Civil and Human Rights looked at the body camera policies of fifty departments, including all departments in major cities that have either outfitted their officers with body cameras or will do so in the near future. Other departments scored include those that received at least $500,000 in body camera grants from the Department of Justice, as well as the Baton Rouge and Ferguson Police Departments.

Each department was given one of four possible scores in eight categories (personal privacy, officer review, biometric use, footage retention, etc.). Departments were awarded a red ex, a yellow circle, or a green check, depending on how consistent their body camera policy is with the civil rights principles outlined in the scorecard, with a red ex indicating inconsistency or silence and a green check indicating consistency. A fourth score, the “?”, was given to policies that were not publicly available.

Below are the scoring criteria for officer review and footage access for citizens filing complaints:

[Scoring criteria graphics omitted.]

Forty of the fifty departments received the lowest possible score for “Officer Review,” and not one received a green check.

When it comes to access to footage, the scores are marginally better, with four departments being awarded green checks. However, thirty-nine of the departments received the lowest score in the “Footage Access” category.

Thirty-five (70%) of the departments received the lowest possible score for both officer review and access to footage. Among these departments are some of America’s largest, including the Los Angeles Police Department, the New York Police Department, the Houston Police Department, and the Philadelphia Police Department.

Regrettably, the federal government has sent body camera funds to departments with the lowest-scoring officer review and footage access policies. Eleven of the thirty-five departments that received a red ex for officer review and footage access were awarded at least $500,000 in body camera grants by the Department of Justice.

Body cameras can only be tools for increased transparency and accountability in law enforcement with the right policies in place. Unfortunately, Upturn and the Leadership Conference on Civil and Human Rights’ scorecard reveals not only that many departments have poor accountability and transparency policies but also that the Department of Justice does not treat such policies as disqualifying when it comes to body camera grants.

 

Last Thursday, a Chicago police officer shot unarmed 18-year-old Paul O’Neal in the back, killing him. O’Neal reportedly crashed a stolen car into a police vehicle during a chase and then fled on foot. Two officers then fired at O’Neal. This is the kind of incident where body camera footage would be very helpful to investigators. The officer who shot O’Neal was outfitted with a body camera. Unfortunately, the camera wasn’t on during the shooting, raising difficult questions about the rules governing non-compliance with body camera policy. While there is undoubtedly a learning curve associated with body cameras, officers who fail to have them on during use-of-force incidents should face harsh consequences.

Body camera footage of O’Neal’s shooting would make the legality of the killing easier to determine. The Supreme Court ruled in Tennessee v. Garner (1985) that a police officer cannot use lethal force on a fleeing suspect unless “the officer has probable cause to believe that the suspect poses a significant threat of death or serious physical injury to the officer or others.” The Chicago Police Department’s own use-of-force guidelines allow officers to use a range of tools (pepper spray, canines, Tasers) to deal with unarmed fleeing suspects under some circumstances, but the firearm is not one of them.

O’Neal’s shooting would be legal if the officer who shot him had probable cause to believe that he posed a threat of death or serious injury to members of the public or police officers. Given the information available, perhaps most significantly the fact that O’Neal was unarmed, it looks likely that O’Neal died as a result of an unjustified use of lethal force.

So far, the Chicago Police Department has stripped three officers involved in the chase and shooting of police powers, with Superintendent Eddie Johnson saying that the officers violated department policy. O’Neal’s mother has filed a federal civil rights lawsuit, alleging that her son was killed “without legal justification.”

O’Neal’s shooting is clearly the kind of incident police body cameras should film. There are important debates related to body cameras capturing footage of living rooms, children, or victims of sexual assault. But O’Neal’s death is the kind of incident that body camera advocates have consistently wanted on record. The shooting took place outside (thereby posing few privacy concerns) and involved a lethal use of force. Indeed, Chicago’s own body camera policy states that incidents such as O’Neal’s shooting should be filmed.

Investigators reportedly don’t think that the body camera was intentionally disabled, with the officer’s inexperience with the camera or the crash playing a role in the camera not filming the shooting. This can be handled by better training, but lawmakers should consider policies that harshly punish officers who don’t have their body cameras on when they should.

The American Civil Liberties Union (ACLU) has proposed one such policy. Under the ACLU’s body camera policy, if an officer fails to activate his camera or interferes with the footage, the following measures kick in:

1. Direct disciplinary action against the individual officer.

2. The adoption of rebuttable evidentiary presumptions in favor of criminal defendants who claim exculpatory evidence was not captured or was destroyed.

3. The adoption of rebuttable evidentiary presumptions on behalf of civil plaintiffs suing the government, police department and/or officers for damages based on police misconduct. The presumptions should be rebuttable by other, contrary evidence or by proof of exigent circumstances that made compliance impossible.

The third recommendation is of note in the O’Neal shooting, given that O’Neal’s mother has filed a federal civil rights lawsuit. If the ACLU’s body camera policy were in place, the evidentiary presumption would favor O’Neal’s mother, not the Chicago Police Department. However, the ACLU’s policy doesn’t make clear how a judge would oversee this shift in evidentiary presumption in such cases.

It’s unrealistic for criminal justice reform advocates to expect body cameras to be a police misconduct panacea. We shouldn’t be surprised if reports of cameras not being on when they should have been emerge as more and more police departments issue body cameras. Lawmakers should anticipate body camera growing pains, but they should also consider policies ensuring that failure to comply with body camera rules results in harsh consequences.

On July 25, Miami-Dade Florida circuit judge Teresa Pooler dismissed money-laundering charges against Michell Espinoza, a local bitcoin seller. The decision is a welcome pause on the road to financial serfdom. It is a small setback for authorities who want to fight crime (victimless or otherwise) by criminalizing and tracking the “laundering” of the proceeds, and who unreasonably want to do the tracking by eliminating citizens’ financial privacy, that is, by unrestricted tracking of their subjects’ financial accounts and activities. The US Treasury’s Financial Crimes Enforcement Network (FinCEN) is today the headquarters of such efforts.

As an Atlanta Fed primer reminds us, the authorities’ efforts are built upon the Bank Secrecy Act (BSA) of 1970. (A franker label would be the Bank Anti-Secrecy Act.) The Act has been supplemented and amended many times by Congress, particularly by Title III of the USA PATRIOT Act of 2001, and expanded by diktats of the Federal Reserve and FinCEN. The laws and regulations on the books today have “established requirements for recordkeeping and reporting of specific transactions, including the identity of an individual engaged in the transaction by banks and other FIs [financial institutions].”  These requirements are collectively known as Anti-Money-Laundering (AML) rules.

In particular, banks and other financial institutions are required to obey “Customer Identification Program” (CIP) protocols (aka “know your customer”), which require them to verify and record identity documents for all customers and to “flag suspicious customers’ accounts.” Banks and financial institutions must submit “Currency Transaction Reports” (CTRs) on any customer’s deposits, withdrawals, or transfers of $10,000 or more. To foreclose the possibility of people using unmonitored non-banks to make transfers, FinCEN today requires non-depository “money service businesses” (MSBs) – which FinCEN defines to include “money transmitters” like Western Union and issuers of prepaid cards like Visa – also to know their customers. Banks and MSBs must file “Suspicious Activity Reports” (SARs) on transactions above $5,000 that may be associated with money laundering or other criminal activity. Individuals must also file reports. Carrying $10,000 or more into or out of the US triggers a “Currency or Monetary Instrument Report” (CMIR). Any US citizen who has $10,000 or more in foreign financial accounts, even if the money never moves, must annually file “Foreign Bank and Financial Accounts Reports” (FBARs).
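
As a rough illustration of how these reporting triggers stack up, here is a minimal sketch; the function, its activity categories, and the simplified thresholds are my own reading of the rules summarized above, not FinCEN’s definitions.

# A minimal sketch, under a deliberately simplified reading of the thresholds
# described above. Function name and categories are hypothetical illustrations.
def reports_triggered(amount_usd, activity):
    """Return which of the reports described above an activity would trigger."""
    reports = []
    if activity == "cash_transaction" and amount_usd >= 10_000:
        reports.append("CTR")    # Currency Transaction Report
    if activity == "suspicious_transaction" and amount_usd > 5_000:
        reports.append("SAR")    # Suspicious Activity Report
    if activity == "border_carry" and amount_usd >= 10_000:
        reports.append("CMIR")   # Currency or Monetary Instrument Report
    if activity == "foreign_account_balance" and amount_usd >= 10_000:
        reports.append("FBAR")   # Foreign Bank and Financial Accounts Report
    return reports

print(reports_triggered(10_000, "cash_transaction"))       # ['CTR']
print(reports_triggered(6_000, "suspicious_transaction"))  # ['SAR']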

In addition, state governments license money transmitters and impose various rules on their licensees.

When most of these rules were enacted, before 2009, there were basically only three convenient (non-barter) conduits for making a large-value payment. If Smith wanted to transfer $10,000 to Jones, he could do so in person using cash, which would typically involve a large withdrawal followed by a large deposit, triggering CTRs. He could make the transfer remotely using deposit transfer through the banking system, triggering CTRs or SARs if suspicious. Or he could use a service like Western Union or Moneygram, again potentially triggering SARs. For the time being, the authorities had the field pretty well covered.

Now come Bitcoin and other cryptocurrencies. Cash is of course still a face-to-face option. But today if Smith wants to transfer $10,000 remotely to Jones, he need not go to a bank or Western Union office. He can accomplish the task by (a) purchasing $10,000 in Bitcoin, (b) transferring the BTC online to Jones, and (c) letting Jones sell them for dollars (or not).  The authorities would of course like to plug this “loophole.” But the internet, unlike the interbank clearing system, is not a limited-access conduit whose users can be commandeered to track and report on its traffic. No financial institution is involved in a peer-to-peer bitcoin transfer. Granted, Smith will have a hard time purchasing $10,000 worth of Bitcoins without using a bank deposit transfer to pay for them, which pings the authorities, but in principle he could quietly buy them in person with cash.

In the recent legal case, it appears that this possibility for unmonitored transfers was noticed by Detective Ricardo Arias of the Miami Beach Police Department, who “became intrigued” and presumably alarmed upon learning about Bitcoin at a meeting with the US Secret Service’s Miami Electronic Crimes Task Force. Detective Arias and Special Agent Gregory Ponzi decided to investigate cash-for-Bitcoin sales in South Florida. (I take details about the case from Judge Pooler’s decision in State of Florida v. Michell Abner Espinoza (2016).) Arias and Ponzi went to localbitcoins.com to find a seller willing to make a cash sale face-to-face. Acting undercover, Arias contacted one Michell Espinoza, apparently chosen because his hours were flexible. Arias purchased $500 worth of Bitcoin at their first meeting in a Miami Beach coffee shop, and later purchased $1000 worth at a meeting in a Häagen-Dazs ice cream shop in Miami. Arias tried to make a third purchase for $30,000 in a hotel room where surveillance cameras had been set up, but Espinoza rightly suspected that the currency offered was counterfeit, and refused it. At that meeting, immediately after the failed purchase, Espinoza was arrested. He was charged with one count of unlawfully operating a money services business without a State of Florida license, and two counts of money laundering under Florida law.

Judge Pooler threw out all three charges. Evaluating her arguments as a monetary economist, I find that some are insightful, while others are beside the point or confused. On the charge that Espinoza illegally operated an unlicensed money services business, she correctly noted that Bitcoin is not widely accepted in exchange for goods and thus “has a long way to go before it is the equivalent of money.” Accordingly, “attempting to fit the sale of Bitcoin into a statutory scheme regulating money services businesses is like fitting a square peg in a round hole.” However, she also offered less compelling reasons for concluding that Bitcoin is not money, namely that it is not “backed by anything” and is “certainly not tangible wealth and cannot be hidden under a mattress like cash and gold bars.” Federal Reserve notes are money without being backed by anything, and bank deposits are money despite being intangible. Gold bars are today not money (that is, not commonly accepted as a medium of exchange).

Judge Pooler further correctly noted that Espinoza did not receive currency for the purpose of transmitting it (or its value) to any third party on his customer’s behalf, as Western Union does. He received cash only as a seller of Bitcoin. Nor, she held, does Bitcoin fall into any of the categories under Florida’s statutory definition of a “payment instrument,” so Espinoza was not operating a money services business as defined by the statute. Bitcoin is indeed not a payment instrument as defined by the statute because it is not a fixed sum of “monetary value” in dollars like the categories of instruments that are listed by the statute. It is an asset with a floating dollar price, like a share of stock.

Here Judge Pooler accepted a key defense argument (basically, “the defendant was not transmitting money, but only selling a good for money”) that was rejected by Judge Collyer in U.S. v. E-Gold (2008). In the e-gold system, Smith could purchase and readily transfer to Jones claims to units of gold held at e-gold’s warehouse. Federal officials successfully busted e-gold for “transmitting money” without the proper licenses. Judge Collyer accepted the prosecution’s argument that selling gold to Smith, providing a vehicle for him to transfer it to Jones, and buying it back from Jones is tantamount to transmitting money from Smith to Jones. Of course the Espinoza case is different in that Espinoza did not provide a vehicle for transmitting Bitcoin to a third party, nor did he buy Bitcoin from any third party.

On the charge of money laundering, Judge Pooler found that there was no evidence that Espinoza acted with the intent to promote illicit activity or disguise its proceeds. Further, Florida law is too vague to know whether it applies to Bitcoin transactions. Thus: “This court is unwilling to punish a man for selling his property to another, when his actions fall under a statute that is so vaguely written that even legal professionals have difficulty finding a singular meaning.”

I expect that FinCEN will now want to work with the State of Florida, and other states, to rewrite their statutory definitions of money services businesses and money laundering to reinforce its 2013 directive according to which Bitcoin exchanges must register as MSBs and so submit to “know your customer” and “file reports on your customer” rules. If even casual individual Bitcoin sellers like Espinoza must also register as MSBs, that will spell the end of legal local Bitcoin-for-cash trades.

[Cross-posted from Alt-M.org]

Last Friday, President Obama quietly signed legislation requiring special labeling for commercial foods containing genetically modified organisms (GMOs), meaning plants and animals whose desirable genetic traits were inserted directly in a laboratory. Gene modification typically yields plants and animals that take less time to reach maturity, have greater resistance to drought or disease, or have other desirable traits like sweeter corn or meatier livestock. Yet some people oppose these scientific advances, for reasons that aren’t all that clear.

Most of the foods that humans and animals have consumed for millennia have been genetically modified. Usually this was done through the unpredictable, haphazard technique of cross-fertilization, a technique whose development marked the dawn of agriculture. Yet the new law targets only the highly precise gene manipulations done in laboratories. The labeling requirement comes in spite of the fact that countless scientific organizations—including the American Association for the Advancement of Science, the National Academy of Sciences, the World Health Organization, the American Medical Association, and the British Royal Society—have concluded that GMOs pose no more threat to human health than new organisms developed through traditional methods.

Accordingly, some Obama critics have responded to his bill-signing by sarcastically quoting his earlier vows to rely on science when making policy. The cleverer critics have even asked what comes next: dihydrogen monoxide warnings? Labels that foods contain no fairies or gremlins?

The critics overlook the incredible weakness of the new law, which can be satisfied with something as unobtrusive as a nondescript QR code linking to a GMO notice. In fact, anti-GMO activists oppose the new law because it preempts more rigorous regulation. And that’s exactly what President Obama and Congress intended to do.

The immediate reason for the new legislation is a more onerous 2014 Vermont law that would have affected the food supply chain, raising consumer prices nationally. Similar requirements were percolating in other states, advanced by anti-GMO activists (and agribusiness groups that don’t want competition from GMO products). With a stroke of the pen, President Obama and federal lawmakers have used a bad but meager requirement to counteract those far worse state laws.

This is not the only time the Obama White House has helped consumers and advanced science, to the frustration of the anti-GMO crowd. His administration previously approved the AquAdvantage salmon that had languished in bureaucratic review hell for decades.

Cato’s Regulation has covered GMOs and the broader biotech controversy for decades. You can see a couple of those articles if you click on the last two links above. Case Western Reserve law professor Jonathan Adler will have an article on the new labeling law in the magazine’s fall issue.

This summer, the United Nations High Commissioner for Refugees (UNHCR) announced that the worldwide refugee crisis, the worst in absolute terms since World War II, had reached a new record high. Recognizing that the refugee crisis is beyond what governments alone can handle, UNHCR has urged nations to create “privately sponsored admission schemes,” allowing the private sector to shoulder the burden of resettlement.

Many governments have heeded this call, but despite the strong philanthropic traditions among Americans, the United States has still not created such a program. There are many questions that need to be answered before the government can move forward. The most pressing is how to select the refugees for resettlement. Here are several different models for sponsorship that policymakers should consider:

1) Use the current system without an option for sponsors to select specific refugees. Except in a few rare cases (see #3 below), the State Department, UNHCR, or one of UNHCR’s non-governmental partner organizations identifies refugees in need of resettlement. While sponsors would not select the refugees that they wished to sponsor under this model, the government could, as it does when placing refugees with the nonprofits that coordinate all resettlement today, match refugees with sponsors that it felt were best suited to meet their needs. This method’s primary virtue is that it would be the simplest to administer and implement because it requires no further changes to the system.

2) Sponsors choose from a pool of refugees selected under the current system. In this version, sponsors would choose from refugees already identified under the normal refugee vetting and identification process who are designated for resettlement, based on information that the State Department already collects. This was how American sponsors selected refugees under the Reagan-era private refugee sponsorship program. Depending on the sponsorship model, this could impose new administrative costs on the agency to provide oversight of sponsors and protect against trafficking, but would create a much stronger incentive for sponsors who are interested in aiding a particular group of refugees to step forward and actually sponsor them.

3) Expand family sponsorship under the current system. The Priority 3 (P-3) family reunification program provides for a very narrow group of refugees to be “sponsored,” albeit without the financial commitment that the UNHCR model proposes. P-3 allows U.S. residents to ask the State Department to allow their family members abroad to apply directly to the U.S. refugee program. P-3 is rarely used because it is limited to certain nationalities, applies only to U.S. residents who entered as refugees, and accepts only their immediate family members: minor children and spouses.

Removing these restrictions and opening it up to extended family members, as was done for Bosnians in the 1990s under the P-4 and P-5 programs, could provide a basis for a large private sponsorship program. P-3 “sponsors” are also not required to take financial responsibility for the family member. In order for private sponsorship to build on the total number of refugees admitted, these sponsors would either need to compensate the government for its expenses or provide services that the government currently provides on its behalf.

The benefit of family sponsorship is that DNA testing can accurately verify a familial relationship, alleviating concerns about human trafficking and other potential security issues. While DNA testing can be expensive and time-consuming, family sponsorship would provide a powerful incentive for Americans to sponsor refugees.

4) Allow sponsors to select any refugee that they choose. While simplest to state, this method would be the most difficult and costly to administer. A person who submits a sponsorship application would need to be thoroughly vetted to guard against trafficking, an entirely new procedure that the agency would need to develop. If the refugee had not already been identified by UNHCR, an NGO, or the State Department, a new evaluation would have to be conducted to verify their claims. Homeland Security officials would also need to interview the refugee (under the current process, they set up in a camp and identify refugees for resettlement on site).

Nevertheless, these issues are worth overcoming. Canada, which has the most successful private sponsorship model in the world, has an open sponsorship model that allows sponsors to name any specific person who meets the definition of a refugee. Sponsors select refugees both with and without family relationships—though family-linked cases dominate the program—and more than 225,000 refugees have been privately sponsored since 1979 under the Canadian model.

No matter what model the agency chooses to go with, the American private sector could do quite a lot to alleviate the refugee crisis. Many businesses are already contributing millions of dollars to aid refugees overseas, and major U.S. philanthropists have said that they want to see a private refugee program created in the United States. It is time that the government allowed Americans to save refugees on their own.

Earlier this month, the U.S. Court of Appeals for the D.C. Circuit (CADC) ruled that the U.S. Department of Justice’s Federal Criminal Discovery Blue Book for prosecutions was exempt from Freedom of Information Act (FOIA) requests. The National Association of Criminal Defense Lawyers (NACDL) filed the suit to make the book public, and for good reason.

For background, criminal discovery is the process by which a prosecutor’s office turns over evidence to the defense team that is relevant to the criminal case before trial. In particular, evidence that might be helpful or exculpatory to a criminal defendant must be turned over under Brady v. Maryland (1963) and subsequent cases. For example, if investigators independently find an eyewitness who supports a defendant’s alibi, or discover that a witness or police officer has a history of dishonesty, that information must be turned over to the defense counsel in the furtherance of justice. Such evidence is known as “Brady material.”

The origin of NACDL’s case dates back to the bungled prosecution of the late Sen. Ted Stevens (R-AK). A federal judge threw out Stevens’ 2008 conviction for corruption because the DOJ hid evidence from the defense team, including contradictory statements by a star witness that were crucial to proving Stevens’ alleged criminal intent. Furthermore, the judge ordered an independent inquiry into the handling of the case that resulted in a damning 514-page report faulting the DOJ for its mismanagement and “egregious misconduct” in the case.

The prosecution and conviction ended the career of a long-serving United States Senator. If the DOJ could do that to him, it may do (and probably has done) the same to people far less powerful. Consequently, Congress held hearings to consider whether to pass new legislation to ensure discovery would be properly handled within the Department of Justice. But as NACDL wrote in its 2014 complaint:

DOJ asserted that federal legislation was unnecessary to prevent future discovery abuses because it had instituted various internal reforms. During the hearings, DOJ asserted it had implemented “rigorous enhanced training” to ensure that “prosecutors and agents [have] a full appreciation of their responsibilities” under federal law. As part of this effort, DOJ stated that it had created a “Federal Criminal Discovery Bluebook” that “comprehensively covers the law, policy, and practice of prosecutors’ disclosure obligations” under Brady v. Maryland, 373 U.S. 83 (1963), Giglio v. United States, 405 U.S. 150 (1972), and their progeny. According to DOJ, the Blue Book was “distributed to prosecutors nationwide in 2011” and “is now electronically available on the desktop of every federal prosecutor and paralegal.” [internal citations omitted]

In short, the Blue Book should assure everyone that DOJ prosecutors will play by the rules…but everyone will just have to take the DOJ at its word on that because the Blue Book is off-limits to the public.

Although the CADC agreed with the lower court ruling and the DOJ’s interpretation of current precedent, Senior Judge David Sentelle wrote a concurrence to the opinion, joined by Senior Judge Harry Edwards, that read:

It is often said that justice must not only be done, it must be seen to be done. Likewise, the conduct of the U.S. Attorney must not only be above board, it must be seen to be above board. If the people cannot see it at all, then they cannot see it to be appropriate, or more is the pity, to be inappropriate. I hope that we shall, in spite of Schiller, someday see the day when the people can see the operations of their Department of Justice.

In short, I join the judgment of the majority, not because I want to, but because I have to.

Such a concurrence signals that the guiding precedent in Schiller should be re-examined and that information vital to the public interest should be made public. “Just trust us” is not a reasonable guarantee of governmental and prosecutorial accountability.

The NACDL released a statement that they will file for an en banc hearing at CADC. You can read the opinion and concurrence here. Judge Sentelle delivered the B. Kenneth Simon Lecture on Constitutional Thought at Cato on Constitution Day 2013 that you can read here.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

With the end of Convention season mercifully upon us, we thought we ought to have a look at what the party platforms have to say about energy and the environment, with an eye on climate change policies in particular.

We’ll start out with the Democratic Party Platform.

The Democrats are of the mind that human-caused climate change is one of the major problems facing the country/world today, describing it as “an urgent threat and a defining challenge of our time.”

It’s unclear that the voters feel that way… although part of the Democrats’ strategy for this election seems to be to try to persuade them otherwise.

The Democratic platform is chock full of government actions that promise to initiate, broaden and extend the current set of rules, regulation, and orders seeking to reduce our emissions of carbon dioxide (and other greenhouse gases), largely by way of lessening (on the way to eliminating) our reliance on fossil fuels as our primary source of energy production. This collection of promised federal actions is large both in scope and number and includes everything from pursuing a carbon tax

Democrats believe that carbon dioxide, methane, and other greenhouse gases should be priced to reflect their negative externalities, and to accelerate the transition to a clean energy economy and help meet our climate goals.

to furthering regulatory control

Democrats are committed to defending, implementing, and extending smart pollution and efficiency standards, including the Clean Power Plan, fuel economy standards for automobiles and heavy-duty vehicles, building codes and appliance standards

to rallying international efforts

In the first 100 days of the next administration, the President will convene a summit of the world’s best engineers, climate scientists, policy experts, activists, and indigenous communities to chart a course to solve the climate crisis. Our generation must lead the fight against climate change and we applaud President Obama’s leadership in forging the historic Paris climate change agreement. We will not only meet the goals we set in Paris, we will seek to exceed them and push other countries to do the same by slashing carbon pollution and rapidly driving down emissions of potent greenhouse gases like hydrofluorocarbons.

and even to prosecuting folks who don’t toe the party line

Democrats also respectfully request the Department of Justice to investigate allegations of corporate fraud on the part of fossil fuel companies accused of misleading shareholders and the public on the scientific reality of climate change.

All in all, an extremely ambitious plan:

We are committed to a national mobilization, and to leading a global effort to mobilize nations to address this threat on a scale not seen since World War II.

The Republicans see things almost completely differently.

The Republican Party Platform does not share the Democrats’ concerns that climate change is a major pressing issue. Instead it describes how the severity of the issue has been grossly distorted through “intolerance toward scientists and others” who dissent from the “orthodoxy.”

The Republican platform wants both to rein in most of the current climate efforts put in place by the Obama Administration and to put the kibosh on any new federal actions that would reduce carbon dioxide emissions by restricting energy choice. As for current actions, the Platform includes things like

We will likewise forbid the EPA to regulate carbon dioxide, something never envisioned when Congress passed the Clean Air Act.

and

We reject the agendas of both the Kyoto Protocol and the Paris Agreement, which represent only the personal commitments of their signatories; no such agreement can be binding upon the United States until it is submitted to and ratified by the Senate.

and

We demand an immediate halt to U.S. funding for the U.N.’s Framework Convention on Climate Change (UNFCCC) in accordance with the 1994 Foreign Relations Authorization Act.

and

We oppose any carbon tax.

Instead, the Republicans want to ease restrictions and encourage development of any energy production methods that are competitive in the free marketplace. This includes support for

the development of all forms of energy that are marketable in a free economy without subsidies, including coal, oil, natural gas, nuclear power, and hydropower.

as well as

the cost-effective development of renewable energy sources — wind, solar, biomass, biofuel, geothermal, and tidal energy — by private capital.

and for

lifting restrictions to allow responsible development of nuclear energy, including research into alternative processes like thorium nuclear energy.

Ultimately, the Republicans

firmly believe environmental problems are best solved by giving incentives for human ingenuity and the development of new technologies, not through top-down, command-and-control regulations that stifle economic growth and cost thousands of jobs.

And then there is the Libertarian Party Platform. This platform is considerably less wordy than its Democratic or Republican counterparts and makes no direct mention of climate change. Here are the sections on Energy and the Environment in their entirety:

2.2 Environment

Competitive free markets and property rights stimulate the technological innovations and behavioral changes required to protect our environment and ecosystems. Private landowners and conservation groups have a vested interest in maintaining natural resources. Governments are unaccountable for damage done to our environment and have a terrible track record when it comes to environmental protection. Protecting the environment requires a clear definition and enforcement of individual rights and responsibilities regarding resources like land, water, air, and wildlife. Where damages can be proven and quantified in a court of law, restitution to the injured parties must be required.

2.3 Energy and Resources

While energy is needed to fuel a modern society, government should not be subsidizing any particular form of energy. We oppose all government control of energy pricing, allocation, and production.

If you like big government, the Democrats have a deal for you. If you prefer that the federal government largely stay out of our energy markets, then you’ll find much to like in either the Republican or Libertarian Platforms.

In our opinion, this presidential election should not be about climate change itself, as we don’t believe that the risks and challenges it presents are greater than the ones posed by its proposed “solutions” (see our soon-to-be-updated book Lukewarming: The New Science that Changes Everything for reasons why we think the way we do). But this election should be about climate change policies. A Democratic Administration will seek a further expansion of the reach of the federal government as the impacts of its vigorous pursuit of reducing carbon dioxide emissions increasingly find their way into all aspects of our daily lives, reducing choice and increasing the costs of all manner of things, while having minimal effect on the actual climate. As it stands now, the federal government’s reach has grown perilously. It is high time to ensure that the Constitutional limitations placed upon it are restored and respected.

The debate over the Seattle experiment has generated more heat than light to this point. A new report from Jacob Vigdor and his colleagues at the University of Washington attempts to shed some light on the effects of the first incremental stage of the increase. They use data from the state’s Employment Security Department from when the law was passed through the fourth quarter of 2015, at which point the minimum wage stood at $11 per hour. This does not include the second stage of increases that took place on January 1, 2016, or the further increases that will eventually bring it to $15 per hour and much higher thereafter. The early results show the mixed effects of the first incremental increase: there does not appear to be much evidence of firms being driven out of business, and some low-wage workers have seen their hourly wages increase, but the increase also reduced the employment rate and hours worked, with the end result for these low-wage workers being “ambiguous and likely fairly small.”

The report analyzes only the initial stage of the scheduled minimum wage increases, as the authors note and as illustrated in Figure 1. In addition, due to the timing of the study, it can only capture the short-run effects of this first incremental increase. As such, this analysis cannot provide insight into the impact of future increases to the minimum wage or what the longer-run effects might be.

Figure 1

Seattle Minimum Wage Schedule and Period Analyzed

Source: Seattle Minimum Wage Study Team, 2016.

They employ multiple strategies to try to discern the impact of the minimum wage increase. In the most straightforward, the “observable change,” they simply compare trends in Seattle before and after the increase took place. This might be affected by differences in underlying trends that would lead to spurious findings, so they include a number of other comparisons, like King County excluding Seattle, but their preferred specification is a “Synthetic Seattle” consisting of an aggregate of zip codes in the state with similar levels and trends to Seattle. They then compare what happened in Seattle, which was subject to the minimum wage increase, to “Synthetic Seattle,” which had no minimum wage increase but in other ways was very similar.
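
For readers curious about the mechanics, here is a minimal sketch of the synthetic-control idea: pick non-negative weights on comparison regions so that their weighted average tracks Seattle before the increase, then use the same weights afterward as the no-increase counterfactual. The region series and numbers below are hypothetical, and this is not the study team’s code.

# A minimal sketch of a synthetic-control comparison (hypothetical data).
import numpy as np
from scipy.optimize import minimize

# Hypothetical pre-increase outcome series (e.g., low-wage employment rate, %)
seattle_pre = np.array([61.0, 61.5, 62.0, 62.8])
donors_pre = np.array([           # rows = quarters, columns = comparison regions
    [60.0, 63.0, 58.0],
    [60.4, 63.4, 58.5],
    [61.0, 64.0, 59.0],
    [61.8, 64.9, 59.6],
])

def pre_period_gap(w):
    # squared gap between Seattle and the weighted comparison average, pre-increase
    return np.sum((seattle_pre - donors_pre @ w) ** 2)

n = donors_pre.shape[1]
result = minimize(pre_period_gap, np.full(n, 1.0 / n),
                  bounds=[(0.0, 1.0)] * n,
                  constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
weights = result.x

# Post-increase: the gap between actual Seattle and "Synthetic Seattle" is the
# estimated effect of the minimum wage increase.
seattle_post = np.array([64.1])
donors_post = np.array([[62.5, 66.0, 60.2]])
print("estimated effect:", seattle_post - donors_post @ weights)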

As might be expected, the share of workers with wages below $11 an hour decreased significantly, but this was also true to some extent outside of Seattle, which suggests that much of this decline might be due to improvements in the broader economy rather than directly attributable to the minimum wage increase. They estimate that the minimum wage increase was responsible for a $0.73 rise in median hourly earnings.

While the employment rate of low-wage workers in Seattle increased by 2.6 percentage points over the period, it increased less than in their preferred comparison, Synthetic Seattle, where it rose by 3.8 percentage points (a relative shortfall of roughly 1.2 percentage points), leading them to conclude “the Minimum Wage Ordinance modestly held back Seattle’s employment of low-wage workers relative to the level we could have expected.” Evaluations simply looking at the total number of low-wage jobs in Seattle before and after the increase took effect would have observed a substantial increase in the total number of jobs and might have erroneously concluded that there were no adverse employment effects of the minimum wage increase. While it is true that the absolute number of jobs increased, there is some evidence that the ordinance reduced the employment rate of lower-wage workers compared to what it otherwise would have been.

A similar dynamic can be seen with hours worked for this lower-wage group: thanks to a healthy broader economy, hours worked increased relative to Seattle’s history, but they improved less than in the control groups, and “on balance, it appears that the Minimum Wage Ordinance modestly lowered hours worked” by 4.1 hours per quarter compared to their preferred comparison, Synthetic Seattle.

Table 1

Impact on Low-Wage Workers

 

Source: Seattle Minimum Wage Study Team, 2016.

In aggregate, the authors find some evidence of a reduction in hours per employee and minimal evidence of an impact on the number of persistent jobs in industries with a high proportion of low-wage workers. They also find an increase in the rate of business closures and business openings, and suggest this could be in line with other research finding that minimum wage increases prompt a shift in firm composition from more labor-intensive companies to those that rely more on capital.

We don’t yet know what the long-term effects of the minimum wage increase will be. This study only encompasses the first step of the phase-in, and later scheduled increases will raise the wage floor to levels that are outside the scope of most past U.S. experience, making it difficult to estimate the magnitude of potential effects. Some of the minimum wage literature has found that the long-run effects of an increase are greater in magnitude, as the authors of this report note: “in the long-run, certain industries affected by the minimum wage, such as the fast food industry, have more opportunity to relocate, change the composition of their workforce, or invest in technologies that reduce their need for labor.” In addition, the fact that a high-wage city like Seattle implemented this incremental step during a period of relatively strong broader economic growth limits how applicable these findings might be to other places.

The initial results suggest that, at least in this first stage, the sky is not falling, but there are signs that the increase did have unintended consequences, reducing opportunities for low-wage workers and leaving the net effect of the increase on these targeted workers ambiguous. Because the minimum wage is poorly targeted to poor households (think teenagers in relatively affluent families who are affected by the minimum wage increase), this ordinance likely has even less of an impact in terms of poverty reduction. These are just preliminary results from the first step of a minimum wage increase in a relatively high-wage city. The adverse effects of further increases and the long-run responses could be much larger and more costly.

Plenty of libertarians were wary of seeing former Massachusetts governor Bill Weld as the Libertarian Party’s nominee for vice president. Even those of us who haven’t had anything to do with the LP would like to see the party represented by, you know, libertarians. Weld, who seems like a nice man and was apparently a decent governor, is the living expositor of the difference between a libertarian and someone who’s “socially liberal and fiscally conservative.”

Case in point: this week’s ReasonTV interview, where Weld praises Justice Stephen Breyer and Judge Merrick Garland, who are the jurists most deferential to the government on everything, whether environmental regulation or civil liberties. Later in the same interview, he similarly compliments Republican senators like Mark Kirk and Susan Collins, who are among the least libertarian of the GOP caucus in terms of the size and scope of government and its imposition on the private sector and civil society.

My point isn’t to criticize the Weld selection as a matter of political strategy. Indeed, he seems to have brought a certain respectability to a party that is rarely taken seriously. And if that moves the national political debate in a more libertarian direction, bully.

But then look at the most recent news made by the man at the top of the LP ticket. Former New Mexico governor Gary Johnson, in an interview with (my friend) Tim Carney of the Washington Examiner, calls religious freedom “a black hole” and endorses a federal role in preventing “discrimination” in all its guises. More specifically, he’s okay with fining a wedding photographer for not working a gay wedding – a case from New Mexico where Cato and every libertarian I know supported the photographer – and forcing the Little Sisters of the Poor to pay for contraceptives (where again Cato and libertarians supported religious liberty). He also bizarrely compares Mormonism to religiously motivated shootings.

In other words, Johnson doesn’t just come off as anti-religion, but completely misses the distinction between public (meaning government) and private action that is at the heart of (classical) liberal or libertarian legal theory. That’s a shame: it makes him no different than progressives in that regard – or social conservatives, who miss the distinction in the other direction, restricting individual rights in addition to government powers.

And so, what we’re left with is a Libertarian Party ticket that’s positioning itself as “moderate” more than anything else. Again, that may well be a clever political ploy – though it makes the dubious bet that there are more #NeverHillary Democrats than #NeverTrump Republicans – but it’s not very encouraging for libertarians who want to “vote [their] conscience.”
