Feed aggregator

“It is not rational, never mind ‘appropriate,’ to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits,” the Supreme Court held last year in Michigan v. EPA. It seems that the U.S. Fish and Wildlife Service (USFWS) did not get the message, with its willy-nilly imposition of significant economic costs when designating “critical habitat” for endangered species.

A California builders’ association is now asking the Court to establish that judicial review is available for individuals and businesses affected by these agency actions that purport to enforce the Endangered Species Act (ESA). The ESA specifically requires federal agencies to take economic impacts into consideration, but the USFWS routinely ignores the costs of designating land as critical habitat. The San Francisco-based U.S. Court of Appeals for the Ninth Circuit held that the designation of critical habitat is an action fully committed to agency discretion, and that the agency may ignore any cost implications at its leisure, but this would seem to contradict Michigan v. EPA and other precedent.

The USFWS employs a cost-benefit accounting method called “baseline analysis,” which separates the impacts that would occur absent designation (baseline impacts) from the impacts attributable to designation (incremental impacts). It then considers only the incremental impacts, despite enormous disparities—one or two orders of magnitude—between baseline and incremental costs, and fanciful estimates that the economic impact of critical habitat designation is often $0.
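A rough sketch makes the accounting trick concrete. The dollar figures below are invented for illustration; only the baseline-vs.-incremental method comes from the description above:

```python
# Hypothetical illustration of "baseline analysis" -- the numbers are made up;
# only the method (report incremental impacts, ignore baseline) is from the post.

total_impact = 100_000_000     # total economic burden on landowners, with designation
baseline_impact = 99_000_000   # portion attributed to pre-existing regulation

# The agency reports only the increment, even when it is a sliver of the total.
incremental_impact = total_impact - baseline_impact

print(f"Reported impact:  ${incremental_impact:,}")
print(f"Ignored baseline: ${baseline_impact:,}")
print(f"Share reported:   {incremental_impact / total_impact:.0%}")
```

Under this (hypothetical) split, the agency would report a $1 million impact while $99 million of burden goes uncounted.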

Cato, joined by the Reason Foundation and the National Federation of Independent Business, filed an amicus brief urging the Supreme Court to take up this important question of whether courts can even review the government’s Enron-style cost-benefit analysis. Independent research by Reason’s Brian Seasholes found that an examination of 159 of the 793 species with critical habitat designations reveals at least $10.7 billion in economic impacts, hundreds of jobs lost per species designated, and regulatory burdens affecting 60,169,546 acres of land (11,261,054 privately owned) spanning 37 states and two territories.

And what is the purported conservation benefit to these billions in costs? Nothing. As the USFWS itself has stated, “[i]n 30 years of implementing the Act, the Service has found that the designation of statutory critical habitat provides little additional protection to most listed species, while consuming significant amounts of available conservation resources.”

Moreover, critical habitat designation is counterproductive for conservation. Again, the federal government is the source for the best material on this: “Mounting evidence suggests that some regulatory actions by the Federal government, while well-intentioned and required by law, can (under certain circumstances) have unintended negative consequences for the conservation of species on private lands.” These negative consequences are caused by the ESA’s regulatory reach and severe penalties—up to $50,000 and 1 year in jail for misdemeanor harm to an endangered fish, bird, or habitat, whether the habitat is occupied or not—coupled with the ability to regulate vast amounts of land, water and natural resources.

As Australian environmental-law expert David Farrier has described, “disgruntled landowners make poor conservationists”—and foisting enormous costs and regulatory burdens onto homeowners with criminal penalties for non-compliance certainly makes them disgruntled. 

The Supreme Court will consider whether to take up Building Industry Association of the Bay Area v. U.S. Dept. of Commerce either right before it goes on summer recess or right when it gets back in September.

Over at Cato’s Police Misconduct web site, we have selected the worst case for the month of May.  It was the case of one Shane Mauger.  Over a period of about 10 years, this former police officer told lies to obtain search warrants and would then falsify police reports by under-reporting any cash that he seized during those raids.

Now, because of his corruption, officials cannot tell how many of his previous cases were based on valid police work and how many were based upon dishonest work.  Many cases are being reviewed and thrown out.

Federal investigators discovered other corrupt officers in the same Reynoldsburg, Ohio police department.  Former officer Tye Downard was arrested in February for dealing in narcotics.  Shortly after his arrest, Downard committed suicide in his jail cell.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Methane is all the rage. Why? Because 1) it is a powerful greenhouse gas that, molecule for molecule, is some 25 times as potent as carbon dioxide (when it comes to warming the lower atmosphere), 2) it plays a feature role in a climate scare story in which climate change warms the Arctic, releasing methane stored there in the (once) frozen ground, which leads to more warming and more methane release, ad apocalypse, and 3) methane emissions are also linked to fossil fuel extraction (especially fracking operations). An alarmist trifecta!

Turns out, though, that these favored horses aren’t running as advertised.

While methane is a more powerful greenhouse gas in our atmosphere than carbon dioxide, its lifetime there is much shorter, even as the UN’s Intergovernmental Panel on Climate Change can’t quite say how long the CO2 residence time actually is. This means that it is harder to build up methane in the atmosphere and that methane releases are more a short-term issue than a long-term one. If the methane releases are addressed, their climate influence is quickly reduced.
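To see why a short atmospheric lifetime matters, here is a minimal sketch. The roughly 12-year e-folding lifetime for methane is a commonly cited figure but is our assumption, not a number from the post:

```python
import math

# Sketch: decay of a one-time methane pulse, assuming a ~12-year e-folding
# atmospheric lifetime (an assumed, commonly cited figure; not from the post).
LIFETIME_YEARS = 12.0

def fraction_remaining(years, lifetime=LIFETIME_YEARS):
    """Fraction of an initial CH4 pulse still airborne after `years` years."""
    return math.exp(-years / lifetime)

# A fixed leak stops mattering quickly: under 2% of a pulse is left after 50 years.
for t in (10, 25, 50):
    print(f"{t:>3} yr: {fraction_remaining(t):.1%} of the pulse remaining")
```

This is the sense in which a repaired leak’s climate influence is “quickly reduced”: the already-emitted methane is mostly gone within a few decades, whereas an equivalent CO2 pulse lingers far longer.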

This is why methane emissions from fracking operations—mainly through leaks in the wells or in the natural gas delivery systems—really aren’t that big of a deal. If they can be identified, they can be fixed and the climate impact ends. Further, identifying such leaks is in the fracking industry’s best interest because, in many cases, they represent lost profits. And while the industry says it has good control of the situation, the EPA isn’t so sure and has proposed regulations aimed at reducing methane emissions from new and existing fossil fuel enterprises. The recent scientific literature is somewhat split on who is right. A major paper recently published in Science magazine seemed to finger Asian agriculture as the primary suspect for recent increases in global methane emissions, while a couple of other recent studies seemed to suggest U.S. fracking operations as the cause (we reviewed those findings here).

And as to the runaway positive feedback loop in the Arctic, a new paper basically scratches that pony.

A research team led by University of Colorado’s Colm Sweeney set out to investigate the strength of the positive feedback between methane releases from Arctic soil and temperature (as permafrost thaws, it releases methane). To do this, they examined data on methane concentrations collected from a sampling station in Barrow, Alaska over the period 1986 through 2014. In addition to methane concentration, the dataset also included temperature and wind measurements. They found that when the wind was blowing in from over the ocean, the methane concentration of the air was relatively low, but when the wind blew from the land, methane concentration rose—at least during the summer/fall months, when the ground is free from snow and the temperature is above freezing. When the researchers plotted the methane concentration (from winds blowing over land) against daily temperatures, they found a strong relationship. For every 1°C of temperature increase, the methane concentration increased by 5 ± 3.6 ppb (parts per billion)—indicating that higher daily temperatures promoted more soil methane release. However (and here is where things get really interesting), when the researchers plotted the change in methane concentration over the entire 29-yr period of record, despite an overall temperature increase in Barrow of 3.5°C, the average methane concentration increased by only about 4 ppb—yielding a statistically insignificant change of 1.1 ± 1.8 ppb/°C. Sweeney and colleagues wrote:

The small temperature response suggests that there are other processes at play in regulating the long-term [methane] emissions in the North Slope besides those observed in the short term.

As for what this means for the methane/temperature feedback loop during a warming climate, the authors summarize [references omitted]:

The short- and long-term surface air temperature sensitivity based on the 29 years of observed enhancements of CH4 [methane] in air masses coming from the North Slope provides an important basis for estimating the CH4 emission response to changing air temperatures in Arctic tundra. By 2080, autumn (and winter) temperatures in the Arctic are expected to change by an additional 3 to 6°C. Based on the long-term temperature sensitivity estimate made in this study, increases in the average enhancements on the North Slope will be only between -2 and 17 ppb (3 to 6°C x 1.1 ± 1.8 ppb of CH4/°C). Based on the short-term relationship calculated, the enhancements may be as large as 30 ppb. These two estimates translate to a -3 – 45% change in the mean (~65 ppb) CH4 enhancement observed at [Barrow] from July through December. Applying this enhancement to an Arctic-wide natural emissions rate estimate of 19 Tg/yr estimated during the 1990s and implies that tundra-based emissions might increase to as much as 28 Tg/yr by 2080. This amount represents a small increase (1.5%) relative to the global CH4 emissions of 553 Tg/yr that have been estimated based on atmospheric inversions.

In other words, even if the poorly understood long-term processes aren’t sustained, the short-term methane/temperature relationship itself doesn’t lead to climate catastrophe.
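The back-of-the-envelope arithmetic in the quoted passage is easy to check; every input below is taken directly from the excerpt:

```python
# Reproduce the arithmetic from the Sweeney et al. passage quoted above.
sens, sens_err = 1.1, 1.8          # long-term sensitivity, ppb CH4 per degree C
warm_low, warm_high = 3.0, 6.0     # projected additional Arctic warming by 2080, C

# Long-term enhancement estimate: roughly -2 to 17 ppb
enh_low = warm_low * (sens - sens_err)    # 3 x (-0.7) = -2.1 ppb
enh_high = warm_high * (sens + sens_err)  # 6 x 2.9 = 17.4 ppb

mean_enh = 65.0         # ppb, mean Jul-Dec CH4 enhancement observed at Barrow
short_term_high = 30.0  # ppb, upper bound from the short-term relationship

pct_low = 100 * enh_low / mean_enh           # about -3%
pct_high = 100 * short_term_high / mean_enh  # about 46% (the paper rounds to 45%)

tundra, global_total = 19.0, 553.0  # Tg/yr: 1990s Arctic tundra vs. global CH4
tundra_2080 = tundra * (1 + pct_high / 100)                # about 28 Tg/yr
added_share = 100 * (tundra_2080 - tundra) / global_total  # about 1.6% of global
```

Even taking the paper’s most generous (short-term) upper bound, Arctic tundra emissions grow to roughly 28 Tg/yr, a change on the order of 1.5 percent of global methane emissions.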

The favorite thoroughbreds of the methane scare are proving to be little more than a bunch of claimers.

 

Reference:

Sweeney, C., et al., 2016. No significant increase in long-term CH4 emissions on North Slope of Alaska despite significant increase in air temperature. Geophysical Research Letters, doi: 10.1002/GRL.54541.

 

A new issue of the Cato Journal, which collects the proceedings of last year’s Annual Monetary Conference, was released last week.  Those proceedings include a paper by Claudio Borio, head of the Bank for International Settlement’s monetary and economic department, which Alt-M readers may find particularly interesting.

According to Borio, conventional thinking on monetary policy rests on three faulty assumptions:

First, that natural interest rates are those consistent with output at potential and low, stable inflation.

This assumption is important because monetary authorities are supposed to track natural interest rates when they set policy.  Unfortunately, says Borio, the mainstream view of natural interest rates is imprecise, since we know that dangerous financial build-ups can occur even when growth is strong and inflation is on target.  Crucially, such build-ups—excessive credit, inflated asset prices, and too much risk-taking—may be caused by interest rates that are too low.  Could it be that “natural” rates are themselves sometimes inconsistent with financial stability?  Borio thinks not, and suggests that we need instead to define natural rates more carefully, as rates “consistent with sustainable financial and macroeconomic stability.”  In practice, such a definition would lead monetary policymakers to “lean against” booms when times are good, and also to worry more about the long-term consequences of expansionary monetary policy (which Borio suggests may sow the seeds of future crises) during busts.

Second, that monetary policy is neutral over the medium- to long-term.

By contrast, Borio believes that monetary policy may in fact have significant long-term effects on the real economy.  It is hard to argue, for example, that low interest rates are not a factor in fueling financial booms and busts, given that monetary policy generally operates through its impact on credit expansion, asset prices, and risk-taking.  And when such booms and busts lead to financial crises, the effects can be very long-lasting, if not permanent: growth rates may recover, but output might never catch up with its pre-crisis, long-term trend.  Borio points out that financial busts weaken demand, since falling asset prices and over-indebtedness often combine to wreak havoc on balance sheets.  Financial booms, meanwhile, affect supply: BIS research suggests they “undermine productivity growth as they occur” by attracting resources towards lower productivity growth sectors.  Taken together, these points have important implications: on the one hand, monetary policymakers ought to be more careful about supporting booms; on the other, apart from resisting the temptation to encourage booms, there may not be much that monetary policy can do about busts, since “agents wish to deleverage” and “easy monetary policy cannot undo the resource misallocations.”

Third, that deflation is everywhere and always a bad thing.  

Not so, says Borio (and many here at Alt-M would agree with him).  In fact, BIS research has found that there is only a weak association between deflation and output.  When you control for falling asset prices, moreover, that association disappears altogether — even in the case of the Great Depression.  The key here is to distinguish between supply-driven deflations, which Borio suggests depress prices while also boosting output, and demand-driven deflations, which tend to be bad news all around.  By failing to draw this distinction, monetary authorities have introduced an easy-money bias into their policy decisions: in the boom years, when global disinflationary forces should have led to falling consumer prices, loose monetary policy instead kept inflation “on target”; then, in the bust years, central banks eased aggressively — and persistently — to stave off the mere possibility of a demand-driven deflation.  (Or did they?)

This leads neatly to the broader theory that Borio outlines in his Cato Journal article: that the long-term decline in real interest rates we have witnessed since the 1990s is not, as proponents of the “savings glut” and “secular stagnation” hypotheses suggest, an equilibrium phenomenon, driven by deep, exogenous forces; rather, it is a disequilibrium phenomenon driven by asymmetrical monetary policy, and may be inconsistent with lasting financial and macroeconomic stability.

In a nutshell, Borio believes that the three fundamental misconceptions outlined above have inclined central banks towards monetary policy that is expansionary when times are good, and then even more expansionary when times are bad.  Over the course of successive financial and business cycles, this skewed approach to monetary policy imparts a downward bias to interest rates and an upward bias to debt, which in turn leads to “a progressive loss of policy room for maneuver” as central banks cannot push interest rates any lower, but also cannot raise rates “owing to large debts and the distortions generated in the real economy.”  The result is entrenched instability and “chronic weakness in the global economy,” as well as what Borio calls an “insidious form of ‘time inconsistency,’” in which policy decisions that seem reasonable — even unavoidable — in the short term, nevertheless lead us ever-further astray as time goes by.  This will, undoubtedly, strike many readers as an apt description of the current state of play in monetary policy.

Here again is Borio’s complete article.  I encourage you to read the whole thing.  The entire monetary issue of the Cato Journal, titled “Rethinking Monetary Policy,” can be found here, and features articles from Stanford economist John Taylor, Richmond Fed president Jeffrey Lacker, and St. Louis Fed president James Bullard, as well as from Alt-M’s own George Selgin, Larry White, and Kevin Dowd, among others.  Happy reading!

[Cross-posted from Alt-M.org]

Most press reports about Zimbabwe’s fantastic hyperinflation are off the mark – way off the mark. Even our most trusted news sources fail to get the facts right. This confirms the “95 Percent Rule”: 95 percent of what you read in the financial press is either wrong or irrelevant.

When it comes to the reportage about hyperinflation, there are no excuses. All 56 of the world’s hyperinflations have been carefully documented in “World Hyperinflations”. This record is available in the Routledge Handbook of Major Economic Events in Economic History (2013) and has been available online since 2012 at the Cato Institute.

The International Monetary Fund (IMF) is the main culprit, a prominent source of the faulty data. Even The Economist magazine has fallen into the trap of uncritically accepting figures pumped out by the IMF and further propagating them. It’s no wonder that there is a massive gap between the public’s perception and economic reality, a gap that, ironically, The Economist reports on this week.

The Economist’s most recent infraction regarding Zimbabwe’s hyperinflation appeared in the May 2016 issue. The magazine claimed that the hyperinflation peaked at an annual rate of 500 billion percent. Where did this figure originate? You guessed it. That figure is buried in the IMF’s 2009 Article IV Consultation Staff Report on Zimbabwe.

In reality, Zimbabwe’s annual inflation rate in September 2008 was 471 billion percent, not 500 billion percent. More importantly, Zimbabwe’s hyperinflation peaked in November, not September. It was then that Zimbabwe recorded the second-highest hyperinflation in history: a whopping 89.7 sextillion percent. This is 179 billion times greater than the IMF’s figure.
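The size of the discrepancy is worth a quick sanity check, using the figures given above:

```python
# Compare the IMF's September 2008 figure with the actual November 2008 peak.
imf_rate = 500e9      # 500 billion percent (the IMF/Economist figure)
peak_rate = 89.7e21   # 89.7 sextillion percent (the actual November peak)

ratio = peak_rate / imf_rate  # about 1.79e11, i.e. roughly 179 billion times larger
```

The two numbers are not in the same universe: the true peak exceeds the propagated figure by eleven orders of magnitude.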

That said, the IMF did attempt to cover its backside from questions about its hyperinflation guesstimate. The 2009 Article IV Staff Report on Zimbabwe states clearly that “data have serious shortcomings that significantly hamper surveillance due to capacity constraints.” Despite the red flag, The Economist continues to blindly propagate a figure that is neither reliable nor replicable. I stress the word “continues”.

It turns out that The Economist is a serial propagator of inaccurate IMF figures. The magazine has cited the IMF’s incorrect figure of 500 billion percent before, in June 2009 and October 2015.

For accurate estimates of Zimbabwe’s fantastic hyperinflation that are used in the professional literature – estimates that are reliable and replicable – the IMF and the financial press corps should take a look at the following table from “On the Measurement of Zimbabwe’s Hyperinflation,” published in the Cato Journal (2009):

In a speech this week, President Obama called for an expansion of Social Security, saying “it’s time we finally made Social Security more generous, and increased its benefits.” Obama was undoubtedly influenced to some degree by developments in the Democratic primary, where both Bernie Sanders and Hillary Clinton have expressed support for some form of expansion.  This represents a partial reversal for Obama. While he had always supported increasing payroll taxes on higher-earning Americans, he had also previously supported a change in the way benefits are adjusted each year that would have reduced the growth rate of benefits over a long timeframe, in the interest of improving the program’s fiscal trajectory. Social Security’s long-term outlook has only gotten worse in the intervening years, but in his speech he signaled that he no longer believed “all options were on the table” to address solvency concerns and instead supports further expansion. This reversal is misguided. If his favored reforms are implemented, they will increase the economic distortions introduced by Social Security and do nothing to address its serious fiscal problems.  The more likely result is that, with this retrenchment, policymakers will continue to make promises but fail to actually do anything. Younger workers will bear the brunt of the cost resulting from failures to put forward constructive reform.

Inexorable demographic changes and the program’s structure mean that today’s younger workers were already going to get a worse deal from Social Security than previous generations. As work from C. Eugene Steuerle and Caleb Quakenbush has shown, a married couple both earning the average wage and retiring in 1960 received more than seven dollars in benefits for each dollar paid in taxes over their lifetimes. A similar couple reaching age 65 in 1980 received roughly $2.60 in benefits for each dollar contributed, and a couple retiring in 2030 will receive about $1.12 for each dollar paid into the program. As they note, this ratio probably overstates how good a deal future retirees will get, as it does not incorporate the reforms needed to pay scheduled benefits, so a couple currently in their 50s could end up having to pay more in taxes or taking a substantial benefit cut.

Present Value Lifetime Benefits and Taxes in Social Security, Married Couple at Average Wage

Source: Steuerle and Quakenbush (2015)

Neither Hillary Clinton nor Donald Trump is likely to make any substantive reforms to improve the program’s fiscal trajectory: she has expressed support for some form of expansion, and he has promised to protect old-age entitlements from any kind of cuts. It is therefore likely that policymakers will continue to kick the can further down the road, closer to the trust fund exhaustion date in 2034. The longer these reforms are delayed, the larger the required reforms become. In order to make the program solvent through the 75-year projection period, scheduled benefits would have to be cut by 16.4 percent for all current and future beneficiaries. If policymakers delay until 2034, scheduled benefits would have to be cut by 21 percent, with these reductions increasing in later decades.

Changes Needed to Reach 75-Year Solvency

Source: Social Security Administration, The 2015 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds, July 2015, p. 25.

Recent experience has shown that even would-be reformers have expressed a reluctance to make any changes that would affect current retirees, and if this continues it would make addressing the program’s significant unfunded obligations more difficult. Even completely eliminating benefits for those newly eligible in 2034 would not be enough to enable the program to pay out all scheduled benefits in that year. Perhaps it is not surprising that almost two-thirds of people 18 to 29 think Social Security will be unable to pay them benefits when they retire.

President Obama’s reversal is misguided, and will make it harder to enact Social Security reforms that would actually begin to address the program’s issues. Younger workers will bear the burden of policymakers’ reticence to put forward constructive reforms, and they have shown that they are skeptical of Social Security promises made by politicians.

Imagine that you run a daycare business out of your home. Some of your clients are poor families whom your state has decided to help with daycare. The state program allows such families to choose any daycare they want and then reimburses the provider up to a certain amount. Now the state has declared that because of this program, you—and even people who provide at-home daycare for family members’ children—will be considered a state employee for the sole purpose of giving a union exclusive representation rights.

You don’t get state medical or dental insurance. You don’t get state retirement benefits. You don’t get paid vacation on national holidays. The only thing you get is a union you didn’t choose and refuse to join, which now represents your “interests” before the state, which isn’t even your employer. Does this sound far-fetched? Yet it’s what happened to Kathleen D’Agostino and seven other women in Massachusetts, who are asking the Supreme Court to take their case after the lower courts dismissed their lawsuit.

The plaintiffs argue that the state’s imposition of an exclusive representative on them violates their First Amendment freedom of association. In the 2014 case Harris v. Quinn, the Supreme Court ruled that states that unionize healthcare aides and other home-based workers who are “not full-fledged public employees” cannot require those who do not wish to join the union to pay fees to support it. This new case asks the question Harris left unanswered: May a state even mandate an exclusive representative for those who are “not full-fledged public employees”?

The U.S. Court of Appeals for the First Circuit said that the case is easily resolved under Abood v. Detroit Board of Education (1977)—which allowed the imposition of “agency fees” on union nonmembers—and does not require further First Amendment scrutiny. Abood, however, is like a house built on the sand: it treated the First Amendment concerns public unions (should) raise as already resolved by earlier cases, when in fact those earlier cases merely resolved the question of whether the Commerce Clause gave Congress the power to regulate those public unions (the old cases having arisen at a time when the Commerce Clause was only starting to be read expansively).

Abood’s reliance on the notion of “labor peace”—which was significant in those old cases but shouldn’t be a valid First Amendment interest—conflicts with the constitutional ban on compelled speech and association absent a substantial government interest. Although the First Circuit treated this case as automatically resolved under Abood, it would actually be a vast expansion of precedent to say that “labor peace” justifies forcibly unionizing at-home workers who are independent except for the sole fact that some of their clients pay them through a government-subsidy program.

States are already doing this in a number of fields, but expanding Abood would enable states to go as far as mandating exclusive representation for private-school teachers whose schools receive funding through state voucher or tax-credit programs. Or for apartment-building owners who lease to people in rental-assistance programs. Or it would enable the federal government to impose exclusive representation on bank tellers who work at FDIC-backed institutions.

Where does it stop? Cato has filed a brief asking the Supreme Court to answer that question in the case of D’Agostino v. Baker.

The Trans-Pacific Partnership is the economic centerpiece of the Obama administration’s much ballyhooed “strategic pivot” to Asia, which – in 2009 – heralded U.S. intentions to extricate itself from the messes in Iraq and Afghanistan and to reassert its interests in the world’s fastest-growing region. After six years of negotiations, the comprehensive trade deal was completed last year and signed by its 12 charter members earlier this year. But the TPP must be ratified before it can take effect – and prospects for that happening in 2016 grow dimmer with each passing day.

One would assume TPP ratification to be a policy priority of President Obama. After all, he took office promising to restore some of the U.S. foreign policy credibility that had been notoriously squandered by his predecessor. If Congress fails to ratify the agreement before Christmas, Obama will leave office with American commercial and strategic positions weakened in the Asia-Pacific, and U.S. credibility further diminished globally.  The specter of that outcome would keep most presidents awake at night.

In Newsweek today, I put most of the blame for this precarious situation on a president who, throughout his tenure, has remained unwilling to challenge the guardians of his party’s anti-trade orthodoxy by making the case for trade liberalization generally, or the TPP specifically:

Superficially, one could blame election-year politics and a metastasizing popular antipathy toward trade agreements for the situation, but the original sin is the president’s lackluster effort to sell the TPP to his trade-skeptical party and the American public. In the administration’s division of labor, those tasked with negotiating the TPP kept their noses to the grindstone and brought back an agreement that reduces taxes and other protectionist impediments to trade…

Meanwhile, those responsible for explaining the deal’s merits domestically spent too much time on the golf course. With scarcely greater frequency than a couple of sentences in his past two States of the Union addresses has President Obama attempted to articulate the importance of trade and the TPP to the American public. Even then, his “advocacy” has been grudging and couched in enough skepticism to create and reinforce fears about trade and globalization.

When Hillary Clinton – the president’s former secretary of state, co-architect of the Asian pivot, and champion of the TPP – announced her opposition to the negotiated deal because it became a political liability for her, President Obama remained silent. If the president really believes in the trade agenda his administration has pursued for eight years, his decision not to challenge Clinton was a significant tactical error – and a profoundly lamentable display of cowardice. Foregone was a prime opportunity to inject an affirmative case for the trade deal into the fact-deprived election debate. And how could Obama let Clinton’s political ambitions take priority over his policy agenda?  How could the President of the United States be so cavalier about actions and inactions that amount to kneecapping the U.S. foreign policy agenda and subverting American commercial interests?

The president’s near total absence of promotion of the TPP explains why, in the waning months of his tenure, ratification of the economic centerpiece of the vaunted Asian pivot is unlikely. In this absence emerged a fallacious, hysterical narrative about the allegedly deleterious effects of the TPP on jobs, the environment, public health, and even cancer rates, which became the dry tinder that fueled the fiery antitrade rhetoric of this year’s demagogic presidential campaigns.

Lately, it has become convenient for the president’s apologists to point to Donald Trump’s bluster and the dampening enthusiasm for the TPP among Republicans in Congress as the major obstacles to ratification. But that is all a consequence of President Obama’s failure to rebut the trade fallacies and tall tales concocted by groups on the left, like the Sierra Club, the AFL-CIO, and Public Citizen, and his abject deference to Harry Reid, Nancy Pelosi, and then candidate Hillary Clinton in helming his party’s platform on trade and the TPP. I warned of this problem over the years, and unless things change in a New York minute, we will soon be reaping the whirlwind.

Late in the 11th hour, selling the TPP for its immediate economic benefits will not be enough.  The president must make a compelling, comprehensive case to Congress and the public about what is really at stake. This is what I suggested in Newsweek:

There is a strategic rationale for trade agreements and those kinds of arguments can be more politically persuasive than the economic ones. Indeed, the post-WWII liberal global trading order reflects the lessons of history: commerce and economic interdependence are the best guarantors of peace. Or, as the French economics writer Frédéric Bastiat is alleged to have quipped a century earlier: “When goods don’t cross borders, armies will.”

It was with those lessons in mind that President George W. Bush paid a visit to the Senate in June 2005, with legislation to implement his Central American Free Trade Agreement facing uncertain prospects. Citing the rise of Hugo Chavez in Venezuela and the return of Daniel Ortega in Nicaragua, Bush urged his colleagues to consider CAFTA an agreement that would serve long-standing U.S. strategic interests – with a sprinkling of economic benefits to boot. The following day the agreement was ratified.

President Obama is down to his last chance to fulfill his obligation to posterity. Success requires that he put the TPP into its broader geopolitical and geoeconomic contexts and describe a world with and without its ratification. The president attempted as much in a Washington Post opinion piece last month, describing the TPP as an opportunity for the United States to write the new rules of global trade before China does. It was a laudable start, albeit too reliant on the myth of trade as a competition between the United States and China.

With so many Americans leery of China’s rise and U.S.-China relations growing more contentious over economic and security issues, the president may be tempted to describe the TPP as an agreement that “excludes,” “contains,” or “isolates” China. That characterization would certainly resonate with members of Congress looking for cover to support the TPP. But portraying the TPP as a weapon of economic warfare essential to “beating” or “defeating” China is short-sighted and fraught with the perils of self-fulfilling prophecy. Our economic relationship with China is more collaborative than competitive, and the costs of estrangement would be felt deeply in the United States.

Instead, President Obama should argue that U.S. leadership, and immersion in the process of crafting the 21st century rules that will govern the trade of China’s most important partners, will leave Beijing with no better alternatives than to embrace those rules. Accession to the TPP is open to all newcomers that can meet the deal’s relatively high standards, and that, importantly, includes China.

The president’s comprehensive case for the TPP must go well beyond the static benefits estimated by the ITC.  It must include the benefits associated with the liberalizing policy reactions of other countries in the region, as they aspire to become parties to the agreement.  It must include the benefits associated with TPP expansion to include South Korea, the Philippines, Indonesia, Thailand, Colombia, Taiwan, and China. It must include the benefits of TPP as a catalyst for an eventual Free Trade Area of the Asia Pacific to include countries like India and Russia. And it must include the benefits of the TPP as an inducement to Europe, Brazil, South Africa, and the rest of the world to resuscitate the process of multilateral trade liberalization, which has been mostly defunct for over 20 years.

These enormous potential benefits of TPP ratification this year are also the costs of failure to ratify this year. If the United States fails to ratify the agreement this year, TPP members that are also party to the China-centric Regional Comprehensive Economic Partnership negotiations will be drawn more deeply into China’s ambit. While that doesn’t mean that U.S. entities will be excluded from engaging in commerce with entities in those countries, it does mean that existing China-centric investment and supply chain relationships will be reinforced, new ones will emerge and become established, and the costs of reorienting those relationships in the event of some future TPP implementation will increase with each passing year.

But at a deeper, institutional level, failure to ratify would impair U.S. commercial and diplomatic interests in the region. Foreign governments that incurred political costs to push the TPP in their countries with expectations of U.S. participation wouldn’t soon forget that the United States proved to be an unreliable partner. Expectations that the United States is still capable of leading the world to the economic liberalization it so desperately needs would erode, and with that diminished credibility, U.S. policy objectives would become more difficult or, in some cases, impossible to meet.

Those would be the costs of a U.S. failure to ratify the TPP this year. Avoiding that outcome is President Obama’s obligation to posterity.

The Police Commission in San Francisco recently voted 5-2 to approve a body-worn camera (BWC) plan. The plan, which one commissioner described as a “travesty,” prohibits supervisors from viewing BWC videos in order to find policy violations. It also requires officers involved in a shooting or in-custody death to submit an “initial statement” before they review BWC footage. Whether officers should be allowed to view BWC footage before making a statement is one of the most pressing issues in body camera debates. Unfortunately, the San Francisco BWC plan does not adequately address this issue.

Your memory isn’t always reliable. While many of us are confident that we’re pretty good at remembering specific incidents, it turns out that even our memories of notable and historic events, such as 9/11, are hardly as well-formed and clear as we might hope.

The legality of an officer’s use of deadly force depends in large part on the reasonableness of what the officer believed at the time of the incident. For instance, whether an officer who shot someone reasonably feared for his life, or the lives of innocent bystanders, will be an important factor in determining whether the shooting was legal.

BWCs, like other cameras, don’t have fuzzy memories. What’s filmed by BWCs is stored and, absent tampering, won’t change. The same can’t be said of police officers’ memories. This is one of the factors that has prompted debate about whether police officers should be allowed to view BWC footage of a deadly use-of-force incident before they file a report.

I and others have argued that police should not view BWC footage related to a deadly use-of-force incident before filing a report. A policy that allows officers to view BWC footage before filing a report gives them an unfair chance to exculpate themselves of wrongdoing: officers could search for justifications for a use of force that didn’t occur to them while the incident in question was happening.

Others could argue that police officers, like all human beings, don’t have perfect memories and might not accurately remember important facts concerning a stressful incident under investigation. Rather than being seen as an honest lapse of memory, the omission of crucial facts in a report could be portrayed as an officer trying to avoid the consequences of poor behavior.

San Francisco’s body camera plan requires officers involved in a shooting or in-custody death to submit an “initial statement” before they review body camera footage.

At first glance, this policy seems like a decent compromise between the two positions I outlined above. Such a policy ensures that officers can view BWC footage, but only after providing a statement outlining what they remember about the incident under investigation.

However, the “initial statement” required by the recently approved San Francisco plan is explicitly required to be brief and resembles a collection of basic facts rather than an explanatory report:

The initial statement by the subject officer shall briefly summarize the actions that the officer was engaged in, the actions that required the use of force, and the officer’s response.

These initial statement requirements are too narrow. As Alan Schlosser, legal director for the American Civil Liberties Union of Northern California, said, officers should fill out a full report before viewing body camera footage:

“When we said there should be an initial report, we didn’t mean there should be a brief report,” he said. “When we support an initial report, we meant there would be a full report and then the officer would see the video and then there would be a supplemental report, with the understanding that recollections change.”

Police in San Francisco will be wearing BWCs in the not-too-distant future. With the current plan in place, there is still room for improvement when it comes to using BWCs as tools for increased law enforcement accountability. If San Francisco’s police commissioners ever want to revisit their body camera plan, they could do worse than to take inspiration from their neighbors across San Francisco Bay. In Oakland, officers involved in shootings cannot view body camera footage without first being interviewed and submitting a report.


Two months ago, the Supreme Court ruled that states have leeway in determining how to draw their legislative districts, more specifically that they don’t have to equalize the number of voters per district to satisfy the constitutional principle of “one person, one vote.” The decision was really a “punt,” not resolving the tensions between “representational equality” and “voter equality”; it’ll take some future case after the next census to force the justices to face the issues left unresolved. 

Former Cato intern (and future legal associate) Tommy Berry and I have now published an essay in the Federalist Society Review explaining how the Court “shanked” that punt by misreading constitutional structure and application. Here’s a sample (footnotes omitted):

In Evenwel, the Court decided that it is acceptable for a state to ignore the distinction between voters and nonvoters when drawing legislative district lines. According to the Court, a state may declare that equality is simply providing representatives to equal groups of people, without distinction as to how many of those people will actually choose the representative. A state may use this constituent-focused view of equality because “[b]y ensuring that each representative is subject to requests and suggestions from the same number of constituents, total-population apportionment promotes equitable and effective representation.”

But ignoring the distinction between voters and nonvoters achieves a false picture of equality at the expense of producing far more serious inequalities. Rather than placing nonvoters and voters on anything approaching an equal political footing, it instead gives greater power to those voters who happen to live near more nonvoters, and less power to those who do not.

As we argued before the decision came down, the framers of the Fourteenth Amendment recognized that granting such extra voting power runs the risk of harming the very nonvoters to whom it ostensibly grants representation. This recognition manifested itself in the enactment of the Fourteenth Amendment’s Penalty Clause. In both ignoring that clause and oversimplifying the debates over the Fourteenth Amendment, the Court’s opinion paints an incomplete picture of constitutional history.

Read the whole thing. For more, see Tommy’s blogpost on our article, as well as our earlier criticism of Justice Ginsburg’s majority opinion for misreading the Federalist Papers.

Hillary Clinton clearly believes that she enjoys a decided advantage over Donald Trump when it comes to foreign policy. Her speech today in San Diego launched what will clearly be a sustained attack on Trump’s qualifications as commander-in-chief. Citing his support for torturing the families of terrorists, his loose talk about using nuclear weapons on ISIS, and his calls for walking away from NATO and other allies, Clinton argued that Trump’s ideas about foreign policy are “dangerously incoherent.” His main tools of global statecraft, she said, would include bragging, mocking, and composing nasty tweets. In short, Clinton’s central theme is that Trump is simply “not up to the job” of president and if elected, Trump would lead America down a “truly dark path.”

Though most of Clinton’s attacks have by this point been well rehearsed, the case against Trump is nonetheless devastating. Or at least it would be devastating to some other candidate in some other election year. This year, however, things look very different.

The most recent Washington Post/ABC News survey found Americans almost evenly divided over whether Hillary Clinton or Donald Trump would do a better job keeping the country safe, dealing with terrorism, and dealing with international trade. Can these numbers be real? Can almost half of the American public honestly prefer a man who clearly has given so little thought to international affairs over a woman who has traveled the world, served as a United States senator, and spent four years as Secretary of State? The surprising answer is yes.

There are three things keeping Clinton from winning the foreign policy debate.

The first dynamic fueling this situation is partisan polarization. As research has begun to make clear, the United States now suffers from an extreme case of “partyism.” Republicans and Democrats now dislike each other so much that they oppose each other instinctively regardless of the facts – witness the share of Republicans who still believe President Obama is a Muslim. On the question of keeping the country safe, the Post/ABC survey found that 84% of Democrats think Clinton will do a better job but 83% of Republicans think Trump will do a better job. The fact that Trump commands such partisan loyalty despite his clear lack of knowledge and experience illustrates just how powerful a force partisan polarization has become in the United States. This alone will make it very difficult for Clinton’s (or anyone else’s) substantive arguments to gain any traction.

The second force at work is the appeal of Trump’s foreign policy views. Whatever his deficits on paper, on the campaign trail Trump’s “America First” rhetoric aligns more closely with public preferences than Clinton’s liberal interventionism does. Clinton denounces Trump for unrealistic and dangerous talk about allies, trade deals, and refugees, arguments that resonate with pundits and party leaders inside the Beltway. Trump, meanwhile, scores points with the public for understanding that for most Americans the best foreign policies are those that improve things at home. A recent Pew survey, for example, found that 57% of the public thinks the United States should deal with its own problems and let other countries deal with theirs as best they can. That same survey found that more Americans now believe American involvement in the global economy is a bad thing than a good thing. And a whopping 70% of the public wants the next president to focus on domestic policy; just 17% want him or her to focus on foreign policy. In treating foreign policy as an extension of domestic policy, Trump has plugged into a deep reservoir of public concern that the White House has allowed foreign policy to distract the United States from more pressing matters.

Finally, Clinton’s own weaknesses on foreign policy are helping buoy Trump’s case. Her foreign policy record includes a long list of decisions that challenge her narrative of superior judgment and temperament. Bernie Sanders has paved the way for Trump on this score, pressing Clinton repeatedly on her decision to vote in support of the 2003 invasion of Iraq when she was in the Senate and criticizing her for the mishandling of the Libyan intervention. Nor has Trump been shy about following Sanders’ lead. At a rally earlier this month Trump called Clinton “trigger happy” and said, “Her decisions in Iraq, Syria, Egypt, Libya have cost trillions of dollars, thousands of lives and have totally unleashed ISIS.” Nor is that just campaign rhetoric. From Afghanistan to the Libyan intervention to the Syrian civil war, Clinton has repeatedly staked out aggressive interventionist positions that go beyond what most of the public supports, leaving her wide open to Trump’s counterattacks.

In the end, Clinton is correct: Trump clearly does not possess the qualifications or the temperament to lead the United States. Unfortunately, Clinton’s critique leaves voters with only a “less bad” alternative to Trump rather than with a compelling vision of America’s role in the world. With the approval ratings of both candidates at historic lows, it is unlikely that either will manage to score a knockout blow on foreign policy in the general election. In fact, it would not be surprising if large numbers of disaffected Democrats and Republicans leaned toward a third-party ticket that eschews the aggressive interventionism of Clinton and the belligerent nationalism of Trump.

Setting the stage for their study, Roy et al. (2015) write that rice is “one of the most important C3 species of cereal crops,” adding that it “generally responds favorably to elevated CO2.” However, they note that the actual response of rice crops to elevated CO2 and warming “is uncertain.” The team of five Indian scientists set out “to determine the effect of elevated CO2 and night time temperature on (1) biomass production, (2) grain yield and quality and (3) C [carbon], N [nitrogen] allocations in different parts of the rice crop in tropical dry season.”

The experiment they designed to achieve these objectives was carried out at the ICAR-Central Rice Research Institute in Cuttack, Odisha, India, using open-top-chambers in which rice (cv. Naveen) was grown in either control (ambient CO2 and ambient temperature), elevated CO2 (550 ppm, ambient temperature) or elevated CO2 and raised temperature (550 ppm and +2°C above ambient) conditions for three separate growing seasons.

In discussing their findings, Roy et al. write that the aboveground plant biomass, root biomass, grain yield, leaf area index and net C assimilation rates of the plants growing under elevated CO2 conditions all showed significant increases (32, 26, 22, 21, and 37 percent, respectively) over their ambient counterparts. Each of these variables was also enhanced under elevated CO2 and increased temperature conditions over ambient CO2 and temperature, though to a slightly lesser degree than under elevated CO2 alone.

With respect to grain quality, the authors report there was no difference among the parameters they measured in any of the treatments, with the exception of starch and amylose content, which were both significantly higher in the elevated CO2 and elevated CO2 plus elevated temperature treatments. The C and N grain yields were also significantly increased in both of these treatments compared with control conditions.

The results of this study thus bode well for the future of rice production in India during the dry season. As the CO2 concentration of the air rises, yields will increase.  And if the temperature rises as models project, yields will still increase, though by not quite as much. These findings, coupled with the fact that the grain nutritional quality (as defined by an increase in amylose content) was enhanced by elevated CO2, suggest there is a bright future in store for rice in a carbon dioxide-enhanced atmosphere.


Reference

Roy, K.S., Bhattacharyya, P., Nayak, A.K., Sharma, S.G. and Uprety, D.C. 2015. Growth and nitrogen allocation of dry season tropical rice as a result of carbon dioxide fertilization and elevated night time temperature. Nutrient Cycling in Agroecosystems 103: 293-309.

Sen. Jeff Flake (R-AZ), Rep. Dave Brat (R-VA), and other members of Congress have introduced legislation based on the “Large HSAs” concept I first proposed here and developed here, here, here, here, and here.

The “Health Savings Account Expansion Act” (H.R. 5324, S. 2980) would expand the availability and benefits of tax-free health savings accounts (HSAs) in several ways. It would nearly triple existing HSA contribution limits from $3,400 for individuals and $6,750 for families to $9,000 and $18,000, respectively. It would allow tax-free HSA funds to purchase health insurance, over-the-counter medications, and direct primary care. It would eliminate the mandate that HSA holders purchase a government-designed high-deductible health plan. And it would repeal ObamaCare’s increase of the penalty on non-medical withdrawals. Americans for Tax Reform and FreedomWorks have endorsed the bill.

I’m sure I will have lots to say about Flake-Brat, but here are a few initial impressions.

  1. Flake-Brat would free workers from the government program we call employer-sponsored insurance—but only if that’s what workers want. The federal tax code currently tells the average worker with family coverage she can either surrender $13,000 of income to her employer and let her employer choose her health plan, or surrender a huge chunk of that money to the government by paying income and payroll taxes on it. The Flake-Brat bill would allow her to keep that money and either save it, use it to stay on her employer’s health plan, or use it to purchase better coverage somewhere else, all tax-free. The choice would belong to her, not to Congress or the IRS.
  2. Flake-Brat is a bigger tax cut than you’ve ever seen.  Large HSAs would be the largest-ever scaling back of the federal government’s role in health care. The Flake-Brat bill is effectively a $9 trillion tax cut. That’s how much money the current tax exclusion for employer-sponsored insurance will divert from workers to their employers over the next decade. Flake-Brat would return that money to the workers who earned it. Flake-Brat is thus an effective tax cut equal to all of the Reagan and Bush tax cuts combined. It is nine times the size of the tax cut associated with repealing ObamaCare.  Unlike health-insurance tax credits, Large HSAs involve no government spending and would not mandate that taxpayers purchase health insurance, as existing HSAs and health-insurance tax credits do. (The bill and its sponsors describe that requirement as a “mandate.”)
  3. Flake-Brat would make health care better, more affordable, and more secure. It would do so by dramatically reducing government’s influence over the health care sector. By shifting from employers to consumers nearly a quarter of the $3 trillion Americans spend annually on health care, Large HSAs would begin to make the health care sector and health policy respond to the needs of patients. Large HSAs are also less restrictive than existing HSA law or health-insurance tax credits. As a replacement for ObamaCare, Large HSAs would encourage innovative products like pre-existing conditions insurance that make coverage more affordable and secure.
  4. Flake-Brat shows Congress could create Large HSAs with or without repealing ObamaCare. Large HSAs are the most promising ObamaCare replacement plan to date, but Congress can create them before it repeals ObamaCare. The Flake-Brat bill would create Large HSAs even with ObamaCare still on the books. In fact, Flake-Brat would build support for repealing ObamaCare by exposing consumers to the full cost of its hidden taxes.
  5. Flake-Brat is a marker. The Flake-Brat bill defers consideration of a number of issues. All else equal, expanding tax breaks for HSA contributions would reduce federal revenues and increase federal deficits and debt. Like any proposal to level the playing field between employer-sponsored coverage and other coverage, the bill creates the potential for employer plans to unravel as (healthy) people choose better options. Were Congress to enact Flake-Brat with ObamaCare still on the books, there could be even more complicated interactions. The bill doesn’t totally level the playing field, either. Everyone would get an income-tax break, but only those with an employer who facilitates HSA contributions would get the payroll tax break. (Large HSAs can completely level the playing field with a simple tax credit that mimics the exclusion for such workers.) The authors don’t address these issues in the bill or in their supplemental materials. They will have to address them at some point. Fortunately, there are solutions. (For more on those solutions, see the “developed” links in the second paragraph.)

All in all, the Flake-Brat bill is a much-needed addition to the debate over the future of American health care.

The research organization MDRC recently released its comprehensive evaluation of Opportunity NYC–Family Rewards, a conditional cash transfer (CCT) pilot program aimed at helping families break free of the cycle of poverty. The program is particularly notable because it was the first comprehensive CCT program in a developed country and because it was a large-scale randomized controlled trial. CCTs offer cash assistance, but only if certain conditions are met; in this case the conditions were concentrated in three spheres: children’s education, preventive health care utilization, and parents’ employment. There was no case-management component to the program; the cash incentives were the only mechanism in place. The initiative had the twin goals of reducing current poverty (material hardship) and incentivizing these low-income families to invest in developing their human capital, which is important for their ability to attain self-sufficient prosperity. At the conclusion of the three-year pilot, the cash transfers had produced some results toward the first goal of reducing material hardship, as you might expect from a significant cash transfer to families with otherwise limited incomes, but they had little impact on the second goal: Family Rewards failed to produce any meaningful effects on human capital development. There are caveats to what these findings mean in a broader sense, but they convey some of the limitations of transfers, and of antipoverty policies in general, in addressing the more complex and difficult aspects of poverty.

Six community-based organizations ran the Family Rewards pilot in six of the city’s highest-poverty communities, with the program running for three years and concluding in August 2010. MDRC split roughly 4,800 families with 11,000 children into treatment and control groups, and analyzed the effects of the CCT program on a range of metrics by comparing the groups two and six years after the program began. Family Rewards included 22 different rewards tied to specific activities like taking the PSAT, attending parent-teacher conferences, and sustaining full-time work.

Throughout the three years of the program, participating families received an average of over $8,700 with a majority of those families receiving at least $7,000 and the top quintile receiving more than $13,000 in cash transfers. These substantial cash transfers reduced the share of families in poverty by 12 percentage points (from a baseline of 68 percent in poverty). There were associated reductions in measures of material hardship like the proportion of families dealing with food insufficiency or inability to pay rent, relative to the control group. These gains were concentrated among families living in severe poverty, while the reductions in hardships were “small and statistically insignificant among those whose poverty was not as severe at the time they began the program.”

The program fared much worse in its second goal of enticing people to invest in their human capital and develop the ability to support themselves long-term. With the exception of increases in high school achievement and graduation rates for students who were already more prepared, Family Rewards failed to have a substantial impact on the longer-term goals related to avoiding future poverty. The program had no substantial impact on school outcomes for students in elementary and middle school, and no positive effects for less proficient high school students. Despite cash incentives to maintain full-time work, parents did not significantly increase their earnings in jobs covered by unemployment insurance, although it is possible that parents increased informal earnings that were not covered. In fact, Family Rewards reduced work effort for parents with limited educational attainment: parents in the program lacking a high school diploma had an average quarterly employment rate three percentage points lower than the control group, with average earnings roughly $2,900 lower. For the parents with the most limited job prospects, the program had the opposite of its intended effect, as a cash transfer amounting to a substantial portion of what they could earn led them to reduce work effort. The minimal results for both parents and their children highlight some of the limitations of this strategy for trying to help them find a sustainable path out of poverty.

The results of this large randomized controlled CCT program show the challenges and limitations of government efforts to address the complicated problem of poverty. While the program’s substantial cash transfers produced some reductions in material hardship, after the program ended there were no significant differences between the program group and the control group in average monthly income or annual poverty rates. This is perhaps the starkest sign that the program largely fell short of its second goal of incentivizing people to develop the capacity for self-sufficient prosperity in the longer term: the effects faded out when the transfers did. Instead of encouraging parents to maintain full-time work, Family Rewards actually had an adverse impact on the work effort of some of the most disadvantaged parents, which could have left them even more disconnected and isolated, with a whole range of potential negative spillover effects. The groups behind Family Rewards incorporated these findings into the design of a second iteration of the program that ran from 2011 to 2014, with results and more lessons learned from that pilot available this year. The current welfare system is a tangled mess in dire need of reform. Pilot programs and evaluations like this one are valuable because they help us see what works, or in this case, what doesn’t.

Donald Trump’s success in the U.S. is not unique. Europe is being buffeted by similar populist currents.

The United Kingdom might vote to exit the European Union in June. Moreover, a vote to leave might spark what John Gillingham of the Harvard Center for European Studies and Marian Tupy of the Cato Institute called a “rush for the exits.”

The most important question for UK voters is: Does belonging improve their lives?

European unity originally was designed to expand economic markets. The “European Project” took a dramatic new turn in 1993 with the Maastricht Treaty, which created the European Union and set as a goal “ever-closer union among the peoples of Europe.” This process was enthusiastically endorsed by a Eurocratic elite, many of whom are located in Europe’s quasi-capital of Brussels.

Are the benefits worth the cost? The single market remains the organization’s greatest contribution to Europe. However, regulation increased as Brussels expanded its authority. The London-based group Open Europe figured that the 100 most important EU regulations cost Britons about $33.3 billion annually.

The EU unabashedly infringes national sovereignty. For instance, Nile Gardiner of the Heritage Foundation wrote: “For decades, the British people have had to surrender their right to self-determination and have been forced to endure the humiliation of having British laws being overruled by European courts, and a multitude of rules and regulations imposed by unelected bureaucrats in Brussels.”

The UK government figures about half of economically significant laws originate in EU legislation. Yet London doesn’t need oversight from Brussels, having set the global standard for parliamentary government for much of the world.

Which is why Prime Minister Cameron pressed for broader British exemptions from EU dictates. He won only modest concessions.

At least Brussels still is less Leviathan than is Washington. But some Eurocrats openly pine for a United States of Europe.

Unfortunately, continental government is almost inherently anti-democratic. The EU has been attacked for its “democratic deficit.” Washington suffers a similar problem.

But continental authority weighs more heavily on European nations because much more separates them than divided the American colonies. Attempting to impose unity has triggered strong resistance.

Does London really need to be a member of the EU to promote prosperity? Critical would be new relationships forged by London with Europe and America.

Reaching an agreement with America should be relatively easy, despite President Barack Obama’s professed skepticism. The UK is a significant investor in the U.S. as well as major trading partner.

London could deal with EU members like any other nation under the rules set by the World Trade Organization or negotiate its own trade arrangement. Irritated Eurocrats might not be inclined to be generous. Still it would be in the EU’s interest to facilitate commerce for both sides.

Overall, Raoul Ruparel, Stephen Booth, and Vincenzo Scarpetta of Open Europe predicted that Brexit likely would reduce GDP between 0.5 and 1.5 percent. So, they explained, “the question then is whether the UK can use its new found freedoms to offset this cost or reverse it to a positive outcome.”

Brexit opponents also contend that the UK would lose international influence by leaving. However, foreign clout is of far greater interest to government officials than to normal people who pay the bills.

For the first time in decades the European Project risks going into reverse. As I wrote in Forbes: “Europeans are learning what Americans realized decades ago: a government strong enough to open markets is strong enough to impose uniformity.”

The United Kingdom will thrive in or out of the European Union. The British people must decide just how much they are prepared to pay to preserve a unified Europe.

Listening to Hillary Clinton put her big-government ideology before the needs of veterans (see below video) brings to mind an email exchange I had recently with a correspondent who had questions about privatizing Medicare, Medicaid, and the Veterans Health Administration.

The video is an interview with Libertarian presidential and vice presidential candidates Gary Johnson and Bill Weld into which MSNBC interjected a telephone interview with Democratic candidate Hillary Clinton. Clinton protests (starting at 4:20) that Congress should not privatize the VHA, while Bill Weld, a former two-term Republican governor of Massachusetts, gives one of the best explanations I’ve seen of why it should (10:00).

The email exchange follows the video.

Here’s what my correspondent wrote:

I have been discussing VA reform with a friend. She brought up as counter argument that Kansas (where she lives) “privatized” their Medicare and Medicaid program and made “everything worse.”

Not living in Kansas and being unfamiliar with their reforms of these programs left me without a response. (Other than, “Says you, liberty is always better!”) For all I know they came up with some weird public/private collaboration that really did make things worse. If so I’d like to show why that is not “privatization.”

Could you possibly direct me to any resources where I can get more information on this issue? I am especially interested in a critique of whatever they did from a free market vs. government perspective.

Here was my response:

Privatization means the government transfers ownership of a physical or financial asset from the government to private actors. The government could privatize the Veterans Health Administration by selling VHA hospitals. But it cannot privatize Medicare. There’s just nothing to transfer.

Your friend probably means that Medicare or Medicaid is contracting with private health insurance companies to provide the program’s “guaranteed” benefits. That is not privatization. Medicare and Medicaid have traditionally operated by writing checks to private doctors and hospitals. When these programs contract with private insurance companies (e.g., through the Medicare Advantage program), the government is simply writing checks to private insurance companies rather than private doctors and hospitals. There is nothing more private about the former than the latter. In both cases, the government is paying the piper and calling the tune.

When these programs contract with private insurance companies, the effects are usually mixed. There can be improvement on some dimensions of cost and/or quality, but there is usually simultaneous backsliding on others.

I should have added that, contra Clinton, Congress should privatize the entire Department of Veterans Affairs, including all the VHA’s physical capital. My colleague Chris Preble and I explained how and why in the New York Times.

The Johnson-Weld campaign might look at that proposal as well as my “Large HSAs” proposal (Weld also mentioned HSAs), which some members of Congress have introduced as legislation.

Yesterday, the Washington Post reported that the U.S. age-adjusted death rate has ticked up slightly, breaking a trend of long-term decline. That is worrying and worth looking into, but let’s not lose sight of the broader picture.

The rise in U.S. life expectancy has been going on for more than a century—almost uninterrupted. The only major disruption to the trend was a brief dip a century ago caused by the Spanish Flu pandemic following the end of WWI. What a pity that long-term trends do not make for flashy headlines!

Life expectancy isn’t the same for different groups. As is the case globally, the gender gap in the United States favors women. Scientists are still studying why women live longer than men, but it may be related to differences in the immune system.

Racial life expectancy disparities have narrowed considerably since 1900, although they still remain. The gender gap has proved far more persistent than the racial gap: African American women now outlive white American men on average.

Life expectancy has been rising at an even faster pace in most developing countries, thanks in large part to falling infant mortality rates. In my lifetime alone, Africans have gained almost eight years of life on average, while U.S. life expectancy has risen by almost four years. While it may not make for a good headline, rising life expectancy certainly makes for a story worth telling.

A new paper that examined the effect Uber had on crime in 150 cities and counties from 2010-2013 reveals that Uber lowers the rate of DUIs and fatal vehicle crashes. This is not an especially surprising finding given that Uber, like other ridesharing companies, offers a convenient way for those who’ve had a few drinks to get a sober ride home. Interestingly, the paper’s authors, Providence College’s Angela K. Dills and Stonehill College’s Sean Mulholland, also found that the introduction of Uber is followed by a decline in arrests for assault as well as an increase in vehicle theft. On balance, Dills and Mulholland’s paper ought to reassure those who are concerned about ridesharing safety.

Uber’s effect on drunk driving has been one of the technology company’s strongest talking points. Last year, Uber teamed up with Mothers Against Drunk Driving and issued a report, which claimed that Uber’s entry into Seattle was associated with a 10 percent reduction in DUI arrests. During debates on ridesharing in Austin, Texas, Travis County Sheriff Greg Hamilton noted that while the causal relationship between Uber’s arrival and a reduction in DWI arrests “requires more study,” DWI arrests had fallen 16 percent in 2014, the year after Uber came to Austin, and in 2015 DWI arrests declined by 23 percent.*

According to Dills and Mulholland, “For each additional year of operation, Uber’s continued presence is associated with a 16.6 percent decline in vehicular fatalities.” The reduction in fatal vehicle crashes prompted by Uber’s arrival shouldn’t be attributed solely to fewer drunk people driving. A recent Pew survey found that 28 percent of 18- to 29-year-olds have used a ridesharing service like Uber, more than any other age group. As the graph below from the Insurance Institute for Highway Safety shows, this is an age group that is comparatively very prone to fatal car accidents.

Not only is there a strong indication that Uber reduces drunk driving, it’s also a platform very popular among some of the country’s most dangerous drivers.

The decline in arrests for assault is an intriguing finding, and Dills and Mulholland write that this may be because Uber cuts wait-times for passengers, thereby reducing their chance of being assaulted on the street:

Wait times are also likely to be lower because ride-sharing applications can quickly adjust prices in response to changes in the number of riders and drivers. Potential ride-share passengers do not need to physically search for a vehicle as they do for a taxi. This reduces opportunities for them to become the victim of a street crime. Potential passengers can also leave on short notice. This may reduce assaults.

Perhaps the most interesting of Dills and Mulholland’s findings is that an increase in vehicle thefts follows Uber’s arrival in an area, “reflecting more than 100 percent increases at the mean.” Dills and Mulholland suggest that this could be because of “an increased propensity for Uber passengers to leave personal vehicles parked in public locations.”

I and others have argued that ridesharing services such as Uber should be allowed to compete against traditional market incumbents. Uber is a popular service that allows users to efficiently find rides at times and places where it is inconvenient to find taxis. Dills and Mulholland’s paper shows that not only can Uber offer a valuable service to its users, it can save lives as well.

*For the difference between a DUI and DWI under Texas law see this explainer from Johnson, Johnson & Baer, a Houston-based DWI law firm. http://www.dwi-texas.com/what-is-the-difference-between-a-dui-and-a-dwi-under-texas-law/

Earlier this year, I documented the Obama administration’s abysmal results before the Supreme Court (the two Obamacare cases excepted). Not only is its overall winning percentage much worse than any other modern presidency, but its spate of unanimous losses is truly record-breaking.

And that record has only grown in the last few months. This week the government suffered its fifth unanimous loss of the year – matching its dubious achievement in 2013, with 25 cases still left to be decided – in a property-rights case in which Cato filed an amicus brief, U.S. Army Corps of Engineers v. Hawkes Co.

Hawkes has a somewhat technical background but the case boiled down to this question: Can a landowner – in this case a peat-mining company (nothing to do with scotch, unfortunately) – challenge a government determination that its land is subject to federal regulation? Not whether the land is properly a wetland under the Clean Water Act, but whether the owner can go to court to argue the point in the first place!

Thankfully, all eight justices ruled that yes, this agency action is subject to judicial review under the Administrative Procedure Act. If you’re an eagle-eyed reader and think this reminds you of another case from a few years ago, you’re right! In 2012, the Court – also unanimously – ruled essentially the same way in a case called Sackett v. EPA. Yes, that case involved a different government agency and different legal technicalities, but the upshot is the same: if the government does something that hurts your use and enjoyment of your land, you get to go to court to challenge that action.

You’d think this would be a simple proposition, and yet the government insists on fighting it all the way to the highest court in the land – and garnering nary a vote. Congratulations to our friends at the Pacific Legal Foundation, who litigated Hawkes and who have now won eight straight cases at the Supreme Court!

Finally, one interesting footnote to Hawkes: The Court took up this case after the U.S. Court of Appeals for the Eighth Circuit had ruled against the government and thus split from an opposite ruling by the Fifth Circuit in an essentially identical case called Kent Recycling Services v. U.S. Army Corps of Engineers. That Hawkes ruling happened but two weeks after the Court had denied review in Kent Recycling. Accordingly, the keen PLF lawyers who also brought Kent Recycling filed an immediate petition for rehearing, which the justices held pending the resolution of Hawkes. That petition will now be Granted, the lower-court ruling Vacated, and the case Remanded – what lawyers call “GVR’d” – for reconsideration (and reversal) in light of Hawkes.

As far as I know, it’s been decades since a cert. denial was not only reconsidered, but turned into a summary reversal on the merits. And it was here at Cato’s Constitution Day conference where John Elwood made what I believe was the first public call for just that outcome (see final panel). 

Last week marked the 92nd anniversary of the passage of the Immigration Act of 1924, also known as the National Origins Act.  This bill marked the permanent end of America’s nearly open borders policy with Europe.  Other previously passed laws like the Chinese Exclusion Act, the Literacy Act of 1917, and the Page Act restricted immigration from elsewhere.

The Immigration Act of 1924 limited the annual number of new immigrants by country to just 2 percent of the number of immigrants from that country who were already living in the United States in 1890.  This was a reform of the temporary Emergency Quota Act of 1921, which had limited immigration to just 3 percent of the number of immigrants from any country who were already living in the United States in 1910.  Congress picked 1890 as the target date for the 1924 Act because that would exclude most of the Italian, Eastern European, and other Southern European immigrants who had come to dominate immigration since then (Charts 1 and 2).  The 1924 Act also created family reunification as a non-quota category. 
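The quota arithmetic can be illustrated with a quick sketch. The census counts below are hypothetical round numbers chosen only to show how shifting the base year from 1910 to 1890 (and the rate from 3 to 2 percent) shrank the quotas for countries whose immigration surged after 1890:

```python
def annual_quota(census_count, rate_percent):
    """Annual quota = rate_percent of the number of immigrants from a
    country counted as living in the U.S. in the chosen census year."""
    return int(census_count * rate_percent / 100)

# 1921 Act: 3 percent of the (hypothetical) 1910 census count
print(annual_quota(1_000_000, 3))  # 30000

# 1924 Act: 2 percent of the (hypothetical) 1890 census count --
# a much smaller base for Southern and Eastern European countries,
# hence a much smaller quota
print(annual_quota(200_000, 2))    # 4000
```

For a country whose immigrant population was small in 1890 but large by 1910, both the earlier base year and the lower rate compound to cut the quota drastically.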

Chart 1: Immigrants by Region of Origin (1820-1889)

Source: Yearbook of Immigration Statistics.

Chart 2: Immigrants by Region of Origin (1890-1920)

Source: Yearbook of Immigration Statistics.

The supporters of the 1924 Act gave several reasons for blocking immigration from Europe. 

Prescott Hall, co-founder of the Immigration Restriction League that concocted the national origin scheme, wrote: “Do we want this country to be peopled by British, German, and Scandinavian stock … or by Slav, Latin, and Asiatic races, historically downtrodden, atavistic, and stagnant?” 

Representative Albert Johnson, chairman of the House Committee on Immigration and Naturalization, was also the head of the Eugenics Research Association.  One of Johnson’s key advisers on immigration was Madison Grant, author of the 1916 best seller The Passing of the Great Race, a tract that denigrated Asians and blacks and split Europeans along absurdly antiquated racial lines.  They wrote Hall’s scheme into law.

Why did the 1924 Act use a complex national-origins system to discriminate (mostly) based on race and ethnicity when Congress could have just explicitly discriminated based on race and ethnicity?  Prominent 1924 Act supporter and New York University sociologist Henry Pratt Fairchild explained the answer in his 1926 book The Melting Pot Mistake:

“The question will probably at once arise, why, if this legislation was a response to a demand for racial discrimination, was it expressed in terms of nationality?  The answer is simple.  As has already been shown, our actual knowledge of the racial composition of the American people, to say nothing of the various foreign groups, is so utterly inadequate that the attempt to use it as a basis of legislation would have led to endless confusion and intolerable litigation.  So Congress substituted the term nationality, and defined nationality as country of birth.  It is clear, then, that ‘nationality,’ as used in this connection, does not conform exactly to the correct definition of either nationality or race.  But in effect it affords a rough approximation to the racial character of the different immigrant streams [Emphasis added].”

Fear of litigation, administrative simplicity, and the knowledge that nationality and race were close enough for this piece of discriminatory legislation to achieve its goals made explicit discrimination unnecessary. 

Some of the worst provisions of the 1924 Act were changed in 1952 and the rest of it was obliterated in 1965, with the exception of the immediate family exemptions from the quota.  The Displaced Persons Act of 1948 corrected another serious deficiency of the 1924 Act by creating the first refugee law in U.S. history.  It was a response to the U.S. government’s refusal of Jewish refugees during the 1930s and helped absorb refugees from communism in the newly declared Cold War.

The brutal justifications for the 1924 Act and its terrible consequences should make us all glad that it’s a dead and buried law.  
