Feed aggregator

Working the world of public policy, I’m used to surreal moments.

Such as the assertion that there are trillions of dollars of spending cuts in plans that actually increase spending. How do you have a debate with people who don’t understand math?

Or the oft-repeated myth that the Reagan tax cuts for the rich starved the government of revenue. How can you have a rational discussion with people who don’t believe IRS data?

And let’s not overlook my personal favorite, which is blaming so-called tax havens for the financial crisis, even though places such as the Cayman Islands had nothing to do with the Fed’s easy-money policy or with Fannie Mae and Freddie Mac subsidies.

These are all examples of why my hair is turning gray.

But I’ll soon have white hair thanks to the new claim from European bureaucrats that countries are guilty of providing subsidies if they impose low taxes on companies.

I’m not joking. This is basically what’s behind the big tax fight between Apple, Ireland, and the European Commission.

I did several TV interviews on the topic yesterday, all of which can be seen here, but the Wall Street Journal did a great job of summarizing the issue today. Let’s look at that editorial, starting with the European Commission’s galling decision to use anti-trust laws to justify the bizarre assertion that low taxes are akin to a business subsidy.

Even by the usual Brussels standards of economic malpractice, Tuesday’s €13 billion ($14.5 billion) tax assault on Apple is something to behold. …Apple paid all the taxes it owed under existing tax laws around the world, which is why it hasn’t been subject to enforcement proceedings by revenue authorities. …Brussels now wants to use antitrust law to tell Ireland and other low-tax countries how to apply their own tax laws. …Brussels is deploying its antitrust gnomes to claim that taxes that are “too low” are an illegal subsidy under EU state-aid rules.

This is amazing. A subsidy is when government officials use coercion to force taxpayers (or consumers) to pay more in order to line the pockets of a company or industry. The Export-Import Bank would be an example of this odious practice, as would ethanol handouts.

Choosing to tax at a lower rate is not in this category. It’s a reduction in government coercion.

That doesn’t necessarily mean we’re talking about good policy, since there are plenty of preferential tax laws that should be wiped out as part of a shift to a simple and fair flat tax.

I’m simply pointing out that lower taxes are not “state aid.”

The WSJ also points out that it’s not uncommon for major companies to seek clarification rulings from tax authorities.

Brussels points to correspondence between Irish tax officials and Apple executives to claim that Apple enjoyed favors not available to other companies, which would be tantamount to a subsidy. But all Apple received from Dublin, in 1991 and 2007, were letters confirming how the tax authorities would treat various transactions under the Irish laws that applied to everyone. If anyone in Brussels knew more about tax law, they’d realize such “comfort letters” are common practice around the world.

Indeed, the IRS routinely approves “advance pricing agreements” with major American taxpayers.

This doesn’t mean, by the way, that governments (the U.S., Ireland, or others) treat all transactions appropriately. But it does mean that Ireland isn’t doing something strange or radical.

The editorial also makes the much-needed point that the Obama White House and Treasury Department are hardly in a position to grouse, particularly because of the demagoguery and rule-twisting that have been used to discourage corporate inversions.

As for the U.S., the Treasury Department pushed back against these tax cases, which it rightly views as a protectionist threat to the rule of law. But it’s hard to believe that Brussels would have pulled this stunt if Treasury enjoyed the global respect it once did. President Obama and Treasury Secretary Jack Lew have also contributed to the antibusiness political mood by assailing American companies for moving to low-tax countries.

Amen.

It’s also worth noting that the Obama Administration has been supportive of the OECD’s BEPS initiative, which also is designed to increase corporate tax burdens and clearly will disadvantage US companies.

A story from the Associated Press focuses on the European Commission’s real motive.

The European Commission says…it should help protect countries from unfair tax competition. When one country’s tax policy hurts a neighbor’s revenues, that country should be able to protect its tax base.

Wow, think about what this implies.

We all recognize, as consumers, the benefits of having lots of restaurants competing for our business. Or several cell phone companies. Or lots of firms that make washing machines. Competition helps us by leading to lower prices, higher quality, and better service. And it also boosts the overall economy because of the pressure to utilize resources more efficiently and productively.

So why, then, should the European Commission be working to protect governments from competition? Why is it bad for a country with low tax rates to attract jobs and investment from nations with high tax rates?

The answer, needless to say, is that tax competition is a good thing. Ever since the Reagan and Thatcher tax cuts got the process started, there have been major global reductions in tax rates, both for households and businesses, as governments have competed with each other (sadly, the US has fallen way behind in the contest for good business taxation).

Politicians understandably don’t like this liberalizing process, but the tax competition-induced drop in tax rates is one of the reasons why the stagflation of the 1970s was replaced by comparatively strong growth in the 1980s and 1990s.

To conclude, it’s worth noting that Apple is just the tip of the iceberg. If the EC succeeds, many other American companies will be under the gun.

And when politicians - either here or overseas - raise taxes on companies, never forget that they’re actually raising taxes on workers, consumers, and shareholders.

P.S. Just in case you think the Obama Administration is sincere about defending Apple and other American companies, don’t forget that these are the folks who included a global corporate minimum tax scheme in the President’s most recent budget.

Twenty years ago last week, Congress enacted the most extensive welfare reform law since the 1960s, the Personal Responsibility and Work Opportunity Reconciliation Act. Cato scholars have long championed a particular aspect of the reform bill that excluded recent legal immigrants from federal means-tested public benefits and have argued for extending the law’s restrictions. Welfare reform was successful: immigrants thrived without government support.

The theory behind welfare reform was that depriving immigrants of benefits would incentivize those already here to find jobs and encourage only those who wanted to work to come. The theory appears to have worked out in practice. Following the law’s enactment, immigrants who were most likely to be targeted by its restrictions responded by working more, which decreased the prevalence of poverty in their households.

While the 1996 reform restricted welfare in some ways for both native-born citizens (natives) and naturalized citizens, the law imposed the harshest restriction on noncitizens, barring them from any means-tested public benefits until they became eligible to apply for citizenship—5 years after receiving lawful permanent residency in the United States. The Census Bureau’s Current Population Survey details the unemployment among noncitizens and native-born or naturalized citizens.

All groups saw reductions in unemployment post-welfare reform, but noncitizens saw the greatest reduction from 1996 to 2016. In 1996, natives had an unemployment rate that was 3 percentage points lower than the rate among noncitizens. By 2016, the two rates had converged. While unemployment among naturalized citizens also declined during this time, the noncitizen rate dropped much further (3.4 percentage points compared to 0.9 points).

Figure 1: Unemployment Rate among Noncitizen, Native-Born and Naturalized Citizens

Source: Current Population Survey

Unemployment did not decline as a result of immigrants abandoning the labor force, either. In fact, the labor force participation rate (LFPR) among noncitizens rose rapidly after 1996. The native LFPR was nearly 2 percentage points above the noncitizen rate in 1996; by 2016, the noncitizen rate was 3.7 percentage points higher than the native rate. Again, naturalized citizens improved, but not as greatly (0.6 percentage points).

Figure 2: Labor Force Participation Rate among Noncitizens, Native-Born and Naturalized Citizens           

Source: Current Population Survey

But the most important trend is this: not only did poverty decline among noncitizens from 1994 to 2014, they were the only group of the three citizenship classes to see a decline in poverty. Noncitizens experienced a 5.1 percentage point decrease in their rate of poverty (an 18.4 percent drop from their 1996 rate), compared to a 2.1 percentage point increase for naturalized citizens (20.2 percent growth) and a 0.3 percentage point increase for natives (2.6 percent growth). Because this poverty statistic includes all cash public benefits, noncitizens performed best even after accounting for welfare payments to citizens.

Figure 3: Poverty Rate among Noncitizen, Native-Born and Naturalized Citizen Households         

Source: Current Population Survey, March Supplement
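As a quick sanity check on the percentage-point versus relative-change figures above, the arithmetic can be sketched in a few lines. Note that the implied 1996 base rate is a back-of-the-envelope inference from the reported numbers, not a statistic taken from the survey itself:

```python
def relative_change_pct(base_rate_pct, pp_change):
    """Convert a percentage-point change into a relative (percent) change."""
    return pp_change / base_rate_pct * 100

# A 5.1-point drop described as an 18.4 percent decline implies a 1996
# noncitizen poverty rate of roughly 5.1 / 0.184 ≈ 27.7 percent.
implied_base = 5.1 / 0.184
print(round(implied_base, 1))                            # ≈ 27.7
print(round(relative_change_pct(implied_base, 5.1), 1))  # ≈ 18.4
```

The same arithmetic explains why a small 2.1-point rise for naturalized citizens can register as 20.2 percent growth: their base poverty rate was far lower to begin with.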

If these increases in employment and labor force participation were partially the result of welfare reform, we would expect two conditions to hold. First, the main effect of the law would be felt in the first five years after enactment as the first cohorts of immigrants enter without benefits. After five years, the population of immigrants becoming eligible for benefits each year roughly matched the population entering without benefits, so the rate of welfare receipt among legal immigrants would remain roughly constant.

And this is exactly what we see. Nearly all of the gains in labor force participation, unemployment, and poverty occurred between 1996 and 2000. Of course, the absolute gains during that time were a result of the late 1990s booming economy, but the relative gains for noncitizens compared to naturalized and native-born citizens also occurred during this period and were maintained later, just as the theory predicts.

The second prediction is this: the gains should have occurred most strongly among noncitizens who arrived within the prior five-year period because, as mentioned before, noncitizens are ineligible for federal benefits only in their first five years in the country. Unfortunately, the Current Population Survey data on year of entry that I found using its Data Ferret tool only extends through 2004, but this slice reveals that noncitizens who were barred from federal benefits once again did better than other noncitizens.

In 1996, the unemployment rate among recent immigrants was 5.5 percentage points higher than the rate among established immigrants. By 2004, the difference was just half a percentage point. The vast majority of the gains occurred in the recent immigrant group.

Figure 4: Unemployment Rate among Noncitizens Who Arrived in Prior 5 Years and Other Noncitizens    

Source: Current Population Survey, March Supplement

The same pattern can be seen in the labor force participation rate data. Noncitizens of both types improved from 1996 to 2004, but immigrants barred from benefits did better. Recent immigrants were 8.4 percentage points less likely to be working or looking for work in 1996 than established immigrants. They were 5.3 percentage points less likely to do so in 2004.

Figure 5: Labor Force Participation Rate among Noncitizens Who Arrived in Prior 5 Years and Other Noncitizens 

Source: Current Population Survey, March Supplement

Noncitizen outcomes reflect their increased employment. Those without access to public benefits saw the greatest reduction in poverty of any group in the United States. From 1996 to 2004, the poverty rate among noncitizens who arrived in the prior five years declined 9 percentage points compared to 5 percentage points for other noncitizens.

Figure 6: Poverty Rate among Noncitizen Households Whose Head of Household Arrived in Prior 5 Years and Other Noncitizen Households 


Source: Current Population Survey, March Supplement

The theory is not complicated by the fact that the trends for noncitizens outside of the five-year bar were also positive (though much less so). Other research has shown that noncitizens who were eligible for benefits also used them less following the 1996 act. Social scientists have debated the reasons for this “chilling” effect—confusion over the law or community norms could have a role—but in either case, they also appear to have benefited from the law.

The unambiguous lesson of this story is that welfare reform did not decimate the immigrant poor. It is impossible to know for sure the full reasons for these trends, but although exceptions undoubtedly exist, it is clear that the vast majority of immigrants can thrive without federal benefits. The United States could financially afford to accept many more immigrants if it further restricted such benefits, allowing it to be even more welcoming to another generation of hardworking immigrants. 

After last week’s release of Education Next’s 2016 survey of education opinion (see Jason Bedrick’s and Neal McCluskey’s responses), Phi Delta Kappa yesterday released its own poll (see Neal’s take on that here). Once again, the poll sheds light on the public’s understanding (or misunderstanding) of school financing.

In an open-ended question, Americans for the 15th consecutive year designated “lack of money/financial support” the biggest problem facing public schools. Perhaps as a result, most Americans—53% in support to 45% opposed—favored increasing property taxes to boost school funding. However, there was broad skepticism (47% of respondents) that increases would spur quality improvements. What explains this apparent inconsistency?

It turns out support for increased property taxes is correlated with how respondents ranked local public schools. Those that viewed their public schools more favorably were more likely to support property tax hikes and be confident that increased funding would improve schools. Conversely, those that rated local schools lower were more resistant to hikes and skeptical that increased funding would result in improvements. While two-thirds of those that gave their local schools an A grade were confident that increased funding would help, only 17% of those that gave their schools an F agreed.

In what PDK calls its most “lopsided” result, Americans overwhelmingly preferred keeping a failing school open to closure, 84% to 14%, but 62% favored replacing teachers and administrators over increasing funding in the turnaround. Americans, it seems, agree that increased funding will not improve underperforming schools. Furthermore, 26% of those that gave their schools a failing grade thought school closure was the more appropriate response, compared to only 13% of the general public.

Listing funding as a problem also does not necessarily result in support for increased property taxes. In the latest poll, 19% of respondents cited school funding as the biggest problem, down from a record high of 36% in 2010 and 2011, the peak of the recession years. But the Education Next poll demonstrates that support for property tax hikes declined dramatically during those years.

Another reason so many respondents cited “lack of funding” as a major problem? The open-ended nature of the question allowed up to three responses, increasing the likelihood that many respondents would include school funding as one of their answers. That only 19% of respondents included it seems low given that a majority of respondents favored property tax increases. Moreover, the EdNext pollsters theorize that support for increases in funding rises in election years, when this issue is most hotly debated, and it’s therefore unsurprising that it was seen as the biggest problem in public education.

An important caveat to these findings is that support for increased funding dramatically drops when an individual is informed of real spending. In the EdNext poll, uninformed respondents estimated average per-pupil spending at $7,020, a little more than half the actual average of $12,440. When uninformed respondents were asked if they favored an increase in school funding, 61% supported the idea; when a separate group of respondents was told the actual per-pupil expenditure, support dropped to 45%.

These results lead to a number of conclusions. First, support for increased schooling taxation comes disproportionately from communities with wealthy, already well-performing public schools, where parents are confident that spending is put to good use. The poll results shouldn’t be seen as supporting property tax hikes in communities with failing schools, where the effectiveness of more funding is suspect. Second, because the public appears uncertain about funding as a tool to turn around schools, perhaps a better alternative is to give parents more control over their children’s education via school choice policies, as minority groups favor. Finally, these studies together reinforce the importance of a well-informed public. Support for spending increases drops for all groups—teachers, Republicans, Democrats, and the general public—when respondents are given accurate information.

Despite large numbers of respondents favoring property tax increases, the PDK poll demonstrates broad skepticism of more funding for failing schools. And there is no powerful link between spending and academic performance, so it is heartening that the public appears intuitively aware of this.

As a battered and weary country peers into the hellmouth of Election 2016, contemplating the “bleak choice between a ‘liar’ and your ‘drunk uncle,’” along come two of (Anglo-) America’s premier public intellectuals with a plan for getting honest, sober policies out of our next president. “We urge the next president to establish a White House Council of Historical Advisers,” Niall Ferguson and Graham Allison write in this month’s Atlantic. Modeled on the Council of Economic Advisers, the CHA would bring together the country’s finest historical minds, backed by a professional staff, to help close the “history deficit” at 1600 Pennsylvania.

It’s one of those buzzworthy notions that seems ingenious on first airing—a presidential “Dream Team of Historians”!—but gets less shiny the closer you examine it. 

Arthur Schlesinger, Jr.: speaking truth to JFK’s power

“I think there would be more than enough work for a council of applied historians,” says Harvard’s Allison. What kind of work? As Ferguson and Allison envision it, the Council could help presidents avoid unforced historical errors, like the invasion of Iraq. When Bush “chose to topple Saddam Hussein,” they write, “he did not appear to fully appreciate either the difference between Sunni and Shiite Muslims,” and “he failed to heed warnings that the predictable consequence of his actions would be a Shiite-dominated Baghdad beholden to the Shiite champion in the Middle East—Iran.”

It’s a fair critique, but neither Ferguson nor Allison is in a great position to make it. It wasn’t what either of them was saying at the time.

During the war fever of 2002-03, Ferguson wasn’t urging the administration to rethink the Iraq adventure, lest they inadvertently empower Iran–he was cheering the disaster on. “By showing them just how easily Saddam’s vicious little tyranny could be overthrown,” he wrote in the Daily Mail (“Empire of the Gun,” June 21, 2003), “Mr. Bush has made it clear to the leaders of Iran, Syria and Saudi Arabia that he is in deadly earnest. If their countries continue to sponsor terrorism as all three notoriously do, Saddam’s fate could befall them too. Such saber-rattling evidently works.” Further: “Historians may well look back on 2003 as a turning point in the troubled politics of the Middle East. And they will give much of the credit for that transformation to the courageous and undoubtedly risky strategy adopted by President George Bush.” Just the hard truths Bush needed to hear!

In fairness, Ferguson could sometimes be heard to strike a cautionary note: “Consider some history,” he urged those advocating a time-limited occupation: “the British ran Iraq for the better part of 40 years.” As a “fully paid up member of the neoimperialist gang,” Ferguson worried about America’s imperial “stamina.” A “successful” occupation might take a million troops, he warned in 2005, plus “around 10 years to establish order in Iraq, 30 more to establish the rule of law, and quite possibly another 30 to create a stable democracy.” But like Madeleine Albright, he thought “the price is worth it.” If Bush had taken a dose of Ferguson’s “applied history,” we’d have ended up spending even more blood and treasure on a futile project.

Graham Allison was much less exuberant about the Iraq War, but he wasn’t against it. When you’ve got “rattlesnakes in your backyard, backing off and hoping they slink away is not the answer,” he wrote in October 2002. Even so, Allison believed that Saddam’s WMD arsenal was so fearsome we might need a crash program to bolster America’s biodefenses before launching the attack: “Have you gotten your anthrax and smallpox vaccinations yet?” Over the next 14 years, he’s spent much of his time predicting that jihadists are about to graduate from crotch-bombs to functional nukes.

So when Ferguson and Allison write in the Atlantic that “applied historians will never be clairvoyants with unclouded crystal balls,” they’re putting it mildly, especially where present company is concerned. If they’d had the president’s ear at the time, he’d have gotten extra doses of alarmism and delusions of grandeur, pathologies that weren’t exactly in short supply in the Bush White House.

In fact, there was a top-flight Middle East scholar, fully up to speed on the differences between Sunnis and Shiites, who had the administration’s attention in the run up to the war. That was Bernard Lewis, “Bush’s historian,” who “deliver[ed] spine-stiffening lectures to Cheney over dinner in undisclosed locations” and pro-war thinkpieces in the Wall Street Journal.

Lewis also had thoughts about Iran, and let them fly in another WSJ op-ed, “August 22,” published on August 8, 2006. In it, he warned that the Iranians might nuke Israel within the next two weeks, to mark the anniversary of “the night flight of the prophet Muhammad … to heaven and back.” “This might well be deemed an appropriate date for the apocalyptic ending of Israel and if necessary, of the world,” Lewis wrote. No crystal ball there either, thankfully.

So one potential problem with the CHA idea is that the president would get to pick the historians. Personnel is policy, and presidents might staff the Council with scholars who feed them even crazier ideas than they’ve already got.

Still, it’s probably a safer bet that the CHA wouldn’t have much influence at all. In one of the recent stories on the Ferguson/Allison proposal, Rep. Tom Cole (R-OK), a former history professor, makes the obvious point that “the council would have value only if the president wanted its advice.” Presidents have mostly used their pet scholars as ambassadors to academia and the chattering classes: they’re valued less for their influence on executive-branch decisionmaking than for their ability to put an intellectual veneer on whatever it is the president’s already decided to do. If implemented, the CHA might be yet another useless appendage to the Executive Office of the President, an institutionalized kaffeeklatsch of court historians.

Even that might not be totally harmless, however. Presidents already have an unhealthy obsession with their legacies—wandering the halls, gazing at the portraits (sometimes even talking to them), and wondering how they’ll stack up in the rankings game: “In 1996 Clinton privately grouped his presidential predecessors into three tiers, then spent a long Sunday morning with consultant Dick Morris discussing what he could do to join the top group.”

The academic consensus on that question seems to be that, to become a great president, you need to dream big, break stuff, and “leave the presidency stronger than you found it.” Given historians’ generally demented judgments about which presidents belong in the top tier, we should probably be grateful Bill didn’t have a Council of Historical Advisers around to consult.

Earlier this month, the New York Times ran a headline “Trial by Jury, a Hallowed American Right, Is Vanishing.”  This is very true.  It’s a trend that we at Cato have been lamenting for many years.  Despite the clear language of the Sixth Amendment, that the accused shall enjoy the right to trial by jury in “all criminal prosecutions,” the government manages to oversee a system where jury trials are quite rare–only about 1 percent of criminal cases are decided by juries.

Fortunately, there’s a new book that calls attention to this problem, The Missing American Jury, by Professor Suja Thomas.  Entrepreneur Mark Cuban recommends the book, saying jury trial “is a right that you never think you will need … until you do.”  Precisely.  Beyond the criminal area, the administrative state is also trampling the right to jury trial in the civil area.

For a podcast interview with Suja Thomas, go here.

For related Cato scholarship, go here and here.

My previous attempts at asking a Trump trade adviser directly about trade policy failed. I’m now going to try another approach: Interpreting something surprising two other Trump advisers said.

Here’s what Wilbur Ross and Peter Navarro wrote recently:

The saddest fact here is that Hillary Clinton doesn’t know the difference between a good trade deal and a bad one. Exhibit A is the Central American Free Trade Agreement (CAFTA-DR).

In her economic speech in Detroit, Clinton bragged that she voted against the one multilateral trade deal that came before the Senate while she was there. That was indeed CAFTA-DR, a multilateral deal involving the U.S. along with Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic.

Here’s what Clinton did not confess to: She was wrong to oppose CAFTA-DR. In 2014, we had a favorable trade in goods balance with the CAFTA-DR countries of $2.7 billion. By 2015, that jumped to $5 billion. This pattern continued in the first half of 2016 with a surplus of $2.4 billion.

Did you catch that?  Trump’s trade advisers are praising a U.S. trade agreement.  That doesn’t sound very Trump-like, as Trump has been saying NAFTA is a “disaster” and calling the Trans-Pacific Partnership (TPP) the “rape” of our country.

So why do they like the CAFTA-DR?  Because in 2014 (9 years after the deal went into effect), there was a U.S. surplus in trade in goods with the various members of CAFTA-DR.  That by itself tells them it was a good deal. 

They follow this up with their typical criticism of NAFTA and the US-Korea FTA, complaining that these agreements led to a trade deficit:

Now what about the very poorly negotiated trade deals Hillary Clinton did support? Take NAFTA, which she lobbied for and her husband, former President Bill Clinton, signed in 1993. At the time, our trade in goods with Mexico was roughly in balance, with a small surplus of $1.7 billion. Today, we run a trade deficit in goods of roughly $60 billion — an astonishing leap. 

NAFTA is hardly a bad trade deal outlier in the Clinton oeuvre. As Secretary of State, Hillary Clinton helped draft the South Korea Bilateral Agreement, describing it as “cutting edge.” She was right. It cut 75,000 American jobs, according to the EPI, rather than the 70,000 gain promised by the White House. Meanwhile, our trade deficit with South Korea has doubled.

So the argument seems to be this: There are good trade agreements that lead to trade surpluses and bad trade agreements that lead to trade deficits. A good negotiator can get you an agreement with a surplus; with bad negotiators, you will end up with a deficit.

In reality, this is complete nonsense. These agreements were all negotiated by basically the same people (the U.S. Trade Representative’s Office), and they all say basically the same thing: They all lower tariffs; they all open services and procurement markets to some degree; they all protect intellectual property; and they all have rules on investor protection. There wasn’t some tricky maneuver the U.S. trade negotiators carried out in the CAFTA-DR context to get a trade surplus, but forgot to use in the other trade agreements.

So why the variation in trade flows between agreements? Well, it’s complicated. These trade numbers actually fluctuate quite a bit from year to year.  See below for tables showing the U.S. trade balance over the years with several CAFTA-DR countries:

There are a lot of reasons for these fluctuations. Among other things, there are overall trends in the global economy and in specific national economies; and there are the decisions of private actors operating in the marketplace (it is these actors who are the ones actually doing the trading), which affect particular sectors over time. Thus, citing a trade surplus with CAFTA-DR countries – especially when it focuses on only the brief period 2014-2016 – as a reason for the success of the agreement vastly oversimplifies the impact of trade agreements on trade flows. Again, it is not the skill of the CAFTA-DR negotiators vs. the incompetence of the NAFTA or US-Korea FTA negotiators that led to the different results here.  As noted, it was basically the same people on each.  This is not like baseball. Trade negotiators do not have an off year and negotiate a bad trade agreement one year, but then come back a couple years later and have a career year by negotiating a good trade agreement.

But Trump’s advisers don’t realize that, probably because they don’t really know much about the substantive details of trade agreements.  They are simply looking at U.S. trade deficits or surpluses that arise after the fact, which generally matter very little (as my colleague Dan Ikenson has explained), and certainly don’t matter much at all in the context of trade between the U.S. and individual countries.

U.S. Trade Representative Michael Froman is having a bad week.  First, Senate Majority Leader Mitch McConnell put the kibosh on lingering prospects that his chamber would consider ratification of the Trans-Pacific Partnership deal this year.  Then Germany’s economy minister proclaimed the 3-year-old Transatlantic Trade and Investment Partnership negotiations had “de facto” failed, with the French trade minister promising to pursue formal termination of the talks – adding that “the Americans give nothing or just crumbs” (which puts the USTR beneath Marie Antoinette, who at least offered cake).    Whether McConnell is being coy in hopes of extracting concessions from the administration on TPP is unclear, but either way the likelihood is approaching certainty that ratification of the Pacific trade deal will become the responsibility of the next president and Congress.  For reasons given here and here, I’m bullish on that outcome within two years.   But the TTIP is a different story.  Although the negotiations are not officially dead, they might as well be. Talks were doomed from the outset, laden with too many intractable issues, too many red lines, a thorough lack of realism concerning the time and effort required for success, and a profound asymmetry in the desire to get a deal done. With U.S. negotiators focused on completing the TPP, the EU’s embrace and commitment to the TTIP became a case of unrequited love.  With each EU overture, the U.S. negotiators could play hard to get.  And they did.   Now, the United Kingdom’s likely departure from the EU complicates matters further, with uncertainty about the future composition of the EU impeding proper evaluation of the expected tradeoffs from a prospective TTIP. So, while the prevailing uncertainty likely means TTIP stasis for the next couple of years, Brexit would give U.S. negotiators even more leverage in TTIP than they already have. 
The possibility of a US-UK free trade agreement or a UK accession to the TPP would undoubtedly shift TTIP dynamics further in favor of U.S. negotiators – and give the UK added leverage in negotiating its own post-Brexit relationship with the EU.

TTIP isn’t dead. It’s in a coma. For it to have any hope of recovery and real success – that is, an outcome with real liberalization – some semblance of symmetry in demand for that outcome must be restored. Given the existing imbalance, it’s better to have no deal at all, because the misguided objective of negotiators is to open foreign markets as much as possible while keeping their own as closed as possible. Negotiators with leverage are more likely to succeed at keeping their own markets closed, depriving their fellow citizens of the real benefits of trade. For Americans to realize the most important benefits of trade liberalization, U.S. negotiators must be matched against foreign negotiators with approximately the same strength (or leverage). When foreign trade negotiators don’t have enough leverage, U.S. consumers and import-consuming industries lose.

For any TTIP outcome to be considered successful, the deal must tackle U.S. restrictions on competition in shipping (repealing the Jones Act), commercial air services, and government procurement projects. Trillions of dollars of annual economic activity in the United States are supplied by domestic providers facing no foreign competition, which represents an enormous drag on U.S. growth. In the TTIP negotiations to date, the United States hasn’t budged an inch to accommodate any liberalization in those areas. Until that is no longer the case, the TTIP should be considered a failure.

When the TTIP negotiations were launched in 2013, I warned in this paper that the talks contained the seeds of their own destruction and that a successful outcome would require a new approach:

As great as the benefits may be, the TTIP was not borne of any genuine enthusiasm for the enterprise. In Europe, it was seen as a last resort. Frustrated by the failures of monetary policy and restricted by the imperative of fiscal austerity, policymakers were looking for something—anything—to embrace as a potential economic tonic. Whether they actually thought TTIP likely to bear fruit is an entirely different matter. They wanted something to behold as evidence that Greece did not represent Europe’s fate. Potential voter wrath, political backlash, and stalemate–historically effective deterrents to initiating transatlantic trade talks–took a back seat to the affirmative optics of embracing some plausible initiative that might steer Europe from the abyss.

For U.S. policymakers, the main motivation for launching TTIP was to assuage EU concerns that the United States had written it off in its “pivot” to Asia.

Other rationales for pursuing TTIP include the argument that the world needs the United States and European Union to reassert global economic leadership at a time when no other country or group of countries is willing or able to do so. Another is that there is a race to establish global production standards, and TTIP, representing half the world’s output, presents an opportunity to establish them here and now. A third ex-post rationale is that by establishing disciplines on issues where other trade agreements are silent—issues like currency manipulation, the operations of state-owned enterprises, local content rules, and others—the United States and EU could establish rules that China and others would eventually have to heed.

It is within this context that TTIP emerged. But none of those rationales–pursuing TTIP as a last resort, assuaging hurt feelings, establishing standards, disciplining China and others–seems likely to provide the motivation for negotiators and governments to dig deep and remain committed enough to make difficult choices that may carry political consequences. As the talks drag on, will governments remain committed to the goals? Will governments motivated by the “last resort” rationale continue to invest seriously in the negotiations if their economies experience growth and the political costs of TTIP no longer look so necessary to incur? Already there have been signs of retreat from the ambitious goals articulated at the outset.

From the outset, negotiators erred by setting a 2014 completion date for the negotiations. There is no plausibility to that deadline and, frankly, failure to amend the timetable with realistic deadlines will only undermine the credibility of the undertaking with a public already skeptical of trade negotiations.

There are dozens of issues on the table of varying complexity that will likely take several years to resolve. Rather than have a single deadline for a single undertaking, the negotiators should announce that their intention is to achieve a multi-tiered agreement that yields multiple harvests at established time intervals. Some analysts have referred to the TTIP as a “living agreement,” although a common understanding of that concept is not evident nor, to my knowledge, have the governments or their negotiators used this characterization in any official context. They should. And it should work something like this.

Negotiators would take stock of the issues on the table and rank them in order of importance to a successful TTIP conclusion. They would then rank those same issues in terms of order of difficulty to resolve. Based on averaging and some agreed upon weighting of those two sets of rankings, negotiators would identify what they and their counterparts see as the most important and least important issues, as well as the most difficult and least difficult issues to resolve. That exercise would produce a road map for how to proceed.

When the dust settles and greater certainty emerges, the United States and EU (and UK) might consider relaunching the TTIP negotiations along these lines. But the parties should come to the table with a genuine willingness to liberalize everything (including sacred cows) because that is what will generate the interest, excitement, and leverage to achieve a really successful outcome.

While the American news media were preoccupied with Donald Trump’s latest tweet or Hillary Clinton’s latest scandal explanation that barely passed the straight-face test, a more important development took place in Europe that received scant attention. The prime ministers of both Hungary and the Czech Republic urged the European Union to build its own army. That is a very significant shift in attitude. Until now, the European countries had been content to channel security matters through NATO and to focus the EU’s attention on economic issues.

The insistence on NATO’s primacy also reflected Washington’s wishes, since it guaranteed U.S. control of transatlantic security decisions.  That control came at a high cost, however, since it enabled the European allies to free ride on Washington’s security exertions.

U.S. leaders have repeatedly discouraged independent security initiatives on the part of the European nations. There was a tremendous opportunity to change policy and offload security burdens with the end of the Cold War. Perhaps the clearest opportunity was when Yugoslavia began to disintegrate in the early 1990s. When the initial European diplomatic and peacekeeping efforts faltered and, predictably, the allies then sought U.S. “leadership,” the Clinton administration’s response should have been a firm rejection.

U.S. officials should have told their European counterparts that the turmoil in the Balkans was a regional matter that had little impact on the United States.  And just as we would have no right to expect them to take a leading role in resolving a similar parochial bout of disorder in Central America or the Caribbean, they had no right to expect U.S. military involvement in the Balkans. Such a stance would have pressured the Europeans to address security issues in their region as competent adults instead of hapless dependents.

Instead, Washington continued to play a dominant role in the Balkans through NATO, first with the Bosnia intervention in 1995 and then with the Kosovo intervention in 1999.  Democratic Europe’s security dependence continued unabated.  Indeed, NATO’s prominence and Washington’s risk exposure increased steadily as the alliance expanded first into Central Europe and then into Eastern Europe.  As defense analyst Nikolas K. Gvosdev laments, the U.S. Senate, in a monumental dereliction of its constitutional duty, failed to ask the hard questions that needed to be asked as Washington undertook increasingly far-flung, questionable security obligations.  With a few exceptions, that was also true of the broader foreign policy community.

American policymakers have learned nothing from that experience.  They still go out of their way to reassure NATO’s European members as though they are helpless protectorates who are (somehow) doing America a great favor by a willingness to be allies.  But the European Union should be far from helpless.  It has both a population and an economy greater than that of the United States.  The EU has failed to develop a defense arm because the United States enabled such irresponsible behavior.

As I discuss in an article over at the National Interest Online, the assurances members of the U.S. foreign policy establishment are giving to the Europeans are becoming dangerously unhinged from reality.  A prime example was the comments that Vice President Joe Biden made during a recent trip to the Baltic republics.  He assured his hosts that America’s commitment to the defense of NATO members remained rock solid.  And “the fact that you hear something” contrary to that treaty obligation “from a presidential candidate in the other party, it’s … nothing that should be taken seriously.”  There was, Biden told his hosts, “continued overwhelming bipartisan commitment in the United States of America in both political parties to maintain our commitment to NATO.”  Given that GOP presidential nominee Donald Trump has repeatedly termed NATO obsolete and indicated in interviews that the defense commitment to the Baltic republics in particular was highly conditional, Biden’s statement was dangerously misleading.  The man whose views he so cavalierly dismissed could be president of the United States on January 20, 2017.

A significant change in views about defense arrangements appears to be taking place on both sides of the Atlantic.  Instead of reacting with hysteria or, as Biden did, with denial that borders on a catatonic response, we should recognize and encourage constructive change.  It is a healthy development if Hungary and the Czech Republic want the European Union to create its own military force.  This time, Washington should not squash such initiatives.  It makes sense for the European nations to handle security problems in their region instead of expecting the United States to do so from 5,000 miles away.

In 2007, Judge Richard Posner found it “untenable” that attaching a tracking device to a car is a seizure. But the Supreme Court struck down warrantless attachment of a GPS device to a car on that basis in 2012. Putting a tracking device on a car makes use of it without the owner’s permission, and it deprives the owner of the right to exclude others from the car.

The weird world of data requires us to recognize seizures when government agents take any of our property rights, including the right to use and the right to exclude others. There’s more to property than the right to possession.

In an amicus brief filed with the U.S. Court of Appeals for the D.C. Circuit last week, we argued for Fourth Amendment protection of property rights in data. Recognition of such rights is essential if the protections of the Fourth Amendment are going to make it into the Information Age.

The case arises because the government seized data about the movements of a criminal suspect from his cell phone provider. The government argues that it can do so under the Stored Communications Act, which requires the government to provide “specific and articulable facts showing that there are reasonable grounds to believe that [data] are relevant and material to an ongoing criminal investigation.” That’s a lower standard than the probable cause standard of the Fourth Amendment.

As we all do, the defendant had a contract with his cell phone provider that required it to share data with others only based on “lawful” or “valid” legal processes. The better reading of that industry-standard contract language is that it gives telecom customers their full right to exclude others from data about them. If you want to take data about us that telecom companies hold for us under contract, you have to get a warrant.

Under the “reasonable expectation of privacy” test, a person doesn’t have privacy or a Fourth Amendment interest in information they share with others. But, as we pointed out to the appeals court, the Supreme Court has been moving away from the “reasonable expectation of privacy” test and its stepchild, the “third-party doctrine.”

The Court of Appeals should put aside doctrine and administer the Fourth Amendment like a law. If there was a seizure—an invasion of a property right, including the right to exclude others from data—that should be reviewed for reasonableness. And the hallmark of reasonableness is getting a warrant.

Speaking of administering the Fourth Amendment, the weird world of data is going to require a deeper understanding of what it means to “search,” too. A new law enforcement technique uses advanced data collection and storage techniques to search entire communities before the government knows what it’s looking for.

Since January, Baltimore police have been recording all activity in the city from above, using a special, camera-equipped plane. The data collected makes any visible activity available for police to review later. I’m calling it “pre-search,” and I’ve written about it on the Reason blog.

In an ordinary search, you have in mind what you are looking for and you go look for it. If your dog has gone missing in the woods, for example, you take your mental snapshot of the dog and you go into the woods comparing that snapshot to what you see and hear.

Pre-search reverses the process. It takes a snapshot of everything in the woods so that any searcher can quickly and easily find what they later decide to look for.

In this case, it’s not the woods. It’s every home in Baltimore, and every Baltimorean. Even though the order may be backward, their interest in security from unreasonable searches is the same. When this technique gathers information about people’s movements and activities in and around homes, the government’s collection and use of data should be subject to the Fourth Amendment’s constraints.

Pre-search is in use at departments of motor vehicles around the country today. Many are using facial recognition to scan the faces of all applicants and driver’s license holders—and they’re doing it without suspicion. Scanning the faces of every driver’s license holder is a pre-search that sets up innocent people for later digital searching.

The weird world of data gives us a lot to grapple with if we’re going to protect our privacy.

Libertarian presidential candidate Gary Johnson, the former governor of New Mexico, wrote a remarkable op-ed for CNN yesterday detailing his views on immigration reform. The piece included the following:

Our politicians, both right and left, have created a system for legal immigration that simply doesn’t work. We have artificial quotas. We have “caps” on certain categories of workers that have no real relationship to the realities of the free market. It’s no coincidence that recent history shows the only successful way to reduce illegal immigration is to have a recession. Over the past 10 years, both illegal entries and the number of undocumented immigrants in the country have declined. That’s not because the government did anything right. ….

Try this, instead: No caps. No categories. No quotas. Just a straightforward background check, the proper paperwork to obtain a real Social Security number and work legally or prove legitimate family ties, and a reliable system to know who is coming and who is going.

Gov. Johnson is correct to reject government-mandated immigration quotas. As I argued in a recent blog post, quotas are the definition of an unreasonable immigration policy—they literally have no reason behind them. They are no different from Soviet manufacturing quotas, and they have the exact same effect: discord in the free market—surpluses where workers are unneeded, shortages where they are needed, and the black markets that inevitably result when government makes movement illegal.

Not to quibble, but Gov. Johnson shouldn’t give the poor economy all of the credit for declining illegal immigration in recent years. In fact, a significant portion of the credit can be attributed to doing what he proposes: granting more work visas. As the figure below shows, the number of work visas is inversely correlated with the number of illegal entries. During the 1950s and 1960s, illegal immigration almost disappeared during the Bracero guest worker program. Then, in recent years, less strict visa rules have resulted in more legal immigration and less illegal immigration.

Figure: Apprehensions of Illegal Immigrants at the Border and Lesser-Skilled Guest Workers Admitted

Sources: INS, DHS, and CBP

If the presidential candidate’s plan were implemented, the days of illegal immigration would be behind us for good.

While some people will call Gary Johnson’s proposal “open borders” without limits, this is not the case and not just because there would be background checks. There would be limits, but the limits would be those imposed not by bureaucrats, but by Americans as individuals. Rather than a single “immigration plan” for the country, Gov. Johnson is proposing giving Americans back their liberty to associate with foreigners as they see fit—plans of the many, not of the few. As the economist Friedrich Hayek once said:

This is not a dispute about whether planning is to be done or not. It is a dispute as to whether planning is to be done centrally, by one authority for the whole economic system, or is to be divided among many individuals. Planning in the specific sense in which the term is used in contemporary controversy necessarily means central planning—direction of the whole economic system according to one unified plan. Competition, on the other hand, means decentralized planning by many separate persons.

That is also the heart of the disagreement between those who favor an “immigration plan” along the lines of those outlined by the major party candidates and those who favor free markets in movement and employment. 

Responding to the worldwide refugee crisis—which the United Nations has called “the biggest humanitarian emergency of our era”—President Obama vowed last September that the United States would accept 10,000 Syrian refugees and 85,000 refugees total over the following 12 months. With much fanfare, the State Department hit its Syrian refugee quota this week. But with just one month left, it is still 15,000 short of its overall target, and if it continues at its current pace, it will come up 3,000 short.

But here are seven reasons why hitting the target would be a major accomplishment.

1) A slow start: The biggest reason that the State Department is cutting it close is that it suffered one of its slowest starts in recent years.  In the prior three fiscal years (FYs), the refugee target was 70,000, and yet even with a higher goal this year, the United States had accepted fewer refugees at the midpoint of this year than at the same time in any of those years (the purple bolded line in the chart below). While the United States has usually ramped up slightly during the second half of prior years, it has taken an historic effort to catch up this year.

Figure: Monthly Refugee Admissions to the United States (FY 2013-FY 2016)

 

Source: State Department

2) Most refugees in a month ever: If the United States is to reach its goal this year, it will need to accept nearly 15,000 refugees in September. That is more than in any month for which the State Department has made data available since 2001, and possibly the most ever. Although month-by-month statistics are unavailable for the record year of FY 1992, when the United States admitted 132,000 refugees, the average monthly intake that year was only 11,000, making it possible that this September will be the most ambitious month in history.

3) Late planning: A major reason for the slow start is that the agencies had planned throughout FY 2015 to accept only 75,000 in FY 2016. It was not until two weeks before the start of the year that Secretary of State John Kerry changed course and decided to increase the number by 10,000. The agencies scrambled to adjust, but it took time to ramp up. “As an operational person and for planning purposes, I had anticipated an increase from 70,000 to 75,000,” Barbara Strack, Refugee Affairs Division Chief of U.S. Citizenship and Immigration Services (USCIS), told Congress on October 1st last year. To conduct enough interviews to meet the goal, the agency needed to “surge” hundreds of refugee officers into Jordan from February to April.

4) No new money: The agencies are already on pace to admit 12,000 more refugees this year than last year, despite receiving no new funding to do so. The agencies actually did not ask for new money. The State Department’s representative Larry Bartlett told Congress in October that the department was “looking for efficiencies across its programs.” Ms. Strack told Congress that USCIS believed that there was “sufficient funding… to cover the 85,000 anticipated admissions in FY ‘16 by reprioritizing between programs.” But again, reprioritizations and efficiencies take time, and the lack of new money likely delayed their ability to ramp up, making the accomplishment all the more impressive.

5) Humanitarian emergencies everywhere: On top of these 85,000 refugees, the Office of Refugee Resettlement will also have to deal with the most Cuban arrivals since 1980, the most asylum-seekers at the border claiming a credible fear of persecution in their home countries ever, and a massive influx of unaccompanied immigrant children. The administration warned Congress as early as December that it may fall short of the money needed to handle the number of unaccompanied children, and that has in the past resulted in money being taken from the refugee program.

6) The most difficult cases: Assuming that it takes in as many as it did last month, the United States is on pace to accept roughly 13,000 Syrian refugees this year, more than five times the number it admitted last year. These cases require the highest level of security review. USCIS’s Fraud Detection and National Security Directorate conducts enhanced reviews of Syrian refugee cases that require more manpower than the typical refugee case. It uses surveillance to check any factually verifiable claim that a refugee makes during the process. This leads to higher rejection rates, which also makes these cases much more difficult to process.

7) Hostile political climate: Maybe this goes without saying, but it has taken political determination to effectuate a 21 percent increase in the number of refugees in the face of congressional opposition and, at times, even public opposition. In 1980, Congress entrusted the president with the authority, after consultation with Congress, to set the refugee goal precisely in order to insulate the decision-making from such political whims. The fact that the president has exercised that authority deserves credit.

This achievement, however, should be seen in context: 85,000 refugees is 0.13 percent of the total number of forcibly displaced persons around the world this year who have fled their homes to escape violence and persecution. The president’s current goal for FY 2017 is 100,000, which is still lower than the share of refugees worldwide that the country has taken historically. I have argued that this number could be increased through exempting refugees from the normal immigration quotas and admitting refugees with private sponsors, neither of which would require more money from Congress.

East Asia is a rough neighborhood. Chinese activities in the East and South China Seas stoke tensions and raise questions about Beijing’s plans for regional or global dominance. North Korea’s latest contribution to regional instability came in the form of a submarine-launched ballistic missile test on August 24, during a large annual U.S.-South Korean military exercise. Initial analysis suggests that the missile could have a range of 1,000 km or more, which would give Pyongyang the ability to place all U.S. military bases in South Korea and the main Japanese islands at risk.

America’s allies and partners in East Asia have not stood idly by while Beijing and Pyongyang acted aggressively. Rather, they have taken relatively small but still meaningful measures to improve their defenses and, thus, their ability to resist coercion.

Tokyo announced that it would develop new anti-ship missiles and deploy them to islands near the disputed Senkaku/Diaoyu Islands in the East China Sea after multiple Chinese Coast Guard and fishing vessels began operating in the area in early August. During the annual Han Kuang military exercises, Taiwan’s president Tsai Ing-wen announced that the Ministry of National Defense will devise a new strategy by January 2017. U.S. allies and partners also worked with one another. For example, on August 18, Japan delivered the first of ten coast guard vessels to the Philippines.

In the near term, the United States will likely retain its status as the ultimate guarantor of security for its East Asian allies and partners, but these efforts to take on a greater burden for self-defense should be praised. Eventually these states may take up the mantle of security guarantor, but such a transformation will take time. Additionally, the next president of the United States should encourage further burden sharing. Allies that are capable of deterring coercion or responding to security threats on their own enable a more restrained U.S. grand strategy by reducing the range of incidents or crises that require U.S. intervention. 

Our statist friends like high taxes for many reasons. They want to finance bigger government, and they also seem to resent successful people, so high tax rates are a win-win policy from their perspective.

They also like high tax rates to micromanage people’s behavior. They urge higher taxes on tobacco because they don’t like smoking. They want higher taxes on sugary products because they don’t like overweight people. They impose higher taxes on “adult entertainment” because…umm…let’s simply say they don’t like capitalist acts between consenting adults. And they impose higher taxes on tanning beds because…well, I’m not sure. Maybe they don’t like artificial sun.

Given their compulsion to control other people’s behavior, these leftists are very happy about what’s happened in Berkeley, California. According to a study published in the American Journal of Public Health, a new tax on sugary beverages has led to a significant reduction in consumption.

Here are some excerpts from a release issued by the press shop at the University of California Berkeley.

…a new UC Berkeley study shows a 21 percent drop in the drinking of soda and other sugary beverages in Berkeley’s low-income neighborhoods after the city levied a penny-per-ounce tax on sugar-sweetened beverages. …The “Berkeley vs. Big Soda” campaign, also known as Measure D, won in 2014 by a landslide 76 percent, and was implemented in March 2015. …The excise tax is paid by distributors of sugary beverages and is reflected in shelf prices, as a previous UC Berkeley study showed, which can influence consumers’ decisions. …Berkeley’s 21 percent decrease in sugary beverage consumption compares favorably to that of Mexico, which saw a 17 percent decline among low-income households after the first year of its one-peso-per-liter soda tax that its congress passed in 2013.

I’m a wee bit suspicious that we’re only getting data on consumption by poor people.

Why aren’t we seeing data on overall soda purchases?

And isn’t it a bit odd that leftists are happy that poor people are bearing a heavy burden?

I’m also amused by the following passage. The politicians want to discourage people from consuming sugary beverages. But if they are too successful, then they won’t collect all the money they want to finance bigger government.

In Berkeley, the tax is intended to support municipal health and nutrition programs. To that end, the city has created a panel of experts in child nutrition, health care and education to make recommendations to the City Council about funding programs that improve children’s health across Berkeley.

In other words, one of the lessons of the Berkeley sugar tax and the 21-percent drop in consumption is that the Laffer Curve applies to so-called sin taxes just like it applies to income taxes.
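For readers who like to see the revenue arithmetic spelled out, here is a minimal sketch. The penny-per-ounce rate and the 21 percent drop come from the study cited above; the baseline consumption figure is purely hypothetical, chosen only to illustrate the Laffer-style point.

```python
# Laffer-style sketch: a tax that shrinks its own base collects less
# than a static projection assumes. Only the tax rate and the 21%
# drop come from the article; baseline consumption is hypothetical.

tax_per_ounce = 0.01            # Berkeley's penny-per-ounce excise tax
baseline_ounces = 100_000_000   # hypothetical annual pre-tax consumption
drop = 0.21                     # consumption decline reported by the study

static_revenue = tax_per_ounce * baseline_ounces          # ignores behavior
actual_revenue = tax_per_ounce * baseline_ounces * (1 - drop)

print(f"static projection: ${static_revenue:,.0f}")
print(f"actual take:       ${actual_revenue:,.0f}")
```

A static projection that ignores the behavioral response overstates the take by exactly the share of consumption the tax destroys, which is why "sin tax" revenues routinely come in below forecast.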

But the biggest lesson to learn from this episode is that it confirms the essential insight of supply-side economics. Simply stated, when you tax something, you get less of it.

Which is something that statists seem to understand when they urge higher “sin taxes,” but they deny when the debate shifts to taxes on work, saving, entrepreneurship, and investment.

I’m not joking. I debate leftists all the time and they will unabashedly argue that it’s okay to have higher tax rates on labor income and more double taxation on capital income because taxpayers supposedly don’t care about taxes.

Oh, and the same statists who say that high tax burdens don’t matter because people don’t change their behavior get all upset about “tax havens” and “tax competition” because…well, because people will change their behavior by shifting their economic activity where tax rates are lower.

It must be nice not to be burdened by a need for intellectual consistency.

Speaking of which, Mark Perry used the Berkeley soda tax as an excuse to add to his great collection of Venn Diagrams.

P.S. On the issue of sin taxes, a brothel in Austria came up with an amusing form of tax avoidance. The folks in Nevada, by contrast, believe in sin loopholes. And the Germans have displayed Teutonic ingenuity and efficiency.

Back-to-school season is also education survey time—Jason Bedrick and I examined the Education Next poll last week—and today we get the latest Phi Delta Kappa poll. For decades the PDK survey was conducted in conjunction with Gallup, but not this year. It also dropped questions specifically about such hot-button topics as vouchers and the Common Core. Maybe avoiding specific mention of the latter explains an interesting finding: the public’s response to curriculum standards is quite, well, blah.

The pollsters asked several questions about standards—especially an unspecified “new set of educational standards”—and inquired what parents thought of their effects.

First, when members of the public were asked whether they thought the standards in their local public schools addressed “the things students need to succeed in their adult lives,” 27 percent answered that they addressed them “extremely” or “very” well, and 30 percent said “not so” or “not at all” well. Another 40 percent gave the middling “somewhat” answer. Ho-hum.

How about those “new” standards? 53 percent of parents thought the standards had changed what their oldest child was being taught, versus 33 percent saying “no change.” The direction of the change? 49 percent who perceived a change thought it had been for “the better,” 47 percent for “the worse” – essentially a tie.

Note that the data breakout I found did not differentiate between public and private school parents. In the poll’s executive summary, however, public school parents were isolated, and it appears that 45 percent thought the changes were for the better, 51 percent for the worse.

There may be some good news for standards fans: 43 percent of parents who thought the new standards had changed what was being taught in their child’s school saw the changes increasing what their child was learning. Still, 31 percent thought the standards had decreased learning, and 25 percent perceived no change. So a plurality detected an increase, but a majority saw a decrease or no change. Similarly, 51 percent thought there was an increase in the degree of challenge for their kids, but 48 percent saw a decrease or no change. And remember, this was only among the 53 percent of parents who saw standards changing what their schools were teaching.

Have we been feuding over the Common Core, and standards generally, for no good reason? Do standards not really make a clear difference?

Surveys, of course, only tell us what respondents perceive, and we need more information than that to really know what effect standards have. But parents have an incentive to track their kids’ progress, and these results seem consistent with the conclusion that standards do not make much difference. Many Common Core opponents have long argued that, and I found the same when examining the empirical evidence on national standards. But that is, in fact, a major reason it was so troubling when Core advocates used federal power to coerce adoption of national standards: they sought to ramp up federal power without even having meaningful evidence that what they wanted would help.

At least for now, the public has spoken: there’s no consensus whether standards have helped or hurt. That should come as no surprise.

In my last post, I noted promising support for marijuana legalization initiatives this fall. Still, outside libertarian circles, there unfortunately isn’t the political will to support a broader repeal of our federal and state drug laws.

Before you say it: No, drug legalization will not solve our mass incarceration problem. Not all by itself, anyway; the numbers just don’t add up. You can see that for yourself at the Urban Institute’s web-based prison population forecaster. As the Urban Institute notes,

While dramatically reducing the national prison population requires addressing the hard stuff—like long prison sentences and time served for violent offenses—reforms to drug laws and revocation policies will still go a long way in many states.

For example, nonviolent offenses are a major driver of the prison population in Alabama, Kentucky, Missouri, Oklahoma, and Texas, so sending fewer people to prison for drug and property crimes would have a big impact on incarceration rates. Halving drug admissions would cut the prison population by nearly 10 percent in each of those states by the end of 2021. And a 50 percent reduction in admissions for all nonviolent crimes would cut at least a quarter off their populations (nearly a third in Kentucky).

For good or ill, the Urban Institute doesn’t consider libertarians’ first-best solution, which is of course the full legalization of all drugs. Libertarians support this policy not just because it would help empty the prisons, but because it’s your body, and it’s your right to choose what goes into it.

These propositions are obvious to us; if only they were more obvious to others. But as the success of marijuana legalization becomes increasingly apparent, I hope that a fuller legalization, also once laughed at, will come to be taken more seriously.

Nonetheless, it’s easy to see that even full legalization won’t get us to an OECD-reasonable incarceration rate. Mandatory minimum sentencing also needs to be reconsidered, as do longer sentences in general, and we will likely need to do something about plea bargaining as well, which, when coupled with longer sentencing, tends to result in many more people behind bars, including innocents. We should finally recall that we are only a couple of decades out from a historic peak in the violent crime rate. Many people incarcerated during that time are still in prison and arguably still belong there.

Still, full drug legalization would likely have positive direct effects on both the incarceration rate and the crime rate, and it would also likely make many other reforms easier.

The Urban Institute’s prison population forecaster treats drug policy as exogenous to the remainder of the U.S. crime rate. That is, it doesn’t consider the possibility that legalizing drugs will reduce the incidence of many criminal schemes and enterprises that are not detectably drug-related. But when people can resort to the police and the courts to settle their disputes, they are less likely to turn to violence. And lawful businesses, which must compete on price, quality, and other product-regarding factors, will not resort to turf wars.

There are good methodological reasons to resist making forecasts that rest on this type of connection, and there are good reasons to resist building those forecasts into a web tool whose real purpose is to teach the public the true scope of a given, present-day problem. The connection between the war on drugs and secondary crime may be real—and I think to a significant degree it is—but quantifying it involves making some difficult additional predictions about how much the two phenomena are linked and how quickly a change in one will produce a change in the other. These are predictions I’d rather not make.

But as I’ve noted previously, drugs are almost certainly not exogenous to other forms of crime: we would appear to suffer much of our violent and property crime owing only to our drug war. Exactly how much is hard to say; few defendants are likely to admit to any more crimes than are necessary, or to admit drug-related motivations that would lead to additional charges. Still, it’s surely noteworthy that few countries in the world suffer more than 5 firearm homicides per 100,000 without either suffering a civil war… or being major drug suppliers or conduits.

Drugs also aren’t exogenous to plea bargaining. Although low-level drug offenses aren’t so much to blame for our overcrowded prison system, they are to blame for our overcrowded court system. Court overcrowding encourages plea bargaining, which means more people pleading guilty to offenses that lead to prison, rather than litigating and potentially avoiding it.

Drug offenses are the single most common type of federal case. State-level data are harder to find, but in Texas, drug offenses made up 31% of all felony cases filed in 2015. They were the largest single type of felony case, and possession charges made up 80% of that share. Drug offenses were also the largest single type of misdemeanor in Texas in the same time period. This is obviously a significant burden on the court system.
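As a rough back-of-the-envelope check on those Texas figures, the implied share of all 2015 felony filings that were possession charges can be computed directly. This is a minimal illustrative sketch: the two percentages are the ones cited above, and the exact case counts are not assumed.

```python
# Back-of-the-envelope arithmetic using the Texas figures cited above.
drug_share_of_felonies = 0.31    # drug offenses as a share of all felony filings, 2015
possession_share_of_drug = 0.80  # possession charges as a share of those drug cases

# Implied share of ALL felony filings that were possession charges
possession_share = drug_share_of_felonies * possession_share_of_drug
print(f"Possession charges were roughly {possession_share:.0%} of all felony filings")
# → Possession charges were roughly 25% of all felony filings
```

In other words, about a quarter of all felony cases filed in Texas that year were possession charges alone, which underlines the court-burden point.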

Now, one might say that these considerations are beside the point: if it is categorically wrong to use or possess drugs, then all punishment of drug crime is effort well spent; in that case, the proper response to an overcrowded court or prison system is to build it out still further. As Sen. Tom Cotton remarked, we may have an under-incarceration problem. (But if we do, what do we make of our close cultural relations, countries like the UK and Australia, whose incarceration rates are vastly lower?)

Meanwhile, if we have anything like a natural individual right to use or possess drugs, then complaining about the inefficiency introduced to the court system is silly. We ought rather to complain about the rights violation, and never mind the inefficiency.

The latter is my actual view. But I recognize that not everyone agrees. I suspect that most people believe that drug use ought to be stigmatized to some extent, but that it is not necessarily categorically wrong, for example in the way that murder is. To this way of thinking, trade-offs regarding levels of stigma and the price we pay to inflict it may be worth considering, particularly if the things we do to stigmatize drug use end up indirectly causing worse social problems elsewhere.

That’s likely where the rest of the country is regarding drugs harder than marijuana. If so, then a politically viable way forward is clear. It consists of significantly shorter prison sentences, decriminalization where possible, and the consistent referral of low-level possessors and users to the medical rather than the legal system.

Legalizing drugs, or even just significantly decriminalizing them, ​will not solve our mass incarceration problem all by itself. But these measures will directly help out some, and they may indirectly help out quite a lot, particularly if drug legalization is accompanied by reforms in sentencing and criminal procedure. These ought to be goals that everyone can support.

Members of what was surely the Venezuelan regime’s secret police yesterday kidnapped opposition leader and 2008 Milton Friedman Prize winner Yon Goicoechea from his car after he left his home. Diosdado Cabello, the second most powerful person in the regime, publicly announced that the government had arrested Yon on the bogus claim that he was carrying explosives. In the video broadcast on national television, Cabello referred to the $500,000 Friedman Prize award that Yon received as evidence that Yon was some sort of foreign-employed agent bent on terrorism. “That man was trained by the U.S. empire for years,” he said, “It looks like his money ran out and he wants to come here to seek blood. They gave him the order there in the United States.”

This is an old trick of the Chavista regime—distract attention from the severe political, economic, and social crisis that it has inflicted on the country. Venezuela’s so-called Socialism of the 21st Century has produced shortages of everything from food and water to medicine and electricity. Hunger is becoming widespread, the rate of violence is among the worst in the world, and the regime has become extremely unpopular. (We have commented on this downward spiral here, here, here, and here.)

Yon won the Friedman Prize in 2008 for having led the student movement that played the central role in defeating the constitutional reform that would have given Hugo Chavez what at that time would have been an unprecedented concentration of political and economic power. One of Yon’s and the student movement’s central tenets is their advocacy of non-violence in the promotion of basic freedoms and democracy. Yon also offers an optimistic vision about the future and potential of his country (see his Friedman Prize acceptance speech here). That approach contrasts with the regime’s constant reliance on repression and force and helps explain its appeal to most Venezuelans. For the same reason, the government’s claim of terrorism on Yon’s part lacks any credibility. The idea that the Friedman Prize is awarded so that the recipient carry out specific tasks is also risible. The prize is given “to an individual who has made a significant contribution to advance human freedom,” and has no conditions attached to it whatsoever. It has been awarded to numerous freedom champions from around the world including prominent reformers and human rights and freedom of speech advocates.

Yon’s detention comes just prior to massive popular protests against the government that are planned for this Thursday. Sticking to the pattern it has followed over the past few years as the crisis has deepened, the regime is doubling down on repression rather than adjusting to political or economic reality. His kidnapping shows just how insecure the regime has become and the importance of speaking truth to power.

In the months and days prior to Yon’s detention, the regime has arrested other opposition leaders and activists. Nobody is sure exactly where Yon is being held or under what conditions (though we believe he is in a cell at the headquarters of the secret police in Caracas). The Venezuelan government stopped being democratic and respecting the rule of law years ago, but we nevertheless call on it to release Yon immediately and treat him with the basic rights to due process that should be afforded to any Venezuelan citizen.

Between writing his well-known revolutionary liberal tracts Common Sense (1776) and The Rights of Man (1791), Thomas Paine contributed knowledgeably to a 1785-6 debate over money and banking in Pennsylvania. Paine defended the Bank of North America’s charter and its operations in a number of lengthy letters to Philadelphia newspapers during 1786, followed by a December monograph that summarized his case, Dissertations on Government; The Affairs of the Bank; and Paper Money.[1]

Paine argued that to repeal the bank’s charter violated both the rule of law and the maxims of sound economic policy. His writings show that he well understood the benefits of banking. Although proponents of the repeal accused Paine, publicly known to be in dire financial shape, of being paid by the BNA’s proprietors for defending it (one called him “an unprincipled author, who lets his pen out for hire”), Paine vociferously denied the charge, and historians (such as Philip S. Foner, who edited an anthology of Paine’s works), have found no evidence to support the accusation. Prima facie evidence for Paine’s sincerity is found in his marshalling of serious arguments that were consistent with the classical liberal principles of his earlier writings.

Here’s the backstory: In 1781 the Continental Congress chartered the Bank of North America, headquartered in Philadelphia and headed by Robert Morris and Thomas Willing. Considering a Commonwealth of Pennsylvania charter to be a sounder authorization, in 1782 the bank sought and received a charter from the Pennsylvania legislature. After the Revolutionary War’s end in 1783, as historian Janet Wilson noted, farmers in western Pennsylvania with large debts and tax arrears “set up a cry for paper money” to be issued by the Commonwealth.[2] These state-issued notes would not be presently redeemable, but would be receivable for future tax payments.

The inflationists understandably saw the BNA as a barrier to their plan. If the bank valued the state paper below its par value, while BNA banknotes and checks traded at par in terms of the silver dollars for which they could be immediately redeemed, real demand for the state paper currency would be low. Better for the sake of state paper to eliminate the superior alternative. Hence, with the legislature voting to authorize an issue of state notes in mid-1785, the inflationists demanded repeal of the bank’s charter. They were further motivated by the bank proprietors’ public opposition to state paper. The legislature debated and then repealed the charter in September 1785. The BNA continued to do business, on a smaller scale, under its 1781 charter from the Continental Congress. (The 1st US Congress would not meet until March 1789.) Eighteen months after repeal, in March 1787, following a pitched public discussion and the election of pro-bank legislators in fall 1786, the charter was restored.

The clamor for irredeemable paper money, wrote Paine in 1786, derived from “delusion and bubble.”[3] Yes, the irredeemable paper currency issued during the war as a matter of necessity had provided revenue “while it lasted,” though not as a free lunch, but rather by taxing individual money-holders through price inflation and currency depreciation. Since its demise, “gold and silver are become the currency of the country.”[4] Those thinking that state paper will relieve a “shortage” of specie have it backwards: it is precisely the issue of irredeemable paper that drives out gold and silver. On this point Paine argued with impeccable Humean logic:

The pretense for paper money has been, that there was not a sufficiency of gold and silver. This, so far from being a reason for paper emissions, is a reason against them. As gold and silver are not the productions of North America, they are, therefore, articles of importation; and if we set up a paper manufactory of money it amounts, as far as it is able, to prevent the importation of hard money, or to send it out again as fast it comes in; and by following this practice we shall continually banish the specie, till we have none left, and be continually complaining of the grievance instead of remedying the cause. Considering gold and silver as articles of importation, there will in time, unless we prevent it by paper emissions, be as much in the country as the occasions of it require, for the same reasons there are as much of other imported articles.[5]

Paine understood that any stimulus from injecting money was only temporary, because issuing more paper money does not create any more wealth. He even offered the binge drinking / hangover analogy that has, in modern times, become commonplace:

Paper money is like dramdrinking, it relieves for a moment by deceitful sensation, but gradually diminishes the natural heat, and leaves the body worse than it found it. Were not this the case, and could money be made of paper at pleasure, every sovereign in Europe would be as rich as he pleased. But the truth is, that it is a bubble and the attempt vanity.[6]

State paper money became not just imprudent but unjust when it was combined with a legal tender law compelling the acceptance of depreciated paper dollars where a contract called for payment in silver or gold dollars:

As to the assumed authority of any assembly in making paper money, or paper of any kind, a legal tender, or in other language, a compulsive payment, it is a most presumptuous attempt at arbitrary power. … [A]ll tender laws are tyrannical and unjust, and calculated to support fraud and oppression.[7]

For a legislator even to propose such a tyranny should be a capital crime [!]:

The laws of a country ought to be the standard of equity, and calculated to impress on the minds of the people the moral as well as the legal obligations of reciprocal justice. But tender laws, of any kind, operate to destroy morality, and to dissolve, by the pretense of law, what ought to be the principle of law to support, reciprocal justice between man and man: and the punishment of a member who should move for such a law ought to be death.[8]

Responding to an anti-BNA petition, which claimed that “the said bank has a direct tendency to banish a great part of the specie from the country, so as to produce a scarcity of money, and to collect into the hands of the stockholders of the said bank, almost the whole of the money which remains amongst us,” [387-8 n] Paine argued that the issue of immediately gold-redeemable banknotes gives a commercial bank like the BNA a strong reason to retain sufficient gold reserves:

Specie may be called the stock in trade of the bank, it is therefore its interest to prevent it from wandering out of the country, and to keep a constant standing supply to be ready for all domestic occasions and demands. … While the bank is the general depository of cash, no great sums can be obtained without getting it from thence, and as it is evidently prejudicial to its interest to advance money to be sent abroad, because in this case the money cannot by circulation return again, the bank, therefore, is interested in preventing what the committee would have it suspected of promoting. It is to prevent the exportation of cash, and to retain it in the country, that the bank has, on several occasions, stopped the discounting notes till the danger had been passed.[9]

Here Paine failed to add that the public’s voluntary substitution of banknotes for specie, although it does not banish any specie that is still wanted, does allow the payment system to conduct a given volume of payments more economically, with less specie. The ability to export the share of specie thus rendered redundant, in exchange for productive machines and material inputs, was a growth-enhancing benefit of banking that Adam Smith had emphasized in The Wealth of Nations published ten years earlier.

In response to the claim that the bank “will collect into the hands of the stockholders” the specie remaining in the country, Paine explained that a bank’s specie reserves are not the net worth owned by its shareholders. Rather the reserves are held to redeem its liabilities, and thus are “the property of every man who holds a bank note, or deposits cash there,” or otherwise has a claim on the bank.

The Bank of North America at the time held the first and as yet only bank charter granted by the legislature of Pennsylvania. Critics damned the BNA as a privileged monopoly. Legislator John Smiley asserted that the charter repeal “secured the natural rights of the people from invasion by monopolies.” This view – later echoed by the Jeffersonians and Jacksonians in their opposition to the First and Second Bank of the United States – is of course paradoxical. The cure for monopoly power created by exclusive charter (incorporation) is to grant charters freely, to go from one to a multiplicity of charters. It is not to go from one to zero charters. If more banks were free to enter but simply hadn’t yet, then the BNA was a monopolist only in the benign sense that the entrepreneur who creates a new market (thus expanding and not restricting trade) is the single seller until others arrive. Eventually additional chartered banks did enter the Pennsylvania market: the (First) Bank of the United States (chartered by the US Congress) in 1791, and the Bank of Pennsylvania (state-chartered) in 1793.

In a later work criticizing the Bank of England (which did have an exclusive charter to issue banknotes as a corporation), Paine unfortunately seemed to blur the distinction between banknotes and irredeemable paper money. He made the valid point that banknotes held, unlike gold held, are not net national wealth (because they are liabilities of the issuer). Then he declared:

the rage that overran America, for paper money or paper currency, has reached to England under another name. There it was called continental money, and here it is called bank notes. But it signifies not what name it bears, if the capital is not equal to the redemption. … The natural effect of increasing and continuing to increase paper currencies is that of banishing the real money. The shadow takes place of the substance till the country is left with only shadows in its hands.[10]

To reconcile this passage with his previous writings, we must suppose that Paine is not criticizing banknotes in general, but the Bank of England in particular for holding inadequate reserves relative to its growing note-issue.

But this raises the question: Why would the BOE want to hold inadequate reserves when the BNA (as he had argued) did not? Paine might have explained this (but unfortunately did not) by Parliament’s implicit guarantee that it would not penalize the BOE for a suspension of payments, giving the Bank a moral-hazard incentive to skimp on reserves. When the Bank of England did suspend payments in 1797, forced by a run on the bank prompted by the threat of an invasion by Napoleon’s troops, Parliament did in fact immunize the Bank against note-holder lawsuits. Writing ten years ahead, Paine warned that the BOE might suspend in 1796, a prediction that was off by only one year:

A stoppage of payment at the bank is not a new thing. Smith in his “Wealth of Nations,” book ii. chap. 2, says, that in the year 1696, exchequer bills fell forty, fifty and sixty per cent; bank notes twenty per cent; and the bank stopped payment. That which happened in 1696 may happen again in 1796.[11]

To be clear, Paine anticipated trouble from the growing British public debt, not from threat of invasion. But the two were not unrelated.

____________

[1] Most of the quotations from Paine below come from this monograph as reprinted in Philip S. Foner, ed., The Collected Works of Thomas Paine, vol. 2, which is available online here.

[2] Janet Wilson, “The Bank of North America and Pennsylvania Politics: 1781-1787,” The Pennsylvania Magazine of History and Biography 66 (Jan., 1942), pp. 3-28; available here.

[3] Letter to the Pennsylvania Packet, April 4, 1786; in Collected Works II, p. 419.

[4] Letter to the Abbe Raynal (1782) in Collected Works II, pp. 229, 230.

[5] Dissertations on Government (1786), in Collected Works II, p. 407.

[6] Ibid.

[7] Ibid., pp. 407, 409.

[8] Ibid., p. 408.

[9] Ibid., pp. 391-2.

[10] Paine, “Prospects on the Rubicon” (1787), Collected Works II, pp. 636-7.

[11] Ibid., pp. 663-4.

[Cross-posted from Alt-M.org]

The 2016 G20 summit in Hangzhou is fast approaching and, similar to the pre-summit meetings hosted by China throughout the year, the focus will be the state of the global economy. With the world still contending with sluggish economic growth, the summit’s theme of “Towards an Innovative, Invigorated, Interconnected and Inclusive World Economy” is especially timely. Under the umbrella of global economic growth, cultivating opportunities for trade and investment will become a major priority for G20 states, and three global powers—the United States, China, and Russia—are each developing their own multinational trade route projects. These major trade projects could serve as opportunities for cross-country cooperation and growth, but they could also become sources of future conflict.

China’s management of its domestic markets and currency, and its commitment to structural reforms, were a cause for global concern at the first meeting of G20 Finance and Central Bank Governors in February. At next week’s summit, China’s President Xi Jinping will undoubtedly point out China’s efforts towards realizing supply-side structural reforms in the face of China’s “new normal” of slower economic growth. As part of these reforms aimed at rebalancing China’s economy, Beijing plans to cut industrial overcapacity, tackle overhanging debt, reform state-owned enterprises, and seek out new consumer markets. On the last point, Beijing is championing its New Silk Road Initiative (also known as “One Belt, One Road”), a major state project focused on opening up new markets. To accomplish this, Beijing is building vast trade networks spanning several countries and continents, by land and by sea. However, many countries are wary. The project, billed as purely an economic one, may evolve to include a political and/or military dimension as well.

Similarly, Russia is looking to build new economic opportunities and trade links in the face of continued western economic sanctions and an ongoing bearish global hydrocarbons market. Its project, the Eurasian Economic Union (EEU), is based on its precursor organization, the Eurasian Customs Union. Since coming into force on January 1, 2015, the EEU has managed to increase trade amongst its members, which at present include Russia, Kazakhstan, Belarus, Armenia, and Kyrgyzstan. However, this intra-regional economic union has been slow to deliver significant economic advantages for its member states and, in fact, may be having the reverse effect on their economies: due to their close economic links to Russia, they too have inadvertently been suffering fallout from Russia’s economic troubles. Anders Åslund of the Atlantic Council argues that the EEU is actually isolating the economies of its member states and blocking out more beneficial economic relationships, particularly with the EU. Even the initial proponent of the EEU, Kazakhstan’s President Nursultan Nazarbayev, has himself appealed to EEU state leaders for closer integration with both the EU and China’s New Silk Road project.

Finally, the relatively unknown U.S. New Silk Road Initiative is seeking to bring about economic integration and growth by encouraging trade links between South Asian and Central Asian states, and especially with Afghanistan. As the United States aims to draw down its military presence in Afghanistan, the New Silk Road project could portend a shift to economic concerns. By helping Afghanistan to establish an economic foothold through regional trade, the United States hopes to foster greater stability. On the other hand, getting Afghanistan to stand on its own feet economically could prove difficult in a country rife with internal issues. Also, opening up trade links with Afghanistan as part of the U.S. New Silk Road Initiative could mean that Afghanistan’s various domestic and drug trade problems may spill across borders, having the reverse effect of what was intended, by increasing regional instability instead of calming it.

Though the potential for cooperation exists, and conflict is not inevitable, how these three major projects espoused by three global powers will interlock will be debated well beyond the conclusion of the upcoming G20 summit and into the future. All three of these trade and investment projects overlap in Central Asia. Moscow’s reaction to these two powers’ further involvement in Central Asia—its traditional region of dominance—will likely not be favorable. Moscow, though, has been in talks with Beijing to include it in the EEU through a Eurasian Partnership Agreement, but how the dynamics of such a partnership would actually play out is yet to be ascertained. For its part, experts in China have expressed that there is room for the United States to cooperate in China’s New Silk Road project. The United States itself is keen to find projects on which to cooperate with Beijing, as part of its own U.S. New Silk Road Initiative. At the same time, however, China feels that the United States, and the west in general, are trying to encircle and exclude a rising China, pointing to the Trans-Pacific Partnership agreement (TPP), the row over the South China Sea, western governments questioning Chinese investments in infrastructure projects, and the U.S.’s planned THAAD anti-missile defense system in South Korea. Therefore, as global leaders gather and discuss how to achieve sustainable growth as part of an “interconnected and inclusive world economy”, the realities on the ground point to several possible areas of contention.

Last week’s 20th anniversary of welfare reform event put income-based poverty measures on trial and drew skeptics from many circles. Michael Tanner stated that you would be hard-pressed to find “anyone on the left or right to defend the current [income-based] measure,” Robert Rector compared the current poverty measure to wearing glasses with cracked lenses, and Scott Winship presented research indicating income-based measurements distort U.S. poverty estimates.

Criticisms of the poverty metric during the event were only a microcosm of objections that have been occurring for some time.

Specifically, academics have taken issue with the failure of income-based poverty measures to accurately capture the realities of poverty. Measuring poverty incorrectly can have deleterious results, because it leads to misunderstanding the problem itself and, by extension, the solution.

So, what’s wrong with using income to measure poverty?

It turns out a lot, because income does not provide an accurate picture of economic well-being. It does not, for instance, provide information about an individual’s access to welfare benefits or access to formal or informal insurance. Income also can’t say anything about an individual’s accumulation of wealth or access to credit. Practically speaking, the income measurement often ignores dollars earned under-the-table on the so-called “gray market.” Importantly, for poor individuals, the resources that income overlooks are often substantially larger than income itself.

For a more concrete illustration of the issue, just imagine your average graduate student. This student likely has very low income. If he is lucky, perhaps he works part-time for a modest wage as a course assistant. As an older student, he may even have a wife who stays at home with a couple of young kids. Though he has substantial academic and personal financial obligations, his financial outlook is not as bleak as income alone indicates: he likely has considerable access to informal insurance (a phone-call to parents or in-laws will help in a pinch), access to credit (sizable student loans), and even a bit of savings remaining from a former professional life. Perhaps he qualifies for an academic scholarship or living stipend.

Even after considering this individual's meager income and weighty family obligations, we would likely agree that he and his family have a bright future ahead. His day-to-day living situation suggests he knows it: he never worries about going hungry, there is no imminent danger of eviction, and his family generally lives with many of the trappings of a middle-class lifestyle.

Now consider your average high-school dropout, without credentials or a resume of stable professional experience. She works part-time stocking inventory at an agricultural processing plant, but this work is seasonal and variable. At night she has part-time work cleaning office buildings across town. When she's away from home she leaves her two young kids with a boyfriend who is out of work, and she constantly worries about losing her apartment and paying for groceries.

For all of their differences, these two individuals have the same income levels, yet they have wildly disparate prospects and financial security. They even have the same size household, which means that they are subject to an identical federal poverty line – $24,300 for a family of four. As such, either may qualify for various income-tested welfare benefits.

Individuals in these two very different categories are treated exactly the same way by current poverty measures.

This example highlights just one of several issues associated with using income to measure poverty. Income has been used to measure poverty for decades because it is simple. However, in an increasingly complex world, it has become an increasingly meaningless measure. As we look to a future of more effective welfare policy, it is essential we improve on it. 

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

It looks like a new investigation into the use of ethanol as a substitute for gasoline found pretty much what most people have known all along—it’s just a bad idea.

Car mechanics know it. Drivers know it. Food analysts know it. Land conservationists know it. The last bastion of holdouts (aside from Midwestern corn farmers and their Congressional representatives) were the climate change do-gooders, claiming that all of the above sacrifices were small prices to pay for the benefit to the climate that ethanol was producing. After all, they argued, burning ethanol produces fewer carbon dioxide emissions on net than burning "fossil" fuels because the carbon liberated in the process (for more on liberated carbon check out Andy Revkin's contribution) was being recycled at a quicker rate than the geologic time scales necessary to produce oil.

While this may be technically true, it turns out that the rate of ethanol carbon recycling was being oversold by its supporters. At least this is the conclusion of a new paper authored by John DeCicco of the University of Michigan Energy Institute and colleagues. According to the paper’s press release:

A new study from University of Michigan researchers challenges the widely held assumption that biofuels such as ethanol and biodiesel are inherently carbon neutral.

Contrary to popular belief, the heat-trapping carbon dioxide gas emitted when biofuels are burned is not fully balanced by the CO2 uptake that occurs as the plants grow, according to a study by research professor John DeCicco and co-authors at the U-M Energy Institute.

The study, based on U.S. Department of Agriculture crop-production data, shows that during the period when U.S. biofuel production rapidly ramped up, the increased carbon dioxide uptake by the crops was only enough to offset 37 percent of the CO2 emissions due to biofuel combustion.

The researchers conclude that rising biofuel use has been associated with a net increase—rather than a net decrease, as many have claimed—in the carbon dioxide emissions that cause global warming. The findings were published online Aug. 25 in the journal Climatic Change.

Interestingly, the U.S. Environmental Protection Agency has recently been called to task for not investigating the supposed climate impact of the Congressionally mandated ethanol standards—a report that the EPA was required to produce by law. The EPA’s response: “we ran out of money and Congress didn’t pay attention to us last time we tried to issue a report.” But, they said they’d get right on it—and have a report ready by 2024.

We have a better idea: skip the report and just drop the standards.

Next up is one of the few really good pieces on the Louisiana floods (aside from those generated by our last YOTHAL, e.g., at the Daily Caller and Washington Times).

Louisiana State University's Craig Colton explains how "suburban sprawl" and poor preparation played a large role in worsening the damage of the recent flooding disaster in the state. He notes that the region has a long history of flooding (pointing to historical accounts back to the 18th century) and provides several examples of very high rainfall amounts there in recent decades (and we'll add that there are many more examples going back decades further, such as a tropical depression in 1962 that put down 23 inches in the vicinity and 1979's Tropical Storm Claudette, which dropped more than 42 inches in nearby eastern Texas).

Coastal Louisiana is perhaps the most climatologically primed (non-mountainous) spot for heavy rainfall events in the lower 48. As such, urban/suburban development there should proceed accordingly—which apparently isn’t what is happening, according to Colton. While some steps to develop flood plans and reduce risk were drawn up after flooding in 1983, Colton reports that:

However, these efforts have not been sustained. Suburban sprawl has spilled onto floodplains and placed residents at risk.

For example, the relatively new incorporated community of Central in East Baton Rouge Parish reports that 75 percent of its territory is in the 100-year floodplain. According to initial news reports, up to 90 percent of the town’s houses sustained damage in this month’s floods.

Check out Colton’s entire informative article to find out more about why the region’s flooding disaster is rooted in (poor) local decisionmaking and why you don’t need to invoke climate change to understand that this was a disaster in the making. It’s not that a warming climate shouldn’t be expected to generally increase rainfall totals, but laying the blame for the specific heavy rains and the resultant flooding in Louisiana (or anywhere else for that matter) on human-caused climate change is neither instructive nor scientifically defensible.

And finally, we’d be remiss if we didn’t have a little fun with the flip-flop Libertarian presidential candidate Gary Johnson pulled last week on his support of a carbon tax.

Last Sunday (August 21), in an interview with the Juneau Empire, Johnson indicated that he was in favor of a carbon "fee" to address climate change. On Monday he followed that up with what seemed to be support for a full-blown carbon tax, telling CNBC's John Harwood:

“I do think that climate change is occurring, that it is man-caused. One of the proposals that I think is a very libertarian proposal, and I’m just open to this, is taxing carbon emission that may have the result of being self-regulating.”

We immediately suggested to Gov. Johnson (via Twitter) that, even in theory, a carbon tax wasn't such a good idea, and pointed to our Cato Working Paper (soon to be a Policy Analysis) on the topic.

By last Thursday, Johnson had apparently reconsidered, telling a campaign rally in Concord, New Hampshire (as reported over at Reason.com):

If any of you heard me say I support a carbon tax…Look, I haven't raised a penny of taxes in my political career and neither has Bill [Weld]. We were looking at—I was looking at—what I heard was a carbon fee which from a free-market standpoint would actually address the issue and cost less. I have determined that, you know what, it's a great theory but I don't think it can work, and I've worked my way through that.

We’re glad that Gov. Johnson has seen the data on the ground and come to see the light—let’s hope he sticks to it.