Feed aggregator

For a few years now, the town of Croydon, NH (population 651) has been fighting with the governor and state board of education over their school choice policy. The town isn’t large enough to sustain its own K-12 district school, so it contracts with a neighboring town to educate most of its residents’ children starting in 5th grade. But when its contract was approaching expiration a few years ago, the town decided to give local parents the option of sending their children to private schools as well, and the town would cover tuition up to the amount that it was spending per pupil at the neighboring district school (about $12,000).

That’s when the governor and state education bureaucrats got involved. They objected to the town’s use of tax revenue at non-government schools, though they had difficulty pointing to exactly which law or statute the town was violating. They’re currently embroiled in a lawsuit to sort out whether Croydon has the authority to decide how to spend its local tax dollars, but meanwhile the state legislature passed a bill clarifying that Croydon and similar towns have the authority to enact their own school choice policies. 

Last week, NH Gov. Maggie Hassan vetoed that bill, citing two arguments I had already refuted in a Union Leader op-ed earlier in the week. In her veto message, Gov. Hassan wrote:

“House Bill 1637 diverts taxpayer money to private and religious schools with no accountability or oversight, a clear violation of the New Hampshire Constitution, which states, ‘… no money raised by taxation shall ever be granted or applied for the use of the schools or institutions of any religious sect or denomination.’ Not only is the bill unconstitutional, it also has no mechanism to ensure a student’s constitutional right to the opportunity to receive an adequate education and would undermine the state’s efforts to ensure a strong and robust public education system for all New Hampshire students.

“Under current New Hampshire law, public schools are required to provide the opportunity for an adequate education, as defined by the Legislature, and are held accountable through laws and rules that require monitoring and review by the Department of Education. Additionally, as required by statute and as a result of Supreme Court decisions requiring a statewide education accountability system, New Hampshire schools are required to participate in the Statewide Educational Improvement and Assessment Program. If House Bill 1637 is enacted, public funds would be used to send students to private schools – which are only approved by the Department of Education for attendance and not curriculum, without the same accountability standards as the public schools – violating the requirements of state law and the state Constitution.”

These are red herrings. As I noted in my prebuttal last week:

During the debate over the bill, opponents raised two main objections related to accountability and constitutionality. Neither withstands scrutiny.

One legislator claimed that there are “no safeguards for quality assurance” because private schools are not subject to all the same rules and regulations as district schools. However, this has it exactly backward.

District schools are primarily accountable to school boards and the state department of education, which promise an “adequate education” in principle but don’t always deliver in practice. Private schools are subject to even greater accountability because they’re held directly accountable to parents.

If a private school isn’t working out for a child, the parents can take their child (and their money) somewhere else. Knowing this, private schools have a strong incentive to be responsive to the needs of students and their parents.

Opponents also claim the bill would violate the state constitution’s “Blaine Amendment” provision, which states that “no money raised by taxation shall be granted or applied for the use of the schools or institutions of any religious sect or denomination.”

However, in a 1955 Opinion of the Justices sanctioning the use of publicly funded vouchers at a religiously affiliated nursing school, the New Hampshire Supreme Court held this constitutional provision only forbade the state from supporting “a particular sect or denomination,” but that did not mean “that members of a denomination should be deprived of public benefits because of their beliefs.”

In other words, the state constitution permits students to use public funds at a religious school so long as they could use the funds at a variety of other secular or religious schools. The state constitution demands religious neutrality, not discrimination against religious groups or institutions.

In short, state regulations are no guarantee of quality (nor does their absence imply a lack of quality) and the New Hampshire state constitution does not mandate religious discrimination. If only someone had told the governor…

Since the passing of Muhammad Ali, the establishment has been working in overdrive to convince us that the great boxer was a member of their club. In doing so, the wisdom and wit of Ali have been on display.

Muhammad Ali’s lessons on economics, however, have been absent. Economics? Yes. The lessons were developed in a most edifying book by Donald Sull, The Upside of Turbulence: Seizing Opportunity in an Uncertain World (New York: Harper Collins, 2009) – a book that Mohamed El-Erian recommended to me.

The economic lessons are summarized in “The Boxer Matrix.” A boxer’s fate is determined by a combination of his absorption capacity (read: can he take a punch?) and agility (read: can he avoid a punch?). In the Boxer Matrix, the ideal position to be in is the Northeast quadrant: where Ali and Joe Louis boxed. But, while Ali always had terrific agility, he had to train and think his way to an above-average absorption capacity. This capacity was on display in his “Rumble in the Jungle” bout with George Foreman. It was then that Ali’s “rope-a-dope” tactic was executed to perfection.

This brings us to Ali’s message on economics, with particular reference to countries that are heavily dependent on the production of oil. In turbulent times (read: oil price plunges), countries like Saudi Arabia, Venezuela, and Nigeria experience a great deal of pain because their oil-dependent economies aren’t diversified. In short, they lack agility. This is reflected in their position in the lower half of the Boxer Matrix.

Saudi Arabia is able to use its huge stash of foreign reserves (high absorption capacity) to countervail its lack of agility. But, reserves can only go so far. What the Saudis need is more agility. The Vision 2030 project is intended to do just that. Whether the Saudis can endure the “training” required to achieve Vision 2030 is another matter.

As for Venezuela and Nigeria, they are – and are likely to remain – in the loser’s Southwest quadrant: the one that dooms boxers and economies alike.

America’s relationship with Islam is fraught with tension. No one wins if America ends up fighting an endless war with 1.6 billion people worldwide.

Rather, Washington should encourage responsible Islamic voices. One is the Organization of Islamic Cooperation. Granting the group diplomatic status would give Americans a greater opportunity to influence an important forum for Islamic activism.

The OIC was founded in 1969 and is made up of 57 states, most with majority Islamic populations. Past relations have been difficult.

In 1990 the group adopted the Cairo Declaration on Human Rights in Islam which emphasized the role of Sharia Law. At the UN the OIC routinely attacked Israel.

For years the OIC sought UN support to target the so-called “defamation” of religion, which would have threatened religious liberty. The group also struggled with the issue of terrorism.

However, the OIC has filled a more responsible international role of late. Criticism of Israel continues, but the group has become more willing to challenge its own members.

In 2008 the OIC amended its charter to emphasize human rights and liberty. It also established the Independent Permanent Human Rights Commission, an advisory body to monitor human rights within member states.

Perhaps most dramatic, in 2011 the OIC abandoned its campaign on religious defamation and backed a resolution more friendly to religious liberty. Although differences remain over how to define “incitement to violence,” the OIC appears to have moved significantly toward Western standards. Last year’s Fez Declaration, adopted at a UN forum backed by the OIC, emphasized the role of religious leaders in countering religious hatred rather than the role of government in imposing legislative solutions.

Finally, the group acknowledged the problem of terrorists claiming Islam as a justification for murder and mayhem. Moreover, the OIC-backed Marrakesh Declaration concluded that “It is unconscionable to employ religion for the purpose of aggressing upon the rights of religious minorities in Muslim countries.”

Last year the group’s executive committee developed a program to confront violent extremism and partner with organizations involved in counterterrorism. The OIC plans to review language and messaging, as well as reform education to reduce support for violent extremism.

In 2007 the Bush administration sent an envoy to the OIC. But the Obama administration effectively downgraded America’s representation, withholding ambassador status from the U.S. delegate. Moreover, the group continues to lack diplomatic status, unlike the Organization of American States and even the Vatican.

The Senate Foreign Relations Committee currently is moving legislation to grant diplomatic status to the six-member Gulf Cooperation Council, but not the OIC, as recommended by the administration. Yet engaging the OIC allows Washington to address 57 countries around the globe with substantial Muslim populations. Bush’s OIC envoy Sada Cumber complained that “The United States has ignored one of its most capable and effective partners in countering the rise of violent extremism around the world.”

As I wrote in Forbes online: “Obviously, engaging the organization offers no panacea for the West’s problems with Islam. Nevertheless, the OIC offers a useful venue for communicating with scores of Muslim nations. And the group provides engagement opportunities for journalists and NGOs.”

No doubt, the OIC will continue to frustrate the U.S. on many issues. However, the organization also appears open to debate. One American who worked with the OIC argued that in many areas the group is at odds with its members.

Thus, ongoing engagement with OIC staff and representatives of member states—involving them in discussions with American advocates of human rights and religious liberty—could prove useful over time. While this is possible today, diplomatic status would ease OIC administration, encourage enhanced operations, and smooth U.S. relations.

Washington would lose little in granting recognition. Among the benefits is the official oversight that comes with diplomatic status.

The latest terror attack in Orlando reminds us of America’s challenge in confronting Islam. One positive step would be to more effectively engage the OIC.

The most stinging rebuke, as well as the  most public one, I ever received over the course of my academic career, was delivered to me in the pages of The Economic Journal.  It consisted of a  footnote to an article celebrating James Tobin’s contributions to economics.  The footnote offered a paper of mine, also published in the EJ, as a “striking example” of the “comeback” of models relying upon “ad-hoc, backward-looking, mechanical expectation formation models of the early 1960s…in the guise of adaptive learning rules.”

What made my example especially egregious, in my chastiser’s  view, was the fact that, though I referred to “adaptive learning,” my argument was mainly couched in terms of static expectations — an especially naive sort.  To make matters worse, in defending my method, I referred to some other works that seemed to me to supply a rationale for such “naive” thinking in certain contexts.  By so doing, it seems, I was treating “appeal to higher authority (Marx, Keynes, Lucas etc.) [as] an acceptable substitute for empirical evidence or logical argument starting from reasonable primitive assumptions,” thereby supplying “evidence of the immaturity of economics as a science.”  Ouch!

In my defense, my topic was the transition from barter to fiat money, and despite the upbraiding I received I still think it perfectly reasonable to assume that, when some new technology is about to make its appearance, and especially when the first stages of its development are for the most part imperceptible (and money surely qualifies as such a technology), its development is likely to be quite unexpected.  And I didn’t assume static expectations for the heck of it, or because I didn’t realize that doing so was passé.  I assumed them in order to draw attention to their instrumental value and, hence, their possible relevance.  When static expectations or any of their somewhat more sophisticated counterparts, including adaptive learning, were assumed to operate in a monetary search framework, that framework yielded predictions much more consistent with historically-observed patterns of monetary development than it did if expectations were instead assumed to be forward-looking and, in that sense, “rational.”

But my main reason for bringing that whole business up isn’t so that I can defend my poor old article.  It’s to draw attention to the fellow who dressed me down for it.  For that fellow was Willem Buiter who, if you ask me (though admitting it only adds to my chagrin), is one of the best monetary economists around these days, and one whose writings deserve an even wider audience than the considerable one they already command.

As his unsparing (but mercifully brief) assault upon my article illustrates, Buiter doesn’t go in for kid gloves, or for gloves of any sort: spotting what he believes to be a bad argument, he goes after it with bare knuckles, and more often than not lands a knockout punch.  Consider his trenchant critique of the fiscal theory of the price level.  Or have a look at his 2004 Hahn Lecture, in which he puts his dukes up against half-a-dozen “ghosts, eccentricities, mirages, and mythos” of contemporary monetary economics.  No, Sir: this is one monetary economist you don’t want to mess around with.

Buiter is, on the other hand, a monetary economist whose work repays careful reading, and repays it at a decidedly positive real rate of interest.  I was reminded of this recently when, in the course of expanding upon my Congressional Testimony on Interest on Reserves, I came across a 2009 working paper by Buiter that I hadn’t read before.  The question addressed by that paper — What obstacles stand in the way of central banks shrinking their swollen balance sheets and otherwise returning to conventional monetary policy? — makes it even more pertinent today than when it first appeared.  Yet because the paper was published as part of a somewhat obscure volume edited by the European Money and Finance Forum, it hasn’t gotten much attention (Google scholar lists 13 citations, all to the working paper version).  That’s a shame, for the paper is another good example of Buiter’s ability to muster painstaking analysis in the service of blistering rhetoric, with devastating effect.

The gist of Buiter’s argument is that, despite what monetary authorities in the U.S. and elsewhere may claim, “unwinding or reversing unconventional monetary policies,” so as to reduce the relative size of central banks’ balance sheets to pre-crisis levels, “is technically easy.”  The real obstacles to such unwinding are, Buiter insists, political.  They consist, first, of a potential conflict between central bankers and fiscal authorities concerning “the role of seigniorage in closing the government’s solvency gap,” and, second, of the fact that any unwinding procedure “is likely to reveal the true extent of the central bank’s quasi-fiscal activities during the crisis and its aftermath.”

The conflict that constitutes the first of these obstacles arises because “the portfolio reshuffling that is the logical, unavoidable counterpart” to central banks’ large-scale asset sales is likely to “create serious funding problems,” especially for national treasuries.  In particular, to the extent that the sales reduce the central banks’ net interest income and financial surpluses, they must force associated governments to reduce their own deficits as well.  Unless such a program of deficit reduction is consistent with those treasuries’ own objectives, unwinding “could be delayed for years.”

The extent of the delay will, of course, depend on central banks’ ability to resist pressure from treasury authorities.  How great is that ability?  Not very, according to Buiter.  In the U.K., a Treasury unhappy with the Bank of England’s aggressive pursuit of normalization might take control of monetary policy by invoking the Reserve Powers clause (section 19) of the 1998 Bank of England Act, allowing it to dictate policy provided that doing so is “required by the public interest and by extreme economic circumstances.”  According to Buiter, its much-vaunted (if mostly mythical) independence notwithstanding, the Fed’s constitution makes it even less immune to pressure from fiscal authorities than the Bank of England.

The second reason governments have for forestalling the unwinding of their central banks’ unconventional policies — their desire to keep a cloak on those banks’ quasi-fiscal activities — is so far as Buiter is concerned all the more reason for the general public to oppose any unnecessary delay:

The large-scale ex-ante and ex-post quasi-fiscal subsidies handed out by the Fed and to a lesser extent by the other leading central banks, and the sheer magnitude of the redistribution of wealth and income among private agents that the central banks have engaged in could (and in my view should) cause a political storm.

That the Fed and other central banks made the crisis an excuse for becoming quasi-fiscal agents in the first place was, in Buiter’s opinion, inexcusable.  Like Bagehot (and Bernanke himself, to judge by the former Fed Chairman’s utterances rather than his actions), Buiter believes that central banks have no business doing anything other than providing liquidity to illiquid but solvent financial institutions, “at a cost covering [their] opportunity cost of non-monetary financing”:

Any action beyond that, such as the recapitalisation of insolvent banks through quasi-fiscal subsidies, ought to be funded by the Treasury.  The central bank should be involved only as an agent of the Treasury — an expert assistant.  It should not put its own conventional or comprehensive balance sheet at risk.

And why shouldn’t a central bank take on quasi-fiscal functions?  Generally speaking, it shouldn’t because doing so can impair its “ability to fulfill its macroeconomic stability mandate,” and also because it may obscure responsibility and accountability “for what are in substance fiscal transfers.”  In the U.S. case, Buiter notes, there is still another reason, and one that ought not to be dismissed lightly.  It is, simply, that the Fed’s quasi-fiscal actions “subvert the Constitution, which clearly states in Section 8, Clause 1, that the power to tax and spend rests with the Congress.”

That the Fed should have gotten away, not only with having allowed itself “to be used as an off-budget and off-balance-sheet special purpose vehicle for the Treasury,” but also (until Bloomberg forced its hand after Buiter’s article appeared) with refusing to divulge the details of its crisis-related fiscal transfers, seems almost incredible to the Dutch-born Buiter, who surrendered his Dutch citizenship in order to become a dual U.S.-U.K. citizen:

It is surprising that a country whose creation folklore attributes considerable significance to the principle of “no taxation without representation” would have condoned without much outcry such a blatant violation of the equally important principle of “no use of public funds without accountability.”  This indeed amounts to a quiet coup by the central bank.

Would that more U.S.-born economists, including those who fell over each other in their rush to defend the Fed against any prospect of routine GAO “audits,” took the Constitution’s plain language as seriously.

[Cross-posted from Alt-M.org]

Marketplace Radio takes a look at the challenge of filming movies and television shows in Cuba, focusing specifically on Showtime’s “House of Lies” starring Don Cheadle. The episode is titled “No es facil” – “It’s not easy.” The title appears to be a description of doing business in Cuba, and also of filming a show about doing business in Cuba. As Marketplace’s Adrienne Hill and show creator Matthew Carnahan explain:

Camera equipment was shipped from Germany because it couldn’t be sent directly from the U.S. Even basic supplies – “there’s not hammers and toilet paper, and things that people need.” 

Journalists have stopped reporting on the privations of socialism in Cuba. But Hugo Chavez was a great admirer of Fidel Castro and the society he built, and he wanted to give Venezuelans the same thing. And of course he did:

Venezuela’s product shortages have become so severe that some hotels in that country are asking guests to bring their own toilet paper and soap, a local tourism industry spokesman said on Wednesday….

Rest well, Comandantes Castro and Chavez, while your people dream of toilet paper. And hammers. And soap.

I sometimes hear it said that today’s lengthy trade agreements are about “managed trade,” and that a true free trade agreement would only have to be one sentence (or perhaps one paragraph). Well, maybe, but it depends on what that sentence or paragraph says. Here’s a suggestion someone made on a trade policy blog I run:

A true free trade agreement would be one sentence. Any good that can be sold legally in a country can be sold legally by a seller from any other country that is a party to this agreement. The agreements are long because they are negotiating winners and losers. That is crony capitalism.

The problem with this proposed sentence is that it would be under-inclusive: It would not achieve free trade, in several respects.

First, the primary barrier to free trade is still tariffs, which are taxes imposed on imports. Tariffs don’t make trade illegal, they just tax it, and a rule that goods which can legally be sold in a country can also be sold by foreign sellers would not eliminate tariffs. And, by the way, that’s a big reason why trade agreements are so long – they list all traded products and place limits on the tariff level for each product. Many of the pages are taken up by these detailed tariff reduction schedules.

Now, you could have a one sentence trade agreement that said something along the lines of, “All tariffs are hereby abolished.” That would be a pretty good sentence in a trade agreement. So far, we haven’t seen a sentence like that, unfortunately.

In addition, there are some complex protectionist measures out there, not all of which ban the sale of foreign goods.  For example, you could have a tax measure which applies higher taxes to foreign goods than domestic goods. This would mean that foreign goods could still legally be sold in the country, and thus the free trade sentence quoted above would not address such a measure.

Along the same lines, some trade agreements impose constraints on the use of anti-dumping measures.  There might be an ideal sentence here (“anti-dumping measures are hereby abolished”), but that is not politically achievable right now, so we end up with many pages of rules that put limits on anti-dumping measures. It’s not perfect, but it helps.

To sum up, I agree with critics who say there are lots of problems with today’s trade agreements, as various interest groups have lobbied successfully for specific regulations to be included in them.  We can definitely scale back from the 5,000 or so pages in the Trans-Pacific Partnership. In the end, though, any free trade agreement is likely to take quite a few pages to set out all the various constraints on protectionism.

A timely new blog post from the Tax Foundation points out that “taxes on the rich are much higher than they’ve been in recent years. Between 2008 and 2012, the top 1 percent of households paid an average tax rate of 28.8 percent. However, in 2013, this figure spiked to 34.0 percent, as a result of tax increases in the ‘fiscal cliff’ deal and the Affordable Care Act.”

“Readers should check out the new CBO report,” the authors suggest, “and reflect for themselves about whether or not high-income Americans are now paying their fair share of taxes.”

The trouble is that the tax rate alone can’t tell us how much the Top 1% paid in taxes: that also depends on how much income they reported to the IRS.  The reason this matters is that there is ample evidence that the “elasticity of taxable income” is very high among top taxpayers, which simply means they find ways to report less income if marginal tax rates go up.  This doesn’t require lawyers or loopholes: Avoid capital gains tax by not selling assets and/or shifting into exempt assets (housing up to $500k); avoid the dividend tax by holding tax-exempt bonds; defer personal tax on business income by retaining earnings within a C-corporation; avoid punitive tax rates on second earners by becoming a one-earner household; retire early, etc.

Looking at the same thing from a different angle, the graph shows that average taxes actually paid by the Top 1% grew rapidly after the tax rate on capital gains was cut from 28 percent to 20 percent in 1997. Taxes paid by the Top 1% grew even more rapidly after 2003 when the tax rate on capital gains and dividends was further reduced to 15 percent and the top tax on salaries and unincorporated businesses was cut from 39.6 percent to 35 percent.  If you want the rich to pay more taxes, cut their tax rates.  

As it turns out, 2013 showed that we can’t just assume higher tax rates mean docile taxpayers will simply write bigger checks to the U.S. Treasury. On the contrary, when the average tax rate on the Top 1% increased by 18.4 percent in 2013, the amount of income reported by the Top 1% fell by 15.4 percent – from $1,856,000 in 2012 to $1,571,600. The net effect was almost a wash, in terms of taxes actually paid. According to the CBO, average federal taxes paid by the Top 1% were $530,128 in 2013 –virtually unchanged from $529,056 in 2012. 
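
To see why this was “almost a wash,” here is a minimal back-of-the-envelope sketch using only the CBO figures quoted above and the rough approximation that taxes paid equal the average tax rate times average reported income (the CBO’s published tax figures differ slightly because they come from the underlying microdata):

```python
# Back-of-the-envelope check of the 2013 "wash," using the CBO figures cited above.
# Approximation (not CBO's method): taxes paid ~= average tax rate x average reported income.
income_2012, rate_2012 = 1_856_000, 0.288
income_2013, rate_2013 = 1_571_600, 0.340

taxes_2012 = rate_2012 * income_2012   # about $534,500
taxes_2013 = rate_2013 * income_2013   # about $534,300

print(f"Rate change:   {rate_2013 / rate_2012 - 1:+.1%}")      # roughly +18%
print(f"Income change: {income_2013 / income_2012 - 1:+.1%}")  # roughly -15%
print(f"Tax change:    {taxes_2013 / taxes_2012 - 1:+.1%}")    # essentially zero
```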

Presidential candidates Bernie Sanders and Hillary Clinton propose even more increases in top tax rates on income and capital gains (to 54.2 percent with Sanders, 43.6 percent with Clinton), ostensibly to finance their lavish government spending plans.  But even a relatively small dose of this same poison failed to raise significant revenue from the Top 1% in 2013, partly because of the drag on the overall economy from reduced incomes and incentives. 

In recent weeks, Libertarian presidential candidate Gary Johnson has been gaining media momentum as polls show him garnering about 10 percent of the vote in a race with Trump and Clinton. His candidacy has attracted attention to the libertarian ideas he espouses and the people who embrace the label.

The popular media stereotype of libertarians is disproportionately white and male. But is this accurate? What do the data actually say?

As it turns out, the libertarian label is embraced by a more racially and ethnically diverse group of individuals than some may realize, but the group tilts male.

Averaging across nine Reason-Rupe surveys I conducted at Reason Foundation/Reason Magazine with Princeton Survey Research Associates from 2012 to 2014 and a recent survey we conducted here at the Cato Institute with YouGov, here’s what we find: Among those who self-identify as “libertarian”, 71 percent are Caucasian, 14 percent are Latino, 5 percent are African-American, 8 percent identify as another race, and 4 percent chose not to identify. While not an exact reflection, these numbers are similar to the demographic makeup of all respondents averaged across the surveys: 67 percent white, 13 percent Latino, 12 percent African-American, 7 percent identifying as other, and 1 percent not identifying.

The Pew Research Center and YouGov have each found similar results. YouGov found 16 percent of whites, 17 percent of Hispanics, and 10 percent of African-Americans agreed they would describe themselves as libertarian. Pew went a step further to see how many Americans embraced the label and also thought it meant “someone whose political views emphasize individual freedom by limiting the role of government.” Indeed, Latinos (11 percent) were as likely as Caucasians (12 percent) to say the word “libertarian” describes them well and agree the term means limited government. African-Americans were less likely to both self-identify as libertarian and also say the term means limited government (3 percent).

While some surveys may find a higher percentage of white libertarians, the benefit of this analysis is that it averages across multiple surveys, making the results less reliant on the potential error in any one survey.
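
The statistical intuition is simple: combining surveys behaves roughly like drawing one larger sample, which shrinks the margin of error on any given demographic share. A minimal sketch, with invented sample sizes (these are not the actual Ns of the Reason-Rupe or Cato/YouGov surveys):

```python
import math

# Hypothetical illustration only: the sample sizes are invented, not the actual
# Reason-Rupe or Cato/YouGov Ns. Pooling surveys acts like one larger sample,
# so the margin of error on a share like "71 percent white" shrinks.
def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a simple proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.71                      # share of self-identified libertarians who are white
n_one_survey = 1_000          # invented number of libertarian respondents in one survey
n_pooled = 10 * n_one_survey  # ten surveys pooled

print(f"One survey:         +/-{margin_of_error(p, n_one_survey):.1%}")  # about 2.8 points
print(f"Ten surveys pooled: +/-{margin_of_error(p, n_pooled):.1%}")      # about 0.9 points
```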

Millennial Libertarians

Diversity increases further among millennial libertarians, reflecting the racial composition of the entire generation. In a study I conducted of millennials at Reason, we found (see pg. 23) that millennial libertarians reflect the racial/ethnic diversity of the national sample. (YouGov fielded the survey of 2,000 18- to 29-year-olds.) Among millennials who self-identify as libertarian, 56 percent are white, 21 percent are Latino, 14 percent are African-American, 8 percent are Asian, and 1 percent identify as another race. This is similar to all millennials surveyed: 57 percent white, 15 percent African-American, 15 percent Latino, 7 percent Asian, and 4 percent as another race.

Gender

Although libertarians are more racially and ethnically diverse than is usually thought, they do lean more male than female. Averaging across the nine Reason-Rupe surveys and a Cato/YouGov survey conducted between 2012 and 2015, 63 percent of self-identified libertarians are male and 37 percent are female. We found a similar ratio among millennial libertarians: 68 percent male and 32 percent female.

Similarly, Pew found that men (15 percent) were about twice as likely as women (7 percent) to self-identify as libertarian and say that the term means limited government. YouGov found a similar ratio between men and women, with 21 percent of men saying they would describe themselves as libertarian as would 10 percent of women.

In sum, Americans who choose to self-identify as libertarian in surveys tend to reflect the racial and ethnic demography of the United States more than is commonly realized, particularly among younger libertarians. However, self-identified libertarians are more likely to be male than female.


You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

There are several notable pieces this week that relate to the social cost of carbon (SCC)—the government’s powerful tool to aid in justifying all manner of rules and regulations. The SCC is supposed to represent the negative externalities (i.e., projected economic damages in a projected society resulting from projected climate change) that are associated with the emissions of each ton of carbon dioxide. It was developed as a way to translate carbon dioxide emission reductions into dollars savings and to make the “benefits” of proposed climate actions hit closer to home for more people.

But as you may guess from the number of “projected”s in the above parenthetical, the SCC is so highly malleable that you can pretty much game it to produce any value desired—the perfect characteristic for an all-purpose economic cost/benefit tool wielded by an opportunistic and activist government.

The situation is well described by American Enterprise Institute’s Benjamin Zycher in his recent post for The Hill, “The magic of the EPA’s benefit/cost analysis.”

Welcome to the fascinating world of EPA benefit/cost analysis… the administration conducted an “analysis” of the “social cost of carbon” (SCC), in order to generate an estimate of the marginal externality cost of greenhouse gas emissions (GHG). The problems with that analysis are legion, but the central ones are the use of global (rather than national) benefits to drive the benefit/cost comparison; the failure to apply a 7 percent discount rate to the streams of benefits and costs, despite clear direction from the Office of Management and Budget; and — most important — the use of ozone and particulate reductions as “co-benefits” of climate policies. The administration’s estimate is about $36 per ton in 2015 ($31 per ton in 2010).

And that is how a regulation yielding future changes in temperatures and sea levels approaching zero can be claimed to yield net benefits “exceeding $100 billion, making this a highly beneficial rule.” In the EPA’s benefit/cost framework, the actual effects of the policies literally are irrelevant; just compute the assumed reduction in GHG emissions, multiply by $36, and voila!
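
To make the mechanics concrete, here is a hypothetical sketch of the accounting Zycher describes. Only the $36-per-ton figure comes from the passage above; the emission-reduction number is invented purely for illustration, and note that no estimate of the rule’s actual climate effect enters the calculation:

```python
# Hypothetical sketch of the benefit accounting described above. Only the $36/ton
# SCC comes from the passage; the emission-reduction figure is invented.
scc_per_ton = 36.0               # administration's social cost of carbon, $ per ton of CO2
assumed_reduction_tons = 3.0e9   # invented lifetime emission reduction claimed for a rule

claimed_benefits = assumed_reduction_tons * scc_per_ton
print(f"Claimed climate benefits: ${claimed_benefits / 1e9:.0f} billion")
# Prints "$108 billion" -- the rule's actual effect on temperature or sea level
# never enters the calculation.
```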

Zycher takes us through the absurdities of just how small the impact of Obama’s “climate” actions is on the actual climate, and how enormously magnified those actions become when they are run through the social cost of carbon. He concludes:

It is the delegation of legislative powers to the regulatory agencies that has allowed such game-playing in pursuit of an ideological agenda. The only means with which to restore political accountability to the regulatory process is a requirement that all regulations be approved by Congress.

You can check out his entire article here.

If you wonder if the government’s SCC of $36/ton of carbon dioxide (as mentioned by Zycher) is scientifically robust: it isn’t.

This embarrassment is shown by some new work from a team led by Heritage Foundation’s Kevin Dayaratna that also included Ross McKitrick and David Kreutzer. These researchers re-ran the models used by the Obama Administration to determine the social cost of carbon with the most recent estimate of the earth’s climate sensitivity (i.e., how much global warming we should expect to result from our carbon dioxide emissions). The estimate they used came from a paper published by Nic Lewis and Judy Curry last year, replacing the climate sensitivity used by the government, which was based on the U.N.’s Intergovernmental Panel on Climate Change (IPCC) estimates from 2011.

What did they find?

The resulting Social Cost of Carbon (SCC) estimates are much smaller than those from models based on simulated parameters. In the DICE model the average SCC falls by 30-50% depending on the discount rate, while in the FUND model the average SCC falls by over 80%. The span of estimates across discount rates also shrinks considerably, implying less sensitivity to this parameter choice… Moreover FUND, which takes more explicit account of potential regional benefits from CO2 fertilization and increased agricultural productivity, yields a substantial (about 40 percent or more) probability of a negative SCC through the first half of the 21st century.

Did you catch the end? The model that also includes positive externalities of carbon dioxide emissions (e.g., increased agricultural productivity) produces a decent probability that the social cost of carbon is negative for decades to come. In other words, instead of regulations trying to restrict carbon dioxide emissions, the government ought to be encouraging those emissions.

Given the growing evidence for a low climate sensitivity (see here for our latest on this topic) along with the firmly established evidence that carbon dioxide is a plant fertilizer that increases crop yields (among other benefits), you’d think that science organizations would be pushing the federal government to revise its SCC estimates.

Well, some are, like us. But others, with seemingly more clout (and much more government reliance) are pushing for the opposite.

Take the National Academy of Sciences (NAS), for example. This week they released the results of their examination of ways that the government could improve its SCC determination. Judy Curry has the lowdown. Notably absent from the NAS’s myriad suggestions was using an updated estimate of the climate sensitivity. Curry writes: “I am gobsmacked that they think the current … values of [equilibrium climate sensitivity] are fine.”

As for us, we are hardly surprised and had this to say in her comment section:

Judy–why are you surprised about not re-examining the ECS distribution? NAS was created to advise the Federal government by Abraham Lincoln and it has not deviated from that mission. The members of this particular committee are pretty much all on the global warming dole, big time. Why would they behave counter to their interests, which would be to admit that the [climate sensitivity] distributions are wrong and that the most likely values are near the low end of the present (AR4) [IPCC] distribution? That would be the end of the dole and an admission that billions were wasted on a minor issue whose solution is best left to private investment rather than public taxation.

Sadly, we can’t foresee the government letting loose its grip on its all-too-powerful social cost of carbon tool so long as we have a ruling Administration that takes an activist’s role in climate change—despite overwhelming evidence that such a role is unjustified. But we’ll keep trying our best to wrest this abusive device from them.

There’s no such thing as a free lunch. Or as the Fifth Amendment puts it, “nor shall private property be taken for public use, without just compensation.” Despite the clarity with which the Takings Clause proclaims that government must respect property rights, state and local governments have long been contriving ways to obtain private property without paying the constitutionally required just compensation.

In 2012, San Juan County, Washington—the islands in the Salish Sea between Seattle and Victoria—enacted a rule that conditions shoreline owners’ proposed land uses on dedicating a portion of their property as on-site conservation areas. This isn’t a new tactic. In Nollan v. California Coastal Commission (1987), for example, the Supreme Court rejected the government’s conditioning of a building permit on the landowners’ granting a public easement across their property to access a beach. The Court acknowledged that conditioning a benefit on the property owners’ giving up their Fifth Amendment right to just compensation is “an out-and-out plan of extortion.” The Court elaborated seven years later in Dolan v. City of Tigard (1994), ruling that courts must apply a high level of scrutiny to conditions attached to land-use permits to prevent government “gimmickry.”

In other words, if the condition by itself would be a taking, then the state cannot impose it unless there is a “nexus” and “rough proportionality” between “the property that the government demands and the social costs of the [landowner’s] proposal.” Koontz v. St. John’s River Water Management District (2013).

The San Juan County ordinance fails these tests because shoreline property owners are required to set aside “water quality buffers” as a condition of development not based on any harm the proposed land use itself might cause, but based on the county’s general efforts to reduce pollutants and other surface runoff. A local owners’ association called the Common Sense Alliance challenged the ordinance, but Washington state courts held that the nexus and proportionality tests need not even be applied here because it was the legislature that imposed the condition, not an ad hoc permitting process. The alliance has now asked the U.S. Supreme Court to step in, and Cato, joined by Reason Foundation, has filed an amicus brief supporting that petition.

The state courts’ reasoning is deeply flawed: the common belief that individual permitting decisions are more prone to abuse and corruption than general legislative ones, because the political process checks the latter, is a simplistic notion that ignores the lessons of public-choice economics. As the Texas Supreme Court put it, legislatures can “‘gang up’ on particular groups to force exactions that a majority of constituents would not only tolerate but applaud, so long as burdens they would otherwise bear were shifted to others.” Town of Flower Mound v. Stafford Estates Limited Partnership (2004).

This dynamic is particularly evident in today’s climate when in many states and sub-state political units, the majority is anti-development. Legislative conditions also have a much broader reach than do ad hoc permitting schemes. Unfortunately, Washington State is not alone in holding that the “nexus and proportionality” tests need not be applied to legislative decisions. The U.S. Supreme Court should step in and clarify that its (now well-established) Nollan-Dolan precedents extend to takings via legislative actions, not just executive ones.

The case is Common Sense Alliance v. San Juan County.

Perhaps the most pervasive myth about our nation’s education system is the notion that “public schools have to take all children.” Last year, when criticizing charter schools that, she claimed, “don’t take the hardest-to-teach kids,” Hillary Clinton quipped, “And so the public schools are often in a no-win situation, because they do, thankfully, take everybody.”

No, in fact, they do not.

At best, so-called “public” schools have to take all children in a particular geographic area, although they can and do expel children based on their behavior. They are more appropriately termed “district schools” because they serve residents of a particular district, not the public at large. Privately owned shopping malls are more “public” than district schools.

This wouldn’t be a serious problem if every district school offered a quality education, but they do not. Rather, the quality of education that the district schools provide tends to be highly correlated with the income levels of the residents of those districts. As Lindsey Burke of the Heritage Foundation and I noted last year, our housing-based system of allocating education leads to severe inequities:  

There is a strong correlation between these housing prices and school performance. In nearly all D.C. neighborhoods where the median three-bedroom home costs $460,000 or less, the percentage of students at the zoned public school scoring proficient or advanced in reading was less than 45 percent. Children from families that could only afford homes under $300,000 are almost entirely assigned to the worst-performing schools in the District, in which math and reading proficiency rates are in the teens.

Not surprisingly, some parents feel desperate when their kids are trapped in subpar schools because they can’t afford to live in ritzy neighborhoods or pay private school tuition. And some of those desperate parents will provide fake addresses to get their children a better education.

In Florida, the Broward County School Board announced this week that it is hiring private investigators to spy on homes at addresses the district suspects are fake. As the Sun-Sentinel reports, the private eyes will “monitor a home and then give school officials photographs, videos and a detailed report.”

Fraudulent registration has long been an issue. Parents, believing their child will get a better education at a school outside their assigned boundary, list a relative or friend’s address, provide a fake address or even rent an empty apartment in the area of a preferred school.

In Broward, doing so can be prosecuted as a third-degree felony, since parents declare their addresses under penalty of perjury.

It’s unlikely that the district will have the funds to hire private eyes to track every student. One wonders, then, what criteria the district schools will use to determine which students should be surveilled… will they start with students who, shall we say, don’t look like most of the other students in that high-income district? 

Broward County is far from unique. Parents nationwide are regularly fined and even imprisoned for stealing a better education for their children. One New Jersey town even offered $100 bounties for information leading to the expulsion of students whose parents lied about their addresses. 

Writing at RedefinED, Nia Nuñez-Brady explained why her parents provided a fake address to get her into a better–and safer–district school: 

One day, while I was using the ladies room, another girl, who was double my size or at least it felt that way at the time, threatened to bash my head on the wall if I didn’t stop hanging out with a guy she liked. Growing up, my dad always told me, “Your face is too pretty to get into a fight.” So, I said to her: “Please don’t hit me. I’ll stay out of your way.”

She laughed. I went back to class, and tried to focus.

The next day, while walking on the hallway at the school, this same girl grabbed another student close to me. She pushed her against the wall and instigated a fight. The difference between myself and this new student: This girl fought back. The bully wasted no time. She grabbed her Snapple bottle, broke it on the wall, and used a piece of glass to slash the student’s face.

I was petrified. That could have been me.

Nia begged her parents to change schools but they couldn’t afford it. They were recent immigrants with little money. But they couldn’t bear to keep their daughter in a school where they feared for her safety. So they lied.

[M]y parents did something thousands of other public-school parents feel forced to do, because they feel they have no other options. They lied about where we lived so I could go to a different school where I would feel safe. […]

Of course, it is understandable that residents of districts who have paid taxes into the system would be upset that they are subsidizing the education of children whose parents haven’t paid into the system. And so it’s also understandable that the district schools would seek to exclude students whose parents haven’t paid into the system, just as private schools shouldn’t be expected to educate a child whose parents haven’t paid tuition. As Nia explains, the problem is the system itself:

I understand that perjury is against the law, and that the law should be respected. But from my own experience, I know the parents who lie about their address are often the ones with limited resources, the ones who cannot afford to move to a more affluent neighborhood, the ones who can least afford to pay a fine or fight a felony charge.

I can also understand the families who have been kicked out of a school close to where they live, because the school is overcrowded with students from other neighborhoods. That, too, is unfair.

But that’s the problem. The system is unfair.

Indeed. Getting a decent education should not depend upon the ability of one’s parents to afford an expensive home. It is long past time that we break the link between home prices and school quality. Doing so entails recognizing that there’s no such thing as a “public” school.

Today marks the 20th anniversary of the Supreme Court decision in Whren v. United States. The case clarified the constitutionality of the practice of “pretextual” traffic stops. The Court ruled that so long as an officer can articulate that a driver violated some traffic law, the officer may stop a motorist in order to investigate potential and wholly unrelated criminal activity. The case has effectively become a blueprint for police officers to racially profile drivers without repercussion.

Last fall, I gave a talk at Case Western Reserve Law School in a symposium dedicated to Whren and its legacy. The school’s law review recently published the article that came from that talk. Instead of putting forth an argument to overturn Whren, I argue that police departments ought to curtail or end the use of pretextual stops as a proactive policing measure. The Supreme Court’s ruling that the tactic is constitutional does not make it an ethical or wise tactic to employ.

Simply put, pretextual stops undermine police legitimacy by turning public servants into antagonistic interrogators. In practice, pretextual motor vehicle stops—much like the pedestrian Terry stops used in New York’s infamous Stop-and-Frisk program—ensnare far more innocent people than criminals. And most of the people who are stopped are black or Latino, further eroding police support in those communities. Police departments must establish their legitimacy—through trust and positive interactions—to improve their effectiveness and public safety. Overly aggressive and implicitly discriminatory policing practices undermine that legitimacy.

You can read the whole article here. The rest of this issue of the Case Western Reserve Law Review can be found here.

When I was younger, my left-wing friends said conservatives unfairly attacked them for being unpatriotic and anti-American simply because they disagreed on how to deal with the Soviet Union.

Now the shoe is on the other foot.

Last decade, a Treasury Department official accused me of being disloyal to America because I defended the fiscal sovereignty of low-tax jurisdictions.

And just today, in a story in the Washington Post about the Center for Freedom and Prosperity (I’m Chairman of the Center’s Board of Directors), former Senator Carl Levin has accused me and others of “trading with the enemy” because of our work to protect and promote tax competition.

Here’s the relevant passage.

Former senator Carl Levin (D-Mich.)…said in a recent interview that the center’s activities run counter to America’s values and undermine the nation’s ability to raise revenue. “It’s like trading with the enemy,” said Levin, whose staff on a powerful panel investigating tax havens regularly faced public challenges from the center. “I consider tax havens the enemy. They’re the enemy of American taxpayers and the things we try to do with our revenues — infrastructure, roads, bridges, education, defense. They help to starve us of resources that we need for all the things we do. And this center is out there helping them to accomplish that.”

Before even getting into the issue of tax competition and tax havens and whether it’s disloyal to want limits on the power of governments, I can’t resist addressing the “starve us of resources” comment by Levin.

He was in office from 1979 to 2015. During that time, federal tax receipts soared from $463 billion to $3.2 trillion. Even if you only count the time the Center for Freedom and Prosperity has existed (created in late 2000), tax revenues have jumped from $2 trillion to $3.2 trillion.

At the risk of understatement, Senator Levin has never been on a fiscal diet. And he wasn’t bashful about spending all that revenue. He received an “F” rating from the National Taxpayers Union every single year starting in 1993.

Let’s now address the main implication of the Washington Post story, which is that it’s somehow wrong or improper for there to be an organization that defends tax competition and fiscal sovereignty, particularly if some of its funding comes from people in low-tax jurisdictions.

The Post offer[s] an inside look at how a little-known nonprofit, listing its address as a post office box in Alexandria, became a persistent opponent of U.S. and global efforts to regulate the offshore world. …the center met again and again with government officials and members of the offshore industry around the world… Quinlan and Mitchell launched the center in October 2000. …The center had two stated goals. Overseas, the center set out to persuade countries on the blacklist not to cooperate with the OECD, which it derided as a “global tax cartel.” In Washington, the center lobbied the Bush administration to withdraw its support for the OECD and also worked to block anti-tax haven legislation on Capitol Hill. To spread the word, the center testified before Congress, published reports and opinion pieces in leading financial publications, and drafted letters to lawmakers and administration officials. Representatives of the center crisscrossed the globe and sponsored discussions in 2000 and 2001, traveling to London, Paris, the Cayman Islands, the Bahamas, Panama, Barbados and the British Virgin Islands.

To Senator Levin and other folks on the left, I guess this is the fiscal equivalent of “trading with the enemy.”

In reality, this is a fight over whether there should be any limits on the fiscal power of governments. On one side are high-tax governments and international bureaucracies like the OECD, along with their ideological allies. They want to impose a one-size-fits-all model based on the extra-territorial double-taxation of income that is saved and invested, even if it means blacklisting and threatening low-tax jurisdictions (the so-called tax havens).

On the other side are proponents of good tax policy (including many Nobel Prize-winning economists), who believe that income should not be taxed more than one time and that the power to tax should be constrained by national borders.

And, yes, that means we sometimes side with Switzerland or Panama rather than the Treasury Department. Our patriotism is to the ideals of the Founding Fathers, not to the bad tax policy of the U.S. government.

In any event, I’m proud to say that the Center’s efforts have been semi-successful.

In May 2001, the center claimed a key victory. In a dramatic departure from the Clinton administration, Paul O’Neill, the incoming Treasury Secretary appointed by Bush, announced that the United States would back away from the reforms pushed by the OECD. …fewer than half of the nations on the OECD blacklist pledged to become more transparent in their tax systems, a victory for anti-tax forces such as the center.

Even the other side says the Center is effective.

…said Elise Bean, former staff director and chief counsel of Levin’s Homeland Security Permanent Subcommittee on Investigations, which started investigating tax havens in 2001. “They travel all around the world and they have had a tremendous impact.” …“They were very effective at painting the OECD’s work as end-times are here for tax competition, and we’re going to have European tax rates imposed upon the whole world if the OECD’s work continued,” said Will Davis, the former head of OECD public affairs in Washington.

What’s most impressive is that all this was accomplished with very little funding.

Tax returns for the center and a foundation set up in its name reported receiving at least $1.4 million in revenue from 2003 to 2010.

In other words, the Center and its affiliated Foundation managed to thwart some of the world’s biggest and most powerful governments with a very modest budget averaging about $175,000 per year. And I don’t even get compensation from the Center, even though I’m the one who almost got thrown in a Mexican jail for opposing the OECD!

So while Senator Levin had decades of experience spending other people’s money in a promiscuous fashion, I work for an organization, the Cato Institute, that is ranked as the most cost-effective major think tank, and I’m on the Board of a small non-profit that has a track record of achieving a lot with very little money.

Yet another example of why we should be thankful that tax competition makes it more difficult for politicians to extract more revenue from the economy’s productive sector.

P.S. I mentioned to the Post reporters that the world’s biggest tax haven is the United States, but that important bit of information was omitted from the article. Which is a shame since it would have given me a chance to laud Senator Rand Paul for blocking a very dangerous agreement that would undermine America’s attractive tax laws for overseas investors.

P.P.S. If politicians really want to hurt tax havens, they should adopt a flat tax. That would dramatically boost tax compliance.

P.P.P.S. All things considered, I think the reporters who put together the story were reasonably fair, though there was a bit of editorializing such as referring to one low-tax jurisdiction as a “notorious tax haven.” When they write about France, do they ever refer to it as a “notorious tax hell”?

Also, when writing about trips the Center arranged for congressional staff to low-tax jurisdictions, the article stated, “The staffers reported receiving from $900 to $2,360 for the trips”, which makes it sound as if the staffers got paid. That’s wrong. The sentence should have read, “The staffers reported that the Center’s travel and lodging expenses ranged from $900 to $2,360 for the trips.”

Today Cato senior fellow Nat Hentoff turns 91.  Happy birthday, Nat!

Nat has opposed communism since he was 15 years old, but because he had a column with the Village Voice, people would sometimes assume he had communist sympathies.  In this video, Nat explains that that mistaken assumption is how he was able to get into a meeting with Fidel Castro’s deputy, Che Guevara, and challenge him about the dictatorial nature of the Castro regime.  He finds it puzzling that so many people fawn over Castro and Che.

 

According to a conventional narrative, tropical islands are eroding away due to rising seas and increasingly devastating storms. Not really, according to the recent work of Ford and Kench (2016).

Writing as background for their study, the two researchers state that low-lying reef islands are “considered highly vulnerable to the impacts of climate change,” where an “increased frequency and intensification of cyclones and eustatic sea-level rise [via global warming] are expected to accelerate shoreline erosion and destabilize reef islands.” However, they note that much remains to be learned about the drivers of shoreline dynamics on both short- and long-term time scales in order to properly project future changes in low-lying island development. And seeking to provide some of that knowledge, the pair of New Zealand researchers set out to examine historical changes in 87 islands found within the Jaluit Atoll (~6°N, 169.6°E), Republic of the Marshall Islands, over the period 1945-2010. During this time, the islands were subjected to ongoing sea level rise and the passage of a notable typhoon (Ophelia, in 1958), the latter of which caused severe damage with its >100 knot winds and abnormal wave heights.

So what did their examination reveal?

Analyses of aerial photographs and high-resolution satellite imagery indicated that the passage of Typhoon Ophelia caused a decrease in total island land area of approximately five percent, yet Ford and Kench write that “despite [this] significant typhoon-driven erosion and a relaxation period coincident with local sea-level rise, [the] islands have persisted and grown.” Between 1976 and 2006, for example, 73 out of the 87 islands increased in size, and by 2010, the total landmass of the islands had exceeded the pre-typhoon area by nearly 4 percent.

Such observations, in the words of Ford and Kench, suggest an “alternative trajectory” for future reef island development, and that trajectory is one of “continued island expansion rather than one of island withering.” And such expansion is not just limited to Jaluit Atoll, for according to Ford and Kench, “the observations of reef island growth on Jaluit coincident with sea level rise are broadly consistent with observations of reef islands made elsewhere in the Marshall Islands and Pacific (McLean and Kench, 2015).” Given as much, it would thus appear that low-lying islands are not as vulnerable to climate change as previously thought.

 

Reference

Ford, M.R. and Kench, P.S. 2016. Spatiotemporal variability of typhoon impacts and relaxation intervals on Jaluit Atoll, Marshall Islands. Geology 44: 159-162.

McLean, R.F. and Kench, P.S. 2015. Destruction or persistence of coral atoll islands in the face of 20th and 21st century sea level rise? WIRES Climate Change 6: 445-463.

Could the U.S.-Japan alliance flounder as a result of alcohol? Apparently. At least, that’s the implication of the U.S. Navy’s ban on drinking by personnel stationed on the Japanese island of Okinawa.

It would be far better to phase out America’s military presence on Okinawa, turning U.S. bases back to the Japanese government. More than seven decades after the end of World War II, Tokyo should take over responsibility for Japan’s defense.

Washington currently concentrates bases and personnel on the island of Okinawa, which accounts for just 0.6 percent of Japan’s land mass. Local anger exploded in 1995 after three American service members raped a 12-year-old girl. The Japanese government sought to placate islanders with financial transfers and plans to move Futenma airbase and relocate Marines to Guam. These schemes failed to satisfy, however.

Base opponents, bolstered by the 2014 gubernatorial victory of Takeshi Onaga, continued to resist. Fueling popular anger has been a seeming spate of high-profile offenses committed by U.S. military personnel (who, in fact, have a lower crime rate than locals). Last month a sailor pleaded guilty to rape, and a contractor and former Marine was detained in a murder case.

Then an apparently intoxicated sailor crashed a vehicle, injuring two Okinawans. The Navy confined all personnel to base except for essential travel and banned drinking on or off U.S. facilities.

Prime Minister Shinzo Abe largely ignored the Okinawa question as he sought to bolster Tokyo’s military capabilities. But he has made little progress against strong public opposition.

Japan’s “peace constitution” forbidding a military remains unchanged, so Abe simply interprets the law as he wishes it had been written. Military outlays have risen only modestly since Abe took power, up just two percent in 2015. Japan then devoted about $41 billion to defense, compared to roughly $180 billion by China, Tokyo’s main potential nemesis.

Although last year his government adjusted the military’s defense guidelines, Tokyo’s international activities will remain non-combat and do little to reduce America’s military duties.

Moreover, the revised standards merely allow Japan to better defend Japan, not to assist the U.S. Now a Japanese ship on patrol with an American vessel can assist if the latter is attacked, so long as the Japanese vessel too is threatened. And Japanese analysts warn against expecting Tokyo to allow such situations to occur.

Worse, the new guidelines appear to envision an even stronger U.S. guarantee for Japan and deployment of additional weapons. Under the “bilateral” treaty Washington’s obligations apparently only increase.

The U.S. has an obvious interest in Japan’s continued independence, but Japan’s commitment to its own security should be even greater. Tokyo should do more to defend itself.

In fact, no one expects a Chinese armada to show up in Tokyo Bay. If conflict erupts, it likely will be over the disputed Senkaku/Diaoyu Islands. Of course, Beijing is not justified in using force there or elsewhere, but nothing at stake there is worth war, at least for America.

A serious Japanese military build-up is opposed by some of Tokyo’s neighbors, but no one seriously suggests that Japan is about to embark upon a new round of imperial conquests. More than seven decades after World War II Japan should finally act like a normal country—defending itself, guarding its region, and ending its dependence on America.

The U.S. should turn its security guarantee to Japan into a framework for future cooperation. That should include potential assistance if a genuine hegemonic threat arises in Asia. But Tokyo should take the lead in confronting day-to-day security challenges.

As I wrote in Forbes: “Japan should decide its own defense and foreign policies. As American forces returned home Okinawa’s bases would empty. What came next would be up to the Japanese. And American military personnel could continue to enjoy a drink … back home in their own country.”

 

Last week I criticized President Obama for his failure to sell the Trans-Pacific Partnership to the public and to Congress.  Ratification of trade agreements has always relied on consistent and unequivocal advocacy from the White House.

Well, the president heard me loud and clear and decided to take my advice.  Here’s his pitch to the American people via Jimmy Fallon (TPP lyrics begin around 4:50, but the whole thing is pretty darn funny).

 

The U.S. International Trade Commission (ITC) is required by the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 to prepare estimates of the economic effects of trade agreements.  Specifically:

“Not later than 105 calendar days after the President enters into a trade agreement under section 103(B), the Commission shall submit to the President and Congress a report assessing the likely impact of the agreement on the United States economy as a whole and on specific industry sectors, including the impact the agreement will have on the gross domestic product, exports and imports, aggregate employment and employment opportunities, the production, employment, and competitive position of industries likely to be significantly affected by the agreement, and the interests of United States consumers.”

This statutory language guided the ITC’s analysis of the twelve-nation Trans-Pacific Partnership (TPP).  The ITC study was released on May 18, 2016. 

It had been several years since the United States concluded a free trade agreement.  The previous one with South Korea (Korea-U.S. Free Trade Agreement, or KORUS) dates from 2007.  I served as chairman of the ITC at the time and am quite familiar with the KORUS study.  The econometric modeling used a “comparative static” analysis.  A comparative static approach can be likened to taking two snapshots of the economy.  The first photo was of the known baseline economy as it existed in 2007. The second photo also used the 2007 baseline, but this time it was “shocked” by incorporating all provisions of KORUS as if they had been fully implemented.  This allowed a conceptually sound – albeit counterfactual – assessment of the likely economic effects of KORUS by analyzing how those reforms would have influenced the 2007 economy.  (These issues are explained in this Free Trade Bulletin.) Static modeling has been used in all the ITC’s analyses of trade agreements prior to TPP.

One of the great strengths of the comparative static approach is that it makes no attempt to project the economy into the future.  There is no need to speculate on whether a recession will curb trade flows, or whether technological change will make some industries obsolete while spurring new ones into existence.  Precisely predicting the future requires a degree of clairvoyance not possessed by economists or anyone else.  A comparative static analysis deals with that reality by instead looking backward.  It imposes new policy reforms on an old – but well-known – economy.  And it allows economists to avoid trying to make forward-looking projections of economic activity that inevitably turn out not to be correct.
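To make the mechanics concrete, here is a minimal, purely illustrative sketch of a comparative static exercise in Python. It is not the ITC’s model: the demand curve, the world price, and the tariff are invented for illustration, and a real CGE analysis covers many interlinked sectors rather than a single good.

# A toy comparative static exercise (hypothetical numbers; not the ITC's model).
# One imported good with linear demand; the "policy shock" is removing a tariff
# from the known baseline year and comparing the two snapshots.

def import_demand(price):
    # Hypothetical demand curve for the imported good (units per year).
    return max(0.0, 1000.0 - 8.0 * price)

world_price = 50.0   # world price of the good before any tariff (assumed)
tariff_rate = 0.10   # 10 percent tariff in effect during the baseline year (assumed)

# Snapshot 1: the baseline economy as it actually existed.
baseline_imports = import_demand(world_price * (1 + tariff_rate))

# Snapshot 2: the same baseline year, "shocked" by full tariff elimination.
shocked_imports = import_demand(world_price)

print(f"Baseline imports:       {baseline_imports:.0f}")
print(f"Counterfactual imports: {shocked_imports:.0f}")
print(f"Estimated effect:       {shocked_imports - baseline_imports:+.0f}")

The point of the exercise is the comparison between the two snapshots, not a forecast: both use the same, already-observed baseline year.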

However, comparative static modeling is not the only tool in the econometrician’s toolbox.  For its analysis of the economic effects of TPP, the ITC has chosen to use a dynamic computable general equilibrium (CGE) model.  The Global Trade Analysis Project (GTAP) model “is an appropriate tool for analyzing the effects of trade agreements because it consists of a database with international trade flows and other macroeconomic information, social accounting matrixes that show how different segments of the economy are interlinked, and national income accounts data.”  Using a dynamic version of the GTAP model has allowed the ITC to estimate changes in various economic measures (real GDP, employment, exports, imports, etc.) up to 30 years in the future.  Most of the analysis focuses on the 15-year period beginning in 2017 and ending in 2032.

In order to evaluate how TPP might influence the economy in the future, it first was necessary to create a baseline projection of what the economy would be like in the years ahead without TPP.  The ITC has done this by incorporating projections made by the International Monetary Fund (IMF) and the Organisation for Economic Co-operation and Development (OECD) regarding growth rates in many countries for labor, population, and GDP.  Once the 30-year baseline was established, the model was shocked by adding TPP’s annual policy changes.  (TPP gradually phases in many reductions in trade restrictions year by year.)  The economic effects of TPP then were measured as differences between the original baseline and the baseline following the shocks from TPP.  The dynamic GTAP model provides a mathematically sound means to estimate future economic variances caused by policy changes. 
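For comparison, here is an equally stylized sketch of the dynamic approach: project a baseline path forward with an assumed growth rate, phase in the policy shock year by year, and measure the agreement’s effect as the gap between the shocked path and the baseline path. Again, the growth rate, the size of the boost, and the phase-in schedule are invented; the actual study relies on the GTAP model and IMF/OECD projections.

# A toy dynamic baseline-and-shock exercise (hypothetical numbers; not GTAP).
years = list(range(2017, 2033))   # 2017 through 2032, the window examined in the study
baseline_growth = 0.03            # assumed annual growth without the agreement
full_boost = 0.002                # assumed extra annual growth once fully phased in
phase_in_years = 5                # the shock ramps up over the first five years

baseline_path, shocked_path = {}, {}
level_base = level_shock = 100.0  # both paths indexed to 100 just before 2017

for i, year in enumerate(years):
    ramp = min(1.0, (i + 1) / phase_in_years)   # gradual phase-in of the policy change
    level_base *= 1 + baseline_growth
    level_shock *= 1 + baseline_growth + full_boost * ramp
    baseline_path[year], shocked_path[year] = level_base, level_shock

for year in (2022, 2027, 2032):
    gap = shocked_path[year] - baseline_path[year]
    print(f"{year}: estimated effect = {gap / baseline_path[year]:+.2%} relative to baseline")

Everything hinges on the baseline path: the estimated effect is only as good as the projections used to build it.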

The real questions regarding forward-looking estimates have to do with the baseline itself.  The IMF and OECD are quite capable when it comes to analyzing historic trends.  Generally it’s not unreasonable to project well-established trends a short distance into the future.  If global GDP has grown at an average rate of 3 percent over the past ten years, for instance, it may be quite sensible to estimate that growth in the coming year also will be around 3 percent.  The problems come as we look further into the future.  Life’s inherent uncertainties make it relatively likely that a future projection of U.S. exports or imports of cheese, for instance, will turn out not to be precisely accurate.  How much confidence should we have in projections five years into the future?  Fifteen years?  Thirty?
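A quick illustration of why the horizon matters: suppose the baseline assumes 3 percent annual growth but the economy actually delivers 2 percent (both figures invented for this example). The projected level drifts further from reality every year.

# How a small baseline error compounds with the projection horizon (invented numbers).
projected_growth = 0.03   # growth rate assumed in the baseline
actual_growth = 0.02      # growth rate the economy actually delivers

for horizon in (1, 5, 15, 30):
    overshoot = (1 + projected_growth) ** horizon / (1 + actual_growth) ** horizon - 1
    print(f"After {horizon:2d} years, the baseline overshoots reality by {overshoot:.1%}")

After one year the gap is about 1 percent; after thirty years it is roughly a third of the projected level.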

Making estimates that turn out to differ from actual future outcomes is not a problem for economists and statisticians schooled in economic modeling.  They understand well that the ITC’s estimates were made using the best available information and up-to-date econometric techniques, and that the real-world economy simply diverged from what had been projected in the baseline.  Unfortunately, not everyone interested in trade policy has such depth of knowledge and understanding.  My concern is not with the integrity of the modeling, but rather with the challenges that trade supporters may face in defending the results of the analysis against criticism.

Even with comparative static modeling, opponents of expanded international trade have been inclined to misinterpret the analysis.  There are claims, for example, that the ITC did a poor job with its KORUS study because the U.S. trade deficit with South Korea has gone up since the agreement went into effect.  The KORUS study didn’t say anything about what might happen to the trade deficit in the future.  However, it did indicate a likely decrease in the deficit in the hypothetical situation in which all provisions of KORUS were somehow implemented during the static 2007 baseline period.

Now that the ITC’s TPP study has used a dynamic CGE approach that actually does make estimates about future trade flows, critics of trade agreements no doubt will be happy to point out how the ITC “got things all wrong.”  (In fact, the ITC’s estimates are seldom likely to be “right.”)  Trade skeptics are unlikely to bother explaining that the real source of the estimated “errors” is that the underlying economy evolved differently than the IMF and OECD had projected.  Most anti-trade NGOs have little interest in raising the quality of the trade policy debate.  Rather, they may be inclined to argue that all economic analysis showing positive effects for the United States from trade agreements is suspect and can’t be trusted.

Supporters of trade liberalization will do their best to counter such misinformation by explaining the details of dynamic CGE modeling.  But the criticism of the ITC’s estimates will take only a few words; setting the record straight will require several sentences or paragraphs.  Protectionist rhetoric may prove to have a greater influence on public opinion than do the substantive explanations. 

It will be interesting to see whether analyzing trade agreements via dynamic CGE modeling leads to a more informed public discussion than has been the case for the comparative static technique.  With a comparative static approach, the ITC was never wrong, but often misunderstood.  With dynamic modeling, the ITC will almost never be right, while still being misunderstood. 

 

Daniel R. Pearson is a senior fellow in the Cato Institute’s Herbert A. Stiefel Center for Trade Policy Studies, and is a former chairman of the U.S. International Trade Commission.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

There is a new paper generating some press attention (e.g. Chris Mooney at the Washington Post) that strongly suggests global warming is leading to specific changes in the atmospheric circulation over the Northern Hemisphere that is causing an enhancement of surface melting across Greenland—and of course, that this mechanism will make things even worse than expected into the future.

We are here to strongly suggest this is not the case.

The new paper is by a team of authors led by Marco Tedesco from Columbia University’s Lamont-Doherty Earth Observatory. The main gist of the paper is that Arctic sea ice loss as a result of human-caused global warming is causing the jet stream to slow down and become wigglier—with deeper north-south excursions that hang around longer.  This type of behavior is referred to as atmospheric “blocking.”

If this sounds familiar, it’s the same theoretical argument that is made to try to link wintertime “polar vortex” events (i.e., cold outbreaks) and blizzards to global warming. That argument has been pretty well debunked, time and time again.

Well, at least it has been where wintertime climate is concerned.

The twist of the new paper by Tedesco and colleagues is that they’ve applied this argument to the summertime climate over Greenland. They argue that global warming is leading to an increase in blocking events over Greenland in the summer, which is causing warm air to be “locked” in place and leading to enhanced surface melting there. Chris Mooney, who likes to promote climate alarm buzzwords, refers to this behavior as “weird.” And he describes the worrisome implications:

The key issue, then, is whether 2015 is a harbinger of a future in which the jet stream keeps sending Greenland atmospheric systems that drive major melt — and in turn, whether the Arctic amplification of climate change is driving this. If so, that could be a factor, not currently included in many climate change simulations, that would worsen the ice sheet’s melt, drive additional sea level rise and perhaps upend ocean currents due to large influxes of fresh water.

As proof that things were weird over Greenland in recent summers, Tedesco’s team offers up this figure in their paper:

" title="<--break-->" class="mceItem">

This chart (part of a multipanel figure) shows the time history of the North Atlantic Oscillation (NAO—a pattern of atmospheric variation over the North Atlantic) as red bars and something called the Greenland Blocking Index (GBI) as the black line, for the month of July during the period 1950-2015. The chart is meant to show that in recent years, the NAO has been very low with 2015 being “a new record low of -1.23 (since 1899),” and the GBI has been very high with the authors noting that “[c]oncurrently, the GBI also set a new record for the month of July [2015].” Clearly, the evidence seems to show that atmospheric blocking is increasing over Greenland, which fits nicely into the global warming/sea ice loss/wiggly jet stream theory.

So what’s our beef?

A couple of months ago, some of the same authors of the Tedesco paper (notably Ed Hanna) published a paper showing the history of the monthly GBI going back to 1851 (as opposed to 1950 as depicted in the Tedesco paper).

Here’s their GBI plotted for the month of July from 1851 to 2015:

This picture tells a completely different story. Instead of a long-term trend that could be related to anthropogenic global warming, what we see is large annual and multidecadal variability, with the end of the record not looking much different from, say, the period around 1880, and with the highest GBI occurring in 1918 (with 1919 coming in second place). While this doesn’t conclusively demonstrate that the current rise in the GBI is unrelated to jet stream changes induced by sea ice loss, it most certainly does demonstrate that global warming-induced sea ice loss is not a requirement for blocking events to occur over Greenland and that recent events are not at all “weird.”  An equally plausible, if not much more plausible, expectation of future behavior is that this GBI highstand is part of multidecadal natural variability and will soon relax back toward normal values.  But such an explanation isn’t Post-worthy.

Another big problem with all the new hype is that history shows the current goings-on in Greenland to be largely irrelevant, because humans just can’t make it warm enough up there to melt all that much ice. For example, in 2013, Dorthe Dahl-Jensen and her colleagues published a paper in Nature detailing the history of the ice in Northwest Greenland at the beginning of the last interglacial, which included a 6,000-year period in which her ice core data show summers averaging a whopping 6°C warmer than the 20th-century average. Yet Greenland lost only around 30% of its ice under a heat load of (6 × 6,000) 36,000 degree-summers. The most humans could ever hope to do with greenhouse gases is, very liberally, about 5 degrees of warming for 500 summers, or (5 × 500) 2,500 degree-summers. In other words, even treating that future warming as comparable to the Eemian’s, we could deliver at most 500/6,000 of the exposure, which works out to about 2.5% of the ice, for a grand total of roughly seven inches of sea level rise over 500 years. That’s pretty much the death of the Greenland disaster story, despite every lame press release and hyped “news” article on it.
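For readers who want to check the arithmetic, here is the same back-of-the-envelope calculation laid out explicitly. One input, the roughly 7.4 meters of sea level equivalent stored in the Greenland ice sheet, is a commonly cited figure that is not stated above, so treat the final conversion as approximate.

\[
\begin{aligned}
\text{Eemian heat load:}\quad & 6^{\circ}\mathrm{C} \times 6{,}000 \text{ summers} = 36{,}000 \text{ degree-summers} \;\Rightarrow\; \text{roughly } 30\% \text{ ice loss} \\
\text{Liberal future load:}\quad & 5^{\circ}\mathrm{C} \times 500 \text{ summers} = 2{,}500 \text{ degree-summers} \\
\text{Implied future loss:}\quad & \tfrac{500}{6{,}000} \times 30\% \approx 2.5\% \text{ of the ice} \\
\text{Sea level contribution:}\quad & 0.025 \times 7.4\ \mathrm{m} \approx 0.18\ \mathrm{m} \approx 7 \text{ inches over 500 years}
\end{aligned}
\]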

While you won’t find this kind of analysis elsewhere, we’re happy to do it here at Cato. 

References:

Dahl-Jensen, D., et al., 2013.  Eemian interglacial reconstructed from a Greenland folded ice core.  Nature 493: 489-494, doi: 10.1038/nature11789.

Hanna, E., et al., 2016. Greenland Blocking Index 1851-2015: a regional climate change signal. International Journal of Climatology, doi: 10.1002/joc.4673.

Tedesco, M., et al., 2016. Arctic cut-off high drives the poleward shift of a new Greenland melting record. Nature Communications, DOI: 10.1038/ncomms11723, http://www.nature.com/ncomms/2016/160609/ncomms11723/full/ncomms11723.html

North Korea’s ruling elite appears to be getting along fine despite international sanctions. Washington needs to find a new approach toward the North.

The so-called Democratic People’s Republic of Korea poses one of the most vexing challenges to American policy. For more than 20 years U.S. presidents have insisted that the DPRK cannot be allowed to develop nuclear weapons. Yet it apparently is preparing for a fifth nuclear test.

A military strike, as proposed by Ashton Carter before he was appointed Defense Secretary, would risk engulfing the peninsula in war. So the U.S. has relied on sanctions. Every time Pyongyang misbehaves—especially tests a nuclear weapon or launches a missile—American officials impose tougher domestic economic penalties and press for harsher UN sanctions.

After the North’s latest nuclear test earlier this year, China agreed to a new round of restrictions. The increased penalties had no impact on North Korean policy. To the contrary, in early May the Kim regime used the party congress to highlight Pyongyang’s nuclear program.

Sanctions have had an impact. The People’s Republic of China has been losing patience and appears to be more tightly regulating cross-border commerce. Some North Korean representatives of blacklisted agencies moved from China to Southeast Asian nations. The regime has resorted to smuggling to bring in banned products. Moreover, Pyongyang appears to be having more difficulty selling weapons abroad.

Nevertheless, Beijing continues to moderate the impact of sanctions. Illicit goods still cross the border, and some observers expect the PRC’s commitment to fade as Western attention moves elsewhere. Beijing fears chaos on its border more than it fears a North Korea with nuclear weapons. President Xi Jinping recently declared: “As a close neighbor of the peninsula, we will absolutely not permit war or chaos on the peninsula.”

The Xi government so far refuses to halt energy and food shipments, the only step that would apply bone-crunching pressure to the Kim regime. Even then, Pyongyang might refuse to comply. The regime already is blaming the West, preparing its people for what it calls an “arduous march.”

During the late 1990s the regime survived the virtual collapse of the economy and the starvation deaths of a half million or more North Koreans. The Kim dynasty might survive similar hardship in the future.

Unfortunately, the uniform experience of sanctions is that they hurt those with the least resources and influence. That appears to be the case in North Korea.

So far the powerful have prospered, despite penalties directed against luxury imports. The Washington Post recently reported on “Pyonghattan,” home to North Korea’s privileged elite. In contrast, argued Andrei Lankov of Kookmin University, “the average North Korean will also bear the brunt of the sanctions.”

The latest round of sanctions has increased hardship. Choi Ha-young, chairman of the Love North Korean Children Charity, complained: “Currently, due to the UN sanctions, people in the lowest class are really impacted.”

As I point out in National Interest: “Washington seems to have only one response to the North: increase sanctions. However, this policy is a dead-end. The U.S. and its allies must find a new strategy toward Pyongyang.”
