Policy Institutes

I sometimes hear it said that today’s lengthy trade agreements are about “managed trade,” and that a true free trade agreement would only have to be one sentence (or perhaps one paragraph). Well, maybe, but it depends on what that sentence or paragraph says. Here’s a suggestion someone made on a trade policy blog I run:

A true free trade agreement would be one sentence. Any good that can be sold legally in a country can be sold legally by a seller from any other country that is a party to this agreement. The agreements are long because they are negotiating winners and losers. That is crony capitalism.

The problem with this proposed sentence is that it would be under-inclusive: It would not achieve free trade, in several respects.

First, the primary barrier to free trade is still tariffs, which are taxes imposed on imports. Tariffs don’t make trade illegal, they just tax it, and a rule that goods which can legally be sold in a country can also be sold by foreign sellers would not eliminate tariffs. And, by the way, that’s a big reason why trade agreements are so long – they list all traded products and place limits on the tariff level for each product. Many of the pages are taken up by these detailed tariff reduction schedules.

Now, you could have a one sentence trade agreement that said something along the lines of, “All tariffs are hereby abolished.” That would be a pretty good sentence in a trade agreement. So far, we haven’t seen a sentence like that, unfortunately.

In addition, there are some complex protectionist measures out there, not all of which ban the sale of foreign goods.  For example, you could have a tax measure which applies higher taxes to foreign goods than domestic goods. This would mean that foreign goods could still legally be sold in the country, and thus the free trade sentence quoted above would not address such a measure.

Along the same lines, some trade agreements impose constraints on the use of anti-dumping measures.  There might be an ideal sentence here (“anti-dumping measures are hereby abolished”), but that is not politically achievable right now, so we end up with many pages of rules that put limits on anti-dumping measures. It’s not perfect, but it helps.

To sum up, I agree with critics who say there are lots of problems with today’s trade agreements, as various interest groups have lobbied successfully for specific regulations to be included in them.  We can definitely scale back from the 5,000 or so pages in the Trans-Pacific Partnership. In the end, though, any free trade agreement is likely to take quite a few pages to set out all the various constraints on protectionism.

A timely new blog post from the Tax Foundation points out that, “taxes on the rich are much higher than they’ve been in recent years. Between 2008 and 2012, the top 1 percent of households paid an average tax rate of 28.8 percent. However, in 2013, this figure spiked to 34.0 percent, as a result of tax increases in the “fiscal cliff” deal and the Affordable Care Act.”

“Readers should check out the new CBO report,” the authors suggest, “and reflect for themselves about whether or not high-income Americans are now paying their fair share of taxes.”

The trouble is that the tax rate alone can’t tell us how much the Top 1% paid in taxes: that also requires knowing how much income they reported to the IRS.  The reason this matters is that there is ample evidence that the “elasticity of taxable income” is very high among top taxpayers, which simply means they find ways to report less income if marginal tax rates go up.  This doesn’t require lawyers or loopholes: avoid capital gains tax by not selling assets and/or shifting into exempt assets (housing up to $500k); avoid the dividend tax by holding tax-exempt bonds; defer personal tax on business income by retaining earnings within a C-corporation; avoid punitive tax rates on second earners by becoming a one-earner household; retire early, etc.

Looking at the same thing from a different angle, the graph shows that average taxes actually paid by the Top 1% grew rapidly after the tax rate on capital gains was cut from 28 percent to 20 percent in 1997. Taxes paid by the Top 1% grew even more rapidly after 2003 when the tax rate on capital gains and dividends was further reduced to 15 percent and the top tax on salaries and unincorporated businesses was cut from 39.6 percent to 35 percent.  If you want the rich to pay more taxes, cut their tax rates.  

As it turns out, 2013 showed that we can’t just assume higher tax rates mean docile taxpayers will simply write bigger checks to the U.S. Treasury. On the contrary, when the average tax rate on the Top 1% increased by 18.4 percent in 2013 (from 28.8 percent to 34.0 percent), the amount of income reported by the Top 1% fell by 15.4 percent – from $1,856,000 in 2012 to $1,571,600. The net effect was almost a wash, in terms of taxes actually paid. According to the CBO, average federal taxes paid by the Top 1% were $530,128 in 2013 – virtually unchanged from $529,056 in 2012.
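The “almost a wash” arithmetic is easy to check from the figures quoted above. Here is a minimal back-of-the-envelope sketch in Python; the implied average rates come out slightly below CBO’s published 28.8 and 34.0 percent, presumably because of rounding or differences in the income measure, so treat it as an illustration rather than a replication of CBO’s methodology.

```python
# Back-of-the-envelope check using only the CBO figures quoted above.
income_2012, taxes_2012 = 1_856_000, 529_056
income_2013, taxes_2013 = 1_571_600, 530_128

rate_2012 = taxes_2012 / income_2012   # implied average rate, ~28.5%
rate_2013 = taxes_2013 / income_2013   # implied average rate, ~33.7%

print(f"Reported income change:  {income_2013 / income_2012 - 1:+.1%}")  # about -15.3%
print(f"Average tax rate change: {rate_2013 / rate_2012 - 1:+.1%}")      # about +18%
print(f"Taxes actually paid:     {taxes_2013 / taxes_2012 - 1:+.1%}")    # about +0.2%, i.e., a wash
```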

Presidential candidates Bernie Sanders and Hillary Clinton propose even more increases in top tax rates on income and capital gains (to 54.2 percent with Sanders, 43.6 percent with Clinton), ostensibly to finance their lavish government spending plans.  But even a relatively small dose of this same poison failed to raise significant revenue from the Top 1% in 2013, partly because of the drag on the overall economy from reduced incomes and incentives. 

In recent weeks, Libertarian presidential candidate Gary Johnson has been gaining media momentum as polls show him garnering about 10 percent of the vote in a race with Trump and Clinton. His candidacy has attracted attention to the libertarian ideas he espouses and the people who embrace the label.

The popular media stereotype of libertarians is disproportionately white and male. But is this accurate? What do the data actually say?

As it turns out, the libertarian label is embraced by a more racially and ethnically diverse group of individuals than some may realize, though the group tilts male.

Averaging across nine Reason-Rupe surveys I conducted at Reason Foundation/Reason Magazine with Princeton Survey Research Associates between 2012 and 2014 and a recent survey we conducted here at the Cato Institute with YouGov, here’s what we find: Among those who self-identify as “libertarian,” 71 percent are Caucasian, 14 percent are Latino, 5 percent are African-American, 8 percent identify as another race, and 4 percent chose not to identify. While not an exact reflection, these numbers are similar to the demographic makeup of all respondents averaged across the surveys: 67 percent white, 13 percent Latino, 12 percent African-American, 7 percent identifying as other, and 1 percent not identifying.

The Pew Research Center and YouGov have each found similar results. YouGov found 16 percent of whites, 17 percent of Hispanics, and 10 percent of African-Americans agreed they would describe themselves as libertarian. Pew went a step further to see how many Americans embraced the label and also thought it meant “someone whose political views emphasize individual freedom by limiting the role of government.” Indeed, Latinos (11 percent) were as likely as Caucasians (12 percent) to say the word “libertarian” describes them well and agree the term means limited government. African-Americans were less likely to both self-identify as libertarian and say the term means limited government (3 percent).

While some surveys may find a higher percentage of white libertarians, the benefit of this analysis is that it averages across multiple surveys, making us less reliant on the potential error in any one survey.

Millennial Libertarians

Diversity increases further among millennial libertarians, reflecting the racial composition of the entire generation. In a study of millennials I conducted at Reason, we found (see pg. 23) that millennial libertarians reflect the racial/ethnic diversity of the national sample. (YouGov fielded the survey of 2,000 18- to 29-year-olds.) Among millennials who self-identify as libertarian, 56 percent are white, 21 percent are Latino, 14 percent are African-American, 8 percent are Asian, and 1 percent identify as another race. This is similar to all millennials surveyed: 57 percent white, 15 percent African-American, 15 percent Latino, 7 percent Asian, and 4 percent as another race.

Gender

Although libertarians are more racially and ethnically diverse than is usually thought, they do lean more male than female. Averaging across the nine Reason-Rupe surveys and a Cato/YouGov survey conducted between 2012 and 2015, 63 percent of self-identified libertarians are male and 37 percent are female. We found a similar ratio among millennial libertarians: 68 percent male and 32 percent female.

Similarly, Pew found that men (15 percent) were about twice as likely as women (7 percent) to self-identify as libertarian and say that the term means limited government. YouGov found a similar ratio between men and women, with 21 percent of men and 10 percent of women saying they would describe themselves as libertarian.

In sum, Americans who choose to self-identify as libertarian in surveys tend to reflect the racial and ethnic demography of the United States more than is commonly realized, particularly among younger libertarians. However, self-identified libertarians are more likely to be male than female.


You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

There are several notable pieces this week that relate to the social cost of carbon (SCC)—the government’s powerful tool to aid in justifying all manner of rules and regulations. The SCC is supposed to represent the negative externalities (i.e., projected economic damages in a projected society resulting from projected climate change) that are associated with the emissions of each ton of carbon dioxide. It was developed as a way to translate carbon dioxide emission reductions into dollars savings and to make the “benefits” of proposed climate actions hit closer to home for more people.

But as you may guess from the number of “projected”s in the above parenthetical, the SCC is so highly malleable that you can pretty much game it to produce any value desired—the perfect characteristic for an all-purpose economic cost/benefit tool wielded by an opportunistic and activist government.

The situation is well-described by American Enterprise Institute’s Benjamin Zycher in his recent post for The Hill, “The magic of the EPA’s benefit/cost analysis.”

Welcome to the fascinating world of EPA benefit/cost analysis… the administration conducted an “analysis” of the “social cost of carbon” (SCC), in order to generate an estimate of the marginal externality cost of greenhouse gas emissions (GHG). The problems with that analysis are legion, but the central ones are the use of global (rather than national) benefits to drive the benefit/cost comparison; the failure to apply a 7 percent discount rate to the streams of benefits and costs, despite clear direction from the Office of Management and Budget; and — most important — the use of ozone and particulate reductions as “co-benefits” of climate policies. The administration’s estimate is about $36 per ton in 2015 ($31 per ton in 2010).

And that is how a regulation yielding future changes in temperatures and sea levels approaching zero can be claimed to yield net benefits “exceeding $100 billion, making this a highly beneficial rule.” In the EPA’s benefit/cost framework, the actual effects of the policies literally are irrelevant; just compute the assumed reduction in GHG emissions, multiply by $36, and voila!
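To make the mechanics Zycher describes concrete, here is a minimal sketch of that benefit calculation. The $36-per-ton figure comes from the passage above; the tonnage is a hypothetical chosen only to show how easily the claimed benefits clear $100 billion, and is not taken from any actual regulatory impact analysis.

```python
# Sketch of the claimed-benefits arithmetic described above:
# benefits = assumed emission reductions x the social cost of carbon,
# with no reference to any actual temperature or sea-level outcome.
SCC_PER_TON = 36.0  # administration's 2015 estimate, dollars per ton of CO2

def claimed_climate_benefits(tons_co2_avoided: float) -> float:
    return tons_co2_avoided * SCC_PER_TON

# Hypothetical rule assumed to avoid 3 billion tons of CO2 over its lifetime.
print(f"${claimed_climate_benefits(3_000_000_000):,.0f}")  # $108,000,000,000
```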

Zycher takes us through the absurdities of just how small the impact of Obama’s “climate” actions is on the actual climate, and how enormously magnified those actions become when they are run through the social cost of carbon. He concludes:

It is the delegation of legislative powers to the regulatory agencies that has allowed such game-playing in pursuit of an ideological agenda. The only means with which to restore political accountability to the regulatory process is a requirement that all regulations be approved by Congress.

You can check out his entire article here.

If you wonder if the government’s SCC of $36/ton of carbon dioxide (as mentioned by Zycher) is scientifically robust: it isn’t.

This embarrassment is shown by some new work from a team led by the Heritage Foundation’s Kevin Dayaratna that also included Ross McKitrick and David Kreutzer. These researchers re-ran the models used by the Obama administration to determine the social cost of carbon with the most recent estimate of the earth’s climate sensitivity (i.e., how much global warming we should expect to result from our carbon dioxide emissions). The estimate they used came from a paper published by Nic Lewis and Judy Curry last year, replacing the climate sensitivity used by the government, which was based on the U.N.’s Intergovernmental Panel on Climate Change (IPCC) estimates from 2011.

What did they find?

The resulting Social Cost of Carbon (SCC) estimates are much smaller than those from models based on simulated parameters. In the DICE model the average SCC falls by 30-50% depending on the discount rate, while in the FUND model the average SCC falls by over 80%. The span of estimates across discount rates also shrinks considerably, implying less sensitivity to this parameter choice… Moreover FUND, which takes more explicit account of potential regional benefits from CO2 fertilization and increased agricultural productivity, yields a substantial (about 40 percent or more) probability of a negative SCC through the first half of the 21st century.

Did you catch the end? The model that also includes positive externalities of carbon dioxide emissions (e.g., increased agricultural productivity) produces a decent probability that the social cost of carbon is negative for decades to come. In other words, instead of regulations trying to restrict carbon dioxide emissions, the government ought to be encouraging those emissions.

Given the growing evidence for a low climate sensitivity (see here for our latest on this topic), along with the firmly established evidence that carbon dioxide is a plant fertilizer that increases crop yields (among other benefits), you’d think that science organizations would be pushing the federal government to revise its SCC estimates.

Well, some are, like us. But others, with seemingly more clout (and much more reliance on government), are pushing for the opposite.

Take the National Academy of Sciences (NAS), for example. This week it released the results of its examination of ways the government could improve its SCC determination. Judy Curry has the lowdown. Notably absent from the NAS’s myriad suggestions was using an updated estimate of the climate sensitivity. Curry writes: “I am gobsmacked that they think the current … values of [equilibrium climate sensitivity] are fine.”

As for us, we are hardly surprised and had this to say in her comment section:

Judy–why are you surprised about not re-examining the ECS distribution? NAS was created to advise the Federal government by Abraham Lincoln and it has not deviated from that mission. The members of this particular committee are pretty much all on the global warming dole, big time. Why would they behave counter to their interests, which would be to admit that the [climate sensitivity] distributions are wrong and that the most likely values are near the low end of the present (AR4) [IPCC] distribution? That would be the end of the dole and an admission that billions were wasted on a minor issue whose solution is best left to private investment rather than public taxation.

Sadly, we can’t foresee the government letting loose its grip on its all-too-powerful social cost of carbon tool so long as we have a ruling Administration that takes an activist’s role in climate change—despite overwhelming evidence that such a role is unjustified. But we’ll keep trying our best to wrest this abusive device from them.

There’s no such thing as a free lunch. Or as the Fifth Amendment puts it, “nor shall private property be taken for public use, without just compensation.” Despite the clarity with which the Takings Clause proclaims that government must respect property rights, state and local governments have long been contriving ways to obtain private property without paying the constitutionally required just compensation.

In 2012, San Juan County, Washington—the islands in the Salish Sea between Seattle and Victoria—enacted a rule that conditions shoreline owners’ proposed land uses on dedicating a portion of their property as on-site conservation areas. This isn’t a new tactic. In Nollan v. California Coastal Commission (1987), for example, the Supreme Court rejected the government’s conditioning of a building permit on the landowners’ granting a public easement across their property to access a beach. The Court acknowledged that conditioning a benefit on the property owners’ giving up their Fifth Amendment right to just compensation is “an out-and-out plan of extortion.” The Court elaborated seven years later in Dolan v. City of Tigard (1994), ruling that courts must apply a high level of scrutiny to conditions attached to land-use permits to prevent government “gimmickry.”

In other words, if the condition by itself would be a taking, then the state cannot impose it unless there is a “nexus” and “rough proportionality” between “the property that the government demands and the social costs of the [landowner’s] proposal.” Koontz v. St. John’s River Water Management District (2013).

The San Juan County ordinance fails these tests because shoreline property owners are required to set aside “water quality buffers” as a condition of development not based on any harm the proposed land use itself might cause, but based on the county’s general efforts to reduce pollutants and other surface runoff. A local owners’ association called the Common Sense Alliance challenged the ordinance, but Washington state courts held that the nexus and proportionality tests need not even be applied here because it was the legislature that imposed the condition, not an ad hoc permitting process. The alliance has now asked the U.S. Supreme Court to step in, and Cato, joined by Reason Foundation, has filed an amicus brief supporting that petition.

The state courts’ reasoning is deeply flawed: the common belief that individual permitting decisions are more prone to abuse and corruption than general legislative ones, because the political process checks the latter, is a simplistic notion that ignores the lessons of public-choice economics. As the Texas Supreme Court put it, legislatures can “‘gang up’ on particular groups to force exactions that a majority of constituents would not only tolerate but applaud, so long as burdens they would otherwise bear were shifted to others.” Town of Flower Mound v. Stafford Estates Limited Partnership (2004).

This dynamic is particularly evident in today’s climate, when the majority in many states and sub-state political units is anti-development. Legislative conditions also have a much broader reach than do ad hoc permitting schemes. Unfortunately, Washington State is not alone in holding that the “nexus and proportionality” tests need not be applied to legislative decisions. The U.S. Supreme Court should step in and clarify that its (now well-established) Nollan-Dolan precedents extend to takings via legislative actions, not just executive ones.

The case is Common Sense Alliance v. San Juan County.

Perhaps the most pervasive myth about our nation’s education system is the notion that “public schools have to take all children.” Last year, when criticizing charter schools that, she claimed, “don’t take the hardest-to-teach kids,” Hillary Clinton quipped, “And so the public schools are often in a no-win situation, because they do, thankfully, take everybody.”

No, in fact, they do not.

At best, so-called “public” schools have to take all children in a particular geographic area, although they can and do expel children based on their behavior. They are more appropriately termed “district schools” because they serve residents of a particular district, not the public at large. Privately owned shopping malls are more “public” than district schools.

This wouldn’t be a serious problem if every district school offered a quality education, but they do not. Rather, the quality of education that the district schools provide tends to be highly correlated with the income levels of the residents of those districts. As Lindsey Burke of the Heritage Foundation and I noted last year, our housing-based system of allocating education leads to severe inequities:  

There is a strong correlation between these housing prices and school performance. In nearly all D.C. neighborhoods where the median three-bedroom home costs $460,000 or less, the percentage of students at the zoned public school scoring proficient or advanced in reading was less than 45 percent. Children from families that could only afford homes under $300,000 are almost entirely assigned to the worst-performing schools in the District, in which math and reading proficiency rates are in the teens.

Not surprisingly, some parents feel desperate when their kids are trapped in subpar schools because they can’t afford to live in ritzy neighborhoods or pay private school tuition. And some of those desperate parents will provide fake addresses to get their children a better education.

In Florida, the Broward County School Board announced this week that it is hiring private investigators to check addresses the district suspects are fake. As the Sun-Sentinel reports, the private eyes will “monitor a home and then give school officials photographs, videos and a detailed report.”

Fraudulent registration has long been an issue. Parents, believing their child will get a better education at a school outside their assigned boundary, list a relative or friend’s address, provide a fake address or even rent an empty apartment in the area of a preferred school.

Doing so can in Broward be prosecuted as a third-degree felony, since parents declare their addresses under penalty of perjury.

It’s unlikely that the district will have the funds to hire private eyes to track every student. One wonders, then, what criteria the district schools will use to determine which students should be surveilled… will they start with students who, shall we say, don’t look like most of the other students in that high-income district? 

Broward County is far from unique. Parents nationwide are regularly fined and even imprisoned for stealing a better education for their children. One New Jersey town even offered $100 bounties for information leading to the expulsion of students whose parents lied about their addresses. 

Writing at RedefinED, Nia Nuñez-Brady explained why her parents provided a fake address to get her into a better–and safer–district school: 

One day, while I was using the ladies room, another girl, who was double my size or at least it felt that way at the time, threatened to bash my head on the wall if I didn’t stop hanging out with a guy she liked. Growing up, my dad always told me, “Your face is too pretty to get into a fight.” So, I said to her: “Please don’t hit me. I’ll stay out of your way.”

She laughed. I went back to class, and tried to focus.

The next day, while walking on the hallway at the school, this same girl grabbed another student close to me. She pushed her against the wall and instigated a fight. The difference between myself and this new student: This girl fought back. The bully wasted no time. She grabbed her Snapple bottle, broke it on the wall, and used a piece of glass to slash the student’s face.

I was petrified. That could have been me.

Nia begged her parents to change schools but they couldn’t afford it. They were recent immigrants with little money. But they couldn’t bear to keep their daughter in a school where they feared for her safety. So they lied.

[M]y parents did something thousands of other public-school parents feel forced to do, because they feel they have no other options. They lied about where we lived so I could go to a different school where I would feel safe. […]

Of course, it is understandable that residents of districts who have paid taxes into the system would be upset that they are subsidizing the education of children whose parents haven’t paid into the system. And so it’s also understandable that the district schools would seek to exclude those students, just as private schools shouldn’t be expected to educate a child whose parents haven’t paid tuition. As Nia explains, the problem is the system itself:

I understand that perjury is against the law, and that the law should be respected. But from my own experience, I know the parents who lie about their address are often the ones with limited resources, the ones who cannot afford to move to a more affluent neighborhood, the ones who can least afford to pay a fine or fight a felony charge.

I can also understand the families who have been kicked out of a school close to where they live, because the school is overcrowded with students from other neighborhoods. That, too, is unfair.

But that’s the problem. The system is unfair.

Indeed. Getting a decent education should not depend upon the ability of one’s parents to afford an expensive home. It is long past time that we break the link between home prices and school quality. Doing so entails recognizing that there’s no such thing as a “public” school.

Today marks the 20th anniversary of the Supreme Court decision in Whren v. United States. The case clarified the constitutionality of the practice of “pretextual” traffic stops. The Court ruled that so long as an officer can articulate that a driver violated some traffic law, the officer may stop a motorist in order to investigate potential and wholly unrelated criminal activity. The case has effectively become a blueprint for police officers to racially profile drivers without repercussion.

Last fall, I gave a talk at Case Western Reserve Law School in a symposium dedicated to Whren and its legacy. The school’s law review recently published the article that came from that talk. Instead of putting forth an argument to overturn Whren, I argue that police departments ought to curtail or end the use of pretextual stops as a proactive policing measure. The Supreme Court’s ruling that the tactic is constitutional does not make it an ethical or wise tactic to employ.

Simply put, pretextual stops undermine police legitimacy by turning public servants into antagonistic interrogators. In practice, pretextual motor vehicle stops—much like the pedestrian Terry stops used in New York’s infamous Stop-and-Frisk program—ensnare far more innocent people than criminals. And most of the people who are stopped are black or Latino, further eroding police support in those communities. Police departments must establish their legitimacy—through trust and positive interactions—to improve their effectiveness and public safety. Overly aggressive and implicitly discriminatory policing practices undermine that legitimacy.

You can read the whole article here. The rest of this issue of the Case Western Reserve Law Review can be found here.

When I was younger, my left-wing friends said conservatives unfairly attacked them for being unpatriotic and anti-American simply because they disagreed on how to deal with the Soviet Union.

Now the shoe is on the other foot.

Last decade, a Treasury Department official accused me of being disloyal to America because I defended the fiscal sovereignty of low-tax jurisdictions.

And just today, in a story in the Washington Post about the Center for Freedom and Prosperity (I’m Chairman of the Center’s Board of Directors), former Senator Carl Levin has accused me and others of “trading with the enemy” because of our work to protect and promote tax competition.

Here’s the relevant passage.

Former senator Carl Levin (D-Mich.)…said in a recent interview that the center’s activities run counter to America’s values and undermine the nation’s ability to raise revenue. “It’s like trading with the enemy,” said Levin, whose staff on a powerful panel investigating tax havens regularly faced public challenges from the center. “I consider tax havens the enemy. They’re the enemy of American taxpayers and the things we try to do with our revenues — infrastructure, roads, bridges, education, defense. They help to starve us of resources that we need for all the things we do. And this center is out there helping them to accomplish that.”

Before even getting into the issue of tax competition and tax havens and whether it’s disloyal to want limits on the power of governments, I can’t resist addressing the “starve us of resources” comment by Levin.

He was in office from 1979 to 2015. During that time, federal tax receipts soared from $463 billion to $3.2 trillion. Even if you only count the time the Center for Freedom and Prosperity has existed (it was created in late 2000), tax revenues have jumped from $2 trillion to $3.2 trillion.

At the risk of understatement, Senator Levin has never been on a fiscal diet. And he wasn’t bashful about spending all that revenue. He received an “F” rating from the National Taxpayers Union every single year starting in 1993.

Let’s now address the main implication of the Washington Post story, which is that it’s somehow wrong or improper for there to be an organization that defends tax competition and fiscal sovereignty, particularly if some of its funding comes from people in low-tax jurisdictions.

The Post offer[s] an inside look at how a little-known nonprofit, listing its address as a post office box in Alexandria, became a persistent opponent of U.S. and global efforts to regulate the offshore world. …the center met again and again with government officials and members of the offshore industry around the world… Quinlan and Mitchell launched the center in October 2000. …The center had two stated goals. Overseas, the center set out to persuade countries on the blacklist not to cooperate with the OECD, which it derided as a “global tax cartel.” In Washington, the center lobbied the Bush administration to withdraw its support for the OECD and also worked to block anti-tax haven legislation on Capitol Hill. To spread the word, the center testified before Congress, published reports and opinion pieces in leading financial publications, and drafted letters to lawmakers and administration officials. Representatives of the center crisscrossed the globe and sponsored discussions in 2000 and 2001, traveling to London, Paris, the Cayman Islands, the Bahamas, Panama, Barbados and the British Virgin Islands.

To Senator Levin and other folks on the left, I guess this is the fiscal equivalent of “trading with the enemy.”

In reality, this is a fight over whether there should be any limits on the fiscal power of governments. On one side are high-tax governments and international bureaucracies like the OECD, along with their ideological allies. They want to impose a one-size-fits-all model based on the extra-territorial double-taxation of income that is saved and invested, even if it means blacklisting and threatening low-tax jurisdictions (the so-called tax havens).

On the other side are proponents of good tax policy (including many Nobel Prize-winning economists), who believe that income should not be taxed more than one time and that the power to tax should be constrained by national borders.

And, yes, that means we sometimes side with Switzerland or Panama rather than the Treasury Department. Our patriotism is to the ideals of the Founding Fathers, not to the bad tax policy of the U.S. government.

In any event, I’m proud to say that the Center’s efforts have been semi-successful.

In May 2001, the center claimed a key victory. In a dramatic departure from the Clinton administration, Paul O’Neill, the incoming Treasury Secretary appointed by Bush, announced that the United States would back away from the reforms pushed by the OECD. …fewer than half of the nations on the OECD blacklist pledged to become more transparent in their tax systems, a victory for anti-tax forces such as the center.

Even the other side says the Center is effective.

…said Elise Bean, former staff director and chief counsel of Levin’s Homeland Security Permanent Subcommittee on Investigations, which started investigating tax havens in 2001. “They travel all around the world and they have had a tremendous impact.” …“They were very effective at painting the OECD’s work as end-times are here for tax competition, and we’re going to have European tax rates imposed upon the whole world if the OECD’s work continued,” said Will Davis, the former head of OECD public affairs in Washington.

What’s most impressive is that all this was accomplished with very little funding.

Tax returns for the center and a foundation set up in its name reported receiving at least $1.4 million in revenue from 2003 to 2010.

In other words, the Center and its affiliated Foundation managed to thwart some of the world’s biggest and most powerful governments with a very modest budget averaging about $175,000 per year. And I don’t even get compensation from the Center, even though I’m the one who almost got thrown in a Mexican jail for opposing the OECD!

So while Senator Levin had decades of experience spending other people’s money in a promiscuous fashion, I work for an organization, the Cato Institute, that is ranked as the most cost-effective major think tank, and I’m on the Board of a small non-profit that has a track record of achieving a lot with very little money.

Yet another example of why we should be thankful that tax competition makes it more difficult for politicians to extract more revenue from the economy’s productive sector.

P.S. I mentioned to the Post reporters that the world’s biggest tax haven is the United States, but that important bit of information was omitted from the article. Which is a shame since it would have given me a chance to laud Senator Rand Paul for blocking a very dangerous agreement that would undermine America’s attractive tax laws for overseas investors.

P.P.S. If politicians really want to hurt tax havens, they should adopt a flat tax. That would dramatically boost tax compliance.

P.P.P.S. All things considered, I think the reporters who put together the story were reasonably fair, though there was a bit of editorializing such as referring to one low-tax jurisdiction as a “notorious tax haven.” When they write about France, do they ever refer to it as a “notorious tax hell”?

Also, when writing about trips the Center arranged for congressional staff to low-tax jurisdictions, the article stated, “The staffers reported receiving from $900 to $2,360 for the trips”, which makes it sound as if the staffers got paid. That’s wrong. The sentence should have read, “The staffers reported that the Center’s travel and lodging expenses ranged from $900 to $2,360 for the trips.”

Today Cato senior fellow Nat Hentoff turns 91.  Happy Birthday Nat!

Nat has opposed communism since he was 15 years old, but because he had a column with the Village Voice, people would sometimes assume he had communist sympathies.  In this video, Nat explains that that mistaken assumption is how he was able to get into a meeting with Fidel Castro’s deputy, Che Guevara, and challenge him about the dictatorial nature of the Castro regime.  He finds it puzzling that so many people fawn over Castro and Che.

 

According to a conventional narrative, tropical islands are eroding away due to rising seas and increasingly devastating storms. Not really, according to the recent work of Ford and Kench (2016).

Writing as background for their study, the two researchers state that low-lying reef islands are “considered highly vulnerable to the impacts of climate change,” where an “increased frequency and intensification of cyclones and eustatic sea-level rise [via global warming] are expected to accelerate shoreline erosion and destabilize reef islands.” However, they note that much remains to be learned about the drivers of shoreline dynamics on both short- and long-term time scales in order to properly project future changes in low-lying island development. And seeking to provide some of that knowledge, the pair of New Zealand researchers set out to examine historical changes in 87 islands found within the Jaluit Atoll (~6°N, 169.6°E), Republic of the Marshall Islands, over the period 1945-2010. During this time, the islands were subjected to ongoing sea level rise and the passage of a notable typhoon (Ophelia, in 1958), the latter of which caused severe damage with its >100 knot winds and abnormal wave heights.

So what did their examination reveal?

Analyses of aerial photographs and high-resolution satellite imagery indicated that the passage of Typhoon Ophelia caused a decrease in total island land area of approximately five percent, yet Ford and Kench write that “despite [this] significant typhoon-driven erosion and a relaxation period coincident with local sea-level rise, [the] islands have persisted and grown.” Between 1976 and 2006, for example, 73 out of the 87 islands increased in size, and by 2010, the total landmass of the islands had exceeded the pre-typhoon area by nearly 4 percent.
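Those percentages imply a fairly brisk net accretion measured from the post-typhoon low. A quick calculation, normalizing the pre-typhoon land area to 100 since the paper’s absolute figures are not reproduced here:

```python
# Implied land-area changes at Jaluit Atoll, using only the percentages quoted above.
pre_typhoon = 100.0                        # total island area before Typhoon Ophelia (normalized)
post_typhoon = pre_typhoon * (1 - 0.05)    # roughly 5% lost to the 1958 typhoon
by_2010 = pre_typhoon * (1 + 0.04)         # nearly 4% above the pre-typhoon total by 2010

recovery = by_2010 / post_typhoon - 1
print(f"Net growth from the post-typhoon low: {recovery:.1%}")  # roughly +9.5%
```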

Such observations, in the words of Ford and Kench, suggest an “alternative trajectory” for future reef island development, and that trajectory is one of “continued island expansion rather than one of island withering.” And such expansion is not just limited to Jaluit Atoll, for according to Ford and Kench, “the observations of reef island growth on Jaluit coincident with sea level rise are broadly consistent with observations of reef islands made elsewhere in the Marshall Islands and Pacific (McLean and Kench, 2015).” Given as much, it would thus appear that low-lying islands are not as vulnerable to climate change as previously thought.

 

Reference

Ford, M.R. and Kench, P.S. 2016. Spatiotemporal variability of typhoon impacts and relaxation intervals on Jaluit Atoll, Marshall Islands. Geology 44: 159-162.

McLean, R.F. and Kench, P.S. 2015. Destruction or persistence of coral atoll islands in the face of 20th and 21st century sea level rise? WIRES Climate Change 6: 445-463.

Could the U.S.-Japan alliance flounder as a result of alcohol? Apparently. At least, that’s the implication of the U.S. Navy’s ban on drinking by personnel stationed on the Japanese island of Okinawa.

It would be far better to phase out America’s military presence on Okinawa, turning U.S. bases back to the Japanese government. More than seven decades after the end of World War II, Tokyo should take over responsibility for Japan’s defense.

Washington currently concentrates its bases and personnel on the island of Okinawa, which accounts for just 0.6 percent of Japan’s land mass. Local anger exploded in 1995 after three American service members raped a 12-year-old girl. The Japanese government sought to placate islanders with financial transfers and plans to move Futenma airbase and relocate Marines to Guam. These schemes failed to satisfy, however.

Base opponents, bolstered by the 2014 gubernatorial victory of Takeshi Onaga, continued to resist. Fueling popular anger has been a seeming spate of high-profile offenses committed by U.S. military personnel (who, in fact, have a lower crime rate than locals). Last month a sailor pled guilty to rape. Also last month a contractor and former Marine was detained in a murder case.

Then an apparently intoxicated sailor crashed, injuring two Okinawans. The navy confined all personnel to base except for essential travel and banned drinking on or off U.S. facilities.

Prime Minister Shinzo Abe largely ignored the Okinawa question as he sought to bolster Tokyo’s military capabilities. But he has made little progress against strong public opposition.

Japan’s “peace constitution” forbidding a military remains unchanged, so Abe simply interprets the law as he wishes it had been written. Military outlays have risen only modestly since Abe took power, up just two percent in 2015. Japan then devoted about $41 billion to defense, compared to roughly $180 billion by China, Tokyo’s main potential nemesis.

Although last year his government adjusted the military’s defense guidelines, Tokyo’s international activities will remain non-combat and do little to reduce America’s military duties.

Moreover, the revised standards merely allow Japan to better defend Japan, not assist the U.S. Now a Japanese ship on patrol with an American vessel can assist if the latter is attacked—so long as the Japanese vessel too is threatened. And Japanese analysts warn against expecting Tokyo to allow such situations to occur.

Worse, the new guidelines appear to envision an even stronger U.S. guarantee for Japan and deployment of additional weapons. Under the “bilateral” treaty Washington’s obligations apparently only increase.

The U.S. has an obvious interest in Japan’s continued independence, but Japan’s commitment to its own security should be even greater. Tokyo should do more to defend itself.

In fact, no one expects a Chinese armada to show up in Tokyo Bay. If conflict erupts, it likely will be over the disputed Senkaku/Diaoyu Islands. Of course, Beijing is not justified in using force there or elsewhere, but nothing at stake there is worth war, at least for America.

A serious Japanese military build-up is opposed by some of Tokyo’s neighbors, but no one seriously suggests that Japan is about to embark upon a new round of imperial conquests. More than seven decades after World War II Japan should finally act like a normal country—defending itself, guarding its region, and ending its dependence on America.

The U.S. should turn its security guarantee to Japan into a framework for future cooperation. That should include potential assistance if a genuine hegemonic threat arises in Asia. But Tokyo should take the lead in confronting day-to-day security challenges.

As I wrote in Forbes: “Japan should decide its own defense and foreign policies. As American forces returned home Okinawa’s bases would empty. What came next would be up to the Japanese. And American military personnel could continue to enjoy a drink … back home in their own country.”

 

Last week I criticized President Obama for his failure to sell the Trans-Pacific Partnership to the public and to Congress.  Ratification of trade agreements has always relied on consistent and unequivocal advocacy from the White House.

Well, the president heard me loud and clear and decided to take my advice.  Here’s his pitch to the American people via Jimmy Fallon (TPP lyrics begin around 4:50, but the whole thing is pretty darn funny).

 

The U.S. International Trade Commission (ITC) is required by the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 to prepare estimates of the economic effects of trade agreements.  Specifically:

“Not later than 105 calendar days after the President enters into a trade agreement under section 103(B), the Commission shall submit to the President and Congress a report assessing the likely impact of the agreement on the United States economy as a whole and on specific industry sectors, including the impact the agreement will have on the gross domestic product, exports and imports, aggregate employment and employment opportunities, the production, employment, and competitive position of industries likely to be significantly affected by the agreement, and the interests of United States consumers.”

This statutory language guided the ITC’s analysis of the twelve-nation Trans-Pacific Partnership (TPP).  The ITC study was released on May 18, 2016. 

It had been several years since the United States concluded a free trade agreement.  The previous one with South Korea (Korea-U.S. Free Trade Agreement, or KORUS) dates from 2007.  I served as chairman of the ITC at the time and am quite familiar with the KORUS study.  The econometric modeling used a “comparative static” analysis.  A comparative static approach can be likened to taking two snapshots of the economy.  The first photo was of the known baseline economy as it existed in 2007. The second photo also used the 2007 baseline, but this time it was “shocked” by incorporating all provisions of KORUS as if they had been fully implemented.  This allowed a conceptually sound – albeit counterfactual – assessment of the likely economic effects of KORUS by analyzing how those reforms would have influenced the 2007 economy.  (These issues are explained in this Free Trade Bulletin.) Static modeling has been used in all the ITC’s analyses of trade agreements prior to TPP.

One of the great strengths of the comparative static approach is that it makes no attempt to project the economy into the future.  There is no need to speculate on whether a recession will curb trade flows, or whether technological change will make some industries obsolete while spurring new ones into existence.  Precisely predicting the future requires a degree of clairvoyance not possessed by economists or anyone else.  A comparative static analysis deals with that reality by instead looking backward.  It imposes new policy reforms on an old – but well-known – economy.  And it allows economists to avoid trying to make forward-looking projections of economic activity that inevitably turn out not to be correct.
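As a toy illustration of that logic, the sketch below shocks a single known baseline year and compares the two snapshots. It is not the GTAP machinery or anything from the KORUS study; the demand function and the 8 percent tariff are invented purely to show the comparative-static structure.

```python
# Toy comparative-static exercise: shock the known baseline year, compare snapshots.
# The functional form and numbers are invented for illustration only.

def import_demand(consumer_price: float, elasticity: float = -1.5, scale: float = 100.0) -> float:
    """Stylized constant-elasticity import demand (in billions of dollars)."""
    return scale * consumer_price ** elasticity

tariff = 0.08                                # assumed tariff in the baseline year
baseline = import_demand(1 + tariff)         # snapshot 1: the economy as it actually was
counterfactual = import_demand(1.0)          # snapshot 2: same year, tariff removed

print(f"Baseline imports:       {baseline:.1f}")
print(f"Counterfactual imports: {counterfactual:.1f}")
print(f"Estimated FTA effect:   {counterfactual - baseline:+.1f}")
```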

However, comparative static modeling is not the only tool in the econometrician’s toolbox.  For its analysis of the economic effects of TPP, the ITC has chosen to use a dynamic computable general equilibrium (CGE) model.  The Global Trade Analysis Project (GTAP) model “is an appropriate tool for analyzing the effects of trade agreements because it consists of a database with international trade flows and other macroeconomic information, social accounting matrixes that show how different segments of the economy are interlinked, and national income accounts data.”  Using a dynamic version of the GTAP model has allowed the ITC to estimate changes in various economic measures (real GDP, employment, exports, imports, etc.) up to 30 years in the future.  Most of the analysis focuses on the 15-year period beginning in 2017 and ending in 2032.

In order to evaluate how TPP might influence the economy in the future, it first was necessary to create a baseline projection of what the economy would be like in the years ahead without TPP.  The ITC has done this by incorporating projections made by the International Monetary Fund (IMF) and the Organisation for Economic Co-operation and Development (OECD) regarding growth rates in many countries for labor, population, and GDP.  Once the 30-year baseline was established, the model was shocked by adding TPP’s annual policy changes.  (TPP gradually phases in many reductions in trade restrictions year by year.)  The economic effects of TPP then were measured as differences between the original baseline and the baseline following the shocks from TPP.  The dynamic GTAP model provides a mathematically sound means to estimate future economic variances caused by policy changes. 
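A similarly stylized sketch of the dynamic approach: project a baseline path forward from assumed growth rates, add the phased-in policy shock year by year, and report the differences between the two paths. The growth rate, the size of the shock, and its phase-in schedule are all invented for illustration; the point is only that every estimate is a difference between two projected futures.

```python
# Toy dynamic exercise: estimated effects are (shocked path - baseline path) in each
# future year, so they inherit whatever error sits in the projected baseline.
start_index = 100.0       # index the economy at 100 in 2016
baseline_growth = 0.02    # assumed 2% annual growth without the agreement
annual_boost = 0.0005     # assumed extra growth per year as the agreement phases in
max_boost = 0.005         # assumed cap once the agreement is fully phased in

baseline, shocked = start_index, start_index
for year in range(2017, 2033):                       # the 2017-2032 window in the report
    boost = min(annual_boost * (year - 2016), max_boost)
    baseline *= 1 + baseline_growth
    shocked *= 1 + baseline_growth + boost

print(f"2032 index without the agreement: {baseline:.1f}")
print(f"2032 index with the agreement:    {shocked:.1f}")
print(f"Estimated effect in 2032:         {shocked - baseline:+.1f}")
```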

The real questions regarding forward-looking estimates have to do with the baseline itself.  The IMF and OECD are quite capable when it comes to analyzing historic trends.  Generally it’s not unreasonable to project well-established trends a short distance into the future.  If global GDP has grown at an average rate of 3 percent over the past ten years, for instance, it may be quite sensible to estimate that growth in the coming year also will be around 3 percent.  The problems come as we look further into the future.  Life’s inherent uncertainties make it relatively likely that a future projection of U.S. exports or imports of cheese, for instance, will turn out not to be precisely accurate.  How much confidence should we have in projections five years into the future?  Fifteen years?  Thirty?
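The compounding problem is easy to quantify. The sketch below takes the 3 percent growth example from the paragraph above and asks what happens if the true rate turns out to be half a percentage point higher or lower; the error band is my own illustrative choice, not anything from the IMF or OECD.

```python
# How a half-point error in an assumed growth rate compounds over long horizons.
for horizon in (1, 5, 15, 30):
    low = 1.025 ** horizon    # growth turns out to be 2.5% rather than 3%
    mid = 1.030 ** horizon    # the central 3% assumption
    high = 1.035 ** horizon   # growth turns out to be 3.5% rather than 3%
    half_spread = (high - low) / (2 * mid)
    print(f"{horizon:>2} years out: projected level off by roughly ±{half_spread:.1%}")
```

At one year the miss is trivial; at thirty years the projected level of the economy can be off by double digits, which is the sense in which the baseline, rather than the model itself, carries the real uncertainty.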

Making estimates that turn out to be different than actual future outcomes is not a problem to economists and statisticians schooled in economic modeling.  They understand well that the ITC’s estimates were made using the best available information and up-to-date econometric techniques, and that the real world economy simply diverged from what had been projected in the baseline.  Unfortunately, not everyone interested in trade policy has such depth of knowledge and understanding.  My concern is not with the integrity of the modeling, but rather the challenges that trade supporters may face in defending the results of the analysis against criticism.

Even with comparative static modeling, opponents of expanded international trade have been inclined to misinterpret the analysis.  There are claims, for example, that the ITC did a poor job with its KORUS study because the U.S. trade deficit with South Korea has gone up since the agreement went into effect.  The KORUS study didn’t say anything about what might happen to the trade deficit in the future.  However, it did indicate a likely decrease in the deficit in the hypothetical situation in which all provisions of KORUS were somehow implemented during the static 2007 baseline period.

Now that the ITC’s TPP study has used a dynamic CGE approach that actually does make estimates about future trade flows, critics of trade agreements no doubt will be happy to point out how the ITC “got things all wrong.”  (In fact, the ITC’s estimates are seldom likely to be “right.”)  Trade skeptics are unlikely to bother explaining that the real source of the estimated “errors” is that the underlying economy evolved differently than the IMF and OECD had projected.  Most anti-trade NGOs have little interest in raising the quality of the trade policy debate.  Rather, they may be inclined to argue that all economic analysis showing positive effects for the United States from trade agreements is suspect and can’t be trusted.

Supporters of trade liberalization will do their best to counter such misinformation by explaining the details of dynamic CGE modeling.  But the criticism of the ITC’s estimates will take only a few words; setting the record straight will require several sentences or paragraphs.  Protectionist rhetoric may prove to have a greater influence on public opinion than do the substantive explanations. 

It will be interesting to see whether analyzing trade agreements via dynamic CGE modeling leads to a more informed public discussion than has been the case for the comparative static technique.  With a comparative static approach, the ITC was never wrong, but often misunderstood.  With dynamic modeling, the ITC will almost never be right, while still being misunderstood. 

 

Daniel R. Pearson is a senior fellow in the Cato Institute’s Herbert A. Stiefel Center for Trade Policy Studies, and is a former chairman of the U.S. International Trade Commission.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

There is a new paper generating some press attention (e.g., Chris Mooney at the Washington Post) that strongly suggests global warming is leading to specific changes in the atmospheric circulation over the Northern Hemisphere that are causing an enhancement of surface melting across Greenland—and, of course, that this mechanism will make things even worse than expected into the future.

We are here to strongly suggest this is not the case.

The new paper is by a team of authors led by Marco Tedesco from Columbia University’s Lamont-Doherty Earth Observatory. The main gist of the paper is that Arctic sea ice loss as a result of human-caused global warming is causing the jet stream to slow down and become wigglier—with deeper north-south excursions that hang around longer.  This type of behavior is referred to as atmospheric “blocking.”

If this sounds familiar, it’s the same theoretical argument that is made to try to link wintertime “polar vortex” events (i.e., cold outbreaks) and blizzards to global warming. That argument has been pretty well debunked, time and time again.

Well, at least it has as it concerns wintertime climate.

The twist of the new Tedesco and colleagues’ paper is that they’ve applied it to the summertime climate over Greenland. They argue that global warming is leading to an increase in blocking events over Greenland in the summer, which is causing warm air to be “locked” in place, leading to enhanced surface melting there. Chris Mooney, who likes to promote climate alarm buzzwords, refers to this behavior as “weird.” And he describes the worrisome implications:

The key issue, then, is whether 2015 is a harbinger of a future in which the jet stream keeps sending Greenland atmospheric systems that drive major melt — and in turn, whether the Arctic amplification of climate change is driving this. If so, that could be a factor, not currently included in many climate change simulations, that would worsen the ice sheet’s melt, drive additional sea level rise and perhaps upend ocean currents due to large influxes of fresh water.

As proof that things were weird over Greenland in recent summers, Tedesco’s team offers up this figure in their paper:

[Figure from Tedesco et al.: July North Atlantic Oscillation (NAO) index (red bars) and Greenland Blocking Index (GBI, black line), 1950-2015.]

This chart (part of a multipanel figure) shows the time history of the North Atlantic Oscillation (NAO—a pattern of atmospheric variation over the North Atlantic) as red bars and something called the Greenland Blocking Index (GBI) as the black line, for the month of July during the period 1950-2015. The chart is meant to show that in recent years the NAO has been very low, with 2015 being “a new record low of -1.23 (since 1899),” and the GBI has been very high, with the authors noting that “[c]oncurrently, the GBI also set a new record for the month of July [2015].” Clearly, the evidence shows that atmospheric blocking is increasing over Greenland, which fits nicely into the global warming/sea ice loss/wiggly jet stream theory.

So what’s our beef?

A couple of months ago, some of the same authors of the Tedesco paper (notably Ed Hanna) published a paper showing the history of the monthly GBI going back to 1851 (as opposed to 1950 as depicted in the Tedesco paper).

Here’s their GBI plotted for the month of July from 1851 to 2015:

This picture tells a completely different story. Instead of a long-term trend that could be related to anthropogenic global warming, what we see is large annual and multidecadal variability, with the end of the record not looking much different than, say, a period around 1880, and with the highest GBI occurring in 1918 (with 1919 coming in 2nd place). While this doesn’t conclusively demonstrate that the current rise in GBI is not related to jet stream changes induced by sea ice loss, it most certainly does demonstrate that global warming-induced sea ice loss is not a requirement for blocking events to occur over Greenland and that recent events are not at all “weird.”  An equally plausible, if not much more plausible, expectation of future behavior is that this GBI highstand is part of multidecadal natural variability and will soon relax back towards normal values.  But such an explanation isn’t Post-worthy.

Another big problem with all the new hype is that history shows the current goings-on in Greenland to be irrelevant, because humans just can’t make it warm enough up there to melt all that much ice. For example, in 2013, Dorthe Dahl-Jensen and her colleagues published a paper in Nature detailing the history of the ice in Northwest Greenland during the beginning of the last interglacial, which included a 6,000-year period in which her ice core data showed summers averaging a whopping 6°C warmer than the 20th century average. Greenland only lost around 30% of its ice with a heat load of (6 X 6,000) 36,000 degree-summers. The best humans could ever hope to do with greenhouse gases is—very liberally—about 5 degrees for 500 summers, or (5 X 500) 2,500 degree-summers. In other words, the best we can do is 500/6000 times 30%, or about 2.5% of the ice, resulting in a grand total of seven inches of sea level rise over 500 years. That’s pretty much the death of the Greenland disaster story, despite every lame press release and hyped “news” article on it.
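The arithmetic in that paragraph is easy to reproduce. One assumption has to be added to convert a percentage of the ice into sea level: Greenland’s total sea-level equivalent, taken here as roughly 7.4 meters, a commonly cited figure that is my assumption rather than something stated in the post.

```python
# Reproducing the degree-summer arithmetic above.
eemian_heat_load = 6 * 6_000   # 6 C warmer summers for ~6,000 years = 36,000 degree-summers
human_heat_load = 5 * 500      # a liberal 5 C for 500 summers      =  2,500 degree-summers
print(f"Heat loads: {eemian_heat_load:,} vs. {human_heat_load:,} degree-summers")

eemian_ice_loss = 0.30                             # ~30% of the ice lost over that period
future_ice_loss = (500 / 6_000) * eemian_ice_loss  # the post scales by 500/6,000, as quoted

TOTAL_SLE_METERS = 7.4                             # assumed sea-level equivalent of the ice sheet
slr_inches = future_ice_loss * TOTAL_SLE_METERS * 39.37
print(f"Implied ice loss: {future_ice_loss:.1%}")           # about 2.5%
print(f"Implied sea-level rise: {slr_inches:.0f} inches")   # about 7 inches over 500 years
```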

While you won’t find this kind of analysis elsewhere, we’re happy to do it here at Cato. 

References:

Dahl-Jensen, D., et al., 2013. Eemian interglacial reconstructed from a Greenland folded ice core. Nature 493: 489-494, doi: 10.1038/nature11789.

Hanna, E., et al., 2016. Greenland Blocking Index 1851-2015: a regional climate change signal. International Journal of Climatology, doi: 10.1002/joc.4673.

Tedesco, M., et al., 2016. Arctic cut-off high drives the poleward shift of a new Greenland melting record. Nature Communications, doi: 10.1038/ncomms11723, http://www.nature.com/ncomms/2016/160609/ncomms11723/full/ncomms11723.html

North Korea’s ruling elite appears to be getting along fine despite international sanctions. Washington needs to find a new approach toward the North.

The so-called Democratic People’s Republic of Korea poses one of the most vexing challenges to American policy. For more than 20 years U.S. presidents have insisted that the DPRK cannot be allowed to develop nuclear weapons. Yet it apparently is preparing for a fifth nuclear test.

A military strike, as proposed by Ashton Carter before he was appointed Defense Secretary, would risk engulfing the peninsula in war. So the U.S. has relied on sanctions. Every time Pyongyang misbehaves—especially when it tests a nuclear weapon or launches a missile—American officials impose tougher domestic economic penalties and press for harsher UN sanctions.

After the North’s latest nuclear test earlier this year, China agreed to a new round of restrictions. The increased penalties had no impact on North Korean policy. To the contrary, in early May the Kim regime used the party congress to highlight Pyongyang’s nuclear program.

Sanctions have nonetheless had an economic impact. The People’s Republic of China has been losing patience and appears to be more tightly regulating cross-border commerce. Some North Korean representatives of blacklisted agencies have moved from China to Southeast Asian nations. The regime has resorted to smuggling to bring in banned products. Moreover, Pyongyang appears to be having more difficulty selling weapons abroad.

Nevertheless, Beijing continues to moderate the impact of sanctions. Illicit goods still cross the border, and some observers expect the PRC commitment to fade as Western attention moves elsewhere. Beijing fears chaos on its border more than it fears a North Korea with nuclear weapons. President Xi Jinping recently declared: “As a close neighbor of the peninsula, we will absolutely not permit war or chaos on the peninsula.”

The Xi government so far refuses to halt energy and food shipments, the only step that would apply bone-crunching pressure to the Kim regime. Even then, Pyongyang might refuse to comply. The regime already is blaming the West, preparing its people for what it calls an “arduous march.”

During the late 1990s the regime survived the virtual collapse of the economy and the starvation deaths of a half million or more North Koreans. The Kim dynasty might survive similar hardship in the future.

Unfortunately, the uniform experience of sanctions is that they hurt those with the least resources and influence. That appears to be the case in North Korea.

So far the powerful have prospered, despite penalties directed against luxury imports. The Washington Post recently reported on “Pyonghattan,” home to North Korea’s privileged elite. In contrast, argued Andrei Lankov of Kookmin University, “the average North Korean will also bear the brunt of the sanctions.”

The latest round of sanctions has increased hardship. Choi Ha-young, chairman of the Love North Korean Children Charity, complained: “Currently, due to the UN sanctions, people in the lowest class are really impacted.”

As I point out in National Interest: “Washington seems to have only one response to the North: increase sanctions. However, this policy is a dead-end. The U.S. and its allies must find a new strategy toward Pyongyang.”

A victory for property rights and individual liberty came via the unanimous Supreme Court decision earlier this month against EPA’s ability to control the “environment” on private property—through its use of wetlands “jurisdictional determinations” under the Clean Water Act. The high court’s opinion states that landowners are now able to challenge government agencies that attempt to assert control over the environment of private property before any permitting process by the owner begins—rather than after the owner’s expenditure of time, effort, and expense to obtain a permit. Furthermore, the court’s decision will limit the government’s ability to restrict landowners’ activities through the application of EPA’s “Waters of the United States” (WOTUS) rule issued last year.

This is the second major SCOTUS decision this year to go against the EPA—the other being the stay issued in February against EPA’s Clean Power Plan. Compared to previous administrations, this EPA appears to be spending way too much time in court defending its actions, and not nearly enough time effectively protecting the nation’s environment. Some of the agency’s actions, or inactions, have resulted in environmental damage. Two glaring examples are last year’s contamination of Colorado’s Animas River drinking water supply, and the ongoing lead contamination of drinking water in Flint, Michigan.

Then there is the fallout from EPA’s regulatory agenda—particularly the Clean Power Plan, its crown jewel of carbon emissions rulemaking. Although the CPP was stayed, EPA officials are not deterred and are now moving ahead with key components of the plan, particularly in the 17 states led by Democratic governors. The EPA may be flagrantly violating the law by ignoring the Supreme Court ruling on the CPP. According to the electric utility industry and the 30 states and state agencies suing to eliminate the plan, EPA is absolutely in violation.

There is also demonstrated collusion between EPA employees and outside environmental interests. FOIA requests and legal depositions have revealed a pattern of illicit email trails and phone calls between EPA officials and radical environmental groups. In some cases the outside groups have actually “co-authored” EPA regulations—creating a circus out of federal agency rulemaking—which is supposed to be based on transparent public participation and not “insider trading” by the environmental movement.

For example, a considerable amount of collusion with outsiders is known to have occurred while EPA was crafting the Clean Power Plan. The evidence includes a tranche of emails discovered on a private email account indicating that outside environmental interests heavily influenced EPA policy on regulating coal-fired power plants. Other EPA collaboration with outsiders includes that of its former Administrator, Lisa Jackson, who later resigned after being caught using a private email account to correspond with environmental activists about EPA activities.

Other email trails indicate that EPA, for several years, had been colluding with environmentalists opposed to the Pebble Partnership’s development of a major copper-gold deposit in southwest Alaska. During deposition testimony in March in the Pebble case, an EPA employee, Mr. Phil North, freely admitted that he had provided outside environmental groups unprecedented access to the EPA decision-making process involving the Pebble lease—which is on state, not federal, land. Not surprisingly, the agency’s regional administrator for Alaska testified to a congressional committee that he had not read the deposition and was not willing to look into the matter further. Nevertheless, the U.S. is now facing the possibility of a lawsuit because Pebble’s parent company, Northern Dynasty Minerals, is Canadian and is protected under the North American Free Trade Agreement (NAFTA).

Then there is the destruction of federal-state partnerships built up over the years, caused by states’ lack of confidence in EPA actions. For example, strident EPA rulemaking has resulted in more states going to court to stop the promulgation of blanket federal regulations that are of questionable benefit to states. These regulations cover hydraulic fracturing, methane emissions, and mandated ozone levels related to energy development, manufacturing, and industrial activity—all above and beyond the Clean Power Plan.

Other legal challenges to EPA include the 27 states that have sued to block the EPA’s Waters of the United States (WOTUS) rule, which dramatically expands Federal authority over local construction activities, including energy projects, nationwide. In the East, 21 states sued EPA over its Chesapeake Bay cleanup plan, asserting that the plan represents “…the culmination of EPA’s decade-long attempt to control exactly how states achieve federal water quality requirements (under the Clean Water Act), and marks the beginning of the end of meaningful state participation in water pollution regulation.” In the West, Utah, Colorado, and New Mexico will see EPA in court over the agency’s disastrous accidental heavy metal sludge contamination of the Animas River that has affected the drinking water of three states and the Navajo Indian Reservation. 

Clearly, this EPA has continued unabated to exert unprecedented “environmental control” over the air, land, and water of the U.S. using numerous unpopular rules and regulations. The continued overregulation and outside collusion require ever greater amounts of EPA’s time in court to answer for its actions—ostensibly leaving less time for environmental protection responsibilities—if the agency were ever so inclined. In short, its actions are threatening to unravel the long-established, important, and legitimate fabric of environmental protection developed since the agency’s founding almost 50 years ago. What greater irony is there than that the environmental stewardship of the several states must be legally protected from the federal Environmental Protection Agency?

China’s economic rise over the past decades has been meteoric, during which time the volume of rhetoric about the “China threat” has also grown at historic rates. In the early 1990s the Pentagon needed a new superpower rival to justify Cold War-sized defense budgets. But displays of American military power in the first Gulf War and the 1995-96 crisis in the Taiwan Strait also prompted China to develop a military strategy designed to keep American forces out of its neighborhood. Now, with counterterrorism missions in Iraq and Afghanistan down from their peak and China’s military posture maturing significantly, the U.S. military has been devoting more time and resources to figuring out ways to counter China’s new strategy.

Beyond the military, political hawks have been quick to draw attention to the China threat. During last weekend’s Shangri La Dialogue in Singapore, Senator John McCain (R-AZ) said that China has a choice between peaceful cooperation and engaging in a “zero-sum game for regional power and influence.” Even academics have gotten in on the game, with many arguing that China’s rise will not be peaceful.

Though China’s saber rattling in East Asia and the South China Sea hasn’t made a big splash in the 2016 presidential campaign so far, the question of how the United States should respond to China’s rising military and economic power is one of the most important foreign policy challenges the next president will face.

Both candidates have staked out aggressive positions on China. Trump has promised to impose steep tariffs on Chinese imports, suggested that South Korea and Japan should acquire nuclear weapons, and has called for a strong military presence in Asia to discourage “Chinese adventurism.” Clinton, for her part, was a lead architect of the “pivot to Asia” as Secretary of State, redirecting U.S. military and diplomatic efforts from the Middle East to Asia to confront China’s rise.

A close look at public opinion, however, reveals that the American public’s attitudes toward China, though complex, are more sanguine than those of its fearful leaders.

To be sure, most Americans have always harbored concerns about the Communist nation and its intentions, and during difficult times Americans worry about the challenge China poses to their economic fortunes. But despite China’s aggressive campaign to modernize its military, and despite two decades of one-sided debate about the China threat, most Americans correctly continue to identify the United States as the stronger military power, and fewer than half view China’s military power as a serious threat (even fewer rate it a “critical threat.”)

Moreover, the prolonged fear mongering has failed to move the needle when it comes to how Americans feel about China. Gallup polls show a slight increase in China’s favorability rating among Americans between 1990 and 2016. And in 2014 the Chicago Council on Global Affairs found that just 48% of the public views China as primarily a rival and 49% see it primarily as a partner.

Most importantly, though, Americans overwhelmingly support a cooperative approach to dealing with China rather than a confrontational one. Sixty-seven percent responded to the 2014 CCGA poll that the best way to handle the rise of Chinese power is to “undertake friendly cooperation and engagement,” compared to 29% who said the United States should “actively work to limit the growth of China’s power.” And when it comes to the prospect of military conflict with China, the public is truly not interested. Just 26% believe the United States should send troops to help if China invades Taiwan.

These figures provide fair warning to the next president to think twice about how to deal with China. An aggressive military posture like the one in place today (and promoted by both candidates) not only runs contrary to public preferences, it also increases the prospects for direct conflict between the United States and China. 

In this commentary, I will analyze the concept of sound money and its relevance today.  The concept evolved in the 19th century as many countries adopted the gold standard.  It became associated with commodity money or “hard currency.”  For example, Mises (1966: 782) stated:

The principle of soundness meant that the standard coins — i.e., those to which unlimited legal tender power was assigned by the laws — should be properly assayed and stamped bars of bullion coined in such a way as to make the detection of clipping, abrasion, and counterfeiting easy.  To the government’s stamp no function was attributed other than to certify the weight and fineness of the metal contained.

There was no requirement that “standard coins” be the exclusive or even preponderant means of payment in day-to-day transactions.  So long as banks of issue (private or central banks) maintained convertibility, then the monetary system had the characteristics of sound money.  As Mises suggested, government’s role was minimal.

As a matter of history, sound money is associated with commodity money.  Mises’ characterization assumes commodity money.  Can there be sound fiat currency?

Most 19th century writers on money assumed a system of commodity money with allowance for temporary suspensions during wartime and a return to the standard in peacetime.  The suspension of specie payments by the Bank of England during the Napoleonic Wars prompted banker Henry Thornton to author in 1802 a treatise on managing a paper currency: An Enquiry Into the Nature and Effects of the Paper Credit of Great Britain.  Thornton was a successful banker with an economist’s understanding of banking, finance, and the real economy.  He pioneered analytical distinctions that would not be rediscovered for almost another century: the distinction between real and nominal interest rates, and the concept of an equilibrium or natural rate of interest.

The volume should have become a standard reference work.  But it came to be forgotten because the Bank of England re-established convertibility, and other countries gradually adopted the gold standard over the course of the 19th century.  How to manage a fiat money system (paper credit) ceased to be a practical issue.

When the Federal Reserve System was created at the end of 1913, the United States was on the gold standard along with most of the rest of the world.  The Federal Reserve was not created to manage money in the modern sense, but to provide a national currency as part of a gold standard.  No one was thinking of managing a fiat currency on the eve of World War One.  That was all soon to change with the requirements of wartime finance.

After World War One, the world returned to a global, pseudo gold standard that was chronically short of gold reserves.  The currencies of many countries were overvalued relative to gold.  When the system collapsed in the 1930s, countries were thrust into a fiat currency world without a playbook.  For a time, there were efforts to restore the global gold standard but they came to naught.  World War Two interrupted any effort to craft a new international monetary system.

The post-War, Bretton Woods system constituted the new global monetary order.  Volumes have been written on it.  I do not share the nostalgia of some for it.  It was even less of a gold standard than existed in the interwar period.  In truth, it was a dollar standard.  The dollar was pegged to gold and other currencies pegged to the dollar.  There were numerous exchange-rate adjustments.  The system contained inner contradictions.  Inevitably, the producer of the dominant currency was bound to abuse its “exorbitant privilege” and the United States did so.  The system collapsed and the world was then on a fiat standard.

There was no accepted theory of managing money in a fiat money world.  This was not Henry Thornton’s world in which fiat money was a temporary expedient with an expectation of a return to specie conversion.  It was not the world of the classical quantity theory, which was constructed in a commodity-standard world.  The quantity theory demonstrated the limits of monetary expansion (or changes in the demand for money) before prices would begin to rise sufficiently to threaten convertibility.  In a classical gold standard, the supply of money is endogenous and the price level fixed in the long run.

Milton Friedman and the monetarists offered a restatement of the quantity theory and a model of monetary control for a fiat currency.  Friedman, his students and colleagues believed they had discovered stable empirical relationships among the monetary base, broader measures of money (especially M2), and the demand for money (Friedman 1956; Friedman and Schwartz 1963).  When monetary targeting was finally implemented by the Volcker Federal Reserve in the 1980s, the posited empirical relationships broke down.  The Fed abandoned monetary targeting.

What followed was a period that John Taylor dubbed the Great Moderation, in which the Fed and other central banks seemed to get it right.  There was enhanced macroeconomic stability (as measured by decreased variance in prices and output).  Taylor discerned that the Fed was following a tacit rule, which others called the Taylor Rule.  But the Fed and then other central banks began to deviate from the rule by lowering interest rates in response to the Dotcom bust.  Taylor (2009) argued the housing bust was the consequence of the boom created by the policy of low interest rates.  “No boom, no bust.”  Central banks have not returned to a monetary rule.  Instead, they have engaged in monetary improvisation.
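For readers unfamiliar with it, the rule Taylor described can be written down in a few lines. The sketch below uses the coefficients from Taylor’s original 1993 formulation (a 2 percent equilibrium real rate, a 2 percent inflation target, and weights of 0.5 on the inflation and output gaps); the inputs in the example are illustrative numbers, not actual data.

```python
def taylor_rule(inflation, output_gap, real_rate=2.0, inflation_target=2.0):
    """Taylor (1993) policy-rate prescription, in percent.

    inflation        -- current inflation rate (percent)
    output_gap       -- percent deviation of real GDP from potential
    real_rate        -- assumed equilibrium real interest rate
    inflation_target -- the central bank's inflation goal
    """
    return (inflation + real_rate
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Illustrative inputs only: 3% inflation with output 1% above potential
# prescribes a 6.0% policy rate under the original coefficients.
print(taylor_rule(inflation=3.0, output_gap=1.0))  # 6.0
```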

Money in the 21st century is proving immune to control by central bankers.  The relationship between monetary reserves and various monetary aggregates (the money multiplier) has broken down.  More precisely, central banks appear to have lost the ability to control inflation.  In the United States, Europe and Japan, inflation rates have remained chronically below central bank targets over the course of the economic recovery from the Great Recession.  (The growth of real GDP has also been subpar.)  Economists as diverse as Jerry Jordan (2016) and James Bullard (2016) have questioned whether our textbook models of money creation and inflation control are any longer valid.  That is not to say that future inflation rates will not rise to two percent or beyond.  If they do so, however, it will likely not be the consequence of any central bank policy actions (Jordan 2016).
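The “money multiplier” relationship mentioned here is simply the ratio of a broad monetary aggregate to the monetary base. A minimal sketch of the calculation appears below; the dollar figures are rough, illustrative orders of magnitude for the U.S. before and after the Fed’s post-2008 balance-sheet expansion, not a precise series.

```python
# Money-multiplier relationship: broad money = multiplier x monetary base.
# A stable multiplier would let a central bank steer M2 by controlling the base.

def m2_multiplier(m2, monetary_base):
    """Implied M2 money multiplier (both inputs in the same units)."""
    return m2 / monetary_base

# Rough, illustrative U.S. magnitudes in trillions of dollars.
before_crisis = m2_multiplier(m2=7.7, monetary_base=0.85)   # roughly 9
after_qe      = m2_multiplier(m2=12.0, monetary_base=4.0)   # roughly 3

print(f"Pre-2008 multiplier: ~{before_crisis:.0f}")
print(f"Post-QE multiplier:  ~{after_qe:.0f}")
# A severalfold expansion of the base with no corresponding jump in M2 (or in
# inflation) is what the text means by the multiplier "breaking down."
```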

To reiterate, I question whether we ever had a practical theory of how to manage money in a fiat money world.  The proponents of monetary rules believe they have such a theory. One class of such rules involves NGDP targeting.

The specific question I pose for advocates of NGDP targeting is how, today, anything the Federal Reserve does to its balance sheet will alter the growth rate of NGDP in a predictable fashion.  The answer to such a question could be that the central bank should do more.  How much more?  And what, then, becomes of the rule?  It sounds like a recipe for discretion.  In any case, central banks have been unable to get either component of NGDP to grow in a normal or predictable manner.
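For concreteness, NGDP-targeting proposals generally take some form of the feedback rule sketched below: the instrument (here, growth of the monetary base) responds to the gap between nominal GDP and its target path. This is a generic illustration, not any particular author’s rule, and the feedback coefficient is arbitrary; the question raised above is precisely whether moving the instrument still moves NGDP.

```python
def base_growth_rule(target_ngdp_growth, ngdp_gap_pct, feedback=0.5):
    """Generic NGDP-targeting feedback rule (illustrative only).

    target_ngdp_growth -- desired nominal GDP growth, percent per year
    ngdp_gap_pct       -- percent shortfall of NGDP below its target path
    feedback           -- how aggressively base growth responds to the gap
    """
    return target_ngdp_growth + feedback * ngdp_gap_pct

# If NGDP is 2% below a 5%-growth target path, this rule calls for 6% base growth.
print(base_growth_rule(target_ngdp_growth=5.0, ngdp_gap_pct=2.0))  # 6.0
```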

Monetary institutions and policies vary among the major central banks.  For instance, both the European Central Bank and the Bank of Japan have instituted negative interest rates on commercial bank deposits at the central bank.  Meanwhile, the Fed has been paying interest on bank reserves for some time.  The institutions and policies differ, are even opposed to each other, but the policy failures are common.  (The policies have failed on their own terms, regardless of whether one agrees with them.)

Let me return to the classical idea of sound money.  Sound money is a rule, but of a different kind than modern monetary rules such as the Taylor Rule or NGDP targeting.  Sound money was not a rule based on empirical relationships among economic variables.  It was not invented, but discovered.  It is more analogous to the rule of law.  Mises (1971: 414) made this point clearly. “Ideologically it [sound money] belongs in the same class with political constitutions and bills of rights.  The demand for constitutional guarantees and bills of rights was a reaction against arbitrary rule and non-observance of old customs by kings.”

The argument for sound money is not merely a technical economic argument, but a political economy and even constitutional argument.  When classical economists contended that commodity standards were a bulwark against inflation, they did not suggest that there would be no variability of inflation under a gold standard.  Their own experience told them otherwise.  Rather, they recognized that a gold standard was protection against arbitrary actions by sovereigns to depreciate the currency.  Protection against arbitrary and capricious governmental actions is what constitutions are meant to provide.

Is there a way to avoid arbitrariness in monetary matters in a fiat money system?  Can a monetary rule of some type today provide the protections that existed in the classical, pre-World War One gold standard?  These questions are central to the debate over monetary rules.  They stand apart from the technical questions I raised above.  Both sets of questions need to be addressed in debates over monetary policy.

References

Bullard, J. (2016) “Permazero.” Cato Journal 36 (Spring/Summer): 415-29.

Friedman, M., ed. (1956) Studies in the Quantity Theory of Money. Chicago: The University of Chicago Press.

Friedman, M. and A. J. Schwartz (1963) A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press.

Jordan, J. L. (2016) “The New Monetary Framework.” Cato Journal 36 (Spring/Summer): 367-83.

Mises, L. v. (1971 [1952]) The Theory of Money and Credit. Irvington-on-Hudson: The Foundation for Economic Education.

________ (1966) Human Action, 3d ed. Chicago: Henry Regnery.

Taylor, J. B. (2009) Getting Off Track. Stanford: The Hoover Institution Press.

[Cross-posted from Alt-M.org]

American businesses have become leaner in recent decades, with fewer layers of management. By contrast, New York University’s Paul Light has found that the number of management layers in federal government agencies has increased substantially.

Light argues that today’s “over-layered chain of command” in the federal government is a major source of failure. Over-layering stifles information flow, slows decisionmaking, and makes it harder to hold people accountable for failures.

The Washington Post looks at a failure in the Department of Homeland Security (DHS) that will have you shaking your head. Reporter Joe Davidson describes DHS efforts after the December 2 attacks in San Bernardino, California. The acronyms are all bureaus within the DHS.

The day after the attack that left 14 dead and 22 wounded, ICE learned that Enrique Marquez, who authorities say purchased the weapons used by shooters Syed Farook and Tashfeen Malik, might be at a USCIS office in San Bernardino. The office was protected by private security guards under contract to FPS.

Five HSI agents, decked out in tactical gear, rushed to the office to prevent any further attacks and to detain Marquez and his wife for questioning.

Yet despite the urgency, coming less than 24 hours after the attack, “the FPS guards advised the HSI agents that they had to stay in the lobby until the Field Office Director approved their entry.”

At first, the guards couldn’t find the director because she didn’t answer her phone. Once located, she didn’t want to allow the agents into the building. In true bureaucratic fashion, the field office director said she had to check with her boss, the district director in Los Angeles, who then checked with a higher boss, the regional director in Laguna Niguel, Calif.

The district director instructed the field office director to allow the agents into the building “to determine what they wanted.” Then they waited.

“[T]he agents were confined to the lobby for approximately 15 to 20 minutes,” they told the inspector general’s office. More than enough time for any suspect to get away.

Imagine five cops, anxious to take down a terrorist, waiting in the lobby for permission to go further into the building before they could search for him.

After the initial wait, “the agents were escorted to a USCIS conference room by FPS guards, where they met with the Field Office Director,” the inspector general’s report said. “According to the HSI agents’ accounts, they waited approximately 10 additional minutes in the conference room before the Field Office Director met with them. The agents told her they were looking for Marquez because he was connected to the shootings and there was concern that he could be in the building.”

The field office director’s response?

[Inspector General John] Roth said “the Field Office Director told the agents they were not allowed to arrest, detain, or interview anyone in the building based on USCIS policy, and that she would need to obtain guidance from her superior before allowing them access.”

The field office director again called the district director who notified the regional director, who notified an associate director in Washington, who met with USCIS lawyers.

Meanwhile, the field office staff determined that neither Marquez nor his wife was at the office.

The agents then asked for information about Marquez from the USCIS file, but the field office director refused. She did provide a photo.

At some point, the associate director determined that the agents could have the file. That information was relayed back down the chain, to the regional director, then to the district director, then to the field office director. More than an hour after arriving, an agent hand-copied information from the file and the law enforcement officers left.

This is how the federal government operates. Why anyone (like current presidential candidates) would want to give this dysfunctional institution more power and control over our lives is beyond me—whether more power over security, health care, housing, education, transportation, trade, or anything else.

Over a vast range of activities, the federal government fails repeatedly for basic structural reasons. The government is a monopoly. It functions through coercion, not voluntary relations. It has a guaranteed source of funds, and thus has little reason to serve the interests of the public. It is controlled by self-interested politicians. It receives no market signals and little feedback to guide its decisionmaking. It is 100 times larger than the average-sized state government, and thus far too large to manage with any decent level of efficiency or quality.

The nation would be better off if the DHS superstructure were abolished and the overall government cut in size.

You can read more about the causes of federal failure here, here, here, and here.

Yesterday the D.C. City Council unanimously approved a measure that would gradually raise the $10.50 minimum wage to $15.00 by 2020, and then index future increases to changes in the Consumer Price Index. These new scheduled increases will come on the heels of an already significant 39 percent increase currently being phased in. With the passage of this bill, D.C. follows California and New York in passing substantial minimum wage hikes beyond the scope of past experience in the U.S. The related adverse disemployment effects will primarily impact younger workers and people with limited job skills or educational attainment, putting the important first rung of the job ladder out of reach for many of them.

While proponents of an increase tend to focus on families, roughly half of minimum wage workers are between 16 and 24, and more than one-fifth are teenagers. People lacking a high school diploma are more likely to be in minimum wage jobs, and even with some recent incremental improvements, the 4-year adjusted cohort graduation rate for D.C. public schools is only 64.4 percent, and for African-American students it is less than 62 percent. While the aggregate unemployment rate for the District might not seem alarmingly high at 6.4 percent in April, there is a lot of variation among the eight wards, with the unemployment rate as high as 9.9 percent in Ward 7 and 12 percent in Ward 8. One survey found that almost half of responding businesses had already reduced staff or hours to cope with the first raft of minimum wage increases. Younger workers and people with limited educational attainment will find it increasingly difficult to find employment as labor costs continue to surge.

These minimum wage jobs are far from being a dead end where workers get stuck forever; they often play an important role in helping people develop the skills they need to eventually move on to more lucrative and promising jobs. The majority of minimum wage workers who stick with it get a raise within a year. An earlier study looking at data from 1979 to 2002 found that almost two-thirds of minimum wage employees who continued working earned more than the minimum wage within a year. More recently, 72 percent of minimum wage earners got a raise between 2014 and 2015. About a fifth of these people saw their earnings rise due to mandated minimum wage increases, but 57.5 percent of people working continuously got a raise or moved into a higher-paying job apart from those effects, and this share could have been even higher in the absence of those legislated minimum wage increases. Far from stagnating in these entry-level jobs, most people in these positions use them as a springboard to better things.

Supporters of the new bill may say that they want to ensure that hard work is rewarded and that people can support their families, but D.C.’s substantial minimum wage increases will make it much harder for many people, especially younger workers and people with limited job skills, to find any work at all.
