Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The weekend meeting between Chinese President Xi Jinping and President Ma Ying-jeou of Taiwan was a positive development for peace in the Taiwan Strait, despite the meeting’s mostly symbolic nature. No grand bargains or binding statements resulted, but the meeting highlights the importance of high-level discussions and constructive dialogue.

The question of Taiwan’s political status, whether as an independent country or a renegade province, is of serious concern to the United States. A forceful military “reunification” of Taiwan with China could draw the United States into war. U.S. government officials should encourage steps that reduce the possibility of armed conflict, such as the Xi-Ma meeting.

Figuring out a way to settle the Taiwan question peacefully has been complicated by the fact that the military balance across the Taiwan Strait has shifted firmly in China’s favor. China’s military capabilities have also raised the cost that the United States would have to pay in a war over Taiwan.

These developments in the cross-strait military balance don’t mean that a Chinese attack is inevitable. But the changing balance creates a sense of urgency for keeping the cross-strait dispute from erupting into war. This will require a new military strategy in Taiwan. However, preventing war isn’t solely in the hands of the military; the political leaders on both sides need to recognize that “mutual compromise is the only effective way forward.”

The Xi-Ma meeting is by no means a silver bullet. Ma’s push for close economic ties with China has significantly damaged the popularity of his Kuomintang (KMT) party. The KMT will likely lose the 2016 presidential election to the more pro-independence Democratic Progressive Party, led by Tsai Ing-wen. Tsai was highly critical of Ma’s decision to meet with Xi. Many Taiwanese people were also opposed to the meeting, and want little to do with mainland China’s political system.

Despite the challenges, regular summits between the presidents of China and Taiwan should become a permanent fixture of cross-strait relations. War over Taiwan could inflict a great deal of damage to Taiwan, China, and the United States. High-level summits won’t eliminate the possibility of war, but they can provide a space for disagreements and crises to be resolved peacefully before they spiral into conflict.

I’m a big fan of the flat tax because a low tax rate and no double taxation will result in faster growth and more upward mobility.

I also like the flat tax because it gets rid of all deductions, credits, exemptions, preferences, exclusions, and other distortions. And a loophole-free tax code would be a great way of reducing Washington corruption and promoting simplicity.

Moreover, keep in mind that eliminating all favors from the internal revenue code also would be good for growth because people then will make decisions on the basis of what makes economic sense rather than because of peculiar quirks of the tax system.

Sounds great, right?

Well, it’s not quite as simple as it sounds because there’s a debate about how to measure loopholes. Sensible people want a tax code that’s neutral, which means the government doesn’t tilt the playing field. And one of the main implications of this benchmark is that the tax code shouldn’t create a bias against income that is saved and invested. In the world of public finance, this means they favor a neutral “consumption-base” tax system, but that’s simply another way of saying they want income taxed only one time.

Folks on the left, however, are advocates of a “Haig-Simons” tax system, which means they believe that there should be double taxation of all income that is saved and invested. You see this approach from the Joint Committee on Taxation. You see it from the Government Accountability Office. You see it from the Congressional Budget Office. Heck, you even sometimes see Republicans mistakenly use this benchmark.

Let’s look at three examples to see what this means in practice.

Example #1: Because they don’t want a bias that encourages people to spend their income today rather than in the future, advocates of a neutral tax code want to get rid of all double taxation of savings (Canada is moving in that direction). So that means they like IRAs and 401(k)s since those vehicles at least allow some savings to be protected from double taxation.

Proponents of Haig-Simons taxation, by contrast, think that IRAs and 401(k)s are loopholes.

Example #2: Another controversy revolves around the tax treatment of business investment. Advocates of neutral taxation believe in expensing, which is simply the common-sense view that investment expenditures should be recognized when they actually occur.

Proponents of Haig-Simons, however, think that investment expenditures should be “depreciated,” which means companies are forced to pretend that most of their investment costs which are incurred today actually take place in future years.

Example #3: Supporters of neutral taxation think capital gains taxes should be abolished because there already is tax on the income generated by assets such as stocks and bonds. So the “preferential rates” in the current system aren’t a loophole, but instead should be viewed as the partial mitigation of a penalty.

Proponents of Haig-Simons, not surprisingly, have the opposite view. Not only do they want to double tax capital gains, they also want them fully taxed, which would mean an economically jarring jump in the tax rate of more than 15 percentage points.
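To make that jump concrete, here is a minimal sketch using the 2015 top federal rates; the 39.6 and 23.8 percent figures are my own assumptions for illustration, not numbers taken from the text above:

```python
# Illustrative only: assumed 2015 top federal rates.
TOP_ORDINARY_RATE = 39.6   # top marginal rate on ordinary income, percent
TOP_CAP_GAINS_RATE = 23.8  # top long-term capital gains rate incl. 3.8% NIIT, percent

# Taxing capital gains as ordinary income would raise the rate by:
jump = TOP_ORDINARY_RATE - TOP_CAP_GAINS_RATE
print(f"Rate jump: {jump:.1f} percentage points")  # prints: Rate jump: 15.8 percentage points
```

Under those assumed rates, the jump is 15.8 points, consistent with the "more than 15 percentage points" claim.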

Now, having provided all this background information, let’s finally get to today’s topic.

If you’ve been following the presidential campaign, you’ll be aware that there’s a controversy over something called “carried interest.” It’s a wonky tax issue that seems very complicated, so I’m very happy that the Center for Freedom and Prosperity has produced a video that cuts through all the jargon and explains in a very clear and concise fashion that it’s really just an effort by some people to increase the capital gains tax.

There are four points from the video that deserve special emphasis.

  1. Partnerships are voluntary agreements between consenting adults, and both parties concur that carried interest helps create a good incentive structure for productive investment.
  2. Capital formation is very important for growth, which is one of the reasons why there shouldn’t be any capital gains tax.
  3. A capital gain doesn’t magically become labor income just because an investor decides to share a portion of the gain with a fund manager.
  4. An increase in the tax on carried interest would be the camel’s nose under the tent for more broad-based increases in the tax burden on capital gains.

By the way, I liked that the video also took a gentle swipe at some of the ignorant politicians who want to boost the tax burden on carried interest. They claim they’re going after hedge funds, when the tax actually is much more targeted at private equity partnerships.

But what really matters is not the ignorance of politicians. Instead, we should be focused on whether tax policy is being needlessly destructive because of high, duplicative taxes on saving and investment.

Such levies would reduce investment. And that means lower levels of productivity and concomitantly lower wages.

In other words, ordinary people will suffer a lot of collateral damage if this tax-the-rich scheme for carried interest is implemented.

Over at Cato’s Police Misconduct web site, we have selected the worst case for the month of October.

It was the case from Owasso, Oklahoma. Officer Michael Denton was charged with excessive force for beating a motorist with the butt of a shotgun. This matter is arguably the worst case from last month because this is the very same officer who was previously fired for excessive force after elbowing an inmate in the face. An arbitrator later reversed that dismissal, and in February Denton was awarded $280,000 in back pay.

So the problem here is not just one officer. The system for getting rid of problem officers seems broken. If Denton does not go to prison, will he be reinstated again? We will be watching with interest.

Cato will be hosting a conference on Policing in America on December 1. 


Debates about income inequality, “the top 1 percent,” and poverty typically examine those issues within the context of a single country. But consider a global perspective. This web tool lets you find out which income percentile you belong to relative to all the other people in the world. If you make more than $32,400 per year, you are among the richest 1 percent of people in the world!

And, bear in mind that the world is more prosperous than it has ever been in the past. Compared to you, the vast majority of people who have lived on this planet were desperately poor. Poverty, as Cato’s David Boaz put it in this online lecture, used to be ubiquitous. “Why are some people poor? That’s always the wrong question. The question is why are some people rich? Poverty is the natural condition of mankind, but it’s easy to forget that.” 

Fortunately, prosperity is rising and global inequality decreasing. Even as the world population has exploded, the number of people living in poverty has fallen. As a result of spreading prosperity, infant mortality, illiteracy, and malnutrition are in decline, and people are living longer. Extreme poverty’s end is in sight.

Prosperity does not, of course, materialize without a cause. The role of industrialization and trade in bringing about economic growth and prosperity cannot be emphasized enough. 

So the next time someone brings up poverty or income inequality within the United States, keep in mind the importance of a proper perspective. From a global standpoint, you may very well be a part of “the top 1 percent.”

The U.S. Court of Appeals for the Fifth Circuit has now affirmed the injunction against President Obama’s executive actions on immigration. The opinion seems daunting at 135 pages, but only about half of that is the majority opinion, and much of that consists of technical discussion. The nub of the ruling is that the 26 plaintiff states have established a “likelihood of success” on their claim that the administration (1) violated the Administrative Procedure Act by not going through proper rulemaking channels and (2) exceeded the authority that the relevant statutes give the executive branch in enforcing immigration law. This was not a surprise given the way oral argument went – and that the two judges in the majority were also on the panel that denied the administration an emergency stay of the injunction earlier in the year – but it’s still significant.

The court cuts through the government’s obfuscation about “prosecutorial discretion” and the like, the argument that granting temporary status to millions of people is no different than a decision to prioritize deportation of murderers over those whose only violation is being in the country without authorization: “Deferred action, however, is much more than nonenforcement: It would affirmatively confer ‘lawful presence’ and associated benefits on a class of unlawfully present aliens.” (35) “Moreover, if deferred action meant only nonprosecution, it would not necessarily result in lawful presence… . Declining to prosecute does not transform presence deemed unlawful by Congress into lawful presence and confer eligibility for otherwise unavailable benefits based on that change.” (36)

The court goes on to explain how the novel Deferred Action for Parents of Americans and Lawful Permanent Residents (DAPA) goes against what Congress has legislated. “The interpretation of those provisions that the Secretary advances would allow him to grant lawful presence and work authorization to any illegal alien in the United States—an untenable position in light of the INA’s intricate system of immigration classifications and employment eligibility.” (62) Moreover,

because DAPA is not authorized by statute, the United States posits that its authority is grounded in historical practice, but that “does not, by itself, create power,” and in any event, previous deferred-action programs are not analogous to DAPA. “[M]ost … discretionary deferrals have been done on a country-specific basis, usually in response to war, civil unrest, or natural disasters,” but DAPA is not such a program. Likewise, many of the previous programs were bridges from one legal status to another, whereas DAPA awards lawful presence to persons who have never had a legal status and may never receive one. (63)

This analysis mirrors the argument we make in Cato’s brief regarding the proper application of deferred action historically, as a bridge between lawful statuses rather than a tunnel under and around the immigration laws.

In short, while we wish Congress had acted to make some sense of our immigration regime, it hasn’t – and the president can’t rewrite the law even if it makes good policy sense to do so.

Now, where do we go from here? If the government files a cert petition with the Supreme Court this week or next, the case could conceivably make it onto the docket as one of the last to be argued this term, meaning a decision in the last week of June 2016. But the government may not do that – it waited an awfully long time to file its “emergency” motion to stay the district court’s injunction – in order to keep this immigration battle alive into the presidential election. Indeed, regardless of whether the Supreme Court ultimately upholds or dissolves the injunction against President Obama’s executive action, presumptive Democratic nominee Hillary Clinton would probably rather maintain the issue as a live one – especially if, as the conventional wisdom now holds, she’ll be running against one of the Cuban-Americans running for the GOP nomination, Ted Cruz or Marco Rubio.

But that’s all political speculation. For the moment we have another defeat for the imperial executive and a victory for the separation of powers and the rule of law.

On Thursday, November 12th, Cato hosts its 33rd Annual Monetary Conference. This year’s conference theme is “Rethinking Monetary Policy.” I will be presenting a paper on “Monetary Policy: The Knowledge Problem.”

The knowledge problem in conducting monetary policy, or any other government policy, is that the required knowledge is simply not available to policymakers. The knowledge is not available in any one place, nor can it be assembled in a form that would enable policymakers to formulate an “optimal” policy.

My paper focuses on Friedrich Hayek because he first formulated the knowledge problem. He argued that knowledge is inherently dispersed and localized across the population of economic agents. It is not possible to assemble the totality of knowledge existing in society in any one mind or place. Moreover, what the totality of individuals knows far exceeds what any policymaker can know, no matter his expertise and wisdom.

In order to formulate an optimal policy, a monetary authority must predict how alternative policy actions will affect the plans of millions of people. That information is unavailable. Assuming it exists in an economic model doesn’t make it so.

It is the conceit of central bankers (or at least most) that they can acquire the knowledge needed to conduct optimal monetary policy. In his Nobel Prize lecture, Hayek called that “The Pretence of Knowledge.”

In my paper, I also detail Milton Friedman’s contribution to the knowledge problem in monetary policy. That contribution has been under-appreciated in the literature.

Some problems cannot be solved. The knowledge problem is one such. But it can be mitigated, and I conclude my paper by discussing how that might happen. I suggest, as did Hayek and Friedman, that a monetary rule works best.

(If you would like to see this paper presented, you can register here. With several central bankers on the distinguished lineup, including St. Louis Fed President James Bullard, Richmond Fed President Jeffrey Lacker, and Bank of Mexico Deputy Governor Manuel Sánchez, this year’s conference is a particularly interesting place to discuss the knowledge problem in the context of central banking. If you cannot attend, the conference papers will appear in a forthcoming edition of the Cato Journal.)


One measure of the government’s size is government spending as a share of gross domestic product. The OECD has released new data (Table 25) on this measure for 31 member countries, which I chart here for 2015. The spending includes all levels of government: federal, state, and local.

Politics and bureaucratic mismanagement drive up costs and generate failure in the federal government. More evidence comes from a Washington Post report today on a botched computer project at the Department of Homeland Security:

Heaving under mountains of paperwork, the government has spent more than $1 billion trying to replace its antiquated approach to managing immigration with a system of digitized records, online applications and a full suite of nearly 100 electronic forms.

A decade in, all that officials have to show for the effort is a single form that’s now available for online applications and a single type of fee that immigrants pay electronically. The 94 other forms can be filed only with paper.

This project, run by U.S. Citizenship and Immigration Services, was originally supposed to cost a half-billion dollars and be finished in 2013. Instead, it’s now projected to reach up to $3.1 billion, and be done nearly four years from now.

A sixfold cost overrun! That is epic. I’ve described the Edwards Law of Cost Doubling in government, but this DHS project rises to an elite screw-up category reached by the Big Dig, the San Francisco-Oakland Bay Bridge, and a veterans hospital in Denver, all of which more than quadrupled in cost.
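The overrun ratio can be checked directly from the Post’s two figures quoted above (a quick back-of-the-envelope sketch):

```python
# Figures from the Washington Post report cited above.
original_budget = 0.5e9   # originally projected cost: half a billion dollars
projected_cost = 3.1e9    # current projection: $3.1 billion

ratio = projected_cost / original_budget
print(f"Cost overrun: {ratio:.1f}x the original budget")  # prints: Cost overrun: 6.2x the original budget
```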

Other than “shoddy planning” and mismanagement, what else contributed to the latest DHS screw-up? The Post reports on the role of politics:

By 2012, officials at the Department of Homeland Security, which includes USCIS, were aware that the project was riddled with hundreds of critical software and other defects. But the agency nonetheless began to roll it out, in part because of pressure from Obama administration officials who considered it vital for their plans to overhaul the nation’s immigration policies, according to the internal documents and interviews.

… By 2012, the system’s fundamental flaws — including frequent computer crashes and bad software code — were apparent to officials involved with the project, and, according to one of them, it was clear that “it wasn’t going to work.”

But killing the project wasn’t really an option, according to officials involved at the time. President Obama was running for reelection and was intent on pushing an ambitious immigration reform program in his second term. A workable electronic system would be vital.

“There was incredible pressure over immigration reform,” a second former official said. “No one wanted to hear the system wasn’t going to work. It was like, ‘We got some points on the board, we can go back and fix it.’ ”

For more, see the new Downsizing Government essays Federal Government Cost Overruns and Bureaucratic Failure in the Federal Government.

In another installment of our series on how science and technology are working to improve lives and solve problems, we sum up some exciting new developments in robotics and 3-D printing, and even news on a sonic tractor beam right out of Star Wars.

The Robots Chasing Amazon

In 2012, Amazon bought the warehouse robot maker Kiva Systems in order to keep the technology away from its competitors. That left other retailers’ demand for warehouse robots unmet, giving Fetch Robotics and Harvest Automation a chance to enter the market. Both companies have created robots that follow warehouse employees around to collect and move the inventory items the workers take off the shelves. These robots have greatly improved efficiency and are cheaper than hiring more workers or adding more infrastructure, such as conveyor belts. Fetch Robotics currently sells its robots for $25,000, while Harvest Automation will sell them for $15,000, or rent them for $1,000 a month, beginning next year. These robots are currently designed to work alongside warehouse employees, but Fetch Robotics has already begun working on warehouse robots that grab items from the shelves themselves.

Robot Builder Designed for Construction Sites

Researchers at ETH Zurich recently created a robot that can lay bricks in various designs. It is the first robot that can lay bricks without rigid design constraints, which could make construction sites much more efficient. The robot consists of a robotic arm attached to a mobile base unit, along with two computer systems. The first computer controls the arm, while the second generates a 3D model of the construction site, allowing the robot to locate itself. The hope is that construction workers and these robots will eventually work together to erect structures efficiently. ETH Zurich is also working on two other projects. One is a robot that can analyze pieces of rubble and then assemble them into a structure. The other uses 3D-printed mesh, combined with concrete filler, to replace the older practice of casting concrete pieces in molds. A new robotic fabrication facility is expected to open by September 2016.

3D-printed hip and knee joints coming to a hospital near you

A new breakthrough has occurred in the world of hip and knee surgery. Dr. Clarke has created a technique in which virtual models of patients’ bodies are generated for two purposes: first, so surgeons can practice the procedure beforehand; and second, so precision instruments for the surgery can be made. Currently, because the size and shape of everyone’s hip and knee joints differ, surgery involves a “trial and error” process to find the correct fit of instruments for an individual’s joint. Dr. Clarke’s cutting-edge technique lets surgeons know the size and shape of the patient’s joint beforehand, so they can use 3D printing to create surgical instruments that fit that specific person perfectly. The technique shortens recovery time, allows for smaller incisions, and reduces blood loss.

Star Wars style sonic tractor beam invented by scientists              

It has been portrayed in movies like Star Wars and Star Trek, but now it is a reality. Researchers have created a tractor beam that can move, rotate, and suspend a four-millimeter plastic bead in mid-air. It uses sixty-four miniature loudspeakers that emit high-frequency sound waves the human ear cannot detect. Lifting larger objects, however, would require lower, audible frequencies, which poses an unpleasant noise problem. Other potential applications of this technology include performing surgical procedures inside the human body without making any incisions.

It’s a belated round two for Florida’s legislation on eminent domain. In the 1980 case Webb’s Fabulous Pharmacies v. Beckwith, the Supreme Court struck down a Florida statute giving the state ownership over the interest earned on disputed funds—what lawyers call “interpleader” funds—held on deposit in the Florida court registry. The Court held that the statute effected unconstitutional takings under the Fifth and Fourteenth Amendments.

Now there’s a challenge to a similar statute, concerning “quick-take” deposit funds. Florida’s eminent domain law empowers condemning authorities to fast-track their appropriation of a desired property by allowing the authority simply to deposit the constitutionally required just compensation into the court registry and then take title to the condemned property. The scheme further authorizes the court clerk to invest the deposited funds before a court renders a final valuation judgment. The interest on that investment is split 90/10 between the condemning authority and court clerk, respectively, with none of it going to the (now former) property owner.
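To see how the split works in practice, here is a minimal sketch with hypothetical numbers; the deposit amount and accrued interest are my own illustrative assumptions, and only the 90/10 division reflects the statutory scheme described above:

```python
# Hypothetical quick-take deposit; only the 90/10 split reflects the statute.
# Amounts are tracked in whole cents to keep the arithmetic exact.
deposit_cents = 100_000_000   # $1,000,000 of just compensation on deposit
interest_cents = 3_000_000    # assume $30,000 of interest accrues before judgment

to_authority = interest_cents * 90 // 100  # condemning authority's 90% share
to_clerk = interest_cents * 10 // 100      # court clerk's 10% share
to_owner = interest_cents - to_authority - to_clerk  # what's left for the owner

print(f"To condemning authority: ${to_authority / 100:,.2f}")  # $27,000.00
print(f"To court clerk:          ${to_clerk / 100:,.2f}")      # $3,000.00
print(f"To former owner:         ${to_owner / 100:,.2f}")      # $0.00
```

Whatever the interest turns out to be, the former owner's share under the scheme is zero, which is the crux of the challengers' complaint.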

Several property owners challenged this mechanism, wanting to recover the interest that accrued on compensatory funds that were indisputably theirs as a matter of state law. Florida’s intermediate appellate courts have evaded the Webb’s precedent and denied these challenges—and the state supreme court has thus far declined to review the rulings. Cato has filed a brief supporting the challengers’ request that the U.S. Supreme Court take the case.

The lower state courts have ruled that quick-take deposits are “public property” until final judgment, and thus that owners of condemned properties have no interest in the accrued interest, as it were. Yet Webb’s stands for the proposition that funds deposited with a court registry are “private property,” belonging to the ultimate beneficiary of the legal action. In announcing this holding, the Supreme Court applied the “interest follows principal” rule.

Thirty-five years later, eminent-domain condemnees stand in the shoes of the Webb’s plaintiffs and should be entitled to the interest earned on funds that were, after all, deposited in the court registry for their benefit. The Florida judiciary has again unduly deferred to a state legislative scheme that violates the Fifth and Fourteenth Amendments, so the U.S. Supreme Court should again take up this issue.

Months of agitation promoting a government investigation of supposedly wrongful advocacy on the issue of climate change have begun to pay off. As Holman Jenkins notes, purportedly levelheaded Democrats and environmentalists are now jumping on the bandwagon for a probe of possible unlawful speech or non-speech by energy companies and advocacy groups they’ve backed. Perhaps the most remarkable name on that list is Hillary Clinton, who said the other day in New Hampshire, referring to Exxon, “There’s a lot of evidence that they misled people.” That’s right: Hillary Clinton, of all people, now wants to make it unlawful for those who engage in public controversy to mislead people.

The first high-profile law enforcer to bite, it seems, will be Eric Schneiderman, whose doings I’ve examined at length lately. “The New York attorney general has launched an investigation into Exxon Mobil to determine whether the country’s largest oil and gas company lied to investors about how global warming could hurt its balance sheets and also hid the risks posed by climate change from the public,” reports U.S. News. Show me the denier, as someone almost said, and I will find you the crime: “The Martin Act is a nearly empty vessel into which the AG can pour virtually any content that he wants,” as Reuters points out. More on the Martin Act here and here.

At Forbes, Daniel Fisher notes the possible origins of the legal action in an environmentalist-litigator confab in 2012 (“Climate Accountability Initiative”) in which participants speculated that getting access to the internal files of energy companies and advocacy groups could be a way to blow up the climate controversy politically. Fisher also notes that Justice Stephen Breyer, in the Nike v. Kasky case dismissed 12 years ago on other grounds, warned that it will tend to chill advocacy both truthful and otherwise by businesses if opponents can seize on disagreements on contentious public issues and run to court with complaints of consumer (or presumably securities) fraud.

Perhaps in this case chilling advocacy is the whole point. And very much related: my colleague Roger Pilon’s post last week, “Whatever Happened to the Left’s Love of Free Speech?“; Robert Samuelson (“The advocates of a probe into Exxon Mobil are essentially proposing that the company be punished for expressing its opinions.”)

[cross-posted from Overlawyered]

Washington’s football team has been called the Redskins since 1933, and that team name has been a registered trademark since 1967. Nevertheless, last year, the Patent and Trademark Office (PTO) cancelled the Redskins’ trademarks on the basis that a “substantial composite” of Native Americans found the team name “disparaging” when those trademarks were first registered. The team challenged the cancellation on the ground that it was based on the content of the marks’ expression, in violation of the First Amendment.

The federal district court in Virginia held that the First Amendment is irrelevant here because the Redskins remained free to use their name and marks without registration and, in any event, trademarks are government speech and the government can decide how it wants to speak. The Redskins have appealed that decision to the Richmond-based U.S. Court of Appeals for the Fourth Circuit—read their entertaining brief—and Cato has filed a brief supporting the team.

Although the line between core political speech and commercial speech may at times be difficult to draw, both are entitled to First Amendment protection and, in any event, trademarks are used for more than commercial transactions. Furthermore, registration offers substantial rights and benefits to trademark owners—such as the right to license and to sue for misappropriation—which the government can’t deny simply because it doesn’t like the mark. And trademarks, among other types of intellectual property, don’t constitute government speech.

The lower court relied on the Supreme Court’s recent decision in Walker v. Texas Division (the Confederate flag license-plate case), but trademarks don’t satisfy Walker’s new test for government speech: (1) the government has not traditionally used trademarks to communicate messages, (2) nor has trademark registration historically been restricted to speech with which the government agrees, (3) nor do observers understand trademarks to be the speech of the government, (4) nor does the government maintain control over trademarks upon registration. Instead, the Lanham Act—the federal trademark statute—establishes a generally available regime allowing the expression of a variety of viewpoints. Because such expression is constitutionally protected, the Lanham Act’s registration process is subject to First Amendment review, which dooms the law’s “disparagement” bar.

Indeed, the PTO’s record reveals confusion, bordering on incoherence, from the highly subjective application of disparagement standards built on shifting attitudes. For instance, the office has registered a number of trademarks involving the words “dyke” and “fag” (our brief has many more colorful examples) but also at times denied registration for designations using those words.

Moreover, the disparagement bar enshrines the heckler’s veto. As the Supreme Court said in Texas v. Johnson (the 1989 flag-burning case), “If there is a bedrock principle underlying the First Amendment, it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.” Although there are categories of speech unprotected by the First Amendment—such as defamation or incitement of violence—disparaging speech is not one of them and courts do not have “freewheeling authority to declare new categories of speech outside the scope of the First Amendment.” The Lanham Act’s disparagement bar is content-based because it cannot be “justified without reference to the content of the regulated speech.”

Even worse, the bar compels viewpoint discrimination—a particularly pernicious form of censorship—by allowing positive references to a group or idea but not (arguably) negative ones. Even if trademarks were considered purely commercial speech, which enjoys less constitutional protection under current doctrine, the disparagement bar would fail because the government has no substantial interest unrelated to the suppression of speech.

For more on the case, which is formally called Pro-Football, Inc. v. Blackhorse, see my USA Today op-ed and Federalist Society podcast.

The full text of the Trans Pacific Partnership agreement was released last Thursday.  At over 6,000 pages (by most estimates – I haven’t counted them myself!), it’s quite a challenge to digest.  It’s easy to pick out obscure technical issues and discuss them with trade experts; it’s harder to talk about the big picture significance for a mass audience.

Economist Jeffrey Sachs tried to do this in an op-ed in the Boston Globe, and I think he offered a good starting point:

The agreement, with its 30 chapters, is really four complex deals in one. The first is a free-trade deal among the signatories. That part could be signed today. Tariff rates would come down to zero; quotas would drop; trade would expand; and protectionism would be held at bay.

The second is a set of regulatory standards for trade. Most of these are useful, requiring that regulations that limit trade should be based on evidence, not on political whims or hidden protectionism. 

The third is a set of regulations governing investor rights, intellectual property, and regulations in key service sectors, including financial services, telecommunications, e-commerce, and pharmaceuticals. These chapters are a mix of the good, the bad, and the ugly. Their common denominator is that they enshrine the power of corporate capital above all other parts of society, including labor and even governments.

The fourth is a set of standards on labor and environment that purport to advance the cause of social fairness and environmental sustainability. But the agreements are thin, unenforceable, and generally unimaginative. For example, climate change is not even mentioned, much less addressed boldly and creatively. 

I would say he gets this about half right.  Some clarifications are in order.

He’s right about the free trade part. We should do the tariff/quota reduction part today (or better yet, years ago!).

He’s also right about his third category of regulations, on investor rights, intellectual property, and services.  As with most regulations, some are good, some are useless, and some are downright harmful.  The TPP regulations are no different.

As for the second category, however, when you talk about international rules that require evidence-based regulation, or prohibit hidden protectionism, that’s already taken care of at the WTO.  The WTO has a great track record of handling these cases.  It’s not clear how the TPP would add much here.

Finally, he wants stronger labor and environmental rules in the TPP, but it’s not clear to me why those subjects are suited for trade agreements.  Whatever you think about climate change, for example, if you are going to address that in a treaty, it seems appropriate to do so in a climate change treaty, not a trade treaty.  Regardless, these regulations are really just part of the general category of regulations, along with investor rights, intellectual property, and services.

Summing this up, and narrowing his four categories to two, the TPP has two major aspects to think about: the trade liberalizing part, which is good; and the regulatory part, which is pretty mixed.  The best way to evaluate the TPP over the next year – which many people are saying is how long we have until a Congressional vote – is to figure out how much liberalization is in there, and just how good/bad/ugly the regulatory aspects are, and weigh and balance the two.

Friday afternoon Rush Limbaugh took a call from a conservative teenager who wanted to know how to help his generation “realize what’s happening in our nation.” Rush offered some thoughts, beginning with this:

Liberalism is so easy.  All you have to do is see some suffering and tell everybody that you see it, and that it really bothers you. Right there, you are given great credit for having great compassion, and people will say great things about you.  All you have to do is notice it.  You don’t have to offer a solution.  If you do offer a solution, say, “The government ought to do something,” then they’ll really, really love you. Liberalism’s easy. 

That’s why a lot of people end up going there, is no resistance to it. It doesn’t take any kind of thought because it’s all based in emotion, and thinking is harder than feeling.  Thinking’s an applied process. 

That’s a good point. It is indeed easy to see a problem and say “the government ought to do something.” People don’t make enough money? Raise the minimum wage. Don’t think about what the effects of that might be. Or just increase welfare. And again, don’t think through the long-term effects. IBM is too big? Break it up, even as new competition is about to leave IBM in the dust. Part of the problem here is taking a snapshot view of the world – which at any point will be full of inefficiencies and inequalities – rather than a dynamic view. The world is constantly changing. Economic growth is a process. Things that are first bought only by the rich become cheaper and more available to the middle class and then to everyone. And centralized, compulsory “solutions” to immediate problems may impede growth, improvement, and progress.

But Rush might have mentioned that sometimes “conservatism” is easy, too. All you have to do is see a problem and demand a government program. Some people get in trouble with drugs? Ban ’em. The Middle East is in chaos? Bomb some more countries. Russia is assertive? Stand up to ’em! “It doesn’t take any kind of thought because it’s all based in emotion, and thinking is harder than feeling.  Thinking’s an applied process.” And when you think about it, you might realize that prohibition introduces all sorts of new problems, that the United States can’t control the whole world any more than it can control the American economy, that threatening war with a nuclear-armed Russia might have disastrous consequences.

Yes, thinking is harder than feeling. It’s easy to say, “The government ought to do something.” And both liberals and conservatives default too easily to such easy answers.

The monetary base is the only magnitude that the Fed directly controls. It consists of currency held by the general public (including both Federal Reserve notes and Treasury coin) and the total aggregate reserves of banks and other depositories (whether held in the form of vault cash or deposits at one of the regional Federal Reserve banks).

Some would translate this control over the base into direct Fed control over total reserves, but that is not strictly correct. Even though the Fed initially increases (or decreases) the base by increasing (or decreasing) reserves, the general public and the banks determine how much of the base is ultimately held instead in the form of currency in circulation. Thus, it would seem desirable to have the Fed report the base and its two components accurately. Yet the Fed’s reported measures of total reserves exclude significant amounts of bank vault cash. Even with changes in the Fed’s monthly releases implemented in July 2013, the problem has not been rectified. Moreover, there also remains a minor omission from the total base that, while not yet serious, could become so in the future. More important, once the Fed began paying interest on reserves in 2008, it dramatically altered the monetary relevance of its base and reserve measures.

Misreporting Total Reserves

Several different measures of total reserves exist. Both the St. Louis Fed and the Board of Governors have reported total reserves adjusted for changes in reserve requirements. Although the St. Louis Fed continues to do so, the Board of Governors discontinued its adjusted series in July 2013.[1] But these series, especially when seasonally adjusted as well, are not the raw numbers. While allegedly (but dubiously) useful for conducting monetary policy, adjustments for changes in reserve requirements grossly distort the historical record.

Only the Board of Governors in its weekly H.3 Release reports total reserves unadjusted for reserve requirements. But this series excludes any excess reserves held in the form of vault cash, and before July 2013 all required clearing balances and Fed float, and therefore underreports the total.[2] For some idea of how massive the resulting misrepresentation can be, consider December 2007. The Board of Governors reports total reserves (monthly, not seasonally adjusted, and not adjusted for changes in reserve requirements) of $42.7 billion. If you add in vault cash not covering reserve requirements, that number jumps to $60.3 billion. And when you bring in required clearing balances and float, the number rises to $72.6 billion, 70 percent greater than the Board’s estimate.[3] If the distortion were consistent across time, the Board’s reserve totals would still tell us something. But the distortion is not close to consistent across time, in part because banks used increasing amounts of vault cash in their ATMs.
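The December 2007 correction is easy to verify with back-of-the-envelope arithmetic. The sketch below simply replays the dollar figures quoted in the text (the variable names are descriptive, not official series names):

```python
# Replaying the December 2007 figures quoted above (all in billions of
# dollars); variable names are descriptive, not official series names.
board_total_reserves = 42.7        # Board's H.3 total reserves
with_surplus_vault_cash = 60.3     # + vault cash not covering requirements
with_clearing_and_float = 72.6     # + required clearing balances and float

# How far the corrected figure exceeds the Board's estimate:
overstatement = (with_clearing_and_float / board_total_reserves - 1) * 100
print(f"{overstatement:.0f} percent greater")   # about 70 percent
```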

Consequently, to arrive at an accurate series for total reserves, one has to take the Board of Governors Monetary Base (not seasonally adjusted and not adjusted for changes in reserve requirements) and subtract the currency in circulation component of M1 (not seasonally adjusted).[4] Or alternatively, one could make the same subtraction of M1 currency from what the St. Louis Fed calls the Source Base (monthly and not seasonally adjusted), which is virtually identical to the Board’s measure of the base.[5] And just to add to the potential confusion, one must not use the so-called “currency in circulation” reported in the Board’s H.3 and H.4.1 Releases.[6] That measure includes not only currency in the hands of the public but also the vault cash of banks and other depositories. Subtracting it from the monetary base would yield the same misleading measure of total reserves as that of the Board. Only the currency component of M1, reported in the Fed’s weekly H.6 Release, confines itself to currency held by the public.
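The subtraction just described can be sketched in a few lines. The series mnemonics (BOGUMNBS, CURRNS) are the ones given in the notes, but the dollar values are hypothetical placeholders chosen only so the difference matches the $72.6 billion figure cited above; they are not actual FRED data:

```python
# Accurate total reserves = (unadjusted) monetary base minus the currency
# component of M1. Mnemonics follow the notes; values (in billions of
# dollars) are hypothetical placeholders, not actual FRED data.

def total_reserves(base_by_month, m1_currency_by_month):
    """Subtract currency held by the public from the source base,
    month by month, to recover an accurate total-reserves series."""
    return {month: round(base_by_month[month] - m1_currency_by_month[month], 1)
            for month in base_by_month}

base = {"2007-12": 836.4}       # BOGUMNBS-style monetary base (hypothetical)
currency = {"2007-12": 763.8}   # CURRNS-style M1 currency (hypothetical)
print(total_reserves(base, currency))   # difference is 72.6
```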

In July of 2013 the Board made a few changes. It introduced a new measure of the monetary base in the H.3 Release at the same time that it eliminated the small clearing balances banks were required to hold and revised Regulation D to simplify the administration of reserve requirements.[7] Although the Board has calculated this new, modified version of the monetary base going back to January 1959, its differences with the old version are so minor as to be hardly noticeable.[8] The elimination of the requirement that banks hold clearing balances, however, offers a third way of calculating total reserves from mid-2013 forward. One can simply add “Surplus Vault Cash” to “Total Reserves” in Table 2 of the H.3 Release. Yet this remains probably the least accurate of the three ways because it excludes the small amount of vault cash held by depositories whose total checking accounts fall below the level subject to reserve requirements.

It goes without saying that none of the three ways of correctly determining total reserves is directly available on the St. Louis Fed’s interactive FRED website. Curiously, the St. Louis Fed considers the Board’s “narrow” definition of total reserves less than satisfactory for “modeling the role of depository institutions in the economy.”[9] As far as I can tell, it uses the broader definition that includes all vault cash, rather than the Board’s narrow definition, as the basis for its series of total reserves adjusted for reserve requirements. Yet it has nowhere reported its preferred unadjusted broad measure.[10] Figure 1 illustrates how significant the differences among these various measures of total reserves were between 1979 and 2008.

Misreporting the Monetary Base

Less serious are some peculiarities in the Fed’s reporting of the total monetary base, but they have the potential of becoming more misleading in the future. They arise because banks and other depositories are not the only institutions that can deposit reserves at the Federal Reserve Banks. The Board’s weekly H.4.1 Release divides these additional deposits into two categories: “foreign official” and “other.”[11] Foreign official deposits are balances of foreign central banks and monetary authorities, foreign governments, and other foreign official institutions. The deposits labelled “other” include balances of international and multilateral organizations such as the International Monetary Fund, the United Nations, and the World Bank, along with such government-owned agencies or government-sponsored enterprises as Fannie Mae, Freddie Mac, and the Federal Home Loan Banks. Neither of these two categories of deposits at the Fed has ever been included within measures of the monetary base.

The case for excluding foreign official deposits seems straightforward. Being held by institutions abroad, these deposits are not part of the domestic monetary base. But this does create an odd asymmetry; large amounts of U.S. currency are also held abroad, presently at least half of that in circulation, by most estimates.[12] Currency in circulation, in turn, is a large component of the reported monetary base: about 90 percent prior to the financial crisis and today, after quantitative easing and the huge increase in bank reserves, still about 30 percent. It would be nice to have two consistent estimates of the monetary base, one including all currency and all reserves, whether held domestically or abroad, and the other including only domestically held currency and reserves. But estimates of currency held abroad are quite unreliable. Fortunately, the total amount of official foreign deposits has been and remains small.[13]

None of these mitigating factors, however, holds as strongly for the category of deposits listed on the Fed balance sheet as “other.” To begin with, Fannie, Freddie, and other government-sponsored enterprises are domestic institutions. We could debate exactly where their Fed deposits belong in the monetary base. Because these institutions do not create money, one could argue that their deposits at the Fed are really only an alternative form of currency and should be counted as such in the base. On the other hand, Fed deposits allow these institutions to participate in the Federal funds market. Indeed, because these institutions do not currently earn interest on these deposits, they have become the major players keeping the Federal funds rate below the interest rate on reserves. Few banks are going to loan out their reserves at less than what the Fed is paying them. As a result, the introduction of interest on reserves in October 2008 and the resulting accumulation of reserves by banks not only caused a collapse of Federal funds lending from over $200 billion to nearly a third of that, but the Federal Home Loan Banks became the dominant lenders in this market.[14] This would suggest that their deposits should be counted in the base as total reserves.

Wherever in the base these “other” deposits should be categorized, they were as insignificant as “official foreign” deposits prior to the financial crisis. Yet since then they have risen as high as $107 billion in December 2011. (See Figure 2.) Although that amount, if counted in the base at that time, would have increased the total base by only 4.1 percent, the same $107 billion would have increased the pre-quantitative easing base by more than 10 percent.[15] There is no guarantee that a Fed exit strategy that decreases the monetary base will pari passu decrease these non-interest-earning deposits. In fact, it is likely that access of government-sponsored enterprises to Fed deposits will expand in the future.
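The two percentages in this paragraph can be checked directly. The sketch below backs the December 2011 base out of the 4.1 percent figure; the pre-crisis base of roughly $850 billion is my own assumption for illustration, not a number taken from the text:

```python
# Checking the shares quoted above (dollar amounts in billions).
other_deposits = 107.0                  # "other" deposits, Dec 2011 peak

base_dec_2011 = other_deposits / 0.041  # implied total base: ~$2.6 trillion
pre_qe_base = 850.0                     # assumed pre-crisis base (hypothetical)

print(f"share of Dec 2011 base: {other_deposits / base_dec_2011:.1%}")
print(f"share of pre-QE base:   {other_deposits / pre_qe_base:.1%}")  # > 10%
```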

Indeed, a change introduced by the Board of Governors on February 18, 2014, portends such an expansion of non-bank deposits at the Fed. The Dodd-Frank Act permits the Fed to provide financial services to what are styled Financial Market Utilities (FMU’s), and the new Financial Stability Oversight Council has so far designated eight such FMU’s. They are the Clearing House Payments Company, CLS Bank International, Chicago Mercantile Exchange, the Depository Trust Company, Fixed Income Clearing Corporation, ICE Clear Credit, National Securities Clearing Corporation, and the Options Clearing Corporation.[16] All FMU’s now lodge deposits at the Fed and may eventually earn interest on them. Some of these entities previously had Fed deposits, but before February 2014, their deposits, along with those of banks, were counted in the monetary base. Now all FMU deposits are in the “other” category and have been dropped out of the base. In other words, this represents still another omission that could in the future more seriously distort reported measures of both total reserves and the monetary base.[17]

Monetary Relevance of the Base and Reserves

Up to this point, I have focused on statistical inconsistencies in the Fed’s reported measures of both the monetary base and total reserves. But once the Fed began paying interest on reserves, it created a theoretical problem with the reported versions of these measures. Monetary economists distinguish between outside money and inside money.[18] Checking accounts and other deposits at banks qualify as inside money because they have both an asset and a liability side; they are an asset of the depositor and a liability of the bank. Redeemable for Federal Reserve notes, they therefore entail financial intermediation in which the depositor can be thought to lend cash to the bank, which then relends part or all of it. Federal Reserve notes, on the other hand, are outside money. While nominally a liability on the Fed’s balance sheet, this paper fiat money is not a genuine liability and not redeemable for anything other than an equal amount of the same. Federal Reserve notes therefore are an asset only, like gold coins in a commodity money system. Prior to the financial crisis, the monetary base consisted entirely of outside money.[19]

This changed when the Fed began paying interest on reserves. Through these interest payments, these reserves have become a genuine liability on the Fed’s balance sheet. They are just like interest-earning Treasury securities. The Fed is now in effect borrowing money from banks in order to relend it on the asset side of its balance sheet. In short, the Fed is now involved in financial intermediation, doing the same thing as Fannie Mae, Freddie Mac, and the Federal Home Loan Banks. Interest-earning reserves therefore cease to be outside money and become another form of inside money. Or to put it another way, the Fed in essence is conducting fiscal policy just like the U.S. Treasury. When the Treasury borrows money, even with short-term Treasury bills, those securities are not considered part of the monetary base. There is no good reason why Fed borrowing should be any different.[20]

The Fed’s Term Deposit Facility (TDF), created on April 30, 2010, helps highlight this logic. The TDF is a mechanism through which banks can convert their reserve deposits at the Fed (which are like Fed-provided, interest-earning checking accounts for banks) into deposits of fixed maturity at higher interest rates set by auction (making them like Fed-provided certificates of deposit for banks). The Fed so far has only tested term deposits, which peaked at $404 billion in February 2014, with maturities ranging from 14 to 84 days. But for obvious reasons, this form of Fed borrowing is quite correctly excluded from Fed measures of both total reserves and the monetary base.[21]

Not all bank reserves earn interest—only those reserves held as deposits at the Fed. A bank’s vault cash earns nothing, but vault cash currently amounts to a little less than $70 billion, about the same as total reserves before the Fed began quantitative easing.[22] Thus, at least $2.5 trillion of the post-crisis explosion of the monetary base constitutes interest-bearing inside money that in substance is government debt merely intermediated by the Fed.[23] Confining the definition of the monetary base and total reserves to only non-interest-bearing, Fed-created outside money would yield the results for the period from 2001 to mid-2015 depicted in Figures 3, 4, and 5.[24] With this adjustment, the mere $500 billion increase in what we can call the “outside base” since September 2008 represents only a slightly more rapid rate of increase than that of the base over the prior decade, and nearly all of that recent increase has been in the form of hand-held currency.[25]
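A minimal sketch of the “outside base” adjustment described in note 24: from September 2008 onward, interest-earning reserve balances are subtracted from the monetary base. The series mnemonics mentioned in the comments come from the notes; the dollar values are hypothetical placeholders, not the actual series:

```python
# "Outside base" = monetary base minus interest-earning reserve balances,
# but only from September 2008 on (when interest on reserves began). The
# notes give the series as BOMGBASE and RESBALNS; values here are
# hypothetical placeholders in billions of dollars.

def outside_base(base, reserve_balances, month, ior_start="2008-09"):
    """Return the non-interest-bearing ('outside') portion of the base."""
    if month >= ior_start:        # interest-on-reserves era
        return base - reserve_balances
    return base                   # before IOR, the whole base was outside money

# Hypothetical post-crisis month:
print(outside_base(base=4000.0, reserve_balances=2600.0, month="2015-06"))  # 1400.0
```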

No wonder that the high inflation that so many expected from quantitative easing never materialized.


[1] Richard G. Anderson and Robert H. Rasche, with Jeffrey Loesel, “A Reconstruction of the Federal Reserve Bank of St. Louis Adjusted Monetary Base and Reserves,” Federal Reserve Bank of St. Louis Review 85 (September/October 2003): 39-69. The Board of Governors adjusted series is labelled as TRARR on the St. Louis Fed “FRED” website.

[2] This monthly series is labelled TOTRESNS on the “FRED” website.

[3] Required clearing balances arose out of the Fed’s check-clearing operations, paid interest, and are explained in E. J. Stevens, “Required Clearing Balances,” Federal Reserve Bank of Cleveland Economic Review 29 (1993, Quarter 4): 2–14. Float also results from the Fed’s check clearing and is reported in the Fed’s H.4.1 Release. It requires the smallest adjustment. Before extensive electronic clearings, the time it took for checks to clear almost always exceeded the brief hold the Fed put on checks submitted for clearing. So the float would be positive, and two banks would temporarily be counting the same reserves, giving a small boost to total reserves and the monetary base. Despite being quite small, however, float usually made a bigger contribution to reserves than Fed discount loans to depositories. For instance, on July 26, 1996, the float was a mere $769 million, or only 0.17 percent of Fed assets. But on the same date, total discounts were even less: $258 million. Now with electronic clearings, the float is almost always negative. Checks clear faster than banks receive credit for them, trivially reducing total reserves and the base. On August 28, 2002, for example, the float was a negative $324 million, but still larger in absolute value than the $189 million in total discounts. With the huge increase in reserves resulting from quantitative easing, the effect of the float is so insignificant as to be hardly worth bothering about.

[4] I used the monthly series in my calculations and figures, labelled BOGUMNBS and CURRNS, respectively, at “FRED.” But one can do the same manipulation with weekly series.

[5] Labelled SBASENS at “FRED.”

[6] Labelled MBCURRCIR at “FRED.”

[7] The change was announced in the H.3 Release of June 6, 2013, and implemented in the H.3 Release of July 11, 2013. The new monetary-base series is labeled BOMGBASE at “FRED,” and the revisions of Regulation D are detailed in the Federal Register.

[8] Among the “Technical Q&As” on the H.3 Release at the Board’s website, it states that the “levels and growth rates of the two series are nearly identical,” and provides a confirming graph. But I double-checked with an Excel spreadsheet of the two series just to make sure.

[9] “St. Louis Adjusted Monetary Base Series,” Federal Reserve Bank of St. Louis Economic Research (November 18, 1996).

[10] St. Louis Adjusted Reserves are reported bi-weekly and monthly, and both seasonally adjusted and not seasonally adjusted. The monthly not seasonally adjusted series is labelled ADJRESNS at the FRED website. See also Richard G. Anderson and Robert H. Rasche, “A Revised Measure of the St. Louis Adjusted Monetary Base,” Federal Reserve Bank of St. Louis Review (March/April 1996).

[11] The H.4.1. Release lists these deposits in the table labelled “Factors Affecting Reserve Balances of Depository Institutions” and again in the table labelled “Consolidated Statement of Condition of All Federal Reserve Banks.”

[12] Ruth Judson, “Crisis and Calm: Demand for U.S. Currency at Home and Abroad from the Fall of the Berlin Wall to 2011,” Board of Governors of the Federal Reserve System, International Finance Discussion Papers, IFDP 1058 (November 2012).

[13] Although the amount of these deposits rose from the neighborhood of $100 million prior to the financial crisis to as high as $11.2 billion afterwards, they have never exceeded 0.4 percent of the total monetary base. This series is labelled as WLFOL at FRED.

[14] Gara Afonso, Alex Entz, and Eric LeSueur, “Who’s Lending in the Fed Funds Market?” Federal Reserve Bank of New York Liberty Street Economics (December 2, 2013).

[15] The FRED time series for “other” deposits is labelled WOTHLB.

[16] For details on these FMU’s, see “Designated Financial Market Utilities” at the Fed Board of Governors website.

[17] The handling of FMU’s in the Board’s Releases is described at “Technical Q&As” on the H.3 Release and “Have a Question about the H.4.1?” at the Board’s website.

[18] John G. Gurley and Edward S. Shaw, Money in a Theory of Finance (Washington: Brookings Institution, 1960), first coined the terms inside and outside money. Their distinction was between money that was issued through financial intermediation (inside), with an offsetting liability side, and money that was an asset only (outside), without an offsetting liability side. They were challenged by Boris P. Pesek and Thomas R. Saving, Money, Wealth, and Economic Theory (New York: Macmillan, 1967), who argued that the critical distinction was between interest-bearing and non-interest bearing money. But Pesek and Saving then leapt to the conclusion that much bank-created money over and above bank reserves counted as outside money. The subsequent tortuous debate was best sorted out by Friedman and Schwartz, Monetary Statistics of the United States: Estimates, Sources, Methods (New York: Columbia University Press, 1970), pp. 110-118, 128-130; who argued that the bank-created money that Pesek and Saving were implicitly counting as outside money was better thought of as reflecting the valuable charters of banks, often because of the monopoly privileges that banks then enjoyed.

[19] With the possible exception of the small amount of interest-earning required clearing balances mentioned in the first section and discontinued in July 2013.

[20] The Fiscal Theory of the Price Level implicitly denies that even currency in circulation is genuine outside money, treating it instead, because it can be used to pay taxes, as a form of inside money, as pointed out in Jeffrey Rogers Hummel, “Mises, the Regression Theorem, and Free Banking,” Liberty Matters: An Online Discussion Forum (January 2014). Here is not the place to fully address this contention.

[21] This series is reported in the Board’s H.4.1 Release and is labelled WLTDHDIA at FRED. Other forms of Fed borrowing that are also quite correctly not counted as reserves or in the monetary base are Treasury deposits (which during the financial crisis were expanded with what was called the Supplementary Financing Account, discontinued in July 2011) and reverse repurchase agreements. For details about these as well as the Term Deposit Facility, see Jeffrey Rogers Hummel, “The Federal Reserve’s Exit Strategy: Looming Inflation or Controllable Overhang,” Mercatus Research, Mercatus Center at George Mason University, September 2014.

[22] Total vault cash is now reported in the Board’s H.3 Release and is labelled TLVAULTW at FRED.

[23] A more sophisticated approach would treat interest-bearing reserves as partly both inside and outside money that should be weighted on the basis of the difference between the interest rate paid on reserves and some higher market rate. But then one would have to view the liquidity services of many other financial assets as making them partly outside money as well. Although this is an enormous, if not totally insurmountable, empirical problem, it is an approach that has been frequently suggested and is similar to what Divisia aggregates try to do with measures of the money stock weighted according to liquidity.

[24] The total outside base is calculated by taking the Board’s base series (monthly, not seasonally adjusted) available at “FRED” as BOMGBASE and, beginning in September 2008, subtracting the Board’s series on Total Reserves Maintained at Federal Reserve Banks (monthly, not seasonally adjusted), reported in the H.3 Release and available at “FRED” as RESBALNS. Currency is still the currency component of M1, reported in the H.6 Release and as CURRNS at “FRED.” Outside reserves is the difference between the total outside base and currency.

[25] Since the crisis, the growth rate of the non-interest bearing base (outside money) has risen from less than 2 percent in mid-2008 to as high as 9 percent annually. The irrelevance of interest-bearing base money for genuine monetary policy has also been noted by John A. Tatom, “U.S. Monetary Policy in Disarray,” Journal of Financial Stability 12 (2014): 47-58. One minor difference between Tatom’s analysis and mine is that his “adjusted monetary base” only subtracts excess reserves held as deposits at the Fed from the monetary base, whereas I subtract all interest-bearing reserves, whether excess or required.

[Cross-posted from]

Here is the first paragraph of an Associated Press story about the new House highway bill:

Despite years of warnings that the nation’s roads, bridges and transit systems are falling apart and will bring nightmarish congestion, the House on Thursday passed a six-year transportation bill that maintains the spending status quo.

Yet some of the government’s own data—as I cite here—shows that, rather than “falling apart,” the nation’s bridges and Interstate highways have steadily improved in quality over the past two decades.

I wish reporters would explore the data themselves, rather than just parroting what the transportation lobby groups say. I also wish they would use their imaginations a bit and realize that if bridges and highways were actually nightmarish and falling apart, then state and local governments—which own the bridges and highways—have the responsibility and full capability of fixing them themselves.

As for the “spending status quo,” that status quo has included steady increases over the past two decades. The chart shows total highway trust fund spending using Federal Highway Administration data for 1990 to 2014, and the Congressional Budget Office baseline projection for 2015 to 2021. The new House highway bill would spend this baseline amount over the next six years, while the Senate bill would spend somewhat more.

Drought is a common feature of climate; but every so often when a longer-lasting or somewhat severe drought occurs, it is not long before someone, somewhere, makes the claim that that drought was either caused or made worse by CO2-induced global warming. A simple test of this thesis can be conducted by examining the historic record of drought for the location in question. If it can be shown that similar (or greater) frequencies or magnitudes of drought have occurred in the past, prior to the modern increase in CO2, then it cannot be definitively concluded that the current drought is the product of anything other than natural climate variability.

Unfortunately, long-term historical drought records covering more than a few decades of time are lacking for most locations across the planet. As a result, scientists have sought to augment these short-term instrumental drought histories with much longer proxy records, records that will sometimes extend back in time several centuries to millennia. Such is the case in the recent study of Vance et al. (2015), who derived a 1,003-year proxy of historical drought in eastern Australia.

In recent years, concerns of a CO2-induced influence on drought in eastern Australia were magnified with the 1997-2009 occurrence of what has been called the “Big Dry” – the most persistent drought to envelop the region since the start of the 20th century. Noting that there is a scarcity of long-term drought records in the region and that “no high-resolution studies cover this era of Australian prehistory,” Vance et al. set out to produce “the first millennial-length Australian drought record.” In doing so, they utilized climate records from the Law Dome ice core in East Antarctica to reconstruct a 1,000 year record of the Interdecadal Pacific Oscillation that they then combined with an eastern Australian rainfall proxy (also derived from the Law Dome site) from which they were able to identify historic megadroughts (defined as more than 5 years of below average rainfall). The resultant record is presented in the figure below.


Top panel: Independent reconstructions of the Interdecadal Pacific Oscillation (blue = decision tree and red = piecewise linear derivation), with positive phases (>0.5 for both reconstructions) highlighted in blue banding. Bottom panel: Annual Law Dome summer sea-salt time series (grey), with 13 year Gaussian smooth (thick black) and drought periods (> 5 year duration, >0.5 for both IPO reconstructions) identified (orange banding). Source: Vance et al. (2015).

As indicated by the orange shading in the figure, eight megadroughts are noted in the proxy record, the longest of which has “no modern analog.” Lasting 39 years (1174-1212 AD), this unparalleled drought was the exclamation point on a uniquely dry period in which 80 of the 111 years between 1102 and 1212 AD were drought years. The modern “Big Dry,” by comparison, was judged by Vance et al. to be “far from an exceptional eastern Australian drought in the context of the past millennium.” As a result of this observation, the researchers conclude that water management in eastern Australia “needs to account for decadal-scale droughts being a normal feature of the hydrological cycle.” Indeed it should; and climate alarmists should take equal notice that there is no evidence to support the claim that CO2-induced global warming caused or enhanced the occurrence of the Big Dry.



Vance, T.R., Roberts, J.L., Plummer, C.T., Keim, A.S. and van Ommen, T.D. 2015. Interdecadal Pacific variability and eastern Australian megadroughts over the last millennium. Geophysical Research Letters 42: 129-137.


There was a time in America when the Left could be counted on to defend free speech. But as countless examples today demonstrate, those days are long gone. From campus speech codes to campaign finance to prosecutorial threats against climate change critics and more, the evidence is as fresh as this morning’s newspapers.

Campus assaults on free speech have been so well documented by the Foundation for Individual Rights in Education (FIRE) that they need no elaboration here. But the latest campaign finance “reform”—“until the court reverses its decision in Citizens United”—can be found championed in an op-ed in this morning’s Washington Post by such stalwarts of the Left as Yale Law School’s Bruce Ackerman and Ian Ayres. On Tuesday last, it seems, Seattle voters approved a measure that would “give” each registered voter a $100 “democracy voucher” that could be spent “for only one purpose—to support their favorite candidates for municipal office.” The city can of course “give” that $100 voucher only if it first “takes” the $100 from its taxpayers, which it will do in all the unequal ways that modern tax systems exhibit. Thus is the political speech of private individuals reduced: funds they might otherwise direct to candidates of their choice are redirected, through this public funding scheme, to candidates they may oppose.

But that inroad on free speech pales in comparison to recent attacks on what most Americans would have thought were the free speech rights of climate skeptics, the RICO-ing of whom my colleague Walter Olson has been covering—along with the machinations of New York Attorney General Eric Schneiderman. The latest from the latter is all over the papers today, the Post’s headline reading “Exxon investigated over climate change research.” The Left has already browbeaten Exxon Mobil into ending its funding for think tanks and advocacy organizations that express climate change skepticism. Now, however, it’s getting more serious, with Schneiderman issuing a subpoena that focuses, we’re told, “on whether Exxon Mobil intentionally clouded public debate about science and hid from investors the risks that climate change could pose to its business.” “Clouded?” What, a debate that is crystal clear? That of course is what the environmental establishment would like us to believe.

And circling back to the academy, so too, apparently, would one Naomi Oreskes, a professor of the history of science at Harvard University and a critic of Exxon who laments that we haven’t yet implemented a carbon tax. There are many reasons we haven’t, she tells the Post, but a significant one “is the role of Exxon Mobil and others in fomenting disinformation, undermining public support for such initiatives, and lobbying against policies that would have begun to decrease our fossil fuel dependency.” And this from a professor of the history of science, the annals of which are littered with the corpses of “settled science.” Clearly, if we don’t stop this speaking and lobbying, we could have one more corpse.

The Black Alliance for Education Options released the results of a new survey of black voters in four states on education policy. The poll found that more than six in ten blacks in Alabama, Louisiana, New Jersey, and Tennessee support school vouchers.


Source: BAEO Survey on Education Policy

The results are similar to Education Next’s 2015 survey, which found that 58 percent of blacks nationwide supported universal school vouchers and 66 percent supported vouchers for low-income families.

The survey also asked about black voters’ views on charter schools (about two-thirds support them), “parent choice” generally (three-quarters support it), and the importance of testing. However, it appears that BAEO is overinterpreting the findings on that last question, claiming:

The survey also indicated solid support among Black voters that believe educational standards such as Common Core and its related assessments is essential to holding education stakeholders responsible for student learning outcomes.

If the wording of the survey question was identical to how it appears on their website, then it says absolutely nothing about black support for Common Core. The question as it appears on their website is: “Do you think that testing is necessary to hold school accountable for student achievement?” The question doesn’t mention Common Core at all. For that matter, it doesn’t mention standardized testing specifically, nor does it explain how the testing is meant to “hold schools accountable.” Perhaps it means publishing the score results so parents will hold schools accountable. Or perhaps it means the state government will offer financial carrots or regulatory sticks. Or maybe it means whatever the survey respondent wants it to mean.

Source: BAEO Survey on Education Policy

If Acme Snack Co. asked survey respondents, “Do you like snacks that are delicious and nutritious?” and then claimed “two-thirds of Americans enjoy delicious and nutritious snacks such as Acme Snack Co. snacks,” it would be guilty of false advertising. Maybe the survey respondents really do like Acme Snacks–or Common Core–but we can’t know that from the survey. Just as some people may enjoy carrots (delicious and nutritious) but find Acme Snacks revolting, lots of parents may support some measure of testing while opposing Common Core testing for any number of reasons.

BAEO’s question on vouchers was clear: “Do you support school vouchers/scholarships?” Yes, most blacks do. But its question on testing is much less clear, and therefore so are the results. All the BAEO survey tells us is that most blacks support using some sort of testing to hold schools accountable in some undefined way. Interpreting these results as support for Common Core is irresponsible.

After filmmaker Quentin Tarantino delivered an impassioned speech at a rally denouncing as “murder” some recent police uses of force against civilians, pro-police groups called for a boycott of his films. So far, so dull. But now, according to the Hollywood Reporter, things have taken a new and remarkable turn.

In a veiled threat, the largest police union in the country says it has a “surprise” in store for Quentin Tarantino.

Jim Pasco, executive director of the Fraternal Order of Police, would not go into any detail about what is being cooked up for the Hollywood director, but he did tell THR: “We’ll be opportunistic.” 

Pasco specified that the “surprise” in question would be in addition to the standing call for a boycott. 

“Something is in the works, but the element of surprise is the most important element,” says Pasco. “Something could happen anytime between now and [the premiere]. And a lot of it is going to be driven by Tarantino, who is nothing if not predictable.

“The right time and place will come up and we’ll try to hurt him in the only way that seems to matter to him, and that’s economically,” says Pasco.

When asked if this was a threat, Pasco said no, at least not a physical threat.

Note well that last bit, which did not deny that the surprise might involve forms of on-the-job retaliation by Pasco’s members falling short of physical violence. Might it involve traffic problems at a Tarantino appearance? Asking patrons to state their business as they walk to a premiere? Simple failure to extend protection can accomplish a lot, as Padma Lakshmi discovered last year when police outside Boston failed to protect her from a vicious onslaught and tire-slashings when her crew tried to film a segment of Top Chef without a demanded union contingent. 

Like many others, I have taken positions adverse to FOP’s – opposing its call for attacks on police to be covered by the enhanced penalties of hate crime laws, for example, and criticizing the LEOBR laws that confer teacher-like tenure on errant cops. Perhaps from now on I too should worry about a “surprise” at the hands of police unionists who might, after finding my movements “predictable,” seize the “right time and place” to “try to hurt.”