Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Former Arkansas Governor Mike Huckabee launched his presidential campaign last week. Huckabee highlighted his fiscal successes as governor during his announcement. He claims that he cut taxes 94 times while governor, and he promised to bring his tax-cutting experience to Washington, D.C. Huckabee’s statements do not tell the full story. While Huckabee cut some taxes, his time in office also included a rapid increase in Arkansas state spending and multiple tax hikes. 

Huckabee took office in July 1996 after Governor Jim Guy Tucker was convicted for his involvement in the Whitewater scandal. Shortly after taking office, Huckabee signed a $70 million package of income tax cuts. It eliminated the marriage penalty, increased the standard deduction, and indexed tax brackets to inflation. The broad-based tax cut was Arkansas’s first in 20 years. Huckabee followed it with a large cut to the state’s capital gains tax. These tax cuts were popular, and they improved Arkansas’s economic climate.

Huckabee’s fiscal policies then changed direction. Huckabee used the state’s tobacco settlement money to expand Medicaid, and he supported a large bond initiative to increase spending for infrastructure. These and other spending policies came with a hefty price tag.

When Huckabee was in office during fiscal year 1997, Arkansas general fund spending was $2.6 billion, according to data from the National Association of State Budget Officers. By 2007, Huckabee’s last year in office, general fund spending had grown by 54 percent to $4 billion. Total state spending–which includes spending from federal aid and other non-general sources–grew even faster. Over the same period, it rose from $8.3 billion to $16.1 billion, an increase of 94 percent.
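For readers who want to check the arithmetic, the growth figures cited above work out as follows (a quick sketch using the NASBO numbers quoted in this post):

```python
# Arkansas spending growth over Huckabee's tenure, in billions of dollars
# (general fund and total state spending figures as cited above, from NASBO)
general_1997, general_2007 = 2.6, 4.0
total_1997, total_2007 = 8.3, 16.1

general_growth = (general_2007 / general_1997 - 1) * 100
total_growth = (total_2007 / total_1997 - 1) * 100

print(f"General fund growth, FY1997-FY2007: {general_growth:.0f}%")   # ~54%
print(f"Total spending growth, FY1997-FY2007: {total_growth:.0f}%")   # ~94%
```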

Huckabee relied upon multiple tax increases to fund this rapid spending growth. According to data from the state of Arkansas, examined by the Washington Post, net taxes increased by $505 million during Huckabee’s tenure. Huckabee supported increases in the state gasoline, cigarette, and sales taxes. He instituted a three percent personal income surtax.

Huckabee’s scores on Cato’s Fiscal Policy Report Card show his growing embrace of big government. Cato’s report card includes various measures of tax and spending restraint, and assigns governors grades on an A through F scale. Below are Huckabee’s scores:

In 2006 Huckabee tied for the worst-rated Republican governor. The authors of the report summarized Huckabee’s fiscal record: “Like many Republicans, his grades dropped the longer he stayed in office…Huckabee’s leadership has left taxpayers in Arkansas much worse off.”

If elected president, Huckabee promises not to increase taxes and to control federal spending. However, given his proclivity for raising taxes and spending while governor, his promises ring hollow.

Note: This post is part of a series on the fiscal records of governors running for president. Previous editions covered Martin O’Malley and Jeb Bush.

The Common Core War, over the last few months, has been fought on a largely new front: whether students can be forced to take state tests – in the vast majority of cases, Core-aligned tests – or whether parents and students can refuse. It is perhaps an even more fundamental question than whether the federal government may constitutionally coerce standardization and testing generally, and with Common Core, specific standards and tests. The testing battle is to a large extent about whether a child, in seeming opposition to the seminal Supreme Court ruling in Pierce v. Society of Sisters, is indeed a “mere creature of the State.”

The opt-out numbers are hard to pin down, though there is little question that some districts have seen very large percentages while others – probably the large majority nationwide – have seen few. It is also probably reasonable to conclude that the leader of the opt-out crusade has been New York State, where animosity toward the Core has been high since the state first rushed implementation and state officials, in an effort to calm things, actually inflamed them with a condescending approach to public engagement that launched weeks of recriminations. Last year the state saw an estimated 60,000 students opt out; this year the number leapt to nearly 200,000.

The root question, of course, is whether students and parents should be able to opt out without fear of punishment. And since punishment would be coming from a government institution – yes, that is what a public school is – that means without fear of punishment by the state. If children are, in part, creatures of the state – and Pierce did not say there is no legitimate state role in education – then punishment is legitimate. If, however, the public schools exist to serve fully free citizens, then punishment cannot be meted out for refusing the test; it is up to parents to freely decide whether or not their children are subjected to the tests.

So far the answer to whether students may opt out without fear of punishment has been muddled. In part this is for a good reason: federalism allows states – and within states, local control allows districts – to decide for themselves what they want their policies to be. Unfortunately, another part of the confusion lies with Washington, which has a law on the books – No Child Left Behind – that says 95 percent of students in a district must take state tests. The Obama administration, however, has issued waivers out of parts of NCLB to numerous states with various provisions, and it is unclear to whom the 95 percent requirement actually applies. Exacerbating this – and illustrating why a few clear laws beat rule by waiver, regulation, and cabinet secretaries – is that even if the 95 percent rule should technically apply, U.S. Secretary of Education Arne Duncan has mainly invoked the specter of federal force rather than stating clearly what he will do to under-95-percenters. Of course, there are likely political calculations behind this: he wants to push states and districts to force testing while being able to technically say, “Washington didn’t require anything.”

To a large extent, the opt-out conflict is no different than the seemingly endless battles over countless matters into which public schooling forces Americans. As we at CEF never get tired of saying – and politicians never get tired of ignoring – all children, families, and communities are different. They have different needs, desires, abilities, values, educational philosophies, and on and on, and no single system can possibly treat them all equally.  That is why educational freedom – connecting educational funding and decisions to individual children – is the essential reform. That said, if parents are allowed to opt their children out of government-dictated tests it would be a welcome move in the right direction. It would loosen the state’s grip on the children, at least a little bit.

Equality under the law has been a central feature of American identity since before the Declaration of Independence, and it was encapsulated in the Constitution. The Fourteenth Amendment extended that constitutional precept to actions by the states, not just the federal government.

For example, if a state government wants to use race as a factor in pursuing a certain policy, it must do so in the furtherance of a compelling reason—like preventing prison riots—and it must do so in as narrowly tailored a way as possible. This means, among other things, that race-neutral solutions must be considered and used as much as possible.

So if a state were to, say, set race-based quotas for its construction contracts and claim that no race-neutral alternatives will suffice—without showing why—that would fall far short of the high bar our laws set for race-conscious government action.

Yet that is precisely what Montana has done. Montana’s Disadvantaged Business Enterprise (“DBE”) program implements a federal program aimed at remedying past discrimination against minority and women contractors by granting competitive benefits to those groups. While there may be a valid government interest in remedying past discrimination, in its recent changes to the program, Montana blew through strict constitutional requirements and based its broad use of racial preferences on a single study that involved weak anecdotal evidence—a study that recommended more race-neutral alternatives, not fewer.

Even worse, Montana’s federal district court upheld the new provisions. Although Montana did not show which race-neutral alternatives were considered, tried, or rejected as insufficiently addressing past discriminatory practices, the court upheld the DBE’s grant of benefits to groups that were not shown to have ever been discriminated against. The contracting company that brought the suit has appealed the case to the U.S. Court of Appeals for the Ninth Circuit.

Cato has joined the Pacific Legal Foundation and Center for Equal Opportunity in filing a brief supporting that appeal. We argue that Montana doesn’t meet the high standard of narrow tailoring in its approach to the DBE program because it (1) failed to establish that race-neutral measures were insufficient, (2) failed to seriously consider race-neutral alternatives, and (3) extended benefits to groups who never even suffered past discrimination. We point out that Montana also failed to adequately establish the very existence of the discrimination that its program purportedly intends to remedy.

By cutting corners and paying lip service to race-neutral solutions, Montana and the lower court have each done a disservice to the hard-won principle of equality under the law. We urge the Ninth Circuit to correct those mistakes when it takes up Mountain West Holding Co. v. Montana this summer.

On Friday, May 8, the public comment period closed for the new 2015 Dietary Guidelines issued by the U.S. Department of Agriculture (USDA) and the Department of Health and Human Services (HHS). In a nutshell, the new guidelines recommend eating a diet richer in plant-based foods and leaner in animal-based products. One of the considerations the USDA/HHS used in the Scientific Report rationalizing these new dietary guidelines was that such diets are

“associated with more favorable environmental outcomes (lower greenhouse gas emissions and more favorable land, water, and energy use) than are current U.S. dietary patterns.” [emphasis added]

Throughout the Scientific Report whenever greenhouse gases are mentioned, a negative connotation is attached and food choices are praised if they lead to reduced emissions.

This is misleading on two fronts. First, the dominant greenhouse gas emitted by human activities is carbon dioxide, a plant fertilizer whose increasing atmospheric concentration has made plants more productive, raising total crop yields by some 10-15 percent to date. The USDA/HHS is at odds with itself in casting a positive light on actions that are geared toward lessening a beneficial outcome for plants, while at the same time espousing a more plant-based diet.

And second, the impact that food choices have on greenhouse gas emissions is vanishingly small—especially when cast in terms of climate change. And yet it is in this context that the discussion of GHGs is included in the Scientific Report. The USDA/HHS elevates the import of GHG emissions as a consideration in dietary choice far and above the level of its actual impact.

In our Comment to the USDA/HHS, we attempted to set them straight on these issues.

Our full Comment is available here, but for those looking for a synopsis, here is the abstract:

There are really only two reasons to discuss greenhouse gas emissions (primarily carbon dioxide) in the context of dietary guidelines in the U.S., and yet the USDA and HHS did neither in their Scientific Report of the 2015 Dietary Guidelines Advisory Committee (DGAC).

The first reason would be to discuss how the rising atmospheric concentration of CO2—a result primarily of the burning of fossil fuels to produce energy—is a growing benefit to plant life. This is an appropriate discussion in a dietary context as atmospheric CO2 is a fertilizer that promotes healthier, more productive plants, including crops used directly as food for humans or indirectly as animal feed. It has been estimated that from the atmospheric CO2 enrichment to date, total crop production has increased by 10-15 percent. This is a positive and beneficial outcome and one that most certainly should be included in any discussion of the role of greenhouse gas emissions in diet and nutrition—but it is inexplicably lacking from such discussion in the DGAC report.

The second reason to discuss greenhouse gas emissions in a diet and nutrition report would be to dispel the notion that through your choice of food you can “do something” about climate change. In this context, it would be appropriate to provide a quantitative example of how the dietary changes recommended by the DGAC would potentially impact projections of the future course of the climate. Again, the DGAC failed to do this. We help remedy this oversight with a straightforward calculation of averted global warming that assumes all Americans cut meat out of their diet and become vegetarians—an action that, according to the studies cited by the DGAC, would have the maximum possible impact on reducing greenhouse gas emissions and thus mitigating future climate change. Even assuming such an unlikely occurrence, the amount of global warming that would be averted works out to 0.01°C (one hundredth of a degree) by the end of the 21st century. Such an inconsequential outcome has no tangible implications. This should be expressed by the DGAC, and mention of making dietary changes in the name of climate change must be summarily deleted.

We recommend that, if the DGAC insists on including a discussion of greenhouse gas emissions (and thus climate change) in its 2015 Dietary Guidelines, the current discussion be supplemented, or preferably replaced, with a more accurate and applicable one—one that indicates that carbon dioxide has widespread and near-universal benefits for the supply of food we eat, and that attempting to limit future climate change through dietary choice is misguided and unproductive. These changes must be made prior to the issuance of the final guidelines.
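The order of magnitude of that 0.01°C figure can be reproduced with a back-of-envelope calculation. The inputs below are illustrative round numbers of our choosing, not the figures used in the Comment itself:

```python
# Back-of-envelope estimate of warming averted if all Americans became
# vegetarians. All inputs are assumed, illustrative values.
avoided_emissions = 0.3      # GtCO2-equivalent avoided per year (assumed)
years = 85                   # roughly 2015 through 2100
warming_per_gtco2 = 0.00045  # deg C per GtCO2 (assumed; ~0.45 C per 1000 Gt)

averted_warming = avoided_emissions * years * warming_per_gtco2
print(f"Warming averted by 2100: ~{averted_warming:.2f} C")  # ~0.01 C
```

Even under these generous assumptions, the result is on the order of a hundredth of a degree.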

We can only guess what sort of impact our Comment will have, but we can at least say we tried.

Free speech can get awfully expensive when billionaires are involved. Just ask the International Crisis Group, a charity that seeks to prevent war and related atrocities by monitoring conditions in the world’s most dangerous regions.

In 2003, ICG published a report on the political and social climate of Serbia following the assassination of Zoran Đinđić, the country’s first democratically elected prime minister after the fall of Slobodan Milošević. One of the issues noted there was the concern of “average Serbs” that powerful businesses were still benefiting from corrupt regulatory arrangements that dated back to the Milošević regime.

One of several oligarchs mentioned was Milan Jankovic, who also goes by the name Philip Zepter. With an estimated net worth of $5 billion, Jankovic is widely believed to be the richest Serb (and one of the 300 wealthiest men in the world). His holdings include Zepter International, which sells billions of dollars of cookware each year and has more than 130,000 employees.

One might think that a man responsible for running a vast business empire would have better things to do than sue a charity, but one would be wrong. For the last decade, Jankovic has hounded ICG, relentlessly pressing a defamation suit, first in Europe and now in the United States. After 10 years of litigation, the case finally comes down to a single question: Is Milan Jankovic a public figure?

The Supreme Court has long held that the First Amendment’s protection of speech (and political criticism) requires libel plaintiffs who are public figures—like politicians and celebrities—to show that potentially defamatory statements were not only false but also published with “actual malice.” Under this standard, the defendant must have actually known that the statements were false; a negligent misstatement or the innocent repetition of another’s falsehood isn’t enough.

In an amicus brief filed in the U.S. Court of Appeals for the D.C. Circuit, Cato, along with a diverse group of organizations including the Brookings Institution, Council on Foreign Relations, and PEN American Center, argues that while Jankovic is not a politician or other government official, he should still be treated as a public figure for the purpose of this case.

Under the “limited public figure” doctrine, the Supreme Court holds that private citizens become public figures when they “thrust themselves to the forefront of particular public controversies in order to influence the resolution of the issues involved.” As we argue, Jankovic is in his own words one of Serbia’s most powerful and influential citizens, whose vast wealth and political connections give him a near-unparalleled ability to shape the outcome of public debates. What’s more, Jankovic has played an active role in Serbian politics. He describes himself as one of the men responsible for overthrowing Milošević, and he once hired American lobbyists to represent the Serbian government in Washington. He’s even rumored to have used his own money to fund the government during a budget crisis!

In short, Jankovic is the very definition of a public figure—and criticism of public figures, whether they be elected officials like Frank Underwood or shadowy powerbrokers like Raymond Tusk, must be privileged. Unless the weakest are free to criticize the most powerful, democracy is nothing but a house of cards.

The D.C. Circuit will hear argument in Jankovic v. International Crisis Group later this spring or summer.

One consequence of the financial crisis of 2008-09 has been renewed interest in the merits of contingent convertible debt as a mechanism for equity bail-ins at moments of acute financial distress. Should a financial institution fail, its contingent bonds are automatically converted into equity shares. History suggests that convertible debt can help to preserve financial stability by limiting the spillover effects of individual financial institution failures.

A particularly revealing historical illustration of this advantage of contingent debt comes from the Scottish free banking era. From 1716 to 1845, the Scottish financial system functioned with no official central bank or lender of last resort, no public (or private) monopoly on currency issuance, no legal reserve or capital requirements, and no formal limits on bank size, at a time when Scotland’s was a classic emerging economy with large speculative capital flows, a fixed exchange rate, and substantial external debt. Despite this, Scotland’s banking sector survived many major shocks, including two severe balance of payments crises arising from political disturbances during the Seven Years’ War.

The stability of the Scottish banking system depended in part on the use it made of voluntary contingent liability arrangements. Until the practice was prohibited in 1765, some Scottish banks included an “optional clause” on their larger-denomination notes. The clause allowed the banks’ directors to convert the notes into short-term, interest-bearing bonds. Although the clause was seldom invoked, it was successfully employed as a means for preventing large-scale exchange rate speculators from draining the Scottish banks’ specie reserves and remitting them to London during war-related balance of payments crises–that is, as a private and voluntary alternative to government-imposed capital controls.

Contingent debt also helped to make Scottish bank failures less costly and disruptive. If an unlimited liability Scottish bank failed, its shorter-term creditors were again sometimes converted into bondholders, while its shareholders were liable for its debts to the full extent of their personal wealth. Although the Scottish system lacked a lender of last resort, the unlimited liability of shareholders in bankrupt Scottish banks served as a substitute, with sequestration of shareholders’ personal estates serving to “bail them in” beyond their subscribed capital. The issuance of tradeable bonds to short-term creditors, secured by mortgages to shareholders’ estates, served in turn to limit bank counter-parties’ exposure to losses, keeping credit flowing despite adverse shocks.

A particularly fascinating illustration of how such devices worked came with the spectacular collapse in June 1772 of the large Scottish banking firm of Douglas, Heron & Co., better known as the Ayr (or Air) Bank, after the parish where its head office was located. The Ayr collapsed when the failure of a London bond dealer in Scottish bonds caused its creditors to panic. The creditors doubted that the bank could meet liabilities that, thanks to its reckless lending, had ballooned to almost £1.3 million. The disruption of Scottish credit ended quickly, however, when the Ayr’s partners resorted to a £500,000 bond issue, secured by £3,000,000 in mortgages upon their often vast personal estates—including several dukedoms. By this means the Ayr Bank managed to satisfy creditors, at 5% interest, as the Ayr’s assets, together with those of its partners, were gradually liquidated. In modern parlance, the Ayr Bank had been transformed into a “bad bank,” whose sole function was to gradually work off its assets and repay creditors while the immense landed wealth of its proprietors’ personal estates provided a financial backstop. Creditors were thus temporarily satisfied with fully secured, negotiable bonds, which were eventually redeemed in full, with interest.

We are unlikely today to witness a return to unlimited liability for financial institution shareholders. The extensive and effective use of contingent liability contracts during the Scottish free banking episode nevertheless offers important evidence concerning private market devices for limiting the disruptive consequences of financial-market crises. When compared to the contemporary practice of public socialization of loss through financial bail-outs, such private market alternatives appear to deserve serious consideration. Most importantly, perhaps, by encouraging closer monitoring of financial institutions by contingently liable creditors and equity holders, these private alternatives appear, in the Scottish case at least, not only to have made crises less severe, but also to have made them far less common.

This post is based on Tyler Goodspeed’s doctoral dissertation, a revised version of which is under consideration at Harvard University Press under the title Legislating Instability: Adam Smith, Free Banking, and the Financial Crisis of 1772.

My op-ed today at The Federalist discusses exciting developments in Canada and Britain regarding personal savings. Both nations have implemented universal savings vehicles of the type I proposed with Ernest Christian back in 2002. The vehicles have been a roaring success in Canada and Britain, and both countries have recently expanded them.

In Canada, the government’s new budget increased the annual contribution limit on Tax-Free Savings Accounts (TFSAs) from $5,500 to $10,000. In Britain, the annual contribution limit on Individual Savings Accounts (ISAs) was recently increased to 15,240 pounds (about $23,000). TFSAs and ISAs are impressive reforms—they are pro-growth, pro-family, and pro-freedom.

America should create a version of these accounts, which Christian and I dubbed Universal Savings Accounts (USAs). As with Roth IRAs, individuals would contribute to USAs with after-tax income, and then earnings and withdrawals would be tax-free. With USAs, withdrawals could be made at any time for any reason.

USAs, TFSAs, and ISAs adopt the principle that saving for all reasons is important, not just reasons chosen by the government. When people can use such accounts for all types of saving and for any length of time, it increases simplicity, flexibility, and liquidity.

In the United States, the government chooses which savings to favor, with the result that we have a mess of separate accounts with different rules for retirement, health care, and education. Everyone agrees that Americans don’t save enough, and one reason is the complexity of savings accounts. The creation of large accounts for all types of saving would simplify personal financial planning and encourage more saving.

There are differences between the Canadian and British accounts. While the annual contribution limit is lower for TFSAs than ISAs, unused contribution amounts can be carried forward under the TFSA, but not the ISA. Also, the TFSA is simpler because it is a single type of account. By contrast, the Brits created unneeded complexity by having separate “cash” and “stocks and shares” versions of ISAs.

Dividends, interest, and capital gains earned within TFSAs and ISAs are completely tax-free. Some U.K. news articles say that higher-earners may face a 10 percent dividend tax on shares held within ISAs. That is not correct, as Richard Teather confirmed to me. The U.K. has a complicated system for non-ISA dividends, which involves the use of a 10 percent dividend credit. That seems to have confused some reporters about dividends within ISAs.

If legislation to enact USAs moves ahead in America, we might expect complaints that such accounts would only benefit high earners. Such complaints would be both short-sighted and incorrect. In this new report, HM Revenue and Customs data show that ISAs have broad-based appeal in Britain. The columns in the chart below show that 13 million of the 23 million ISA account holders earn less than 20,000 pounds (about $30,000) a year. That high level of use by moderate-income individuals is great news.


The red line shows that the average value of accounts rises with income. That is not surprising given that people with higher incomes do more saving, which, by the way, is good for the overall economy. But note that the relative level of holdings is higher for people nearer the bottom. For example, earners in the 10,000-19,999 income range hold about 18,000 pounds of assets in their ISAs, so the average holding is about as high as annual income. But for higher earners, average account holdings are only a fraction of annual income.
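The holdings-to-income comparison in the paragraph above can be made concrete with a small sketch. The moderate-income figures come from the chart discussed here; the higher-earner holding is a hypothetical value for illustration:

```python
# Average ISA holdings relative to annual income, in pounds.
# The first entry reflects the figures discussed above; the second
# uses a hypothetical holding for a higher earner.
groups = {
    "10,000-19,999 earners": (15_000, 18_000),   # (midpoint income, holdings)
    "100,000 earner (hypothetical)": (100_000, 30_000),
}
for group, (income, holdings) in groups.items():
    print(f"{group}: holdings are {holdings / income:.0%} of income")
```

The point is simply that, relative to income, moderate earners use the accounts at least as intensively as high earners do.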

In sum, policymakers in the United States have put too much emphasis on giving certain groups narrow tax breaks. USAs would be a better policy approach because they would help all Americans help themselves through their own thrift.

For more on universal savings accounts, see my op-ed with Amity Shlaes.  

Interested in how to advance economic growth? Join the Cato Institute’s Center for Monetary and Financial Alternatives in New York on June 2nd for a day examining the current state of U.S. capital markets regulation at Capital Unbound: The Cato Summit on Financial Regulation.

We’ve assembled an impressive list of distinguished speakers to discuss efficient capital markets and offer proposals to unleash a new engine of American economic growth.

Our lineup includes such notables as Commissioner of the U.S. Commodity Futures Trading Commission J. Christopher Giancarlo, Commissioner of the U.S. Securities and Exchange Commission Michael Piwowar, and our very own CMFA Director George Selgin.

The speakers will explore a wide variety of topics, including alternative vehicles for small business capital, the failure of mathematical modeling, and alternative solutions to monetary and financial instability.

Click here for the full schedule and to register for the event. We hope to see you in New York on June 2nd!

The ceasefire in eastern Ukraine is under strain as Kiev presses the West for more financial and military aid. Americans’ sympathies should go to both Ukrainians and Russians suffering in Vladimir Putin’s deadly geopolitical games, but Washington should stay out of the battle.

Putin obviously bears immediate responsibility for the conflict. However, Washington and Brussels consistently disregarded Russian security interests.

That still didn’t justify Putin’s actions, and the results have been a horror for many Ukrainians, though Kiev’s military and nationalist militias have contributed to the unnecessary carnage. However, Moscow views the war less as a matter of expanding Russia’s “empire” than of protecting Russia from America’s expanding “empire.”

The U.S. should not intervene and treat Moscow as an adversary. To the contrary, Washington should stay out of the conflict and maintain a passable relationship with Russia.

After all, the latter, with a substantial nuclear arsenal, is the one power capable of annihilating America. Moscow also matters at the United Nations and in policy toward Afghanistan, Iran, North Korea, Syria, and terrorism.

Moscow’s behavior in Ukraine, though atrocious, poses no threat to America. Some emotional Ukrainian expatriates compare Putin to Hitler, but Russia isn’t a reincarnation of the Soviet Union, let alone Nazi Germany. Moscow is a declining, not rising power.

Ukraine obviously matters more to Europe than America. But Europe has a greater GDP and population than America (and much larger advantages over Russia). Yet almost all European states continue to disarm. No one is prepared to fight for Ukraine.

There also is a humanitarian call for action, but Ukraine ranks below many conflicts elsewhere. Some Ukrainians point out that Kiev gave up its nuclear weapons, leftovers from the Soviet arsenal, in return for international guarantees.  But Washington never promised to act militarily.

Anyway, the allies have no cost-effective way to force Moscow to back down. Russia, a major power with nuclear weapons and a deep sense of grievance, is certain to prove intractable and to respond with great force.

Of course, the U.S. and European militaries are more powerful than Russia’s armed forces. However, the latter possesses the great equalizer of nuclear weapons. Moreover, with far more at stake, the Kremlin will bear greater costs and take greater risks.

Kiev wants additional military aid. But Moscow likely would respond in kind, just as it intervened more directly last year when Ukrainian forces began winning on the battlefield. The stakes for Moscow are too high to yield.

Arming Kiev would put U.S. credibility at issue. If greater American efforts only led to higher Ukrainian losses, pressure would build for additional weapons and training, and perhaps much more, including airstrikes and ground personnel.

Ian Brzezinski of the Atlantic Council recently urged Congress to authorize NATO’s Supreme Allied commander “to deploy in real time against provocative Russian military operations,” that is, offer combat and start a war. Yet no policymaker of note in the West is prepared for war over Ukrainian separatism.

Finally, ramping up sanctions on banking and energy wouldn’t likely change Moscow’s behavior. There’s little European support for such a course. Putin could respond by expanding economic controls, political repression, and foreign adventurism.

Nor is a domestic crisis likely to yield a liberal, pro-Western government. Putin actually appears to be a pragmatic nationalist compared to more radical forces.

The best outcome would be a negotiated settlement recognizing Ukraine as nominally whole while according the Donbas extensive autonomy and guaranteeing no NATO membership or other Western-oriented military relationship for Ukraine.

Ukrainians insist that these decisions should be up to them. Kiev should set its own policy, but then bear the cost of doing so. Washington and Brussels should not support permanent confrontation and potential war with Moscow.

As I argue on Forbes online: “Hopefully the tattered ceasefire in the Donbas will hold and both sides will accept a compromise solution. In any case, the U.S. should keep its arms and troops home. Ukraine is not America’s fight.”

To capitalism’s detractors, Nike symbolizes the Dickensian horrors of trade and globalization – a world ripe for mass exploitation of workers and the environment for the impious purpose of padding the bottom line. They are offended by President Obama’s selection of Nike headquarters as the setting for his speech last week, in which he touted the benefits of the emerging Trans-Pacific Partnership agreement. But Nike exemplifies the redeeming virtues of globalization and illustrates how self-interested capitalism satisfies popular demands – including, even, the demands of its detractors.

Fealty to the reviled bottom line incentivizes companies like Nike to deliver, in a sustainable manner, what those genuinely concerned about development claim to want. U.S. and other Western investments in developing-country manufacturing and assembly operations tend to raise local labor, environmental, and product safety standards. Western companies usually offer higher wages than the local average to attract the best workers, which can reduce the total cost of labor through higher productivity and lower employee turnover. Western companies often use production technologies and techniques that meet higher standards and bring best practices that are emulated by local firms, leading to improvements in working conditions, environmental outcomes, and product safety.

Perhaps most significantly, companies like Nike are understandably protective of their brands, which are usually their most valuable assets. In an age when people increasingly demand social accountability as an attribute of the products and services they consume, mere allegations – let alone confirmed instances – of labor abuses, safety violations, tainted products, environmental degradation, and other objectionable practices can quickly degrade or destroy a brand. Western brands have every incentive to find scrupulous supply chain partners and even to submit to third-party verification of all sorts of practices in developing countries because the verdict of the marketplace can be swift and unambiguous.

Nike remembers the boycotts and the profit losses it endured on account of global reactions to its association with “sweatshop” working conditions in the past. Mattel’s bottom line took a beating when some of its toys manufactured in certain Chinese factories were found to contain dangerous levels of lead paint. There have been numerous examples of lax oversight and wanting conditions, but increasingly they are becoming the exception and not the rule.

Obviously, most Americans would find developing country factory conditions and practices to be, on average, inferior to those in the United States. But the proper comparison is not between wages and conditions in a factory in Ho Chi Minh City and Akron, Ohio or between Akron in 2015 and Akron in 1915. Trade and globalization scolds who would hamper investment flows to developing countries by demanding that poor countries price themselves out of global supply chain networks by adopting rich-country standards should stop and ponder the conditions that would prevail in those locations without Western investment because that’s where their demands ultimately lead.

Even New York Times columnist Nicholas Kristof – an icon of the Left – has argued that factory work offers a step up the ladder for billions of impoverished people around the world.  His stories about the limited options for subsisting among Cambodian women before the arrival of apparel factories, which included picking through garbage dumps, backbreaking agricultural work, and prostitution, remind us that development is a process and not one that is prone to use of magic wands. What employment options would exist in the absence of Western investment? How much accountability would there be if locally-owned factories were the only choices? Without Western investment, there would be much less opportunity and much less scrutiny of labor and environmental practices.

Globalization has brought greater accountability by assigning globally recognizable brand names to otherwise anonymous, small-scale production and assembly operations. Brands have the most to lose from the discovery of any unscrupulous practices, so the incentives are aligned with the goals of development. An important lesson of capitalism and markets is that corporate behavior that meets with the disapproval of consumers gets punished and corrected.

Unfortunately, a lesson that too many on the Left fail to heed is that capitalism and trade are making life much better for people around the world. Calling globalization a “race to the bottom” may make for a hip bumper sticker, but it has no basis in reality.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

Two papers were announced this week that sought to examine the sources of bias in the scientific literature. They could not be more starkly opposed.

First off is a doozy of a new paper by Stephan Lewandowsky, Naomi Oreskes and colleagues that complains that skeptical viewpoints are disproportionately influencing the science of climate change. Recall that Lewandowsky and Oreskes are quixotic climate change denial-slayers—conspiracy theorists of somewhat ill repute.

According to a story in Science Daily (the Lewandowsky et al. paper was not available at the time of this writing), Lewandowsky and Oreskes argue that:

Climate change denial in public discourse may encourage climate scientists to over-emphasize scientific uncertainty and is also affecting how they themselves speak – and perhaps even think – about their own research.

Lewandowsky and Oreskes fret:

The idea that ‘global warming has stopped’ has been promoted in contrarian blogs and media articles for many years, and ultimately the idea of a ‘pause’ or ‘hiatus’ has become ensconced in the scientific literature, including in the latest assessment report of the Intergovernmental Panel on Climate Change (IPCC).

The Science Daily article continues:

Recent warming has been slower than the long term trend, but this fluctuation differs little from past fluctuations in warming rate, including past periods of more rapid than average warming. Crucially, on previous occasions when decadal warming was particularly rapid, the scientific community did not give short-term climate variability the attention it has now received, when decadal warming was slower. During earlier rapid warming there was no additional research effort directed at explaining ‘catastrophic’ warming. By contrast, the recent modest decrease in the rate of warming has elicited numerous articles and special issues of leading journals.

This asymmetry in response to fluctuations in the decadal warming trend likely reflects what the study’s authors call the ‘seepage’ of contrarian claims into scientific work.

And according to Lewandowsky, this is a problem because:

“It seems reasonable to conclude that the pressure of climate contrarians has contributed, at least to some degree, to scientists re-examining their own theory, data and models, even though all of them permit – indeed, expect – changes in the rate of warming over any arbitrarily chosen period.”

So why might scientists be affected by contrarian public discourse? The study argues that three recognised psychological mechanisms are at work: ‘stereotype threat’, ‘pluralistic ignorance’ and the ‘third-person effect’.

‘Stereotype threat’ refers to the emotional and behavioural responses when a person is reminded of an adverse stereotype against a group to which they belong. Thus, when scientists are stereotyped as ‘alarmists’, a predicted response would be for them to try to avoid seeming alarmist by downplaying the degree of threat. Several studies have indeed shown that scientists tend to avoid highlighting risks, lest they be seen as ‘alarmist’.

‘Pluralistic ignorance’ describes the phenomenon which arises when a minority opinion is given disproportionate prominence in public debate, resulting in the majority of people incorrectly assuming their opinion is marginalised. Thus, a public discourse that asserts that the IPCC has exaggerated the threat of climate change may cause scientists who disagree to think their views are in the minority, and they may therefore feel inhibited from speaking out in public.

Research shows that people generally believe that persuasive communications exert a stronger effect on others than on themselves: this is known as the ‘third-person effect’. However, in actual fact, people tend to be more affected by persuasive messages than they think. This suggests the scientific community may be susceptible to arguments against climate change even when they know them to be false.

We humbly assert that Lewandowsky, Oreskes, and colleagues have this completely backwards.

When global warming was occurring faster than climate models expected during the 1990s, there was little effort by the mainstream climate science community to look into why, despite plenty of skeptic voices (such as our own) pointing to the influence of natural variability.  Instead, headlines proclaimed “Global warming worse than expected,” which fueled the human-caused climate change hysteria (favored by the 1990s White House) and helped build the push for calls to regulate greenhouse gas emissions from fossil fuels.  But since the late 1990s, there has been no statistically significant warming trend in the highly-cited HadCRUT4 temperature record, and both the RSS and UAH satellite records are now in their 21st consecutive year without a significant trend.  This behavior contrasted with, and called into question, the veracity of climate model projections. And it was these projections upon which rested the case for a dangerous human influence on the climate. Again, skeptic voices were raised in objection to the mainstream view of climate change and the need for government intervention. But this time, the skeptic voices were accompanied by data that clearly showed that rather than “worse than expected,” climate change was actually proceeding at a quite modest pace.

It was only then, with the threat of losing support for actions to mitigate climate change—actions that a top U.N. climate official, Christiana Figueres, described as an effort “to intentionally transform the economic development model, for the first time in human history”—that the mainstream climate community started to pay attention and began investigating the “hiatus” or “pause”—the words so despised by Lewandowsky and Oreskes.

Through these research efforts, we have learned a lot about the role of natural variability in the broader climate system and how such variability impacts projections of human-caused climate change (such as through a better understanding of the equilibrium climate sensitivity—how much warming results from a doubling of the atmospheric carbon dioxide concentration).

In other words, science has been moved forward, propelled by folks who didn’t take the mainstream climate science at face value, and instead questioned it—i.e., Lewandowsky’s and Oreskes’ “deniers.”

The outcome of all of this is, in fact, the opposite of what Lewandowsky and Oreskes assert has occurred.  Rather than “skeptic” ideas “seeping” into science and leading to a false narrative, skeptic ideas instead have spurred new research and therefore new knowledge. Such was not the case when skeptics were being shut out. The only thing different now vs. 20 years ago is that this time around, the existence of a profoundly inconvenient truth (a “hiatus” in global warming) gave public credence to the skeptics and forced the scientific consensus-keepers to take them seriously. Incontrovertible evidence that threatened to tear down the meme of climate alarmism clearly required some sort of response.

Science is biased not by the inclusion of skeptical voices, but rather the exclusion of them.

In fact, this week, we announced the framework for an investigation into the existence of such bias.

We teamed with Dr. David Wojick to produce a Cato Working Paper titled “Is the Government Buying Science or Support? A Framework Analysis of Federal Funding-induced Biases,” in which we describe:

The purpose of this report is to provide a framework for doing research on the problem of bias in science, especially bias induced by Federal funding of research. In recent years the issue of bias in science has come under increasing scrutiny, including within the scientific community. Much of this scrutiny is focused on the potential for bias induced by the commercial funding of research. However, relatively little attention has been given to the potential role of Federal funding in fostering bias. The research question is clear: does biased funding skew research in a preferred direction, one that supports an agency mission, policy or paradigm?

An interested reader may want to review the fifteen bias-inducing scientific practices that we identify and compare them with the “three recognised psychological mechanisms” that Lewandowsky and Oreskes assert are at work to see which seem to make the most sense.

Essentially, our project seeks to determine if the dog is wagging the tail. Lewandowsky and Oreskes propose the tail is wagging the dog.

Hopefully, in the not too distant future, we’ll be able to report back what we find in our investigations. We’ll be surprised if we find that exclusionary practices drive science forward more efficiently than inclusive ones!


Lewandowsky, S., N. Oreskes, J. S. Risbey, B. R. Newell and M. Smithson, 2015. Climate change denial and its effect on the scientific community. Global Environmental Change, (in press) 

Watching Robert Reich’s new video in which he endorses raising the minimum wage by $7.75 per hour – to $15 per hour – is painful.  It hurts to encounter such rapid-fire economic ignorance, even if the barrage lasts for only two minutes. 

Perhaps the most remarkable flaw in this video is Reich’s manner of addressing the bedrock economic objection to the minimum wage – namely, that the minimum wage prices some low-skilled workers out of jobs.  Ignoring supply-and-demand analysis (which depicts the correct common-sense understanding that the higher the minimum wage, the lower is the quantity of unskilled workers that firms can profitably employ), Reich asserts that a higher minimum wage enables workers to spend more money on consumer goods which, in turn, prompts employers to hire more workers.  Reich apparently believes that his ability to describe and draw such a “virtuous circle” of increased spending and hiring is reason enough to dismiss the concerns of “scare-mongers” (his term) who worry that raising the price of unskilled labor makes such labor less attractive to employers.

Ignore (as Reich does) that any additional amounts paid in total to workers mean lower profits for firms or higher prices paid by consumers – and, thus, less spending elsewhere in the economy by people other than the higher-paid workers.

Ignore (as Reich does) the extraordinarily low probability that workers who are paid a higher minimum wage will spend all of their additional earnings on goods and services produced by minimum-wage workers. 

Ignore (as Reich does) the impossibility of making people richer simply by having them circulate amongst themselves a larger quantity of money.  (If Reich is correct that raising the minimum wage by $7.75 per hour will do nothing but enrich all low-wage workers to the tune of $7.75 per hour because workers will spend all of their additional earnings in ways that make it profitable for their employers to pay them an additional $7.75 per hour, then it can legitimately be asked: Why not raise the minimum wage to $150 per hour?  If higher minimum wages are fully returned to employers in the form of higher spending by workers as Reich theorizes, then there is no obvious limit to the amount by which government can hike the minimum wage before risking an increase in unemployment.)

Focus instead on Reich’s apparent complete ignorance of the important concept of the elasticity of demand for labor.  This concept refers to the responsiveness of employers to changes in wage rates.  It’s true that if employers’ demand for unskilled workers is “inelastic,” then a higher minimum wage would indeed put more money into the pockets of unskilled workers as a group.  The increased pay of workers who keep their jobs more than offsets the lower pay of workers who lose their jobs.  Workers as a group could then spend more in total.  But if employers’ demand for unskilled workers is “elastic,” then raising the minimum wage reduces, rather than increases, the amount of money in the pockets of unskilled workers as a group.  When the demand for labor is elastic, the higher pay of those workers fortunate enough to keep their jobs is more than offset by the lower pay of workers who lose their jobs.  So total spending by minimum-wage workers would likely fall, not rise.

By completely ignoring elasticity, Reich assumes his conclusion.  That is, he simply assumes that raising the minimum wage raises the total pay of unskilled workers (and, thereby, raises the total spending of such workers).  Yet whether or not raising the minimum wage has this effect is among the core issues in the debate over the merits of minimum-wage legislation.  Even if (contrary to fact) increased spending by unskilled workers were sufficient to bootstrap up the employment of such workers, raising the minimum wage might well reduce the total amount of money paid to unskilled workers and, thus, lower their spending.

So is employers’ demand for unskilled workers more likely to be elastic or inelastic?  The answer depends on how much the minimum wage is raised.  If it were raised by, say, only five percent, demand might be inelastic, causing only relatively few workers to lose their jobs and, thus, the total take-home pay of unskilled workers as a group to rise.  But Reich calls for an increase in the minimum wage of 107 percent!  It’s impossible to believe that more than doubling the minimum wage would not cause a huge negative response by employers.  Such an assumption – if it described reality – would mean that unskilled workers are today so underpaid (relative to their productivity) that their employers are reaping gigantic windfall profits off of such workers.  But the fact that we see increasing automation of low-skilled tasks, as well as continuing high rates of unemployment of teenagers and other unskilled workers, is solid evidence that the typical low-wage worker is not such a bountiful source of profit for his or her employer.
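The elasticity argument above can be sketched numerically. The following is a minimal illustration using a hypothetical constant-elasticity demand curve and made-up baseline figures (one million workers at $7.25), not an empirical estimate of the actual labor market:

```python
def total_pay(wage, base_wage, base_employment, elasticity):
    """Total wage bill after a wage change, assuming a constant-elasticity
    labor demand curve: employment scales as (wage / base_wage) ** elasticity.
    elasticity is negative; |elasticity| < 1 is inelastic, > 1 is elastic."""
    employment = base_employment * (wage / base_wage) ** elasticity
    return wage * employment

# Hypothetical baseline: 1,000,000 workers at the current $7.25 minimum.
base = 7.25 * 1_000_000

# Raising the wage to $15 (a 107 percent increase):
inelastic = total_pay(15.00, 7.25, 1_000_000, -0.5)  # |e| = 0.5, inelastic demand
elastic = total_pay(15.00, 7.25, 1_000_000, -1.5)    # |e| = 1.5, elastic demand

# With inelastic demand the total wage bill rises; with elastic demand it falls.
assert inelastic > base
assert elastic < base
```

At exactly unit elasticity (|e| = 1) the total wage bill is unchanged, which is why the direction of Reich’s “virtuous circle” hinges entirely on a parameter his video never mentions.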

Reich’s video is infected, from start to finish, with too many other errors to count.  I hope that other sensible people will take the time to expose them all.

The Big Picture: Fight for $15 with Robert Reich

A new documentary by Cato Senior Fellow Johan Norberg, shown recently on PBS stations nationwide, is a non-political look at the reality of the world’s energy problems. “Energy questions are complicated, and there are always trade-offs,” Norberg notes.  While bringing electricity to many remote villages in India and the Sahara causes an increase in carbon emissions, it also allows families to have refrigeration for their food, electricity to light their homes, and the time to develop their lives beyond working just to sustain themselves every day.  “Don’t they deserve the same kinds of life-changing benefits that power has brought the West?” Norberg asks.

This program explains how ALL sources of energy have their attributes and drawbacks.  It will take large amounts of low-cost power to fuel economic development in the third world, while also keeping up with growth in the developed world.  There is no “perfect” source to meet these needs:  Coal and oil make up a third of the current world energy supply, so while the infrastructure is in place and works fairly inexpensively, these fossil fuels are consistently tagged as “dirty.”  Natural gas is abundant and clean, and cheap and easy to use, but the means of getting to it (fracking) is controversial.  Nuclear power is one of the only large-scale alternatives to fossil fuels, but nuclear accidents like Chernobyl and Three Mile Island have made the public wary.  Hydro power is clean and fairly cheap, but dams have been targeted by environmentalists for harming fish populations. And, Norberg notes, most good sources of hydropower are already being utilized to their full capacity, leaving little chance to expand this resource.  Solar power is clean and abundant, but it doesn’t work when the sun doesn’t shine, and the infrastructure to capture it is expensive.  Wind supplies only one percent of energy globally because while it’s clean, it’s intermittent and doesn’t always come at the right velocity.

Norberg doesn’t make judgements, for the most part…except to say that top-down, government imposed “solutions” to the world’s energy problems have not worked yet, and are highly unlikely to suddenly start working. 

This is an excellent program for people who really want to understand the basics of world energy needs.  Watch it at Cato’s site here, and read more about the Free to Choose network here.

In a ruling certain to profoundly shape the ongoing debate over surveillance reform in Congress, the U.S. Court of Appeals for the Second Circuit today held that the National Security Agency’s indiscriminate collection of Americans’ telephone calling records exceeds the legal authority granted by the Patriot Act’s controversial section 215, which is set to expire at the end of this month.  Legislation to reform and constrain that authority, the USA Freedom Act, has drawn broad bipartisan support, but Senate Majority Leader Mitch McConnell has stubbornly pressed ahead with a bill to reauthorize §215 without any changes.  But the Second Circuit ruling gives even defenders of the NSA program powerful reasons to support reform.

McConnell and other reform opponents have consistently insisted, in defiance of overwhelming evidence, that the NSA program is an essential tool in the fight against terrorism, and that any reform would hinder efforts to keep Americans safe—a claim rejected even by the leaders of the intelligence community. (Talk about being more Catholic than the Pope!)  Now, however, a federal appellate court has clearly said that no amount of contortion can stretch the language of §215 into a justification for NSA’s massive database—which means it’s no longer clear that a simple reauthorization would preserve the program. Ironically, if McConnell is determined to salvage some version of this ineffective program, his best hope may now be… the USA Freedom Act!

The Freedom Act would, in line with the Second Circuit opinion, bar the use of §215 and related authorities to indiscriminately collect records in bulk, requiring that a “specific selection term,” like a phone number, be used to identify the records sought by the government.  It also, however, creates a separate streamlined process that would allow call records databases already retained by telephone companies to be rapidly searched and cross-referenced, allowing NSA to more quickly obtain the specific information it seeks about terror suspects and their associates without placing everyone’s phone records in the government’s hands.  If the Second Circuit’s ruling is upheld, NSA will likely have to cease bulk collection even if Congress does reauthorize §215.  That makes passage of the Freedom Act the best way to guarantee preservation of the rapid search capability McConnell seems to think is so important—though, of course, the government will retain the ability to obtain specific phone records (albeit less quickly) under either scenario.  With this ruling, in short, the arguments against reform have gone from feeble to completely unsustainable.

A few notable points from the ruling itself.  Echoing the reasoning of the Privacy and Civil Liberties Oversight Board’s extremely thorough report on §215, the Second Circuit rejected the tortured legal logic underpinning both the NSA telephone program and a now-defunct program that gathered international Internet metadata in bulk.  The government had persuaded the Foreign Intelligence Surveillance Court to interpret an authority to get records “relevant to an authorized investigation” as permitting collection of entire vast databases of information, the overwhelming majority of which are clearly not relevant to any investigation, on the premise that this allows NSA to later search for specific records that are relevant.  As the court noted, this not only defies common sense, but it is wildly inconsistent with the way the standard of “relevance”—which governs subpoenas and court orders used in routine criminal investigations—has been interpreted for decades.  If every American’s phone records are “relevant” to counterterrorism investigations, after all, why wouldn’t those and other records be similarly “relevant” to investigations aiming to ferret out narcotics traffickers or fraudsters or tax cheats?  Past cases invoked by the government, in which courts have blessed relatively broad subpoenas under a standard of “relevance,” only underscore how unprecedented the NSA’s interpretation of that standard truly is—since even the broadest such subpoenas fall dramatically short of the indiscriminate, indefinite hoovering the agency is now engaged in.

The court also quickly dispatched arguments that the plaintiffs here lacked standing to challenge the NSA program.  In general, parties seeking to challenge government action must demonstrate they’ve been harmed in some concrete way—which presents a significant hurdle when the government operates behind a thick veil of secrecy.  Since documents disclosed to the press by Edward Snowden—and the government’s own subsequent admissions—leave little question that the plaintiffs’ phone records are indeed being obtained, however, there’s no need for a further showing that those records were subsequently reviewed or used against the plaintiffs.  That’s critical because advocates of broad surveillance powers have often sought to argue that the mere collection of information, even on a massive scale, does not raise privacy concerns—and that the focus should instead be on whether the information is used appropriately.  The court here makes plain that the unauthorized collection of data—placing it in the control and discretion of the government—is itself a privacy harm.

Finally, the court repudiated the Foreign Intelligence Surveillance Court’s strained use of the doctrine of legislative ratification to bless the NSA program.  Under this theory—reasonable enough in most cases—when courts have interpreted some statutory language in a particular way, legislatures are presumed to incorporate that interpretation when they use similar language in subsequent laws.  The FISC reasoned that Congress had therefore effectively “ratified” the NSA telephone program, and the sweeping legal theory behind it, by repeatedly reauthorizing §215.  But as the court pointed out—somewhat more diplomatically—it’s absurd to apply that doctrine to surveillance programs and legal interpretations that were, until recently, secret even from many (if not most) members of Congress, let alone the general public.

While the court didn’t reach the crucial question of whether the program violates the Fourth Amendment, the ruling gives civil libertarians good reason to hope that a massive and egregious violation of every American’s privacy will finally come to an end.

The U.S. Court of Appeals for the Second Circuit has ruled that section 215 of the USA-PATRIOT Act never authorized the National Security Agency’s collection of all Americans’ phone calling records. It’s pleasing to see the opinion parallel arguments that Randy Barnett and I put forward over the last couple of years.

Two points from different parts of the opinion can help structure our thinking about constitutional protection for communications data and other digital information. The first: data is property, which can be unconstitutionally seized.

As cases like this often do, the decision spends much time on niceties like standing to sue. In that discussion—finding that the ACLU indeed has legal standing to challenge government collection of its calling data—the court parried the government’s argument that the ACLU suffers no offense until its data is searched.

“The Fourth Amendment protects against unreasonable searches and seizures,” the court emphasized. Data is a thing that can be owned, and when the government takes someone’s data, it is seized.

In this situation, the data is owned jointly by telecommunications companies and their customers. The companies hold it subject to obligations they owe their customers limiting what they can do with it. Think of covenants that run with land. These covenants run with data for the benefit of the customer.

Far later in the decision, on the other side of the substantive ruling that section 215 doesn’t authorize the NSA’s program, the court discusses the Supreme Court’s 2012 Jones decision. Jones found that attaching a GPS tracking device to a vehicle requires a warrant.

“[Jones] held that the operation was a search entitled to Fourth Amendment protection,” the Second Circuit says, “because the attachment of the GPS device constituted a technical trespass on the defendant’s vehicle.”

That’s the interpretation I’ve given to Jones, that it is best regarded as a seizure case. When government agents put a GPS device on a car, they converted the car to their purposes, in a small way, to transport their device. The car was not theirs to use this way.

The Supreme Court itself didn’t call this a seizure, but the essential element of what happened was that tiny seizure of the defendant’s car when they put the device on it.

Data is property that can be seized. And even tiny seizures are subject to the constitutional requirement of reasonableness and a warrant. These gems from the Second Circuit’s opinion help show the way privacy can be protected through application of the Fourth Amendment in the digital age.

Food prices are (slightly) lower today than they were in 1961. Yes, that’s right. Adjusted for inflation, the United Nations’ Food and Agriculture Organization calculates, the food price index in 2015 stood at 131.2. It was 131.7 in 1961.

In the meantime, the world population has increased from 3.01 billion to 7.28 billion – a rise of nearly 4.3 billion, or roughly 142 percent.

If you are Paul Ehrlich, Lester Brown, William and Paul Paddock, Garrett Hardin, Rajiv Gandhi and countless other followers of Reverend Malthus, this should NOT be happening. But, it is. Human beings are intelligent animals. Unlike rabbits, who overbreed when food is plentiful and die out when it is not, humans innovate their way out of scarcity.

So, happy Thursday to you all.

Common Core is either meaningless or antithetical to a free and pluralistic society.

That’s the key conundrum that Professor Jay P. Greene, chair of the Department of Education Reform at the University of Arkansas, identified yesterday during his testimony before the Arkansas Council on Common Core Review, which is currently considering whether to keep, modify, or scrap the standards:

Because standards are about values, their content is not merely a technical issue that can be determined by scientific methods. There is no technically correct set of standards, just as there is no technically correct political party or religion. Reasonable people have legitimate differences of opinion about what they want their children taught. A fundamental problem with national standards efforts, like Common Core, is that they are attempting to impose a single vision of a proper education on a large and diverse country with differing views.

National standards can try to produce uniformity out of diversity with some combination of two approaches. They can promote standards that are so bland and ambiguous as to be inoffensive to almost everyone. Or they can force their particular vision on those who believe differently. Either way, national standards, like Common Core, are inappropriate and likely to be ineffective. If national standards embrace a vague consensus, then they make no difference since almost everyone already believes them and is already working toward them. If, on the other hand, national standards attempt to impose their particular vision of a proper education on those with differing visions, then national standards are oppressive and likely to face high levels of resistance and non-compliance. So, national standards are doomed to be either unnecessary or illiberal. Either way, they are wrong. [emphasis added]

Supporters of Common Core clearly hope it will bend educators to their will and induce “instructional shifts” in our nation’s classrooms, but as Greene points out, for Common Core to be more than “just a bunch of words in a document,” it needs some sort of mechanism to coerce schools and educators into changing their practice to align with the Core. Prominent backers of Common Core have long promoted a “tripod” of standards, tests, and “accountability” measures – i.e., rewards or (more likely) punishments tied to performance on those tests.

And that brings us to the second conundrum Greene identified: either a combination of frustrated educators and parents will neuter the “accountability” measures (enter the opt-out movement), or those measures will create perverse incentives that could warp the education system in ways that even Common Core supporters wouldn’t like:

The problem with trying to use PARCC or Smarter Balanced tests to drive Common Core changes is that it almost certainly requires more coercion than is politically possible and would be undesirable even if it could be accomplished. If Arkansas tries to use the PARCC test to impose strong enough sanctions on schools and educators to drive changes in their practice, we will witness a well-organized and effective counter-attack from educators and sympathetic parents who will likely neuter those sanctions. If, on the other hand, the consequences of PARCC are roughly the equivalent of double secret probation in the movie, Animal House, then no one has to change practice to align with the new standards.

And even if by some political miracle the new PARCC test could be used to impose tough sanctions on schools and educators who failed to comply with Common Core, it’s a really bad idea to try to run school systems with a test. All sorts of bad things happen when maximizing performance on standardized tests becomes the governing principle of schools. Schools and educators are likely to narrow the curriculum by focusing on tested subjects at the expense of untested ones. If we care at all about the Arts, History, and Science we should oppose trying to run schools with math and ELA tests. And within tested subjects schools and educators are likely to focus narrowly on tested items at the expense of a more complete understanding of math and English.

So if national standards don’t work, does that mean abandoning testing and accountability entirely? Not at all. As Greene concludes:

The purpose of PARCC is to drive changes in educator behavior in ways that are desired by Common Core. But we should not be using tests aligned with a set of standards to coerce schools and educators to change their practice. What we really need from standardized testing is just information about how our students are performing. This can be accomplished at much lower cost by just buying a nationally-normed test off of the shelf. And lower stakes tests that are primarily about information rather than coercion will produce much less harmful narrowing of the curriculum.

I would add that opposing uniform, government-imposed standards does not mean opposing all standards. Rather, it means leaving space for competing standards from which schools and parents can choose. There is no One Best Way to educate or to measure educational progress, so a top-down accountability system amounts to hubristic folly. Instead, we should employ the market’s “bottom-up channeling of knowledge” that Yuval Levin so thoughtfully described in a recent essay:

… Put simply, it is a process that involves three general steps, all grounded in humility: experimentation, evaluation, and evolution.

Markets are ideally suited to following these steps. They offer entrepreneurs and businesses a huge incentive to try new ways of doing things (experimentation); the people directly affected decide which ways they like best (evaluation); and those consumer responses inform which ways are kept and which are left behind (evolution).

This three-step process is at work well beyond the bounds of explicitly economic activity. It is how our culture learns and evolves, how norms and habits form, and how society as a general matter “decides” what to keep and what to change. It is an exceedingly effective way to balance stability with improvement, continuity with alteration, tradition with dynamism. It involves conservation of the core with experimentation at the margins in an effort to attain the best of both.

Supporters of Common Core are right to lament a broken system that produces mediocre results on average and acts as a slaughterhouse of dreams at worst. But they have misdiagnosed the problem, and therefore propose the wrong solution. The problem isn’t that 50 states had 50 different sets of standards, but rather that a government-run schooling system lacks the ability to engage in the experimentation, end-user evaluation, and consumer-driven evolution that have produced great advances and increased productivity in other sectors. The solution, therefore, is not to grant more power to bureaucrats to remake our education system from the top down, but to support policies that empower parents to remake it from the bottom up.

The British luxury passenger liner RMS Lusitania was torpedoed a century ago. The sinking was deemed an atrocity of war and encouraged American intervention in World War I.

But the ship was carrying munitions through a war zone and left unprotected by the Royal Navy. The “Great War” was a thoroughly modern conflict, enshrouded in government lies. We see similar deceptions today.

World War I was a mindless imperial slugfest triggered by an act of state terrorism by Serbian authorities. Contending alliances acted as transmission belts of war. Nearly 20 million died in the resulting military avalanche.

America’s Woodrow Wilson initially declared neutrality, though he in fact leaned sharply toward the motley “Entente.” The German-led Central Powers were no prize. However, the British grouping included a terrorist state, an anti-Semitic despotism, a ruthless imperial power, and a militaristic colonial republic.

Britain was the best of a bad lot, but it ruled much of the globe without the consent of those “governed.” This clash of empires was no “war for democracy” as often characterized.

London ignored the traditional rules of war when imposing a starvation blockade on Germany and neutrals supplying the Germans. Explained Winston Churchill, First Lord of the Admiralty, Britain’s policy was to “starve the whole population—men, women, and children, old and young, wounded and sound—into submission.”

Since Berlin lacked the warships necessary to break Britain’s naval cordon sanitaire, Germany could retaliate only with surface raiders, which were vulnerable to London’s globe-spanning navy, and submarines. U-boats were more effective, but were unable to play by the normal rules of war and stop and search suspect vessels.

The British Admiralty armed some passenger liners and cargo ships, and ordered captains to fire on or ram any submarines that surfaced. Britain also misused neutral flags to shelter its ships. Thus, the U-boats were forced to torpedo allied and some neutral vessels, sending guilty and innocent alike to the ocean’s bottom.

However, Churchill encouraged the voyages. The week before the Lusitania’s sinking he explained that it was “most important to attract neutral shipping to our shores, in the hope especially of embroiling the United States with Germany.”

Wilson complained about the British blockade, but never threatened the bilateral relationship. Washington took a very different attitude toward the U-boat campaign.

The Imperial German government sponsored newspaper ads warning Americans against traveling on British liners, but that didn’t stop the foolhardy from booking passage. Off Ireland’s coast the Lusitania went down after a single torpedo hit; the coup de grâce apparently was a second explosion of the ship’s cargo of munitions. The dead included 128 Americans.

There was a political firestorm in the U.S., but the flames subsided short of Churchill’s desired declaration of war. Still, the president demanded “strict accountability” for the German U-boat campaign.

His position was frankly absurd: Americans should be able to safely travel on armed vessels of a belligerent power carrying munitions through a war zone. The president eventually issued a de facto ultimatum which caused Berlin to suspend attacks on liners and limit attacks on neutral vessels.

As the war dragged on, however, Berlin tired of placating Washington. In January 1917 the Kaiser approved the resumption of unrestricted submarine warfare. But the effort could not redress Germany’s continental military disadvantages.

After the conflict ended the egotistical, vainglorious Wilson was outmaneuvered by cynical European leaders. The Versailles “peace” treaty turned out to be but a generational truce during which the participants prepared for another round of war.

Today America’s unofficial war lobby routinely clamors for Washington to bomb, invade, and occupy other lands. As I wrote on Forbes, “On the centennial of the Lusitania’s demise Americans should remember the importance of just saying no. Now as then Americans need a president and Congress that believe war to be a last resort for use only when necessary to protect this nation, its people, liberties, and future.”

Prime Minister Shinzo Abe’s trip to Washington demonstrated that Japan remains America’s number one Asian ally. Unfortunately, the relationship increases the likelihood of a confrontation between the United States and China.

Japan’s international role has been sharply limited since World War II. During Prime Minister Abe’s visit, the two governments released new “Guidelines for Japan-U.S. Defense Cooperation.” The document clearly sets America against China.

First, the rewrite targets China. Japan’s greatest security concern is the ongoing Senkaku/Diaoyu dispute and Tokyo had pushed hard for an explicit U.S. guarantee for the unpopulated rocks. Second, Japan’s promise to do more means little; the document stated that it created no “legal rights or obligations.” Tokyo will remain reluctant to act outside of core Japanese interests.

Third, though the new rules remove geographical limits from Japanese operations, most of Japan’s new international responsibilities appeared to be what Prime Minister Abe called “human security.” In his speech to Congress, the prime minister mostly cited humanitarian and peacekeeping operations as examples of his nation’s new duties.

Moreover, the guidelines indicate that the Self-Defense Forces’ (SDF) military involvement will be “from the rear and not on offensive operations,” noted analysts at the Center for Strategic and International Studies. Defense Minister Gen Nakatani cited “ship inspection” as an example of helping America’s defense.

Fourth, to the extent force is involved, Japan mostly promises to help the United States defend Japan. For instance, Tokyo cited the fact that Japanese vessels now could assist U.S. ships if the latter were attacked while on a joint patrol.

This should be inherent to any alliance, but Narushige Michishita, at Tokyo’s National Graduate Institute for Policy Studies, noted that “technically” it remains impossible for Japanese forces to defend even a U.S. vessel in a Japanese flotilla “when an attack on that ship does not directly or will not directly threaten Japan’s security.” That means a situation which “threatens Japan’s survival and poses a clear danger to overturn fundamentally its people’s right to life, liberty, and pursuit of happiness, to ensure Japan’s survival, and to protect its people.”

In contrast, the revised guidelines begin with an affirmation that “The United States will continue to extend deterrence to Japan through the full range of capabilities, including U.S. nuclear forces. The United States also will continue to forward deploy combat-ready forces in the Asia-Pacific region and maintain the ability to reinforce those forces rapidly.” This means more and newer weapons.

Fifth, as I wrote in China-U.S. Focus, “America’s burden will grow. Tokyo’s military expenditures have been flat for years, but now Japan plans on devoting more resources to non-combat activities. That will leave less for defense against what the Japanese government sees as the greatest threat, the PRC—which continues to hike military outlays. Washington will be expected to fill the ever widening gap.”

Sixth, the new rules build on the Obama administration’s explicit promise to defend Tokyo’s contested territorial claims, most importantly the Senkakus/Diaoyus. U.S. forces will be drawn into the islands’ defense.

According to the document, “If the need arises, the Self-Defense Forces will conduct operations to retake an island.” The SDF would, of course, expect American support. Protecting Tokyo’s claims also encourages the Japanese government to be needlessly provocative.

Japanese and U.S. authorities also are discussing mounting joint air patrols to the edge of the East China Sea and into the South China Sea. In the latter, Tokyo is working with other countries, including Indonesia, the Philippines, and Vietnam. Thus, a U.S. plane could find itself challenging Chinese aircraft in support of a third nation’s disputed territorial claim.

President Obama argued that “we don’t think that a strong U.S.-Japan alliance should be seen as a provocation,” but it will be if directed against the PRC. Unfortunately, the new guidelines make it more likely that Washington will find itself confronting China over issues of limited interest to America.

Judging from the November electoral tsunami, whose epicenter was in coal country, people aren’t taking very kindly to the persistent exaggeration of mundane weather and climate stories that ultimately leads to, among other things, unemployment and increased cost of living. In response, we’ve decided to initiate “The Spin Cycles” based upon just how much the latest weather or climate story, policy pronouncement, or simply poo-bah blather spins the truth.

Like the popular and useful Fujita tornado ratings (“F1” through “F5”), or the oft-quoted Saffir-Simpson hurricane severity index (Category 1 through Category 5), and in the spirit of the Washington Post’s iconic “Pinocchios,” we hereby initiate the “Spin Cycle,” using a scale of Delicates through Permanent Press. Our image will be the universal vortex symbol for tropical cyclones, intimately familiar to anyone who has ever been alive during hurricane season, being spun by a washing machine. Here’s how they stack up, with apologies to the late Ted Fujita and Bob Simpson, two of the true heroes of atmospheric science with regard to the number of lives their research ultimately saved.

And so, here we have it:

Delicates. An accidentally misleading statement by a person operating outside their area of expertise. Little harm, little foul. One spin cycle.

Slightly Soiled. Over-the-top rhetoric. An example is the common meme that some obnoxious weather element is new, thanks to anthropogenic global warming, when it is in fact as old as the earth. Consider the claim by the president’s science advisor, John Holdren, that the “polar vortex,” a circumpolar westerly wind that separates polar cold from tropical warmth, is a man-made phenomenon. It waves and wiggles all over the place, sometimes over your head, thanks to the fact that the atmosphere behaves like a fluid, complete with waves, eddies, and stalls. It has been around since the earth first acquired an atmosphere and rotation, somewhere around the beginning of the Book of Genesis. Two spin cycles.

Normal Wash. Using government authority to create public panic regarding climate change, particularly by omitting benefits, in an effort to advance policy. For example, the 2014 National Climate Assessment. Three spin cycles.

Heavy Duty. Government regulations or treaties claiming to save the planet from certain destruction, but which actually accomplish nothing. Can also apply to important UN climate confabs, such as Copenhagen 2009 (or, quite likely, the upcoming 2015 Paris Summit), that are predicted to result in a massive, sweeping, and world-saving new treaty, followed by self-congratulatory back-patting. Four spin cycles.

Permanent Press. Purposefully misleading commentary on science that will hinder actual scientific debate and credibility for generations to come, especially commentary with negative policy outcomes. Examples include linking individual extreme weather events to climate change, the perpetually impending demise of the polar bears, and the federal government attempting to convince you to sell your beachfront property before it’s submerged. Five spin cycles.

In State of Michigan et al. v. Environmental Protection Agency, the EPA contends that the costs to reduce and then eliminate mercury from power plant emissions are justified because those emissions are lowering IQ scores. The result will be to eliminate all coal-fired generation of electricity [double entendre ahead], currently around 40 percent of our total electric power.

You remember IQ (“Intelligence Quotient”) tests, right? Oh, well, maybe you don’t, because public schools can’t use them anymore. Whether or not they measure intelligence (whatever that is), not all socioeconomic groups score the same, so they can’t be fair (whatever that means). But they do predict, within certain humongous error ranges, lifetime income—which isn’t fair, either.

Which means, according to the EPA, that power plant emissions of mercury are harming…whom?

So—we can’t make this stuff up: the EPA invented a population of 240,000 nonexistent women who fish day in and day out in order to feed themselves. We won’t get into the fact that, given the cost of, say, a can of mackerel, these folks are paying themselves far, far below the minimum wage. No, instead, they eat—or should we say gorge—up to 300 pounds of hand-caught freshwater fish per year. And then they go home and do the sort of things that lead to children, whose IQ scores are lowered thanks to the mercury in those fish.

Never mind that U.S. power plants emit less than 0.7 percent of the total mercury input to the atmosphere each year, or that the total U.S. contribution is a mere two percent, or that East Asia (mainly China) contributes around 36 percent. Given that mercury can stay in the atmosphere for weeks before it is deposited on the surface, East Asia’s contribution to our mercury deposition is huge compared to what comes from our homegrown power plants.

The average IQ score is 100. The measurement error for practical purposes is +/- 5 points (one standard deviation). That means if you score 140, your true score is likely between 135 (“highly intelligent”) and 145 (“genius”), or about the average score of our readers.

Those hard facts weren’t enough to keep the EPA from confidently stating that the average IQ reduction in the hypothetical children of the hypothetical fish-obsessed women will be (drum roll!) 0.00209 IQ points. In other words, the average IQ of these sorry tots will read 99.997, with a real value of between 94.997 and 104.997.

Nowhere did the EPA say how avoiding such an IQ loss could impact future earnings, but they still proceeded to translate those 0.00209 IQ points into a value of up to $6,000,000 per year across 240,000 hypothetical kids.

One gets the impression that people who think they can find a needle of precisely 0.00209 IQ points in a haystack of 10.0000, or two-hundredths of one percent of the error range, might not score too high on such a test. Of course, since they are most likely government bureaucrats making around $115K per year, that shows how good IQ tests are, after all.
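For readers who want to check the needle-in-a-haystack arithmetic themselves, here is a minimal sketch. It uses only the figures cited above (the claimed 0.00209-point effect and the +/-5-point, i.e. 10-point-wide, error band); the variable names are ours, not anything from the EPA’s analysis.

```python
# Back-of-the-envelope check of the figures discussed above (illustrative only).

claimed_effect = 0.00209  # EPA's projected average IQ change, in points
error_band = 10.0         # +/-5 points of measurement error = a 10-point band

# How large is the claimed effect relative to the noise in the test itself?
share_of_error_pct = claimed_effect / error_band * 100
print(f"{share_of_error_pct:.4f}% of the error range")
# prints: 0.0209% of the error range -- about two-hundredths of one percent
```

In other words, the claimed effect is roughly 500 times smaller than the test’s own measurement uncertainty.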

For “thinking” that we can measure 0.00209 IQ points, and for proposing, on that basis, to shut down power plants that produce 40 percent of our juice, the inaugural recipient of the Spin Cycle award, the U.S. Environmental Protection Agency, gets five spin cycles, or Permanent Press.