Policy Institutes

Note: David Wojick, who holds a doctorate in the history and philosophy of science, sent me this essay. It is thought-provoking and deserves a read.

The US National Science Foundation seems to think that natural decades-to-centuries climate change does not exist unless provoked by humans. This ignores a lot of established science.

One of the great issues in climate science today is the nature of long-term, natural climate change. Long-term here means multiple decades to centuries, often called “dec-cen” climate change. The scientific question is how much of observed climate change over the last century or so is natural, and how much is due to human activities? This issue even has a well-known name: the Attribution Problem.

This problem has been known for a long time. See, for example, these National Research Council reports: “Natural Climate Variability on Decade-to-Century Timescales” (NAP, 1995) and “Decade-to-Century-Scale Climate Variability and Change” (NAP, 1998). The Preface of the 1998 report provides a clear statement of the attribution problem:

The climate change and variability that we experience will be a commingling of the ever changing natural climate state with any anthropogenic change. While we are ultimately interested in understanding and predicting how climate will change, regardless of the cause, an ability to differentiate anthropogenic change from natural variability is fundamental to help guide policy decisions, treaty negotiations, and adaptation versus mitigation strategies. Without a clear understanding of how climate has changed naturally in the past, and the mechanisms involved, our ability to interpret any future change will be significantly confounded and our ability to predict future change severely curtailed.

Thus we were shocked to learn that the US National Science Foundation denies that this great research question even exists. The agency has a series of Research Overviews for its various funded research areas, fifteen in all. Its climate change research area is funded to the tune of over $300 million a year, or $3 billion a decade.

The NSF Research Overview for climate change begins with this astounding claim:

Weather changes all the time. The average pattern of weather, called climate, usually stays the same for centuries if it is undisturbed.

This is simply not true. To begin with, there is the Little Ice Age to consider, a multi-century period of exceptional cold that is thought to have ended in the 19th century. Since then there have been two periods of warming, roughly from 1910 to 1940 and then from 1976 through 1998. There’s real controversy about what has happened since then. Until our government joggled the measured ocean surface temperatures last summer, scientists could all see that warming had pretty much stopped; we have covered what happened here, and, to say the least, the new record is controversial.

But the two agreed-upon warmings are indeed indistinguishable in magnitude, yet the first one could not have been caused by increasing atmospheric carbon dioxide, because we had emitted so little by then. If it had been, i.e., if climate were that “sensitive,” it would be so hot now that there wouldn’t be a scientific debate on the Attribution Problem.

Prior to the Little Ice Age there is good evidence that we had what is called the Medieval Warm Period, which may even have been as warm as today.

Thus it is clearly not the case that climate “stays the same for centuries.” So far as we can tell it has never done this. Instead, dec-cen natural variability appears to be the rule.

Why has NSF chosen to deny dec-cen natural variability? The next few sentences in the Research Overview may provide an answer. NSF says this:

However, Earth is not being left alone. People are taking actions that can change Earth and its climate in significant ways. Carbon dioxide is the main culprit. Burning carbon-containing “fossil fuels” such as coal, oil and gas has a large impact on climate because it releases carbon dioxide gas into the atmosphere.

NSF has chosen to promote the alarmist view of human-induced climate change. This is the official view of the Obama Administration. In order to do this it must deny the possibility that long-term natural variability may play a significant role in observed climate change, despite the obvious evidence from the Medieval Warm Period, the Little Ice Age and the early 20th century warming.  As an editorial this might be tolerable, but this is a Research Overview of a multi-billion dollar Federal research program.

NSF is supposed to be doing the best possible science, which means pursuing the most important scientific questions. This is what Congress funds the agency to do. But if NSF is deliberately ignoring the attribution problem, in order to promote the alarmism of human-induced climate change, then it may be misusing its research budget. This would be very bad science indeed.

In technology policy there is a standard rule that the government should not pick winners and losers. It appears we need a similar rule in science policy. In the language of science, what we seem to have here is the National Science Foundation espousing one research paradigm (human-induced climate change, and no other cause) at the expense of a competing paradigm (long-term natural variability).

Thomas Kuhn, who coined the term paradigm for the fundamental assumptions that guide research, pointed out that it is common for the proponents of one paradigm to shield it from a competitor. NSF’s actions look like a clear case of this kind of paradigm protection.

An Uber driver is accused of killing six people and wounding two others in a shooting rampage that took place in Kalamazoo, Michigan on Saturday. The victims seem to have been picked at random and were shot at three different locations. An unnamed source told CNN that the suspected killer, Jason Dalton, completed rides in between the shootings, which took place over a seven-hour period. It might be tempting, in the wake of the Kalamazoo shooting, to think that Uber should reform its background check system, but that would be an overreaction: a different background check process would not have prevented this tragedy.

Uber screens its drivers by checking county, state, and federal criminal records. As I explained in my Cato Institute paper on ridesharing safety, Uber is oftentimes stricter than taxi companies in major American cities when it comes to preventing felons and those with a recent history of dangerous driving from using its platform. And Dalton did pass Uber’s background check.

However, it’s important to keep in mind a disturbing detail: according to Kalamazoo Public Safety Chief Jeff Hadley, the suspected shooter did not have a criminal record and was not known to the authorities. In fact, Dalton, a married father of two, does not seem to have prompted many concerns from anyone. The Washington Post reports that Dalton’s neighbors noticed “nothing unusual” about him, although the son of one neighbor did say that he was sometimes a “hothead.”

That an apparently normal man with no criminal history can murder six people is troubling, but it’s hard to blame Uber for this. It’s not clear what changes Uber could make to its background check system in order to prevent incidents like the Kalamazoo shooting. What county court record, fingerprint scan, or criminal database would have been able to tell Uber that a man with no criminal record would one day go on a shooting rampage?

The Kalamazoo shooting is a tragedy, but it shouldn’t distract from the fact that Uber and other ridesharing companies like Lyft have features such as driver and passenger ratings as well as ETA (estimated time of arrival) sharing that make their rides safer than those offered by traditional competitors.

With the information we have, it looks like Dalton could have passed a background check to become a taxi driver or a teacher. That may be unnerving, but criminal background checks cannot predict the future, whether they are used to screen potential school bus drivers, police officers, or rideshare drivers.

The European Union faces so many different crises that it has been, until now, impossible to predict the precise catalyst for its likely demise. The obvious candidates for destroying the EU include the looming refugee crisis, the tottering banking structure that is resistant to both bail-outs and bail-ins, the public distrust of the political establishment, and the nearly immobilized EU institutions.

But the most immediate crisis that could spell the EU’s doom is Prime Minister David Cameron’s failure to wrest from Brussels the concessions he needs to placate the increasingly euro-skeptic British public. Prime Minister Cameron has failed because the EU cannot grant the necessary concessions. There are three specific reasons, as well as one underlying reality, that have made Cameron’s task impossible.

First, a profound reform of the EU-British relationship, which Cameron initially promised, was always impossible, because it required a “treaty change” approved in each of the twenty-eight EU member states, whether by parliamentary vote or, more dauntingly, by the kind of national referendum that Brussels dreads. There is simply no appetite in Europe to run such risks just to appease the UK.

Second, Cameron was willing to settle for a compromise but failed to obtain most of what he needed, because Brussels fears that other member states will follow the British example and demand similar accommodations. That would result in a “smorgasbord EU” in which each country could pick and choose what serves its own best interests. In other words, it would make a mockery of “an ever closer Europe.”

And so, Cameron ended up with what one Conservative Member of Parliament, Jacob Rees-Mogg, called a “thin gruel [of a reform that] has been further watered down.”

To make matters worse, Cameron’s “thin gruel” will require a vote by the EU Parliament after the British referendum takes place. As a sovereign body, the EU Parliament will be able to make changes to any deal approved by the British public–an uncomfortable and inescapable fact that the advocates of “Brexit” will surely utilize to their advantage during the referendum campaign.

At the root of the conundrum faced by the British and European negotiators is a struggle between a national political system anchored in parliamentary supremacy and a supra-national technocracy in Brussels that requires pooling of national sovereignty in order to achieve a European federal union. The public debate over the referendum is driving this fundamental incompatibility home. The choice between one and the other can no longer be deferred.

The most common image of a failing Europe is that of something falling apart, unraveling, crumbling away, or even evaporating into thin air. This imagery is misleading. Following the British referendum, and perhaps even before it, the EU will most likely implode.

Once the odds of Britain’s exit from the EU increase from merely likely to near-certain, the rush for the exits will begin in earnest. It is impossible to predict which of the remaining member-states will lead the charge, but the floodgates are sure to open. Those pro-EU governments that still remain in office are under siege in almost every member-state. As in Britain, the establishment parties will have to appease increasingly hostile electorates by getting a “better deal” from the EU. Or they could face electoral oblivion.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

Let’s begin this installment of You Ought to Have a Look with a peek at the heroic attempt by Rep. Michael Burgess (R-TX) to rein in the fanatical actions of the Department of Energy (DoE) to regulate the energy usage (operation) of virtually every appliance in your home. The DoE effort is being undertaken as part of President Obama’s broader actions to mitigate climate change, as directed under his Climate Action Plan. It is an extremely intrusive effort, and one that interferes with the operation of the free market.

We have been pushing back (through the submission of critiques during the public comment period of each new proposed regulation), but the sheer number and repetition of newly proposed regulations spilling forth from the DoE overwhelms our determination and wherewithal.

Rep. Burgess’s newly introduced legislation seeks to help lighten our suffering.

Bill H.R. 4504, the “Energy Efficiency Free Market Act of 2016” would “strike all government-mandated energy efficiency standards currently required on a variety of consumer products found in millions of American homes.”

Burgess reasons:

“The federal government must trust the American people to make the right decisions when it comes to the products they buy. When the government sets the efficiency standard for a product, that often becomes the ceiling. I have long been a firm believer in energy efficiency; however, when the market drives the standard, there’s no limit to how fast and how aggressive manufacturers will be when consumers demand more efficient and better made products.”

“Government standards have proven to be unworkable. The Commerce Clause of the U.S. Constitution was meant as a limitation on federal power. It was never intended to allow the federal government to micromanage everyday consumer products that do not pose a risk to human health or safety.”



The full text of H.R. 4504, the “Energy Efficiency Free Market Act of 2016,” can be found here.

The bigger point is that the free market will drive efficiency improvements at the rate that the market bears. It doesn’t need government “help.”

Take the shale gas revolution that came about via the technologies of fracking and horizontal drilling. Not only did this unlock loads of natural gas from geologic formations never thought to relinquish their holdings economically, but the recovered fuel (natural gas) produces lower climate-relevant carbon dioxide emissions when generating electricity than coal does, in effect doing the DoE’s and President Obama’s work for them.

But is the President happy about this? Of course not. In fact, buried within the now-stayed Clean Power Plan are disincentives to further build-out of natural gas fueled power plants.

But the Administration’s efforts to rein in natural gas development aren’t expected to keep natural gas extraction technologies from spreading to the rest of the world.

In its recently released Energy Outlook, BP anticipates:

Technological innovation and productivity gains have unlocked vast resources of tight oil and shale gas, causing us to revise the outlook for US production successively higher…

Globally, shale gas is expected to grow by 5.6% p.a. between 2014 and 2035, well in excess of the growth of total gas production.  As a result, the share of shale gas in global gas production more than doubles from 11% in 2014 to 24% by 2035…

As with the past 10 years, the growth of shale gas supply is dominated by North American production, which accounts for around two-thirds of the increase in global shale gas supplies. But over the Outlook period, we expect shale gas to expand outside of North America, most notably in Asia Pacific and particularly in China, where shale gas production reaches 13 Bcf/d by 2035.
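BP’s quoted growth figures hang together arithmetically. Here is a rough sanity check (our own sketch, not BP’s calculation, assuming simple compound growth over the 21 years from 2014 to 2035 and using only the numbers quoted above):

```python
# Rough consistency check of the BP Energy Outlook figures quoted above.
# Assumes simple compound growth over 2014-2035 (21 years).
years = 2035 - 2014

# Shale gas growing at 5.6% per year...
shale_growth = 1.056 ** years              # roughly a tripling over the period

# ...while its share of global gas production rises from 11% to 24%,
# implying total gas production grows by the ratio of those two factors.
share_ratio = 0.24 / 0.11
implied_total_growth = shale_growth / share_ratio
implied_total_rate = implied_total_growth ** (1 / years) - 1

print(round(shale_growth, 2), round(implied_total_rate * 100, 1))
```

That works out to shale gas roughly tripling while total gas production grows at under 2 percent per year, consistent with BP’s statement that shale grows “well in excess of the growth of total gas production.”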

The rapidly expanding global production of natural gas means that, when it comes to business-as-usual projections of future greenhouse gas emissions, we ought to favor the ones with faster declines in the carbon intensity of the energy supply. These BAU scenarios, coupled with an equilibrium climate sensitivity toward the low end of consensus estimates, mean that an overall global temperature rise (over pre-industrial conditions) somewhere in the neighborhood of 2.0°C by the end of this century is well within the realm of possibilities, even without an international climate agreement (or a carbon tax in the U.S.).

You are not hearing of this possibility in many places besides here (for example, here and here). So stay tuned in!

Finally, on the lighter side, in the all-bad-things-come-from-climate-change department, is a story about a town in Australia that is experiencing a “hairy panic.” No, it’s not a crisis brought about by a local shortage of razor blades, or the lingering effects of No-Shave November, but rather an invasion of a type of Australian tumbleweed.

Check it out!

Of course, no opportunity to relate any sort of human misery to global warming passes by. In this case, to shield yourself from the exposure to unnecessary nonsense, you probably ought not have a look! 

Given the “facts” that have been bandied about in the media since Justice Antonin Scalia’s death concerning presidential election year nominations and confirmations to the Supreme Court, I asked Anthony Gruzdis, our crack research assistant for the Center for Constitutional Studies, to do an exhaustive study of the subject, and here, in summary, are the most relevant facts.

It turns out that most election year resignations and deaths were in the pre-modern (pre-1900) era—many in the era before today’s two major parties were established. And the pre-1900 picture is further complicated by several multiple nominations and confirmations of the same person, both before and after the election, so it’s not until the modern era that we get a picture that is more clearly relevant and instructive for the current situation.

Looking at the history of the matter since 1900, then, until last week only four vacancies have occurred during an election year, two in 1916, one in 1932, and one in 1956. (Three more occurred during the previous year, in 1911, 1939, and 1987; the nominees in each case were confirmed, respectively, in February, early January, and early February of the election year that followed.) The first three were filled when the president’s party also controlled the Senate, so that’s not the situation we have now. And when Justice Sherman Minton resigned for health reasons on October 15, 1956, President Eisenhower made a recess appointment that same day of William J. Brennan, Jr., nominating him for the seat on January 14, 1957, for which Brennan was confirmed by voice vote on March 19, 1957. In 1956 the Senate was closely divided with 48 Democrats, 47 Republicans, and 1 Independent. In 1957 it was also closely divided with 49 Democrats and 47 Republicans, although in both cases the Southern Democrats often voted with the Republicans.

The resignation of Chief Justice Earl Warren in June 1968 and President Johnson’s nomination of Justice Abe Fortas to succeed Warren have been cited as a parallel for today, but the complex details of that case hardly make it so. In a nutshell, after a heated debate concerning speaking fees Fortas had accepted, plus his political activities while on the bench, Fortas asked President Johnson on October 1, 1968 to withdraw his nomination to be chief justice. He resigned from the Court on May 14, 1969, shortly after which President Nixon nominated Warren Burger to be chief justice.

More often the nomination of Justice Anthony Kennedy is cited as a parallel for today, but here too there are important differences. In particular, the seat Kennedy holds became vacant not in an election year but in late June 1987 when Justice Lewis Powell, Jr. announced his retirement. The stormy hearings for Judge Robert Bork followed. After that nomination failed, President Reagan named Judge Douglas Ginsburg, who withdrew his name shortly thereafter. Finally, the president nominated then-Judge Kennedy on November 11, 1987, still not in an election year. Kennedy was confirmed on February 3, 1988. The one parallel to today is that President Reagan faced a Senate that was 55-45 Democratic. It is likely, however, that the president’s popularity, plus the wish to bring to an end the exhausting struggle of the previous seven months, explains the confirmation vote of 97-0.

In sum, in the modern era there is no close parallel to the situation today when the presidential primary elections are already underway, the White House and the Senate are held by different parties, the parties are deeply divided, and the most recent off-year elections reflected that divide fairly clearly. The Constitution gives the president the power to nominate a successor to Justice Scalia. But it also gives the Senate the power to confirm, or not. In the end, this is a political matter.

The U.S. is bankrupt. Of course, Uncle Sam has the power to tax. But at some point even Washington might not be able to squeeze enough cash out of the American people to pay its bills.

President Barack Obama would have everyone believe that he has placed federal finances on sound footing. The deficit did drop from over a trillion dollars during his first years in office to “only” $439 billion last year. But the early peak was a result of emergency spending in the aftermath of the financial crisis and the new “normal” is just short of the pre-financial crisis record set by President George W. Bush. The reduction is not much of an achievement.

Worse, the fiscal “good times” are over. The Congressional Budget Office expects the deficit to jump this year, to $544 billion.

The deficit is not caused by too little money collected by Uncle Sam. Revenues are rising four percent this year and will account for 18.3 percent of GDP, well above the 50-year average of 17.4 percent. But outlays are projected to rise six percent, leaving expenditures at 21.2 percent of GDP, greater than the 20.2 percent average of the last half century.

Alas, this year’s big deficit jump is just the start. Revenues will rise from $3.4 trillion to $5 trillion between 2016 and 2026. As a share of GDP they will remain relatively constant, ending up at 18.2 percent. However, outlays will rise much faster, from about $4 trillion this year to $6.4 trillion in 2026. As a percent of GDP, spending will jump from 21.2 percent to 23.1 percent over the same period.

Thus, the amount of red ink steadily rises and is expected to be back over $1 trillion in 2026. The cumulative deficit from 2017 through 2026 will run $9.4 trillion. Total debt will rise by around 70 percent, to roughly $24 trillion in 2026.
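A quick back-of-the-envelope check shows how those projections fit together (our own sketch using only the figures quoted above, all in trillions of dollars; the current-debt figure is merely what the quoted percentages imply, not a number from the CBO report):

```python
# Quick arithmetic check of the CBO projections quoted above (all $ trillions).
revenues_2026 = 5.0
outlays_2026 = 6.4
deficit_2026 = outlays_2026 - revenues_2026   # back over $1 trillion in 2026

debt_2026 = 24.0        # "roughly $24 trillion in 2026"
debt_growth = 0.70      # "rise by around 70 percent"
implied_debt_today = debt_2026 / (1 + debt_growth)   # roughly $14 trillion now

print(round(deficit_2026, 1), round(implied_debt_today, 1))
```

The $1.4 trillion gap in 2026 is exactly the quoted outlays minus the quoted revenues, and a 70 percent rise to $24 trillion implies a starting debt of about $14 trillion today.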

Reality is likely to be worse. So-called discretionary spending, subject to annual appropriations, is to be held to record low, and probably unrealistic, levels. In contrast, entitlement spending will explode.

Last June, CBO published a report looking at the federal budget through 2040. Warned the agency: “the extended baseline projections show revenues that fall well short of spending over the long term, producing a substantial imbalance in the federal budget.” By 2040 the agency imagines revenues rising sharply, to 19.4 percent of GDP, but spending going up even further, to 25.3 percent of GDP.

Using its revised figures, CBO warned: “Three decades from now debt held by the public is projected to equal 155 percent of GDP, a higher percentage than any previously recorded in the United States.” That exceeds even the previous record, set while exiting World War II: 106 percent in 1946.

CBO noted the potentially destructive consequences of such indebtedness. Washington’s interest burden would rise sharply. Moreover, “because federal borrowing reduces total saving in the economy over time, the nation’s capital stock would ultimately be smaller than it would be if debt was smaller, and productivity and total wages would be lower.” Americans would be poorer and have less money to fund the steadily rising budgets.

Worse, investors could come to see federal debt as unpayable. Warned CBO: “There would be a greater risk that investors would become unwilling to finance the government’s borrowing needs unless they were compensated with very high interest rates; if that happened, interest rates on federal debt would rise suddenly and sharply.” This in turn “would reduce the market value of outstanding government bonds, causing losses for investors and perhaps precipitating a broader financial crisis by creating losses for mutual funds, pension funds, insurance companies, banks, and other holders of government debt—losses that might be large enough to cause some financial institutions to fail.”

As I wrote for American Spectator: “There’s no time to waste. Uncle Sam is headed toward bankruptcy. Without serious budget reform, we all will be paying the high price of fiscal failure.”

The Current Wisdom is a series of occasional articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature or of a more technical nature. These items may not have received the media attention that they deserved or have been misinterpreted in the popular press.

We hardly need a high-tech fly swatter (although they are fun and effective) to kill this nuisance; it’s so languorous that one can just put a thumb over it and squish.

Jeb Bush’s candidacy? No, rather the purported connection between human-caused global warming and the highly-publicized spread of the Zika virus.

According to a recent headline in The Guardian (big surprise) “Climate change may have helped spread Zika virus, according to WHO scientists.”

Here are a few salient passages from The Guardian article:

“Zika is the kind of thing we’ve been ranting about for 20 years,” said Daniel Brooks, a biologist at University of Nebraska-Lincoln. “We should’ve anticipated it. Whenever the planet has faced a major climate change event, man-made or not, species have moved around and their pathogens have come into contact with species with no resistance.”


“We know that warmer and wetter conditions facilitate the transmission of mosquito-borne diseases so it’s plausible that climate conditions have aided the spread of Zika,” said Dr. Diarmid Campbell-Lendrum, a lead scientist on climate change at WHO.

Is it really “plausible”?


The Zika virus is transmitted by two species of mosquitoes, Aedes aegypti and Aedes albopictus, that are now widespread in tropical and sub-tropical regions of the globe (including the Southeastern U.S.), although they haven’t always been.

These mosquito species, respectively, have their origins in the jungles of Africa and Asia—where they largely remained for countless thousands of years. It’s hypothesized that Aedes aegypti found its way out of Africa during the slave trade, which brought the mosquitoes into the New World, and from there they spread throughout the tropics and subtropics of North and South America. Aedes albopictus, also known as the Asian tiger mosquito, is a more recent global traveler, spreading out from the forests of Southeast Asia to the rest of the world during the 1980s thanks to the interconnectedness of the modern transportation network. Figure 1 below shows the modeled geographic distribution of each of these species.

Figure 1. (Top) Global map of the modelled distribution of Aedes aegypti. (Bottom) Global map of the modelled distribution of Aedes albopictus. The map depicts the probability of occurrence (from 0 blue to 1 red) at a spatial resolution of 5 km × 5 km. (figures from Kraemer et al., 2015)

The distribution explosion from confined forests to the global tropics and subtropics had nothing whatsoever to do with climate change; rather, it was due to the introduction of the mosquito species into favorable existing climates.

Since the Aedes mosquitoes are now widely present in many areas also highly populated by the human species, the possibility exists for outbreaks and the rapid spread of any of the diseases that the mosquitoes may carry, including dengue fever, yellow fever, chikungunya, West Nile and Zika.

But what about climate change? It’s got to make things better for warmth-loving mosquitoes and the diseases they spread, right? After all, we read that in the New York Times!

Hardly. Climate change acts at the periphery of the large extant climate-limited distributions. And its impact on those margins is anything but straightforward.

Many scientific studies have used various types of climate and ecological niche modelling to project how Aedes mosquito populations and geographic ranges may change under global warming, and they all project a mixed bag of outcomes. The models project range/population expansions in some areas and range/population contractions in others, with the specific details varying among the different studies. The one take-home message from all of these studies is that the interaction among mosquitoes, climate, and disease is, in a word, complex.

For example, here are some typical results (from Khormi and Kumar, 2014) showing the current distribution of Aedes aegypti mosquitos along with the projected distribution in the latter part of this century under a pretty much business-as-usual climate change scenario.

Figure 2. (Top) Modeled suitability for Aedes aegypti based on the current climate conditions. (Bottom) Estimated suitability for Aedes aegypti in 2070 based on the SRES A1B (business-as-usual) emission scenario. White = areas unfavorable for the Aedes mosquito; blue = marginally suitable areas; blue/yellow = favorable areas; yellow/red = very favorable areas. (adapted from Khormi and Kumar, 2014)

Good luck seeing much of a difference. Which is our point: the bulk of the large distribution of Aedes aegypti remains the same now as 50 years from now, with small changes (of varying sign) taking place at the edges. Similar results have been reported for Aedes albopictus by Yiannis Proestos and colleagues. The authors of the Aedes aegypti study write:

Surprisingly, the overall result of the modelling reported here indicates an overall contraction in the climatically suitable area for Aedes in the future… The situation is not straightforward but rather complicated as some areas will see an upsurge in transmission, while others are likely to see a decline. [emphasis added]

The human response to the presence of the mosquitoes and the disease is equally complicated, if not more so.

As Dr. Paul Reiter pointed out in his 2001 classic “Climate Change and Mosquito-Borne Disease”:

[T]he histories of three such diseases—malaria, yellow fever, and dengue—reveal that climate has rarely been the principal determinant of their prevalence or range; human activities and their impact on local ecology have generally been much more significant.

More recently, Christofer Aström and colleagues found this to be the case in their modelling efforts for dengue fever (a similar result would be expected for Zika, which is carried by the same mosquitoes). They report:

Empirically, the geographic distribution of dengue is strongly dependent on both climatic and socioeconomic variables. Under a scenario of constant [socioeconomic development], global climate change results in a modest but important increase in the global population at risk of dengue. Under scenarios of high [socioeconomic development], this adverse effect of climate change is counteracted by the beneficial effect of socioeconomic development.

The bottom line is that even 50-100 years from now, under scenarios of continued global warming, the changes to the Aedes populations will be extremely difficult to detect and will make up an exceedingly small portion of the extant distributions and range. The impact of those changes is further complicated by the interactions of mosquitoes, humans, and disease, interactions that can be mitigated with a modicum of prevention.

Linking climate change to the current mosquito range and outbreak of Zika (or any other of your favorite Aedes-borne diseases)?  No can do.

The threat of the spread of vector-borne diseases, like most every other horror purported to be brought about by climate change, is greatly overblown. The scientific literature shows the impact, if any, to be ill-defined and minor. As such, it is many times more effectively addressed by directed, local measures than by attempting to alter the future course of the earth’s climate through restrictions on greenhouse gas emissions (produced from the burning of fossil fuels to generate power)—the success and outcome of which are themselves uncertain.



Aström, C, et al., 2012. Potential Distribution of Dengue Fever Under Scenarios of Climate Change and Economic Development. EcoHealth, 9, 448-454.

Khormi, H. M., and L. Kumar, 2014. Climate change and the potential global distribution of Aedes aegypti: spatial modelling using geographical information system and CLIMEX. Geospatial Health, 8, 405-415.

Kraemer, M. U. G., et al., 2015. The global distribution of the arbovirus vectors Aedes aegypti and Ae. albopictus, eLife, 4, doi: 10.7554/eLife.08374

Proestos, Y., et al., 2015. Present and future projections of habitat suitability of the Asian tiger mosquito, a vector of viral pathogens, from global climate simulation. Philosophical Transactions of the Royal Society B, 370,20130554, http://dx.doi.org/10.1098/rstb.2013.0554

Reiter, P., 2001. Climate change and mosquito-borne disease, Environmental Health Perspectives, 109, 141-161.

Last year, the comedy duo Key & Peele’s TeachingCenter sketch imagined what it would be like if teachers were treated like pro-athletes, earning millions, being drafted in widely televised events, and starring in car commercials. We’re not likely to see the latter two anytime soon, but some teachers are already earning seven figures.

The Key & Peele sketch inspired think pieces arguing that K-12 teachers should be paid more, but without making any fundamental changes to the existing system. Matt Barnum at The Seventy-Four brilliantly satirized this view in calling for pro-athletes to be treated more like teachers: stop judging teams based on wins or players based on points scored, eliminate performance pay in favor of seniority pay, and get rid of profits.

Barnum’s serious point, of course, is that these factors all contribute to athletes’ high salaries. There are at least two other major factors: the relative scarcity of highly talented athletes and their huge audience. The world’s best curlers don’t make seven figures because no one cares about curling (apologies to any Canadian readers), and while high-quality football referees are crucial to a sport with a huge audience, they’re a lot more replaceable than a good quarterback.

But what if we combined these ingredients? What if there were a for-profit system in which high-quality teachers had access to a huge audience and were paid based on their performance?

Actually, such a system already exists:

Kim Ki-hoon earns $4 million a year in South Korea, where he is known as a rock-star teacher—a combination of words not typically heard in the rest of the world. Mr. Kim has been teaching for over 20 years, all of them in the country’s private, after-school tutoring academies, known as hagwons. Unlike most teachers across the globe, he is paid according to the demand for his skills—and he is in high demand.

He may be a “rock star” but how does Mr. Kim have an audience large enough that he earns more than the average Major League Baseball player? Answer: The Internet.

Mr. Kim works about 60 hours a week teaching English, although he spends only three of those hours giving lectures. His classes are recorded on video, and the Internet has turned them into commodities, available for purchase online at the rate of $4 an hour. He spends most of his week responding to students’ online requests for help, developing lesson plans and writing accompanying textbooks and workbooks (some 200 to date).

In the United States, several companies are taking a similar approach to higher education. Last week, Time Magazine profiled Udemy, one of the largest providers of digital higher education:

On Feb. 12, Udemy will announce that more than 10 million students have taken one of its courses. In the U.S., there were about 13 million students working toward a four-year degree during fall 2015 semester, according to the Department of Education. It is another example of the rising popularity of online education as college costs have boomed in the United States. Americans hold $1.2 trillion in student loan debt, second only to mortgages in terms of consumer obligations. Entering the workforce deep in the red could be a handicap that follows graduates the rest of their careers, economists say.

Digital instruction is still in the early stages of development, and research on its impact so far has been mixed. It’s not for everyone. However, it holds the promise of providing students much greater access to top instructors at a lower cost. At the same time, as Joanne Jacobs highlighted, it also gives great instructors access to a much larger audience, and that can translate into significant earnings. As Time reports:

Udemy courses can be rewarding for the platform’s instructors, too. Rob Percival, a former high school teacher in the United Kingdom, has made $6.8 million from a Udemy web development course that took him three months to build. “It got to the stage several months ago where I hit a million hours of viewing that particular month,” he says. “It’s a very different experience than the classroom. The amount of good you can do on this scale is staggering. It’s a fantastic feeling knowing that it’s out there, and while I sleep people can still learn from me.”

Digital instruction is not a panacea for all our education policy challenges (nothing is), and it’s unlikely that it will replace in-person learning, especially for younger students. But it is a good example of how harnessing the market can improve the lot of both students and teachers.

Yet again North Korea has angered “the world.” Pyongyang violated another United Nations ban, launching a satellite into orbit. Washington is leading the campaign to sanction the North.

Announced UN Ambassador Samantha Power: “The accelerated development of North Korea’s nuclear and ballistic missile program poses a serious threat to international peace and security—to the peace and security not just of North Korea’s neighbors, but the peace and security of the entire world.”

The Democratic People’s Republic of Korea is a bad actor. No one should welcome further enhancements to the DPRK’s weapons arsenal.

Yet inflating the North Korean threat also doesn’t serve America’s interests. The U.S. has the most powerful military on earth, including 7,100 nuclear warheads and almost 800 ICBMs, SLBMs, and nuclear-capable bombers. Absent evidence of a suicidal impulse in Pyongyang, there’s little reason for Washington to fear a North Korean attack.

Moreover, the North is surrounded by nations with nuclear weapons (China, Russia) and missiles (those two plus Japan and South Korea). As a “shrimp among whales,” any Korean government could understandably desire to possess the ultimate weapon.

Under such circumstances, allied complaints about the North Korean test sound an awful lot like whining. For two decades U.S. presidents have said that Pyongyang cannot be allowed to develop nuclear weapons. It has done so. Assertions that the DPRK cannot be allowed to deploy ICBMs sound no more credible.

After all, the UN Security Council still is working on new sanctions after the nuclear test last month. China continues to oppose meaningful penalties. Despite U.S. criticism, the People’s Republic of China has reason to fear disintegration of the North Korean regime: loss of political influence and economic investments, possible mass refugee flows, violent factional combat, loose nukes, and the creation of a reunified Korea hosting American troops on China’s border.

Moreover, Beijing blames the U.S. for creating the hostile security environment which encourages the North to develop WMDs. Why should Beijing sacrifice its interests to solve a problem of its chief global adversary’s making?

Pyongyang appears to have taken the measure of its large neighbor. The Kim regime announced its satellite launch on the same day that it reported the visit of a Chinese envoy, suggesting another insulting rebuff for Beijing.

Even if China does more, the North might not yield.

Thus, the U.S. and its allies have no better alternatives in dealing with Pyongyang today than they did last month after the nuclear test. War would be foolhardy, sanctions are a dead-end, and China remains unpersuaded.

As I point out in National Interest: “The only alternative that remains is some form of engagement with the DPRK. Cho Han-bum of the Korea Institute for National Unification argued that the North was using the satellite launch to force talks with America. However, Washington showed no interest in negotiation, so the DPRK launched.”

Of course, no one should bet on negotiating away North Korea’s weapons. If nothing else, Pyongyang watched American and European governments oust Libya’s Moammar Khadafy after, in its view, at least, he foolishly traded away his nuclear weapons and missiles.

Nevertheless, there are things which the North wants, such as direct talks with America, a peace treaty, and economic assistance. Moreover, the DPRK, rather like Burma’s reforming military regime, appears to desire to reduce its reliance on Beijing. This creates an opportunity for the U.S. and its allies.

Perhaps negotiation would temper the North’s worst excesses. Perhaps engagement would encourage domestic reforms. Perhaps a U.S. initiative would spur greater Chinese pressure on Pyongyang.

Perhaps not. But current policy has failed.

Yet again the North has misbehaved. Yet again the allies are talking tough. Samantha Power insisted that “we cannot and will not allow” the North to develop “nuclear-tipped intercontinental ballistic missiles.”

However, yet again Washington is only doing what it has done before. Unfortunately, the same policy will yield the same result as before. It is time to try something different.


President Obama has issued his final federal budget, which includes his proposed spending for 2017. With this data, we can compare spending growth over eight years under Obama to spending growth under past presidents.

Figures 1 and 2 show annual average real (inflation-adjusted) spending growth during presidential terms back to Eisenhower. The data comes from Table 6.1 here, but I made two adjustments, as discussed below.

Figure 1 shows total federal outlays. Ike is negative because defense spending fell at the end of the Korean War. LBJ is the big-spending champ. He increased spending enormously on both guns and butter, as did fellow Texan George W. Bush. Bush II was the biggest spender since LBJ. As for Obama, he comes out as the most frugal president since Ike, based on this metric.

Figure 2 shows total outlays other than defense. Recent presidents have presided over lower spending growth than past presidents. Nixon still stands as the biggest spender since FDR, and the mid-20th century was a horror show of big spenders in general. The Bush II and Obama years have been awful for limited government, but the LBJ-Nixon tag team was a nightmare—not just for rapid spending during their tenures, but also for the creation of many spending and regulatory programs that still haunt us today.

I made two adjustments to the official budget data, both for 2009. First, the official data includes an outlay of $151 billion for TARP in 2009 (page 4 here). But TARP ended up costing taxpayers virtually nothing, and official budget data reverses out the spending in later years. So I’ve subtracted $151 billion from the official 2009 amount. Second, 2009 is the last budget year for Bush II, but 2009 was extraordinary because Obama signed into law a giant spending (stimulus) bill, which included large outlays immediately in 2009. It is not fair to blame Bush II for that (misguided) spending, so I’ve subtracted $114 billion in stimulus spending for that year, per official estimates.
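For readers who want to replicate the metric behind Figures 1 and 2, here is a minimal sketch of the calculation. The outlay and deflator figures below are made up for illustration; the actual analysis uses the official OMB historical tables (Table 6.1):

```python
# Sketch of the metric in Figures 1 and 2: compound average annual growth
# of inflation-adjusted federal outlays over a presidential term.
def avg_annual_real_growth(start_nominal, end_nominal,
                           start_deflator, end_deflator, years):
    """Average annual growth rate of real (deflated) outlays."""
    start_real = start_nominal / start_deflator
    end_real = end_nominal / end_deflator
    return (end_real / start_real) ** (1 / years) - 1

# Hypothetical eight-year term: nominal outlays rise from $3,000B to
# $4,000B while the price level rises 16 percent over the same period.
g = avg_annual_real_growth(3000, 4000, 1.00, 1.16, 8)
print(f"average annual real growth: {g:.2%}")
```

The same function applied to the adjusted 2009 baseline (TARP and stimulus outlays removed) would yield the Obama-era figure shown in the charts.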

Readers will note that Congress is supposed to have the “power of the purse,” not presidents. But I think that the veto gives presidents and congresses roughly equal budget power today. So Figures 1 and 2 can be interpreted as spending growth during each president’s tenure, reflecting the fiscal disposition of both the administration and Congress at the time. Spending growth during the Clinton and Obama years, for example, was moderated by Republican congresses that leaned against the larger domestic spending favored by those two presidents.

One more caveat is that presidents have limited year-to-year control over spending that is on auto-pilot. Some presidents luck out with slower growth on such spending, as Obama has with relatively slow growth on Medicare in recent years.

Finally, it is good news that recent spending growth is down from past decades, but economic growth is slower these days, so the government’s share of the economy still expanded during the Bush II and Obama years. Besides, the FDR-to-Carter years bestowed on us a massive federal government that is actively damaging American society, so we should be working to shrink it, not just grow it more slowly.

A recent Cato policy forum on European over-regulation took an unexpected turn when my friend and colleague Richard Rahn suggested that falling prices and the increasing availability of writing paper may have been responsible for increasing the number and length of our laws and regulations. (Dodd-Frank is longer than the King James Bible, to give just one obvious example.)

Goodness knows what will happen when new legislation stops being printed on writing paper and starts appearing only on the internet. (I never read Apple’s “terms and conditions,” do you?)

Anyhow, Richard’s hypothesis will soon be put to the test in Great Britain, where the lawmakers have just decided to stop writing the Acts of Parliament on calfskin – a tradition dating back to the Magna Carta – and use paper instead. Will the length of British laws increase, as Rahn’s hypothesis predicts? We shall see.

In the meantime, Americans remain stuck with a Niagara Falls of laws and regulations that our lawmakers generate every year. Many distinguished scholars have wondered how to slow our Capitol Hill busybodies down a little. The great Jim Buchanan had some good ideas, but the most effective, if also the harshest, means of preventing over-regulation was surely developed by the Locrians in the 7th century BC.

As Edward Gibbon narrates in The History of the Decline and Fall of the Roman Empire, “A Locrian who proposed any new law, stood forth in the assembly of the people with a cord round his neck, and if the law was rejected, the innovator was instantly strangled.” (Vol. IV, chapter XLIV; pp. 783–4 in volume 2 of the Penguin edition.)

Ours is, of course, an enlightened Republic, not an iron-age Greek settlement on the southern tip of Italy. The life and limb of our elected officials must, therefore, remain safe. But what if, instead of physical destruction, a failed legislative proposal resulted in, so to speak, “political death”? What if the Congressman or Senator whose name appeared on a bill that failed to pass through Congress were prevented from running for reelection after their term in office came to an end?

Just a happy thought before the weekend.

Ohio governor and GOP presidential hopeful John Kasich says he opposes ObamaCare. Yet somehow, he has managed to embrace the law in every possible way. He wanted to implement an Exchange, even though it was clearly unconstitutional under Ohio law. He denounced the Medicaid expansion’s “large and unsustainable costs,” which “will just rack up higher deficits…leaving future generations to pick up the tab.” Then he went ahead and implemented it anyway. Worse, he did so unilaterally, after the Ohio legislature passed legislation prohibiting him from doing so (which he vetoed). When Republican legislators and pro-life groups filed suit to stop him, Kasich defended his power grab all the way to the Ohio Supreme Court.

Kasich’s defense of his record on ObamaCare has been…less than honest. Just one example: in a town hall meeting in South Carolina last night, Kasich railed against how ObamaCare increases the cost of health care at the same time he boasted he has constrained Medicaid spending in Ohio. In fact, Kasich’s unilateral Medicaid expansion not only increased the cost of Medicaid to taxpayers nationwide, but according to Jonathan Ingram of the Foundation for Government Accountability, it “has run $2.7 billion over budget so far [and] is set to run $8 billion over budget by 2017.” 

For more examples of Kasich’s ObamaCare duplicity, see my new four-part (yet highly readable!) series at DarwinsFool.com:


Four decades ago South Korea’s President Park Chung-hee, father of the current president, launched a quest for nuclear weapons. Washington, the South’s military protector, applied substantial pressure to kill the program.

Today it looks like Park might have been right.

The Democratic People’s Republic of Korea continues its relentless quest for nuclear weapons and long-range missiles. The South is attempting to find an effective response.

Although the DPRK is unlikely to attack since it would lose a full-scale war, the Republic of Korea remains uncomfortably dependent on America. And Washington’s commitment to the populous and prosperous ROK likely will decline as America’s finances worsen and challenges elsewhere multiply.

In response, there is talk of reviving the South’s nuclear option. Won Yoo-cheol, parliamentary floor leader of the ruling Saenuri Party, told the National Assembly: “We cannot borrow an umbrella from a neighbor every time it rains. We need to have a raincoat and wear it ourselves.”

Chung Mong-joon—member of the National Assembly, presidential candidate, and Asan Institute founder—made a similar plea two years ago. He told an American audience, “if North Korea keeps insisting on staying nuclear then it must know that we will have no choice but to go nuclear.” He suggested that the South withdraw from the Nuclear Nonproliferation Treaty and “match North Korea’s nuclear progress step-by step while committing to stop if North Korea stops.”

The public seems receptive. Support for a South Korean nuclear program is on the upswing, hitting 66 percent in 2013. While President Park Geun-hye’s government remains formally committed to the NPT, Seoul has conducted nuclear experiments and resisted oversight by the International Atomic Energy Agency.

Of course, the idea triggers a horrified reaction in Washington.

Unfortunately, in Northeast Asia today nonproliferation operates a little like gun control in the U.S.: only the bad guys end up armed. China, Russia, and North Korea all have nuclear weapons. America’s allies, Japan and South Korea, do not, and expect Washington to defend them. To do so the U.S. would have to risk Los Angeles to protect Seoul and Tokyo—and maybe Taipei and Canberra as well, depending on how far Washington extends the “nuclear umbrella.”

While America’s overwhelming nuclear arsenal should deter anyone else from using nukes, conflicts do not always evolve rationally. South Korea and Japan are important international partners, but their protection is not worth creating an unnecessary existential threat to the American homeland.

Better to create a balance of power in which the U.S. is not a target if nukes start falling. That could be achieved by independent South Korean and Japanese nuclear deterrents. Such a prospect would antagonize China. But then, such an arsenal would deter the People’s Republic of China as well as the DPRK, which would also serve American interests.

Moreover, the mere threat might solve the problem. When faced with the prospect of Japanese and South Korean nuclear weapons, China might come to see the wisdom of applying greater pressure on the North.

The U.S.-ROK discussions over THAAD may have encouraged Beijing to indicate its willingness to support a UN resolution imposing more pain on the North for its latest nuclear test. The prospect of having two more nuclear neighbors would concentrate minds in Zhongnanhai.

Abandoning nonproliferation is not a decision to take lightly. No one wants a nuclear arms race.

But the PRC already is improving its nuclear forces. And allowing North Korea to enjoy a unilateral advantage creates great dangers.

So as I wrote for National Interest: “policymakers should consider the possibility of a nuclear South Korea. Keeping America entangled in the Korean imbroglio as Pyongyang develops nuclear weapons is a bad option which could turn catastrophic. Blessing allied development of nuclear weapons might prove to be a better alternative.”

Park Chung-hee was a brute, but his desire for an ROK nuclear weapon looks prescient. Maybe it’s time for the good guys in Northeast Asia to be armed as well.

Yesterday, the New York Times editorial board called on Hillary Clinton to leave the realm of economic reality behind and join the ranks of those seeking to drastically increase the minimum wage to $15. The timing of this latest exhortation would almost be amusing if it weren’t so disconcerting. It came the same day that four former Democratic chairs of the Council of Economic Advisers sent an open letter to her rival for the Democratic nomination, Bernie Sanders, citing concerns that some of the claims of his campaign “cannot be supported by the economic evidence.” The editorial board engages in its own bout of economic fantasy with the claim that “economic obstacles are not standing in the way” of more than doubling the minimum wage. Even Alan Krueger, author of one of the major studies finding negligible disemployment effects from a past minimum wage increase, took to the pages of the New York Times to oppose a $15 minimum because it is “beyond international experience, and could well be counterproductive.”

In order to make their support for a $15 minimum seem more reasonable, the board alludes to an understanding among proponents of a higher minimum wage that a robust one equals half the average wage. These proposals generally use the median hourly wage, not the higher mean the board references, and getting to that ratio with a $15 minimum wage in 2022 would require wage growth higher than 5 percent, levels not seen in well over a decade. So it’s not only the Sanders campaign engaging in fanciful economic assumptions; the NYT editorial board is right there with them. More reasonable wage growth assumptions would place a fully phased-in $15 minimum wage around the highest levels seen in the developed world, beyond the frontier of the existing economic literature. Such a move would make major negative unintended consequences near certain.
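The arithmetic behind that wage-growth claim is straightforward to sketch. The starting median wage and the phase-in window below are assumptions for illustration, not figures from the editorial:

```python
# Back-of-the-envelope check of the "minimum = half the median wage" benchmark.
# Assumed (illustrative) starting point: a median hourly wage of $17.40 in
# 2016, with a $15 federal minimum fully phased in by 2022.
median_2016 = 17.40   # assumed 2016 median hourly wage (USD)
target_min = 15.00    # proposed minimum wage
years = 6             # 2016 -> 2022

# For $15 to equal half the median, the median must reach $30 by 2022.
required_median = target_min / 0.5

# Constant annual growth rate needed to get there.
growth = (required_median / median_2016) ** (1 / years) - 1
print(f"required annual median wage growth: {growth:.1%}")
```

Under these assumed figures the required growth rate comes out far above 5 percent a year, which is the board’s problem in a nutshell.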

Even this could understate the problems with a federal $15 minimum wage, as some states and jurisdictions would be even less able to absorb the effects of such a radical increase. The editorial board asserts that the minimum wage needs to be increased at the federal level because some states have not raised it on their own. The piece fails to give any consideration to how less affluent parts of the country would fare under the proposed increase. In Puerto Rico, for instance, the current federal minimum wage is already 77 percent of the median hourly wage (France, for comparison, is around 61 percent). While there are certainly other factors contributing to the crisis in Puerto Rico, the high minimum wage is a contributing factor, and doubling it would have disastrous consequences for the roughly 3.6 million people living there. The same dynamic would hold, to a lesser extent, for less affluent states like Alabama or Arkansas, especially in non-metro areas, where this increase would lead to a higher ratio relative to the median hourly wage than almost anywhere else in the developed world. This could have a devastating impact on state and local economies in these places and exacerbate the serious problems with poverty and limited opportunity they already grapple with.

These concerns aside, the minimum wage is an incredibly ineffective way to try to alleviate poverty, and would probably further limit the opportunities for affected workers. Few of the benefits of a higher minimum wage would accrue to families in poverty: with a $15 minimum wage, only 12 percent of the benefits would go to poor families, while 38 percent would go to families at least three times above the poverty line. Even worse, while some studies have indeed found past minimum wage increases had a limited impact on employment, others focusing on targeted workers (younger workers with lower skills) have found significant disemployment effects and reduced economic mobility for this group. Minimum wage increases are poorly targeted to reduce poverty, and could actually be counterproductive for the very people they are supposed to help.

The NYT editorial board’s call for Hillary to embrace the $15 minimum wage elevates intentions above outcomes, leaving behind valid questions about trade-offs for the realm of economic fantasy. Their piece ignores the concerns that an increase of that magnitude could have severe unintended consequences that could exacerbate many of the problems they would want it to address. Perhaps they should have perused their own archives, as the same editorial board said years ago, “[t]he idea of using a minimum wage to overcome poverty is old, honorable - and fundamentally flawed.”

New data from Gallup suggests that residents in US states with freer markets are more optimistic about their state’s economic prospects. In their 50-State Poll, Gallup asked Americans what they thought about the current economic conditions in their own state as well as their economic expectations for the future. North Dakota (92%), Utah (84%), and Texas (82%) top the list as states with the highest share of residents who rate their current economic conditions as excellent or good. In stark contrast, only 18% of Rhode Island residents, 23% of Illinois residents, and 28% of West Virginians rate their state’s economic conditions as excellent or good. Similarly, the states whose residents are most optimistic about their economic futures include Utah (83%) and Texas (77%), while states at the bottom include Illinois (34%) and West Virginia (36%).

What explains these stark differences in economic evaluations and expectations across US states? Could differences across states in economic freedom, such as government regulations on business, tax rates, government spending, and property rights protection, be part of the story?

Figure 1: Relationship Between State Economic Freedom Scores
and Residents’ Evaluations of Current Economic Conditions


 Source: Economic Freedom Index 2011, Freedom in the 50 States; Gallup 50-State Poll 2015

To investigate, I examined the Economic Freedom Index from Freedom in the 50 States, calculated by Jason Sorens and William Ruger and published by the Mercatus Center, and compared it with Gallup’s public opinion numbers. It turns out there is a strong relationship between economic freedom and public economic optimism. Americans who live in states that have lower taxes, lower government spending, fewer regulations on business, better property rights protections, a better tort system, etc., also tend to have more confidence in their state’s economic conditions.

Figure 1 shows the relationship between states’ economic freedom scores and public evaluations of current economic conditions. As you can see, states that are more economically free (meaning less government regulation, lower taxes, and less government spending) also tend to have residents who say their state economies are excellent or good. This correlation1 between economic freedom and residents’ evaluations of their state’s current economic conditions is .55.

Next, Figure 2 shows the relationship between each state’s economic freedom score and the share of residents in each state who think their state economy is getting better. As you can see, states that are more economically free tend to have residents who are more optimistic about their states’ economic futures. The correlation between economic freedom and residents’ positive economic outlook is .44. Notably, economic optimism in states like New York and California, which have less free economies, tends to be the exception rather than the rule when compared to the nation as a whole.

Figure 2: Relationship Between State Economic Freedom Scores
and Residents’ Positive Economic Outlook 

Source: Economic Freedom Index 2011, Freedom in the 50 States; Gallup 50-State Poll 2015. Correlation: .44; if outliers like New York, California, and Nevada are excluded, the correlation rises to .58.

Of course, correlation is not causation. But it would also be a mistake not to consider the obvious possibility that economic freedom and economic success are related. There is a reasonable argument to be made that making it easier to start businesses, reducing regulatory barriers, and leaving more money in the private sector can lead to more entrepreneurship, companies creating more jobs, technological innovation, economic growth, and thus more optimistic people.

While these results do not report on actual economic conditions, public perception of economic conditions is also crucially important. Americans’ perception of their state economies can act like a composite score of how residents perceive the multifarious economic forces at work in their states. 

In sum, these results provide some indication that freer markets may provide greater hope for a better economic future.

Check out the Freedom in the 50 States rankings:


1 Correlation is a measure of the strength and direction of the linear relationship between two variables, ranging from -1 (a perfect negative relationship) to +1 (a perfect positive relationship).
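For readers curious how a figure like the .55 above is computed, here is a minimal sketch of the Pearson correlation coefficient in Python. The state scores below are made up for illustration; the actual analysis used the Mercatus index and the Gallup poll data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: five states' economic freedom scores and the share
# of residents rating their state economy excellent or good.
freedom = [0.9, 0.7, 0.5, 0.3, 0.1]
optimism = [84, 70, 55, 40, 30]
print(round(pearson_r(freedom, optimism), 2))
```

A value near +1, as this fabricated example produces, would indicate that freer states almost uniformly have more optimistic residents; the real-world .55 indicates a strong but imperfect relationship.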

At the start of the year there was a lot of speculation on whether President Obama would crown his historic rapprochement with Cuba with an equally historic visit to the island. The guesswork is over with today’s announcement of his trip next month.

Many things have changed in the last year in the relationship between the United States and Cuba: diplomatic ties have been restored, the leaders of both countries have met twice, dozens of commercial flights per day have been authorized, hundreds of thousands of Americans are travelling to the once-forbidden island, and many economic sanctions have been lifted.

And yet there’s one thing that hasn’t changed: the repressive nature of Cuba’s Communist dictatorship. If anything, things might be getting worse. Miami Herald columnist Andrés Oppenheimer recently reported that the number of self-employed workers in Cuba has actually dropped in the last six months. Arbitrary detentions of peaceful opposition activists are on the rise. Economic reforms are still too timid. If there is a lot of enthusiasm about Cuba lately, it has more to do with what Washington is doing than with what Havana is actually delivering.

This is not to say that Obama’s rapprochement with Cuba has failed: Washington’s previous policy of isolating the island was utterly counterproductive. But we should not kid ourselves about an imminent change of the nature of the Castro regime.

President Obama has said that his trip’s main objective will be to “improve the lives of the Cuban people.” If so, he should follow in the footsteps of Jimmy Carter when he visited the island in 2002: the former president met with dissidents and was allowed to address the nation uncensored in a speech on national TV, where he called for democratic elections, respect for human rights, and greater civil liberties.

If Obama fails to get similar concessions, his trip will only boost the standing of the Castro regime. It will be all about cementing his legacy and not about trying to improve the lives of ordinary Cubans. 

Wow, will bacon be next?

Last fall we wrote about the improvement in tomato taste that results from growing them in elevated carbon dioxide and seawater (Idso and Michaels, 2015). Now it looks like the same treatment improves lettuce. 

Enhancing crop nutritional value has long been a goal of the agricultural industry. Growing plants under less than optimal conditions for a short period of time generally increases their oxidative stress. To counter such stress, plants will usually increase their antioxidant metabolism which, in turn, elevates the presence of various antioxidant compounds in their tissues, compounds that can be of great nutritional value from a human point of view, such as helping to reduce the effects of ageing.

However, stress-induced nutritional benefits often come at a price, including a reduction in plant growth and yield, making it unproductive and costly to implement these practices in the real world. But what if there was a way to achieve such benefits without sacrificing crop biomass, having our cake and eating it, too? An intriguing paper recently published in the journal Scientia Horticulturae explains just how this can happen, involving lettuce, salt stress, and atmospheric CO2.

According to Pérez-López et al. (2015), the authors of this new work, “few studies have utilized salt irrigation combined with CO2-enriched atmospheres to enhance a crop’s nutraceutical value.” Thus, the team of five Spanish researchers set out to conduct just such an experiment involving two lettuce cultivars, Blonde of Paris Badavia (a green-leaf lettuce) and Oak Leaf (a red-leaf lettuce and a common garden variety). In so doing, they grew the lettuce cultivars from seed in controlled environment chambers at either ambient or enriched CO2 for a period of 35 days after which they supplied a subset of the two treatments with either 0 or 200 mM NaCl for 4 days to simulate salt stress. Thereafter they conducted a series of analyses to report growth and nutritional characteristics of the cultivars under these varying growth conditions. And what did those analyses reveal?

As shown in Figure 1, elevated CO2 increased the leaf water content, biomass and antioxidant capacity of both lettuce cultivars under normal and salt-stressed growing conditions. Plant biomass, for example, was enhanced by 42 and 62 percent for green and red lettuce, respectively, under normal growing conditions (no salt stress) and by 56 and 61 percent, respectively, when salt stressed. Similarly, elevated CO2 increased lettuce antioxidant capacity by 196 (green cultivar) and 293 percent (red cultivar) under normal conditions (non-salt-stressed), and by 61 (green cultivar) and 109 percent (red cultivar) when salt stressed. What is more, as this graphic illustrates, elevated CO2 totally ameliorated the negative effects of salt stress on plant biomass and antioxidant capacity, such that the values of these two parameters under elevated CO2 and salt-stress conditions were higher than those observed under ambient CO2, with or without salt stress.

Figure 1. Effects of salt treatment and CO2 concentration on water content (A), biomass production (B) and antioxidant capacity (C) of green leaf and red leaf lettuce. Adapted from Pérez-López et al. (2015).
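The percent-enhancement figures above follow from the standard percent-change formula. A minimal sketch, using hypothetical illustrative values rather than the paper's raw data (only the arithmetic is shown):

```python
# Percent enhancement from elevated CO2: (elevated - ambient) / ambient * 100.
# The gram values below are illustrative placeholders, NOT data from
# Perez-Lopez et al. (2015); only the method matches the text.
def pct_gain(ambient, elevated):
    """Percent change of the elevated-CO2 value relative to the ambient value."""
    return (elevated - ambient) / ambient * 100.0

# A hypothetical cultivar with 10 g biomass at ambient CO2 and 14.2 g
# at elevated CO2 shows the kind of 42 percent gain reported for green lettuce.
print(round(pct_gain(10.0, 14.2)))
```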

Pérez-López et al. also report that “under elevated CO2, both lettuce cultivars increased the uptake of almost all minerals to adjust to the higher growth rates, reaching similar concentrations to the ones detected under ambient CO2.” However, the red cultivar “seemed to gain a greater advantage from elevated CO2 than the green cultivar because it better adjusted both mineral uptake and antioxidant metabolism.”

Such positive findings bode well for the future of the lettuce industry, for as concluded by Pérez-López et al., “elevated CO2 alone or in combination with short environmental salt stress permits us to increase the nutritional quality (increasing the concentration of some minerals and antioxidants) of lettuce without yield losses or even increasing production.” And that is a future worth hoping for!

PS: One of us is going to run a seawater experiment on tomatoes this summer. Readers should get a full report around August 15!



Idso, C.D., and P. J. Michaels, 2015.  Want better tomatoes?  Add carbon dioxide and a pinch of salt! http://www.cato.org/blog/want-better-tomatoes-add-carbon-dioxide-pinch-salt

Pérez-López, U., Miranda-Apodaca, J., Lacuesta, M., Mena-Petite, A. and Muñoz-Rueda, A. 2015. Growth and nutritional quality improvement in two differently pigmented lettuce cultivars grown under elevated CO2 and/or salinity. Scientia Horticulturae 195: 56-66.


When the Brookings Institution and Urban Institute claim any tax reform will “lose trillions,” they are comparing their static estimates of revenues from those plans (which assume tax rates could double or be cut in half with no effect on growth or tax avoidance) to totally unrealistic “baseline” projections from the Congressional Budget Office.  Those CBO projections assume that rapid 2.4% annual increases in real hourly compensation over the next decade will push more people into higher tax brackets every year.  As a result, the average tax burden supposedly rises forever – from 17.7% of GDP in 2015 to 19.9% in 2045 and 23.8% by 2090.  And, typical of static estimates, this ever-increasing tax burden is imagined to have no bad effects on the economy.

Such a high level of federal taxation never happened in the past (20% was a record set in the tech stock boom of 2000) and it will never happen in the future.  In short, this is an entirely bogus basis by which to judge tax reform plans.

A far more sensible question would be this:

Will the Cruz or Rubio tax reforms raise just as much money as the Obama “tax increase” has – namely, 17.5% of GDP from 2013 to 2015? If so, then real tax revenues will grow faster after reform, because real GDP growth will surely be at least 1.2% faster – or a middling 3.5% a year, which is all the cautious Tax Foundation estimates suggest.
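The logic here compounds quickly: if the tax share of GDP is held constant, revenue grows at the GDP growth rate, so even a modest growth differential widens the revenue gap each year. A minimal sketch, using an illustrative starting GDP (the 0.175 share and the 2.3%-vs-3.5% growth rates come from the text; the $18 trillion base is a placeholder, not an official figure):

```python
# Revenue at a constant share of GDP compounds at the GDP growth rate.
# Starting GDP of $18 trillion is an illustrative placeholder.
SHARE = 0.175  # tax revenue as a fraction of GDP, per the post

def revenue_after(years, gdp0, growth):
    """Real revenue after `years` of constant-share growth at rate `growth`."""
    return SHARE * gdp0 * (1.0 + growth) ** years

slow = revenue_after(10, 18.0, 0.023)  # baseline real growth (~3.5% - 1.2%)
fast = revenue_after(10, 18.0, 0.035)  # the post's 1.2-point-faster scenario
print(f"Extra real revenue in year 10: ${fast - slow:.2f} trillion")
```

The point of the sketch is simply that a constant revenue share of a faster-growing economy yields more revenue every year, without any increase in the tax burden as a share of GDP.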

Ted Cruz says the defense plan he unveiled Tuesday in South Carolina would give the U.S. military “more tooth, less tail.” Actually Cruz’s plan would produce more of everything, especially debt.

Cruz says that as President he’ll spend 4.1 percent of gross domestic product (GDP) on defense for two years, and 4 percent thereafter. As the chart below shows, under standard growth predictions, Cruz’s plan produces a massive increase in military spending: about $1.2 trillion over what would be Cruz’s first term and $2.6 trillion over eight years. Details on the chart are at the end of this post.

The chart also shows how much Cruz’s plan exceeds the Budget Control Act’s caps on defense spending, which remain in force through 2021. Spending bills exceeding those caps trigger sequestration: across-the-board cuts that keep spending at the cap. So Cruz’s plan depends on Congress repealing the law. Experience suggests that Congress will instead trade on dodgy future savings to raise caps by twenty or thirty billion a year—about a tenth of what Cruz needs.

Cruz is relatively clear on the tooth—force structure—he hopes to buy. He’d grow the Army’s end strength to 1.4 million, with at least 525,000 in the active force. Those numbers are scheduled to fall to 980,000 and 450,000 in 2018.  Cruz would “reverse the cuts to the manpower of the Marines,” which presumably means going from the current 182,000 active personnel to the 202,000 peak reached in 2011 for the Iraq and Afghanistan wars. The Navy would grow from 287 to at least 350 ships, and the Air Force would add 1,000 aircraft to reach 6,000. The plan would also modernize each leg of the nuclear triad of intercontinental ballistic missiles, bombers and submarine-launched ballistic missiles. It needlessly asserts that the triad is “on the verge of slipping away,” ignoring current plans to modernize each leg.

Despite the plan’s “paying for rebuilding” section, Cruz is unclear on how he’ll fund the buildup. The “less tail” option is a red herring. The implication, standard among those trying to look fiscally responsible while throwing money at the military, is that you cut administration to pay for force structure. But Cruz doesn’t sustain the pretense beyond attacking the Pentagon’s “bloated bureaucracy and social experiments.” He never identifies what bureaucracy—commands, budget lines or contracts—he’d axe. He doesn’t explain how to overcome the Pentagon’s tendency to increase overhead during buildups or betray concern about the meager results of past efforts to shift tail to tooth. He complains about “12,000 accountants in Pentagon,” but between its buildup and its push to audit the Pentagon, the Cruz administration would need thousands more. Cruz’s refusal to “bow to political correctness” evidently does not cover politically dicey efficiency measures like another Base Realignment and Closure round or reform of military benefit and compensation packages. Nor does Cruz say how his idea of social experiments, like letting gays serve openly or accommodating soldiers allergic to gluten, adds costs.

Cruz, of course, will not fund his buildup through taxes. Instead, the plan mentions selling federal assets, unspecified spending cuts, and tax revenue juiced by four or five percent annual growth. Wishful thinking seems a fair summary. Asset sales would only help marginally. The plan lacks a method for abolishing Democrats, who will almost certainly retain enough Senators to keep non-defense discretionary spending pegged to defense, or to make entitlement cuts politically unpalatable. And betting that you can vastly outperform typical growth isn’t sound planning.

That leaves debt as the most likely way President Cruz will fund his buildup, adding to the pile he already plans. And that helps explain why Cruz’s defense plan won’t happen. With CBO’s recent estimate of accelerated deficit growth, the politics constraining Pentagon spending should last. Assuming Democrats and vulnerable Republicans continue to block big entitlement spending cuts and Republicans prevent tax increases, concern about deficits translates into pressure to restrain discretionary spending, of which defense is more than half.

Also in the unclear category: the rationale undergirding the plan. Cruz’s force structure hopes, which he doesn’t price, have little to do with his 4 percent spending goal, beyond a general commitment to more. As Matt Fay explains, spending four percent of GDP on defense is to punt on strategy, which involves prioritizing resources to meet threats. Threats don’t grow with wealth or fade with poverty.

Cruz’s force structure suggestions also lack a strategic basis. Cruz criticizes nation-building wars and worries about China. But rather than distribute his buildup accordingly by focusing it on the Navy and Air Force, he gives equally to the ground forces. Ultimately, Cruz proposes spending a lot more to do what we are now doing. Like those of Jeb Bush, Marco Rubio, and even Dick Cheney, Cruz’s rhetorical assaults on the Obama administration’s defense policy belie underlying agreement with its premises. I attack those bipartisan beliefs in various other places.


The chart covers 2018-2025 because those are the first and last fiscal year budgets that the next administration will plan, assuming it wins a second term. For GDP growth, the chart uses recent Congressional Budget Office projections. The current spending plans are in budget authority and come from the Pentagon’s new five year defense spending plan (see figure 1-3 on page 1-5) and Office of Management and Budget estimates for subsequent years (table 28-1). That spending track falls under 3 percent of CBO’s GDP prediction in 2018 and slides toward 2.5 percent by the mid-2020s. The caps are on page 13 here. All figures are in nominal dollars.
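The chart's method can be sketched in a few lines: multiply the gap between Cruz's pledged share of GDP and the baseline spending share by GDP, then sum over the budget window. The GDP path and baseline shares below are illustrative placeholders, not actual CBO or OMB figures; only the 4.1%/4% pledge and the "just under 3 percent sliding toward 2.5 percent" baseline shape come from the text.

```python
# Cost of a 4.1%/4%-of-GDP defense pledge over a baseline spending track.
# GDP figures are illustrative placeholders (nominal $ trillions),
# NOT actual CBO projections; the method mirrors the chart described above.
gdp = {2018: 20.5, 2019: 21.3, 2020: 22.2, 2021: 23.1,
       2022: 24.0, 2023: 25.0, 2024: 26.0, 2025: 27.0}

def cruz_share(year):
    # 4.1% of GDP for the first two years, 4% thereafter, per the plan
    return 0.041 if year in (2018, 2019) else 0.040

def baseline_share(year):
    # Illustrative baseline: just under 3% in 2018, sliding toward 2.5%
    return 0.0295 - 0.0006 * (year - 2018)

extra = sum((cruz_share(y) - baseline_share(y)) * gdp[y] for y in gdp)
print(f"Added spending over 2018-2025: ${extra:.2f} trillion")
```

With these placeholder inputs the sum lands in the neighborhood of $2.5 trillion over eight years, the same order as the post's $2.6 trillion figure; the exact number depends entirely on the GDP path assumed.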

Barely a week after a Georgia judge threw out a challenge to the state’s scholarship tax credit law, today the Oklahoma Supreme Court unanimously upheld school vouchers for students with special needs.

Plaintiffs argued that the Lindsey Nicole Henry Scholarships for Students with Disabilities violated the Oklahoma constitution’s historically anti-Catholic Blaine Amendment, which prohibits the state from appropriating public funds for sectarian purposes. A lower court had agreed, limiting the vouchers only to private schools without any religious affiliation. Today, the state supreme court overturned that decision, upholding the law in its entirety.

Plaintiffs had argued that the vouchers unconstitutionally aided religious schools, but the court found that the voucher law “is void of any preference between a sectarian or non-sectarian private school” and that “there is no influence being exerted by the State for any sectarian purpose with respect to whether a private school satisfies [the law’s eligibility] requirements.”

The plaintiffs further argued that, despite being “religion neutral,” the law is unconstitutional because more voucher recipients chose to attend religious schools than non-religious schools. However, the court rejected this claim, citing the U.S. Supreme Court’s decision in Zelman v. Simmons-Harris (which upheld school vouchers in Ohio): “the constitutionality of a neutral educational aid program simply does not turn on whether and why, in a particular area, at a particular time, most private schools are religious, or most recipients choose to use the aid at a religious school.” What matters to the constitution, the Oklahoma court explained, is only that the law is religiously neutral and that parents have a choice: “When the parents and not the government are the ones determining which private school offers the best learning environment for their child, the circuit between government and religion is broken… Scholarship funds deposited to a private sectarian school occur only as a result of the private independent choice by the parent or legal guardian.” [emphasis in the original]

The court outlined the key factors that led to its conclusion:

(1) voluntary participation by families in the scholarship program;

(2) genuine independent choice by parent or legal guardian in selecting sectarian or non-sectarian private school;

(3) payment warrant issued to parent or legal guardian [not directly to a private school];

(4) parent endorses payment to independently chosen private school;

(5) Act is religion neutral with respect to criteria to become an approved school for scholarship program;

(6) each public school district has the option to contract with a private school to provide mandated special educational services instead of private services in the district;

(7) acceptance of the scholarship under the Act serves as parental revocation of all federally guaranteed rights due to children who qualify for services under [the Individuals with Disabilities Education Act]; and

(8) the district public school is relieved of its obligation to provide educational services to the child with disabilities as long as the child utilizes the scholarship.

The timing of the decision couldn’t be better for supporters of the education savings account (ESA) legislation that just received a green light from the Oklahoma House Common Education Committee this week. Opponents had argued that the ESAs were likely unconstitutional, but with the court’s unanimous ruling, that will no longer be a concern. Legislators can now focus on the merits of ESAs.