Feed aggregator

The U.S. is bankrupt. Of course, Uncle Sam has the power to tax. But at some point even Washington might not be able to squeeze enough cash out of the American people to pay its bills.

President Barack Obama would have everyone believe that he has placed federal finances on sound footing. The deficit did drop from over a trillion dollars during his first years in office to “only” $439 billion last year. But the early peak was a result of emergency spending in the aftermath of the financial crisis and the new “normal” is just short of the pre-financial crisis record set by President George W. Bush. The reduction is not much of an achievement.

Worse, the fiscal “good times” are over. The Congressional Budget Office expects the deficit to jump this year, to $544 billion.

The deficit is not caused by too little money collected by Uncle Sam. Revenues are rising four percent this year and will account for 18.3 percent of GDP, well above the 50-year average of 17.4 percent. But outlays are projected to rise six percent, leaving expenditures at 21.2 percent of GDP, greater than the 20.2 percent average of the last half century.

Alas, this year’s big deficit jump is just the start. Revenues will rise from $3.4 trillion to $5 trillion between 2016 and 2026. As a share of GDP they will remain relatively constant, ending up at 18.2 percent. However, outlays will rise much faster, from about $4 trillion this year to $6.4 trillion in 2026. As a percent of GDP, spending will jump from 21.2 percent to 23.1 percent over the same period.

Thus, the amount of red ink steadily rises and is expected to be back over $1 trillion in 2026. The cumulative deficit from 2017 through 2026 will run $9.4 trillion. Total debt will rise by around 70 percent, to roughly $24 trillion in 2026.
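The arithmetic behind those figures is simple to check. Below is a minimal sketch in Python using only the rounded numbers quoted above; the roughly $14 trillion starting level of debt held by the public is an assumption chosen to be consistent with the 70 percent growth figure, not a number taken from the CBO report.

```python
# Rough check of the CBO figures quoted above, in trillions of dollars.
revenues_2026 = 5.0   # up from about 3.4 in 2016
outlays_2026 = 6.4    # up from about 4.0 in 2016

deficit_2026 = outlays_2026 - revenues_2026
print(f"Implied 2026 deficit: ${deficit_2026:.1f} trillion")  # ~1.4, i.e. back over $1 trillion

# Debt held by the public: assumed ~$14 trillion now, projected near $24 trillion in 2026.
debt_now, debt_2026 = 14.0, 24.0
print(f"Implied debt growth: {100 * (debt_2026 / debt_now - 1):.0f} percent")  # roughly 70 percent
```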

Reality is likely to be worse. So-called discretionary spending, which is subject to annual appropriations, is slated to be held to record, and probably unrealistically, low levels. Entitlement spending, in contrast, will explode.

Last June, CBO published a report looking at the federal budget through 2040. Warned the agency: “the extended baseline projections show revenues that fall well short of spending over the long term, producing a substantial imbalance in the federal budget.” By 2040 the agency imagines revenues rising sharply, to 19.4 percent of GDP, but spending going up even further, to 25.3 percent of GDP.

Using its revised figures, CBO warned: “Three decades from now debt held by the public is projected to equal 155 percent of GDP, a higher percentage than any previously recorded in the United States.” That would exceed even the previous record of 106 percent, set in 1946 as the country exited World War II.

CBO noted the potentially destructive consequences of such indebtedness. Washington’s interest burden would rise sharply. Moreover, “because federal borrowing reduces total saving in the economy over time, the nation’s capital stock would ultimately be smaller than it would be if debt was smaller, and productivity and total wages would be lower.” Americans would be poorer and have less money to fund the steadily rising budgets.

Worse, investors could come to see federal debt as unpayable. Warned CBO: “There would be a greater risk that investors would become unwilling to finance the government’s borrowing needs unless they were compensated with very high interest rates; if that happened, interest rates on federal debt would rise suddenly and sharply.” This in turn “would reduce the market value of outstanding government bonds, causing losses for investors and perhaps precipitating a broader financial crisis by creating losses for mutual funds, pension funds, insurance companies, banks, and other holders of government debt—losses that might be large enough to cause some financial institutions to fail.”

As I wrote for American Spectator: “There’s no time to waste. Uncle Sam is headed toward bankruptcy. Without serious budget reform, we all will be paying the high price of fiscal failure.”

The Current Wisdom is a series of occasional articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature or of a more technical nature. These items may not have received the media attention that they deserved or have been misinterpreted in the popular press.

We hardly need a high tech fly-swatter (although they are fun and effective) to kill this nuisance—it’s so languorous that one can just put their thumb over it and squish.

Jeb Bush’s candidacy? No, rather the purported connection between human-caused global warming and the highly-publicized spread of the Zika virus.

According to a recent headline in The Guardian (big surprise) “Climate change may have helped spread Zika virus, according to WHO scientists.”

Here are a few salient passages from The Guardian article:

“Zika is the kind of thing we’ve been ranting about for 20 years,” said Daniel Brooks, a biologist at University of Nebraska-Lincoln. “We should’ve anticipated it. Whenever the planet has faced a major climate change event, man-made or not, species have moved around and their pathogens have come into contact with species with no resistance.”

And,

“We know that warmer and wetter conditions facilitate the transmission of mosquito-borne diseases so it’s plausible that climate conditions have added the spread of Zika,” said Dr. Diarmid Campbell-Lendrum, a lead scientist on climate change at WHO.

Is it really “plausible?”

Hardly.

The Zika virus is transmitted by two species of mosquitoes, Aedes aegypti and Aedes albopictus, that are now widespread in tropical and sub-tropical regions of the globe (including the Southeastern U.S.), although they haven’t always been.

These mosquito species, respectively, have their origins in the jungles of Africa and Asia—where they largely remained for countless thousands of years. It’s hypothesized that Aedes aegypti found its way out of Africa during the slave trade, which brought the mosquitoes into the New World, and from there they spread throughout the tropics and subtropics of North and South America. Aedes albopictus, also known as the Asian tiger mosquito, is a more recent global traveler, spreading out from the forests of Southeast Asia to the rest of the world during the 1980s thanks to the interconnectedness of the modern transportation network. Figure 1 below shows the modeled geographic distribution of each of these species.

Figure 1. (Top) Global map of the modelled distribution of Aedes aegypti. (Bottom) Global map of the modelled distribution of Aedes albopictus. The map depicts the probability of occurrence (from 0 blue to 1 red) at a spatial resolution of 5 km × 5 km. (figures from Kraemer et al., 2015)

The distribution explosion from confined forests to the global tropics and subtropics had nothing whatsoever to do with climate change, rather, it was due to the introduction of the mosquito species into favorable existing climates.

Since the Aedes mosquitoes are now widely present in many areas also highly populated by the human species, the possibility exists for outbreaks and the rapid spread of any of the diseases that the mosquitoes may carry, including dengue fever, yellow fever, chikungunya, West Nile and Zika.

But what about climate change? It’s got to make things better for warmth-loving mosquitos and the diseases they spread, right? After all, we read that in the New York Times!

Hardly. Climate change acts at the periphery of the large extant climate-limited distributions. And its impact on those margins is anything but straightforward.

Many scientific studies have used various types of climate and ecological niche modelling to try to project how Aedes mosquito populations and geographic ranges may change under global warming, and they all project a mixed bag of outcomes. The models project range/population expansions in some areas, and range/population contractions in others—with the specific details varying among the different studies. The one take-home message from all of these studies is that the interaction among mosquitos, climate, and disease is, in a word, complex.

For example, here are some typical results (from Khormi and Kumar, 2014) showing the current distribution of Aedes aegypti mosquitos along with the projected distribution in the latter part of this century under a pretty much business-as-usual climate change scenario.

Figure 2. (Top) Modeled suitability for Aedes aegypti based on the current climate conditions. (Bottom) Estimated suitability for Aedes aegypti in 2070 based on the SRES A1B (business-as-usual) emission scenario. White = areas unfavorable for the Aedes mosquito; blue = marginally suitable areas; blue/yellow = favorable areas; yellow/red = very favorable areas. (adapted from Khormi and Kumar, 2014).

Good luck seeing much of a difference. Which is our point: the bulk of the large distribution of Aedes aegypti remains the same now as 50 years from now, with small changes (of varying sign) taking place at the edges. Similar results have been reported for Aedes albopictus by Yiannis Proestos and colleagues. The authors of the Aedes aegypti study write:

Surprisingly, the overall result of the modelling reported here indicates an overall contraction in the climatically suitable area for Aedes in the future… The situation is not straightforward but rather complicated as some areas will see an upsurge in transmission, while others are likely to see a decline. [emphasis added]

The human response to the presence of the mosquitos and the disease is equally complicated, if not more so.

As Dr. Paul Reiter pointed out in his 2001 classic “Climate Change and Mosquito-Borne Disease”:

[T]he histories of three such diseases—malaria, yellow fever, and dengue—reveal that climate has rarely been the principal determinant of their prevalence or range; human activities and their impact on local ecology have generally been much more significant.

More recently, Christofer Aström and colleagues found this to be the case in their modelling efforts for dengue fever (a similar result would be expected for Zika, carried by the same mosquitos). They report:

Empirically, the geographic distribution of dengue is strongly dependent on both climatic and socioeconomic variables. Under a scenario of constant [socioeconomic development], global climate change results in a modest but important increase in the global population at risk of dengue. Under scenarios of high [socioeconomic development], this adverse effect of climate change is counteracted by the beneficial effect of socioeconomic development.

The bottom line is that even 50-100 years from now, under scenarios of continued global warming, the changes to the Aedes populations are extremely difficult to detect and make up an exceedingly small portion of the extant distributions and range. The impact of those changes is further complicated by the interactions of the mosquitos, humans, and disease—interactions which can be mitigated with a modicum of prevention.

Linking climate change to the current mosquito range and outbreak of Zika (or any other of your favorite Aedes-borne diseases)?  No can do.

The threat of the spread of vector-borne diseases, like most every other horror purported to be brought about by climate change, is greatly overblown. The scientific literature shows the impact, if any, to be ill-defined and minor. As such, it is many times more effectively addressed by directed, local measures than by attempting to alter the future course of the earth’s climate through restrictions on greenhouse gas emissions (produced from the burning of fossil fuels to generate power)—the success and outcome of which are themselves uncertain.

 

References:

Aström, C, et al., 2012. Potential Distribution of Dengue Fever Under Scenarios of Climate Change and Economic Development. EcoHealth, 9, 448-454.

Khormi, H. M., and L. Kumar, 2014. Climate change and the potential global distribution of Aedes aegypti: spatial modelling using geographical information system and CLIMEX. Geospatial Health, 8, 405-415.

Kraemer, M. U. G., et al., 2015. The global distribution of the arbovirus vectors Aedes aegypti and Ae. albopictus, eLife, 4, doi: 10.7554/eLife.08374

Proestos, Y., et al., 2015. Present and future projections of habitat suitability of the Asian tiger mosquito, a vector of viral pathogens, from global climate simulation. Philosophical Transactions of the Royal Society B, 370,20130554, http://dx.doi.org/10.1098/rstb.2013.0554

Reiter, P., 2001. Climate change and mosquito-borne disease, Environmental Health Perspectives, 109, 141-161.

Last year, the comedy duo Key & Peele’s TeachingCenter sketch imagined what it would be like if teachers were treated like pro-athletes, earning millions, being drafted in widely televised events, and starring in car commercials. We’re not likely to see the latter two anytime soon, but some teachers are already earning seven figures.


The Key & Peele sketch inspired think pieces arguing that K-12 teachers should be paid more, but without making any fundamental changes to the existing system. Matt Barnum at The Seventy-Four brilliantly satirized this view in calling for pro-athletes to be treated more like teachers: stop judging teams based on wins or players based on points scored, eliminate performance pay in favor of seniority pay, and get rid of profits.

Barnum’s serious point, of course, is that these factors all contribute to athletes’ high salaries. There are at least two other major factors: the relative scarcity of highly talented athletes and their huge audience. The world’s best curlers don’t make seven figures because no one cares about curling (apologies to any Canadian readers), and while high-quality football referees are crucial to a sport with a huge audience, they’re a lot more replaceable than a good quarterback.

But what if we combined these ingredients? What if there were a for-profit system in which high-quality teachers had access to a huge audience and were paid based on their performance?

Actually, such a system already exists:

Kim Ki-hoon earns $4 million a year in South Korea, where he is known as a rock-star teacher—a combination of words not typically heard in the rest of the world. Mr. Kim has been teaching for over 20 years, all of them in the country’s private, after-school tutoring academies, known as hagwons. Unlike most teachers across the globe, he is paid according to the demand for his skills—and he is in high demand.

He may be a “rock star” but how does Mr. Kim have an audience large enough that he earns more than the average Major League Baseball player? Answer: The Internet.

Mr. Kim works about 60 hours a week teaching English, although he spends only three of those hours giving lectures. His classes are recorded on video, and the Internet has turned them into commodities, available for purchase online at the rate of $4 an hour. He spends most of his week responding to students’ online requests for help, developing lesson plans and writing accompanying textbooks and workbooks (some 200 to date).

In the United States, several companies are taking a similar approach to higher education. Last week, Time Magazine profiled Udemy, one of the largest providers of digital higher education:

On Feb. 12, Udemy will announce that more than 10 million students have taken one of its courses. In the U.S., there were about 13 million students working toward a four-year degree during fall 2015 semester, according to the Department of Education. It is another example of the rising popularity of online education as college costs have boomed in the United States. Americans hold $1.2 trillion in student loan debt, second only to mortgages in terms of consumer obligations. Entering the workforce deep in the red could be a handicap that follows graduates the rest of their careers, economists say.

Digital instruction is still in the early stages of development, and research on its impact so far has been mixed. It’s not for everyone. However, it holds the promise of providing students much greater access to top instructors at a lower cost. At the same time, as Joanne Jacobs highlighted, it also gives great instructors access to a much larger audience, and that can translate into significant earnings. As Time reports:

Udemy courses can be rewarding for the platform’s instructors, too. Rob Percival, a former high school teacher in the United Kingdom, has made $6.8 million from a Udemy web development course that took him three months to build. “It got to the stage several months ago where I hit a million hours of viewing that particular month,” he says. “It’s a very different experience than the classroom. The amount of good you can do on this scale is staggering. It’s a fantastic feeling knowing that it’s out there, and while I sleep people can still learn from me.”

Digital instruction is not a panacea for all our education policy challenges (nothing is), and it’s unlikely that it will replace in-person learning, especially for younger students. But it is a good example of how harnessing the market can improve the lot of both students and teachers.

Yet again North Korea has angered “the world.” Pyongyang violated another United Nations ban, launching a satellite into orbit. Washington is leading the campaign to sanction the North.

Announced UN Ambassador Samantha Power: “The accelerated development of North Korea’s nuclear and ballistic missile program poses a serious threat to international peace and security—to the peace and security not just of North Korea’s neighbors, but the peace and security of the entire world.”

The Democratic People’s Republic of Korea is a bad actor. No one should welcome further enhancements to the DPRK’s weapons arsenal.

Yet inflating the North Korean threat also doesn’t serve America’s interests. The U.S. has the most powerful military on earth, including 7100 nuclear warheads and almost 800 ICBMs/SLBMs/nuclear-capable bombers. Absent evidence of a suicidal impulse in Pyongyang, there’s little reason for Washington to fear a North Korean attack.

Moreover, the North is surrounded by nations with nuclear weapons (China, Russia) and missiles (those two plus Japan and South Korea). As a “shrimp among whales,” any Korean government could understandably desire to possess the ultimate weapon.

Under such circumstances, allied complaints about the North Korean test sound an awful lot like whining. For two decades U.S. presidents have said that Pyongyang cannot be allowed to develop nuclear weapons. It has done so. Assertions that the DPRK cannot be allowed to deploy ICBMs sound no more credible.

After all, the UN Security Council still is working on new sanctions after the nuclear test last month. China continues to oppose meaningful penalties. Despite U.S. criticism, the People’s Republic of China has reason to fear disintegration of the North Korean regime: loss of political influence and economic investments, possible mass refugee flows, violent factional combat, loose nukes, and the creation of a reunified Korea hosting American troops on China’s border.

Moreover, Beijing blames the U.S. for creating the hostile security environment which encourages the North to develop WMDs. Why should Beijing sacrifice its interests to solve a problem of its chief global adversary’s making?

Pyongyang appears to have taken the measure of its large neighbor. The Kim regime announced its satellite launch on the same day that it reported the visit of a Chinese envoy, suggesting another insulting rebuff for Beijing.

Even if China does more, the North might not yield.

Thus, the U.S. and its allies have no better alternatives in dealing with Pyongyang today than they did last month after the nuclear test. War would be foolhardy, sanctions are a dead-end, and China remains unpersuaded.

As I point out in National Interest: “The only alternative that remains is some form of engagement with the DPRK. Cho Han-bum of the Korea Institute for National Unification argued that the North was using the satellite launch to force talks with America. However, Washington showed no interest in negotiation, so the DPRK launched.”

Of course, no one should bet on negotiating away North Korea’s weapons. If nothing else, Pyongyang watched American and European governments oust Libya’s Moammar Khadafy after, in its view, at least, he foolishly traded away his nuclear weapons and missiles.

Nevertheless, there are things which the North wants, such as direct talks with America, a peace treaty, and economic assistance. Moreover, the DPRK, rather like Burma’s reforming military regime, appears to desire to reduce its reliance on Beijing. This creates an opportunity for the U.S. and its allies.

Perhaps negotiation would temper the North’s worst excesses. Perhaps engagement would encourage domestic reforms. Perhaps a U.S. initiative would spur greater Chinese pressure on Pyongyang.

Perhaps not. But current policy has failed.

Yet again the North has misbehaved. Yet again the allies are talking tough. Samantha Power insisted that “we cannot and will not allow” the North to develop “nuclear-tipped intercontinental ballistic missiles.”

However, yet again Washington is only doing what it has done before. Unfortunately, the same policy will yield the same result as before. It is time to try something different.

 

President Obama has issued his final federal budget, which includes his proposed spending for 2017. With this data, we can compare spending growth over eight years under Obama to spending growth under past presidents.

Figures 1 and 2 show annual average real (inflation-adjusted) spending growth during presidential terms back to Eisenhower. The data comes from Table 6.1 here, but I made two adjustments, as discussed below.

Figure 1 shows total federal outlays. Ike is negative because defense spending fell at the end of the Korean War. LBJ is the big-spending champ. He increased spending enormously on both guns and butter, as did fellow Texan George W. Bush. Bush II was the biggest spender since LBJ. As for Obama, he comes out as the most frugal president since Ike, based on this metric.

Figure 2 shows total outlays other than defense. Recent presidents have presided over lower spending growth than past presidents. Nixon still stands as the biggest spender since FDR, and the mid-20th century was a horror show of big spenders in general. The Bush II and Obama years have been awful for limited government, but the LBJ-Nixon tag team was a nightmare—not just for rapid spending during their tenures, but also for the creation of many spending and regulatory programs that still haunt us today.

I made two adjustments to the official budget data, both for 2009. First, the official data includes an outlay of $151 billion for TARP in 2009 (page 4 here). But TARP ended up costing taxpayers virtually nothing, and official budget data reverses out the spending in later years. So I’ve subtracted $151 billion from the official 2009 amount. Second, 2009 is the last budget year for Bush II, but 2009 was extraordinary because Obama signed into law a giant spending (stimulus) bill, which included large outlays immediately in 2009. It is not fair to blame Bush II for that (misguided) spending, so I’ve subtracted $114 billion in stimulus spending for that year, per official estimates.
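For readers who want to reproduce the charts, the underlying calculation is straightforward. Here is a minimal sketch with a hypothetical helper function and the two 2009 adjustments described above; it is illustrative, not the exact spreadsheet behind Figures 1 and 2, and the deflation to real dollars is left out for brevity.

```python
# Sketch of the calculation behind Figures 1 and 2: annualized growth of
# inflation-adjusted outlays over a presidential term, with the 2009 adjustments.

def real_annual_growth(start_outlays, end_outlays, years):
    """Average annual growth rate between two inflation-adjusted outlay levels."""
    return (end_outlays / start_outlays) ** (1.0 / years) - 1.0

# The two 2009 adjustments from the text (billions of nominal dollars, before deflating):
official_2009 = 3517.7                        # official FY2009 total outlays, for illustration
adjusted_2009 = official_2009 - 151 - 114     # back out TARP and first-year stimulus outlays
```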

Readers will note that Congress is supposed to have the “power of the purse,” not presidents. But I think that the veto gives presidents and congresses roughly equal budget power today. So Figures 1 and 2 can be interpreted as spending growth during each president’s tenure, reflecting the fiscal disposition of both the administration and Congress at the time. Spending growth during the Clinton and Obama years, for example, was moderated by Republican congresses that leaned against the larger domestic spending favored by those two presidents.

One more caveat is that presidents have limited year-to-year control over spending that is on auto-pilot. Some presidents luck out with slower growth on such spending, as Obama has with relatively slow growth on Medicare in recent years.

Finally, it is good news that recent spending growth is down from past decades, but economic growth is slower these days, so the government’s share of the economy still expanded during the Bush II and Obama years. Besides, the FDR-to-Carter years bestowed on us a massive federal government that is actively damaging American society, so we should be working to shrink it, not just grow it more slowly.

A recent Cato policy forum on European over-regulation took an unexpected turn when my friend and colleague Richard Rahn suggested that falling prices and increasing availability of writing paper may have been responsible for increasing the number and length of our laws and regulations. (Dodd-Frank is longer than the King James Bible, to give just one obvious example.)

Goodness knows what will happen when new legislation stops being printed on writing paper and starts appearing only on the internet. (I never read Apple’s “terms and conditions,” do you?)

Anyhow, Richard’s hypothesis will soon be put to the test in Great Britain, where the lawmakers have just decided to stop writing the Acts of Parliament on calfskin – a tradition dating back to the Magna Carta – and use paper instead. Will the length of British laws increase, as Rahn’s hypothesis predicts? We shall see.

In the meantime, Americans remain stuck with a Niagara Falls of laws and regulations that our lawmakers generate every year. Many distinguished scholars have wondered how to slow our Capitol Hill busybodies down a little. The great Jim Buchanan had some good ideas, but the most effective, if also the harshest, means of preventing over-regulation was surely developed by the Locrians in the 7th century BC.

As Edward Gibbon narrates in The History of the Decline and Fall of the Roman Empire, “A Locrian who proposed any new law, stood forth in the assembly of the people with a cord round his neck, and if the law was rejected, the innovator was instantly strangled.” (Vol. IV, chapter XLI; V pp. 783–4 in volume 2 of the Penguin edition.)

Ours is, of course, an enlightened Republic, not an iron-age Greek settlement on the southern tip of Italy. The life and limb of our elected officials must, therefore, remain safe. But what if, instead of physical destruction, a failed legislative proposal resulted in, so to speak, “political death”? What if a Congressman or Senator whose name appeared on a bill that failed to pass through Congress were prevented from running for reelection after his or her term in office came to an end?

Just a happy thought before the weekend.

Ohio governor and GOP presidential hopeful John Kasich says he opposes ObamaCare. Yet somehow, he has managed to embrace the law in every possible way. He wanted to implement an Exchange, even if it was clearly unconstitutional under Ohio law. He denounced the Medicaid expansion’s “large and unsustainable costs,” which “will just rack up higher deficits…leaving future generations to pick up the tab.” Then he went ahead and implemented it anyway. Worse, he did so unilaterally, after the Ohio legislature passed legislation prohibiting him from doing so (which he vetoed). When Republican legislators and pro-life groups filed suit to stop him, Kasich defended his power-grab all the way to the Ohio Supreme Court. 

Kasich’s defense of his record on ObamaCare has been…less than honest. Just one example: in a town hall meeting in South Carolina last night, Kasich railed against how ObamaCare increases the cost of health care at the same time he boasted he has constrained Medicaid spending in Ohio. In fact, Kasich’s unilateral Medicaid expansion not only increased the cost of Medicaid to taxpayers nationwide, but according to Jonathan Ingram of the Foundation for Government Accountability, it “has run $2.7 billion over budget so far [and] is set to run $8 billion over budget by 2017.” 

For more examples of Kasich’s ObamaCare duplicity, see my new four-part (yet highly readable!) series at DarwinsFool.com.

 

Four decades ago South Korea’s President Park Chung-hee, father of the current president, launched a quest for nuclear weapons. Washington, the South’s military protector, applied substantial pressure to kill the program.

Today it looks like Park might have been right.

The Democratic People’s Republic of Korea continues its relentless quest for nuclear weapons and long-range missiles. The South is attempting to find an effective response.

Although the DPRK is unlikely to attack since it would lose a full-scale war, the Republic of Korea remains uncomfortably dependent on America. And Washington’s commitment to the populous and prosperous ROK likely will decline as America’s finances worsen and challenges elsewhere multiply.

In response, there is talk of reviving the South’s nuclear option. Won Yoo-cheol, parliamentary floor leader of the ruling Saenuri Party, told the National Assembly: “We cannot borrow an umbrella from a neighbor every time it rains. We need to have a raincoat and wear it ourselves.”

Chung Mong-joon—member of the National Assembly, presidential candidate, and Asan Institute founder—made a similar plea two years ago. He told an American audience “if North Korea keeps insisting on staying nuclear then it must know that we will have no choice but to go nuclear.” He suggested that the South withdraw from the Nuclear Nonproliferation Treaty and “match North Korea’s nuclear progress step-by step while committing to stop if North Korea stops.”

The public seems receptive. Support for a South Korean nuclear program is on the upswing, hitting 66 percent in 2013. While President Park Geun-hye’s government remains formally committed to the NPT, Seoul has conducted nuclear experiments and resisted oversight by the International Atomic Energy Agency.

Of course, the idea triggers a horrified reaction in Washington.

Unfortunately, in Northeast Asia today nonproliferation operates a little like gun control in the U.S.: only the bad guys end up armed. China, Russia, and North Korea all have nuclear weapons. America’s allies, Japan and South Korea, do not, and expect Washington to defend them. To do so the U.S. would have to risk Los Angeles to protect Seoul and Tokyo—and maybe Taipei and Canberra as well, depending on how far Washington extends the “nuclear umbrella.”

While America’s overwhelming nuclear arsenal should deter anyone else from using nukes, conflicts do not always evolve rationally. South Korea and Japan are important international partners, but their protection is not worth creating an unnecessary existential threat to the American homeland.

Better to create a balance of power in which the U.S. is not a target if nukes start falling. And that would be achieved by independent South Korean and Japanese nuclear deterrents. Such a prospect would antagonize China. But then, such an arsenal would deter the People’s Republic of China as well as the DPRK, which also would serve American interests.

Moreover, the mere threat might solve the problem. When faced with the prospect of Japanese and South Korean nuclear weapons, China might come to see the wisdom of applying greater pressure on the North.

The U.S.-ROK discussions over THAAD may have encouraged Beijing to indicate its willingness to support a UN resolution imposing more pain on the North for its latest nuclear test and satellite launch. The prospect of having two more nuclear neighbors would concentrate minds in Zhongnanhai.

Abandoning nonproliferation is not a decision to take lightly. No one wants a nuclear arms race.

But the PRC already is improving its nuclear forces. And allowing North Korea to enjoy a unilateral advantage creates great dangers.

So as I wrote for National Interest: “policymakers should consider the possibility of a nuclear South Korea. Keeping America entangled in the Korean imbroglio as Pyongyang develops nuclear weapons is a bad option which could turn catastrophic. Blessing allied development of nuclear weapons might prove to be a better alternative.”

Park Chung-hee was a brute, but his desire for an ROK nuclear weapon looks prescient. Maybe it’s time for the good guys in Northeast Asia to be armed as well.

Yesterday, the New York Times editorial board called on Hillary Clinton to leave the realm of economic reality behind and join the ranks of those seeking to drastically increase the minimum wage to $15. The timing of this latest exhortation would almost be amusing, if it weren’t so disconcerting. It came the same day that four former Democratic chairs of the Council of Economic Advisers sent an open letter to her rival for the Democratic nomination Bernie Sanders citing concerns that some of the claims of his campaign “cannot be supported by the economic evidence.” The editorial board engages in its own bout of economic fantasy with the claim that “economic obstacles are not standing in the way” of more than doubling the minimum wage. Even Alan Krueger, author of one of the major studies finding negligible disemployment effects from a past minimum wage increase, took to the pages of the New York Times to oppose a $15 minimum because it is “beyond international experience, and could well be counterproductive.”

In order to make their support for a $15 minimum seem more reasonable, the board alludes to an understanding among proponents of a higher minimum wage that a robust one equals half the average wage. These proposals generally use the median hourly wage, not the higher mean the board references, and getting to that ratio with a $15 minimum wage in 2022 would require wage growth higher than 5 percent, levels not seen in well over a decade. So it’s not only the Sanders campaign engaging in fanciful economic assumptions; the NYT editorial board is right there with them. More reasonable wage growth assumptions would place a fully phased-in $15 minimum wage around the highest levels seen in the developed world, beyond the frontier of the existing economic literature. Such a move would make major negative unintended consequences near certain.
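A quick back-of-the-envelope calculation shows why. The median hourly wage figure below is an assumption for illustration only (roughly $17.40 in 2015, a commonly cited BLS estimate for all occupations); the benchmark is the “half the median” standard discussed above.

```python
# How fast would the median wage have to grow for $15 in 2022 to equal half the median?
median_2015 = 17.40                    # assumed median hourly wage, 2015
target_median_2022 = 2 * 15.0          # $30/hour, so that a $15 minimum is half the median
years = 2022 - 2015

required_growth = (target_median_2022 / median_2015) ** (1 / years) - 1
print(f"Required annual median wage growth: {required_growth:.1%}")  # roughly 8%, far above 5%
```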

Even this could understate the problems with a federal $15 minimum wage, as some states and jurisdictions would be even less able to absorb the effects of such a radical increase. The editorial board asserts that the minimum wage needs to be increased at the federal level because some states have not raised it on their own. The piece fails to give any consideration to how less affluent parts of the country would fare under the proposed increase. In Puerto Rico, for instance, the current federal minimum wage is already 77 percent of the median hourly wage (France, for comparison, is around 61 percent). While there are certainly other factors contributing to the crisis in Puerto Rico, the high minimum wage is a contributing factor, and doubling it would have disastrous consequences for the roughly 3.6 million people living there. The same dynamic would hold, to a lesser extent, for less affluent states like Alabama or Arkansas, especially in non-metro areas, where this increase would lead to a higher ratio relative to the median hourly wage than almost anywhere else in the developed world. This could have a devastating impact on state and local economies in these places and exacerbate the serious problems with poverty and limited opportunity those places already grapple with.

These concerns aside, the minimum wage is an incredibly ineffective way to try to alleviate poverty, and would probably further limit the opportunities for affected workers. Few of the benefits of a higher minimum wage would accrue to families in poverty: with a $15 minimum wage, only 12 percent of the benefits would go to poor families, while 38 percent would go to families at least three times above the poverty line. Even worse, while some studies have indeed found past minimum wage increases had a limited impact on employment, others focusing on targeted workers (younger workers with lower skills) have found significant disemployment effects and reduced economic mobility for this group. Minimum wage increases are poorly targeted to reduce poverty, and could actually be counterproductive for the very people they are supposed to help.

The NYT editorial board’s call for Hillary to embrace the $15 minimum wage elevates intentions above outcomes, leaving behind valid questions about trade-offs for the realm of economic fantasy. Their piece ignores the concerns that an increase of that magnitude could have severe unintended consequences that could exacerbate many of the problems they would want it to address. Perhaps they should have perused their own archives, as the same editorial board said years ago, “[t]he idea of using a minimum wage to overcome poverty is old, honorable - and fundamentally flawed.”

New data from Gallup suggests that residents of US states with freer markets are more optimistic about their state’s economic prospects. In their 50-State Poll, Gallup asked Americans what they thought about the current economic conditions in their own state as well as their economic expectations for the future. North Dakota (92%), Utah (84%), and Texas (82%) top the list as states with the highest share of residents who rate their current economic conditions as excellent or good. In stark contrast, only 18% of Rhode Island residents, 23% of Illinois residents, and 28% of West Virginians rate their state’s economic conditions as excellent or good. Similarly, the states whose residents are most optimistic about their economic futures include Utah (83%) and Texas (77%), while states at the bottom include Illinois (34%) and West Virginia (36%).

What explains these stark differences in economic evaluations and expectations across US states? Could differences across states in economic freedom, such as government regulations on business, tax rates, government spending, and property rights protection, be part of the story?

Figure 1: Relationship Between State Economic Freedom Scores and Residents’ Evaluations of Current Economic Conditions

Source: Economic Freedom Index 2011, Freedom in the 50 States; Gallup 50-State Poll 2015

To investigate, I examined the economic freedom index from Freedom in the 50 States, calculated by Jason Sorens and William Ruger and published by the Mercatus Center, and compared it with Gallup’s public opinion numbers. It turns out there is a strong relationship between economic freedom and public economic optimism. Americans who live in states that have lower taxes, lower government spending, fewer regulations on business, better property rights protections, a better tort system, etc. also tend to have more confidence in their state’s economic conditions.

Figure 1 shows the relationship between states’ economic freedom scores and public evaluations of current economic conditions. As you can see, states that are more economically free (meaning less government regulation, lower taxes, less government spending, etc.) also tend to have residents who say their state economies are excellent or good. This correlation1 between economic freedom and residents’ evaluations of their state’s current economic conditions is .55.

Next, Figure 2 shows the relationship between each state’s economic freedom score and the share of residents in each state who think their state economy is getting better. As you can see, states that are more economically free tend to have residents who are more optimistic about their states’ economic futures. The correlation between economic freedom and residents’ positive economic outlook is .44. Notably, states like New York and California, whose economies are less free but whose residents remain relatively optimistic, are the exception rather than the rule.

Figure 2: Relationship Between State Economic Freedom Scores
and Residents’ Positive Economic Outlook 


Source: Economic Freedom Index 2011, Freedom in the 50 States; Gallup 50-State Poll 2015. Correlation: .44; if outliers like New York, California, and Nevada are excluded, the correlation rises to .58.
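For readers who want to see how such a number is computed, here is a minimal sketch of the calculation; the arrays hold made-up illustrative values, not the actual Freedom in the 50 States scores or Gallup percentages.

```python
# Minimal sketch of the correlation calculation, with hypothetical data.
import numpy as np

freedom_score = np.array([0.45, 0.12, -0.30, 0.28, -0.10])   # hypothetical state freedom scores
pct_excellent_good = np.array([82, 60, 23, 71, 45])          # hypothetical Gallup "excellent/good" shares

r = np.corrcoef(freedom_score, pct_excellent_good)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```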

Of course, correlation is not causation. But it would also be a mistake not to consider the obvious possibility that economic freedom and economic success are related. There is a reasonable argument to be made that making it easier to start businesses, reducing regulatory barriers, and leaving more money in the private sector can lead to more entrepreneurship, companies creating more jobs, technological innovation, economic growth, and thus more optimistic people.

While these results do not report on actual economic conditions, public perception of economic conditions is also crucially important. Americans’ perception of their state economies can act like a composite score of how residents perceive the multifarious economic forces at work in their states. 

In sum, these results provide some indication that freer markets may provide greater hope for a better economic future.

Check out the Freedom in the 50 States rankings.

 

1 Correlation is a measure of the strength of the relationship between two variables, ranging from -1 to +1.

At the start of the year there was a lot of speculation on whether President Obama would crown his historic rapprochement with Cuba with an equally historic visit to the island. The guesswork is over with today’s announcement of his trip next month.

Many things have changed in the last year in the relationship between the United States and Cuba: diplomatic ties have been restored, the leaders of both countries have met twice, dozens of commercial flights per day have been authorized, hundreds of thousands of Americans are travelling to the once-forbidden island, and many economic sanctions have been lifted.

And yet there’s one thing that hasn’t changed: the repressive nature of Cuba’s Communist dictatorship. If anything, things might be getting worse. The Miami Herald’s columnist Andrés Oppenheimer recently reported that the number of self-employed workers in Cuba has actually dropped in the last six months. Arbitrary detentions of peaceful opposition activists are on the rise. Economic reforms are still too timid. If there is a lot of enthusiasm about Cuba lately, it has to do more with what Washington is doing than what Havana is actually delivering.

This is not to say that Obama’s rapprochement with Cuba has failed: Washington’s previous policy of isolating the island was utterly counterproductive. But we should not kid ourselves about an imminent change of the nature of the Castro regime.

President Obama has said that his trip’s main objective will be to “improve the lives of the Cuban people.” If so, he should follow in the footsteps of Jimmy Carter, who visited the island in 2002: the former president met with dissidents and was allowed to address the nation uncensored in a speech on national TV, where he called for democratic elections, respect for human rights, and greater civil liberties.

If Obama fails to get similar concessions, his trip will only boost the standing of the Castro regime. It will be all about cementing his legacy and not about trying to improve the lives of ordinary Cubans. 

Wow, will bacon be next?

Last fall we wrote about the improvement in tomato taste that results from growing tomatoes in elevated carbon dioxide and seawater (Idso and Michaels, 2015). Now it looks like the same treatment improves lettuce.

Enhancing crop nutritional value has long been a goal of the agricultural industry. Growing plants under less than optimal conditions for a short period of time generally increases their oxidative stress. To counter such stress, plants will usually increase their antioxidant metabolism which, in turn, elevates the presence of various antioxidant compounds in their tissues, compounds that can be of great nutritional value from a human point of view, such as helping to reduce the effects of ageing.

However, stress-induced nutritional benefits often come at a price, including a reduction in plant growth and yield, making it unproductive and costly to implement these practices in the real world. But what if there were a way to achieve such benefits without sacrificing crop biomass, having our cake and eating it, too? An intriguing paper recently published in the journal Scientia Horticulturae explains just how this can happen, involving lettuce, salt stress, and atmospheric CO2.

According to Pérez-López et al. (2015), the authors of this new work, “few studies have utilized salt irrigation combined with CO2-enriched atmospheres to enhance a crop’s nutraceutical value.” Thus, the team of five Spanish researchers set out to conduct just such an experiment involving two lettuce cultivars, Blonde of Paris Batavia (a green-leaf lettuce) and Oak Leaf (a red-leaf lettuce and a common garden variety). In so doing, they grew the lettuce cultivars from seed in controlled environment chambers at either ambient or enriched CO2 for a period of 35 days, after which they supplied a subset of the two treatments with either 0 or 200 mM NaCl for 4 days to simulate salt stress. Thereafter they conducted a series of analyses to report the growth and nutritional characteristics of the cultivars under these varying growth conditions. And what did those analyses reveal?

As shown in Figure 1, elevated CO2 increased the leaf water content, biomass and antioxidant capacity of both lettuce cultivars under normal and salt-stressed growing conditions. Plant biomass, for example, was enhanced by 42 and 62 percent for green and red lettuce, respectively, under normal growing conditions (no salt stress) and by 56 and 61 percent, respectively, when salt stressed. Similarly, elevated CO2 increased lettuce antioxidant capacity by 196 (green cultivar) and 293 percent (red cultivar) under normal (non-salt-stressed) conditions, and by 61 (green cultivar) and 109 percent (red cultivar) when salt stressed. What is more, as this graphic illustrates, elevated CO2 totally ameliorated the negative effects of salt stress on plant biomass and antioxidant capacity, such that the values of these two parameters under elevated CO2 and salt stress were higher than those observed under ambient CO2 with no salt stress and ambient CO2 with salt stress.

Figure 1. Effects of salt treatment and CO2 concentration on water content (A), biomass production (B) and antioxidant capacity (C) of green leaf and red leaf lettuce. Adapted from Pérez-López et al. (2015).

Pérez-López et al. also report that “under elevated CO2, both lettuce cultivars increased the uptake of almost all minerals to adjust to the higher growth rates, reaching similar concentrations to the ones detected under ambient CO2.” However, the red cultivar “seemed to gain a greater advantage from elevated CO2 than the green cultivar because it better adjusted both mineral uptake and antioxidant metabolism.”

Such positive findings bode well for the future of the lettuce industry, for as concluded by Pérez-López et al., “elevated CO2 alone or in combination with short environmental salt stress permits us to increase the nutritional quality (increasing the concentration of some minerals and antioxidants) of lettuce without yield losses or even increasing production.” And that is a future worth hoping for!

PS: One of us is going to run a seawater experiment on tomatoes this summer. Readers should get a full report around August 15!

 

References

Idso, C.D., and P. J. Michaels, 2015.  Want better tomatoes?  Add carbon dioxide and a pinch of salt! http://www.cato.org/blog/want-better-tomatoes-add-carbon-dioxide-pinch-salt

Pérez-López, U., Miranda-Apodaca, J., Lacuesta, M., Mena-Petite, A. and Muñoz-Rueda, A. 2015. Growth and nutritional quality improvement in two differently pigmented lettuce cultivars grown under elevated CO2 and/or salinity. Scientia Horticulturae 195: 56-66.

 

When the Brookings Institution and Urban Institute claim any tax reform will “lose trillions,” they are comparing their static estimates of revenues from those plans (which assume tax rates could double or be cut in half with no effect on growth or tax avoidance) to totally unrealistic “baseline” projections from the Congressional Budget Office. Those CBO projections assume that rapid 2.4% annual increases in real hourly compensation over the next decade will push more people into higher tax brackets every year. As a result, the average tax burden supposedly rises forever – from 17.7% of GDP in 2015 to 19.9% in 2045 and 23.8% by 2090. And, typical of static estimates, this ever-increasing tax burden is imagined to have no bad effects on the economy.

Such a high level of federal taxation never happened in the past (20% was a record set in the tech stock boom of 2000) and it will never happen in the future.  In short, this is an entirely bogus basis by which to judge tax reform plans.

A far more sensible question would be this:

Will the Cruz or Rubio tax reforms raise just as much money as the Obama “tax increase” has – namely, 17.5% of GDP from 2013 to 2015? If so, then real tax revenues will grow faster after reform because real GDP growth will surely be at least 1.2% faster – or a middling 3.5% a year, which is all the cautious Tax Foundation estimates suggest.

Ted Cruz says the defense plan he unveiled Tuesday in South Carolina would give the U.S. military “more tooth, less tail.” Actually Cruz’s plan would produce more of everything, especially debt.

Cruz says that as President he’ll spend 4.1 percent of gross domestic product (GDP) on defense for two years, and 4 percent thereafter. As the chart below shows, under standard growth predictions, Cruz’s plan produces a massive increase in military spending: about $1.2 trillion over what would be Cruz’s first term and $2.6 trillion over eight years. Details on the chart are at the end of this post.

The chart also shows how much Cruz’s plan exceeds the Budget Control Act’s caps on defense spending, which remain in force through 2021. Spending bills exceeding those caps trigger sequestration: across-the-board cuts that keep spending at the cap. So Cruz’s plan depends on Congress repealing the law. Experience suggests that Congress will instead trade on dodgy future savings to raise caps by twenty or thirty billion a year—about a tenth of what Cruz needs.

Cruz is relatively clear on the tooth—force structure—he hopes to buy. He’d grow the Army’s end strength to 1.4 million, with at least 525,000 in the active force. Those numbers are scheduled to fall to 980,000 and 450,000 in 2018. Cruz would “reverse the cuts to the manpower of the Marines,” which presumably means going from the current 182,000 active to the 202,000 peak size reached in 2011 for the Iraq and Afghanistan wars. The Navy would grow from 287 to at least 350 ships, and the Air Force would add 1,000 aircraft to reach 6,000. The plan would also modernize each leg of the nuclear triad of intercontinental ballistic missiles, bombers and submarine-launched ballistic missiles. It asserts that the triad is “on the verge of slipping away,” needlessly ignoring current plans to modernize each leg.

Despite the plan’s “paying for rebuilding” section, Cruz is unclear on how he’ll fund the buildup. The “less tail” option is a red herring. The implication, standard among those trying to look fiscally responsible while throwing money at the military, is that you cut administration to pay for force structure. But Cruz doesn’t sustain the pretense beyond attacking the Pentagon’s “bloated bureaucracy and social experiments.” He never identifies what bureaucracy—commands, budget lines or contracts—he’d axe. He doesn’t explain how to overcome the Pentagon’s tendency to increase overhead during buildups or betray concern about the meager results of past efforts to shift tail to tooth. He complains about “12,000 accountants in Pentagon,” but between its buildup and push to audit the Pentagon, the Cruz administration would need thousands more. Cruz’s refusal to “bow to political correctness” evidently does not cover politically dicey efficiency measures like another Base Realignment and Closure round and reform of military benefit and compensation packages. Nor does Cruz say how the social experiments he derides, like letting gays serve openly or accommodating soldiers allergic to gluten, add costs.

Cruz, of course, will not fund his buildup through taxes. Instead, the plan mentions selling federal assets, unspecified spending cuts, and tax revenue juiced by four or five percent annual growth. Wishful thinking seems a fair summary. Asset sales would only help marginally. The plan lacks a method for abolishing Democrats, who will almost certainly retain enough Senators to keep non-defense discretionary spending pegged to defense, or to make entitlement cuts politically unpalatable. And betting that you can vastly outperform typical growth isn’t sound planning.

That leaves debt as the most likely way President Cruz will fund his buildup, adding to the pile he already plans. And that helps explain why Cruz’s defense plan won’t happen. With CBO’s recent estimate of accelerated deficit growth, the politics constraining Pentagon spending should last. Assuming Democrats and vulnerable Republicans continue to block big entitlement spending cuts and Republicans prevent tax increases, concern about deficits translates into pressure to restrain discretionary spending, of which defense is more than half.

Also in the unclear category: the rationale undergirding the plan. Cruz’s force structure hopes, which he doesn’t price, have little to do with his 4 percent spending goal, beyond a general commitment to more. As Matt Fay explains, spending four percent of GDP on defense is to punt on strategy, which involves prioritizing resources to meet threats. Threats don’t grow with wealth or fade with poverty.

Cruz’s force structure suggestions also lack a strategic basis. Cruz criticizes nation-building wars and worries about China. But rather than distribute his buildup accordingly by focusing it on the Navy and Air Force, he gives equally to the ground forces. Ultimately, Cruz proposes spending a lot more to do what we are now doing. Like Jeb Bush, Marco Rubio, and even Dick Cheney, Cruz’s rhetorical assaults on the Obama administration’s defense policy belie underlying agreement with its premises. I attack those bipartisan beliefs in various other places.

__________________________________________________________________

The chart covers 2018-2025 because those are the first and last fiscal year budgets that the next administration will plan, assuming it wins a second term. For GDP growth, the chart uses recent Congressional Budget Office projections. The current spending plans are in budget authority and come from the Pentagon’s new five year defense spending plan (see figure 1-3 on page 1-5) and Office of Management and Budget estimates for subsequent years (table 28-1). That spending track falls under 3 percent of CBO’s GDP prediction in 2018 and slides toward 2.5 percent by the mid 2020s.  The caps are on page 13 here. All figures are in nominal dollars. 
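To see roughly how the 4 percent pledge maps onto the dollar figures above, here is an illustrative reconstruction rather than the actual data behind the chart; the 2018 GDP level, the 4 percent nominal growth rate, and the drifting baseline share are assumptions loosely based on the description in the preceding paragraph.

```python
# Illustrative gap between Cruz's pledge and the current defense spending track, 2018-2025.
gdp = 20.0e12          # assumed nominal GDP in 2018, dollars
total_gap = 0.0
for year in range(2018, 2026):
    baseline_share = 0.030 - 0.0007 * (year - 2018)   # drifts from ~3.0% toward ~2.5% of GDP
    cruz_share = 0.041 if year < 2020 else 0.040      # 4.1% for two years, 4% thereafter
    total_gap += (cruz_share - baseline_share) * gdp
    gdp *= 1.04                                        # assumed nominal GDP growth

print(f"Extra spending over 2018-2025: ${total_gap / 1e12:.1f} trillion")  # on the order of $2-3 trillion
```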

Barely a week after a Georgia judge threw out a challenge to the state’s scholarship tax credit law, today the Oklahoma Supreme Court unanimously upheld school vouchers for students with special needs.

Plaintiffs argued that the Lindsey Nicole Henry Scholarships for Students with Disabilities violated the Oklahoma constitution’s historically anti-Catholic Blaine Amendment, which prohibits the state from appropriating public funds for sectarian purposes. A lower court had agreed, limiting the vouchers only to private schools without any religious affiliation. Today, the state supreme court overturned that decision, upholding the law in its entirety.

Plaintiffs had argued that the vouchers unconstitutionally aided religious schools, but the court found that the voucher law “is void of any preference between a sectarian or non-sectarian private school” and that “there is no influence being exerted by the State for any sectarian purpose with respect to whether a private school satisfies [the law’s eligibility] requirements.”

The plaintiffs further argued that, despite being “religion neutral,” the law is unconstitutional because more voucher recipients chose to attend religious schools than non-religious schools. However, the court rejected this claim, citing the U.S. Supreme Court’s decision in Zelman v. Simmons-Harris (which upheld school vouchers in Ohio): “the constitutionality of a neutral educational aid program simply does not turn on whether and why, in a particular area, at a particular time, most private schools are religious, or most recipients choose to use the aid at a religious school.” What matters to the constitution, the Oklahoma court explained, is only that the law is religiously neutral and that parents have a choice: “When the parents and not the government are the ones determining which private school offers the best learning environment for their child, the circuit between government and religion is broken… Scholarship funds deposited to a private sectarian school occur only as a result of the private independent choice by the parent or legal guardian.” [emphasis in the original]

The court outlined the key factors that led to its conclusion:

(1) voluntary participation by families in the scholarship program;

(2) genuine independent choice by parent or legal guardian in selecting sectarian or non-sectarian private school;

(3) payment warrant issued to parent or legal guardian [not directly to a private school];

(4) parent endorses payment to independently chosen private school;

(5) Act is religion neutral with respect to criteria to become an approved school for scholarship program;

(6) each public school district has the option to contract with a private school to provide mandated special educational services instead of providing those services in the district;

(7) acceptance of the scholarship under the Act serves as parental revocation of all federally guaranteed rights due to children who qualify for services under [the Individuals with Disabilities Education Act]; and

(8) the district public school is relieved of its obligation to provide educational services to the child with disabilities as long as the child utilizes the scholarship.

The timing of the decision couldn’t be better for supporters of the education savings account (ESA) legislation that just received a green light from the Oklahoma House Common Education Committee this week. Opponents had argued that the ESAs were likely unconstitutional, but with the court’s unanimous ruling, that will no longer be a concern. Legislators can now focus on the merits of ESAs.

The Spin Cycle is a recurring feature rating just how much the latest weather or climate story, policy pronouncement, or simple poo-bah blather spins the truth. Statements are given a rating between 1 and 5 spin cycles, with fewer cycles meaning less spin. For a more in-depth description, visit the inaugural edition.

The Obama Administration is involved in an all-out effort to soften the severity of the blow that the U.S. Supreme Court dealt the EPA’s Clean Power Plan (CPP) last week.*

The day after the Court’s ruling, White House Deputy Press Secretary Eric Schultz referred to the Supreme Court’s stay as a “temporary procedural determination” and then added that “[i]t is our estimation that the inclusion of [the extensions of the renewable energy tax credits] is going to have more impact over the short term [on greenhouse gas emissions] than the Clean Power Plan.”

We covered Schultz’s first statement in our Spin Cycle from last week, giving it our top award of five Spinnies.

Here we examine the second part of his statement: that the extension of the investment tax credit (ITC) and the production tax credit (PTC) for solar and wind power, approved by Congress last December, “is going to have more impact over the short term than the Clean Power Plan.”

On its face, we must admit this is true, primarily because, under the CPP, the states aren’t required to begin cutting power plant emissions until 2022—far outside what we would consider “over the short term.” So, by the letter of the (now stayed) law, the CPP wouldn’t have to result in any greenhouse gas reductions prior to 2022. Schultz’s statement lacks the proper context. Walking (instead of driving) to lunch one time next week would also produce “more impact over the short term [on greenhouse gas emissions] than the Clean Power Plan” (stayed or not).

Granted, the CPP encourages (but doesn’t require) states to begin early. Based on the Regulatory Impact Analysis of the CPP, the EPA estimates that, since some states would begin early, the CPP would reduce greenhouse gas emissions by about the equivalent of 75 million metric tons of carbon dioxide (mmtCO2eq) by 2020. Once all states were taking part in the CPP, the annual emissions reduction estimated by the EPA grows to 240 mmtCO2eq by 2025—the year by which President Obama promised that the country would achieve a 26-28 percent economy-wide greenhouse gas reduction below 2005 emissions levels (the projected emissions reduction under the CPP grows to 376 mmtCO2eq by 2030).

So how large is the expected impact of the ITC and PTC extensions (which include a phase-out over the next five years)? The White House earlier reported:

This would reduce carbon dioxide emissions by more than 200 million metric tons in 2020 alone, helping to ensure that the United States achieves our 2020 target to reduce emissions in the range of 17 percent below 2005 levels, and providing momentum towards our 2025 target.

In essence, the extension of the renewable tax credits acts as a jump start for the CPP. Then when the CPP fully kicks in, it will subsume the emissions reductions achieved through the tax credit extension. 

But if the CPP is never put into place, the impact of the renewable tax credits alone would still be identifiable—and would stand at about the same level of emission reduction as the CPP has been projected to achieve in 2025.

So, when it comes down to it, it appears that through the year 2025 the ITC/PTC extension is projected by the Obama Administration to generate roughly the same greenhouse gas emission reduction as the CPP. [Note: this is not the case in the period 2025-2030, when the CPP achieves ever greater emissions reductions.]
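To keep the moving parts straight, here is a small tabulation of the projections discussed above, using only the figures cited in this post; values are annual reductions in mmtCO2eq, and the 2020 tax-credit figure stands in for the White House’s “more than 200 million metric tons” estimate.

```python
# Projections cited above, in million metric tons CO2-equivalent of annual reduction.
cpp_epa_ria = {2020: 75, 2025: 240, 2030: 376}  # EPA Regulatory Impact Analysis of the CPP
itc_ptc_wh = {2020: 200}                        # White House estimate for the tax credit extension

for year in sorted(cpp_epa_ria):
    credits = itc_ptc_wh.get(year, "not estimated")
    print(f"{year}: CPP {cpp_epa_ria[year]} mmtCO2eq vs. ITC/PTC {credits}")
```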

If the story ended here, we’d probably be compelled to award the Administration only 1 or 2 Spinnies (for espousing “short term” gains over zero).

But this isn’t where it ends. Schultz went on to assure the gathered press that:

“There are driving forces that will allow the United States to meet its [Paris] commitment outside of the Clean Power Plan rule,” Schultz said. “One of those main forces is the inclusion of the tax credits at the end of the 2015 budget agreement.”

This is just wrong. There is no way that the US will meet its Paris target “outside of the Clean Power Plan.” In fact, even with the CPP, there was little chance of meeting the president’s promised target without a whole lot of other actions that have not yet been proposed.

The Administration knows well that this is the case, as it is the conclusion of a large number of independent analyses.

Here is a sampling of a few of them:

Climate Action Tracker:  “The US will have to implement additional policies on top of the currently planned policies to reach its 2025 pledge, which requires a faster reduction rate than the rate before 2020.”

Rhodium Group: “Reducing emissions 26-28% below 2005 levels by 2025 will not be possible through current and planned policies alone.”

Climate Advisors: “To achieve the U.S. pledge, the next President would need to vigorously implement the Obama administration policies and propose new emission reduction measures”

Niskanen Center: “’Current measures’ [which include CPP] will not get us to the 2025 target.”

If you add in the latest news about a potentially large (and unreported by the EPA) positive trend in emissions of methane (a powerful greenhouse gas) from the United States, the U.S. Paris target is impossible without the Clean Power Plan (barring a major economic collapse).

For making believe otherwise, and for wanting us all to buy into the charade, we award 4 Spinnies, or a Heavy Duty Spin Cycle, to the White House and its communications team.

Heavy Duty

*The sudden death of Justice Antonin Scalia, along with the likelihood that there will be no confirmed replacement until after the inauguration of a new president in 2017, means that the fate of the CPP will be determined at the ballot box on November 8, 2016.

As Walter Olson has noted, before Antonin Scalia became a federal judge and ultimately a Supreme Court justice, he spent part of his career working in public policy, including a stint as editor-in-chief (and other roles) of Regulation, which was then published by the American Enterprise Institute and now is published by Cato. Scalia wrote articles for the magazine himself and edited those written by others. The sharp analysis and rhetorical wit he would later display on the Court can be seen in Regulation’s pages.

For example, in this 1979 article he argued against the Legislative Veto, a proposal to give Congress the broad power to disapprove specific regulations through resolution (which could not be overturned by presidential veto). Though Scalia was often skeptical of federal regulatory policy, he was also skeptical of this effort to constrain it, because it further removed Congress from its constitutional duty to set policy. He wrote:

Has the difficulty really been that Congress has tried repeatedly to reverse the results of agency rulemaking through legislation but has been stymied by the President? I am not aware of a single instance. … The problem has been, quite simply, that both houses have had neither the time nor the inclination to review agency rulemaking, just as they have had neither the time nor the inclination to write more detailed legislation in the first place, which would render the most significant rulemaking unnecessary.

In the same vein, this 1980 Scalia article examined the judicial implications of congressional delegation of authority to executive agencies. Such delegation is worrisome and constitutionally dubious, he acknowledged; lawmakers dodge the difficult policy questions, leaving them to agency staff. However, the proposed remedy for this delegation—having the courts settle such issues when discerning legislative intent—is even more worrisome and dubious, he argued. Rather, Congress should fulfill its duties under the Constitution:

The sorts of judgments alluded to above—how great is the need for prompt action, how extensive is the social consensus on the vague legislated objective, and so forth—are much more appropriate for a representative assembly than for a hermetically sealed committee of nine lawyers. In earlier times heated constitutional debate did take place at the congressional level.

Later that year he contributed to Regulation’s special issue, “On Saving the Kingdom,” which featured suggestions to President-Elect Ronald Reagan from eight of the nation’s top regulatory experts. (Another contributor was Bill Niskanen, who would join Reagan’s Council of Economic Advisers and later become chairman of the Cato Institute—and another editor of Regulation). Scalia’s article, on the Federal Trade Commission, focuses on very specific topics, including whether Reagan’s appointment power could significantly alter the policy decisions of the five-member commission (at least initially), and the need to reform the FTC’s powers and objectives. But at the end, Scalia floated a big-picture question that is still relevant today: given the Department of Justice’s antitrust division and the consumer-protection powers of several federal agencies, is the FTC necessary? He wrote:

Total elimination of the commission’s antitrust powers would be politically most difficult; there also is some policy argument against it, namely, that it is only the commission’s pro-competitive, antitrust responsibilities that have given it any internal balance and prevented it from becoming, in effect, a single-mission consumer protection agency. Perhaps the most that can be recommended along these lines at the present time is the establishment of a task force to consider the antitrust enforcement issue. It might consider, at the same time, consolidation within the FTC (or elsewhere) of the various, and sometimes overlapping, consumer protection responsibilities now entrusted to other agencies—the Consumer Product Safety Commission, the Food and Drug Administration, and the Department of Agriculture, to mention only a few.

Scalia’s short article in the first issue of 1981, boldly titled “Regulatory Reform: The Game Has Changed,” continued to offer regulatory advice to the new administration. He noted that, historically, Republican administrations had focused on regulatory reforms intended to obstruct future Democratic administrations from making new rules; early Reagan reform proposals offered more such efforts. The problem, he lamented, was that the Reaganites didn’t realize the field had reversed: instead of obstructing future rulemakings, they should be vigorously writing rules that embodied their own economic and political theories:

Executive-enfeebling measures such as those discussed above do not specifically deter regulation. What they deter is change. Imposed upon a regulation-prone executive, they will on balance slow the increase of regulation; but imposed upon an executive that is seeking to dissolve the encrusted regulation of past decades, they will impede the dissolution. Regulatory reformers who do not recognize this fact, and who continue to support the unmodified proposals of the past as though the fundamental game had not been altered, will be scoring points for the other team.

The final article of his tenure as editor-in-chief offered a surprisingly critical review of the federal Freedom of Information Act. Scalia agreed with the ideals behind FOIA, but not with the law that ultimately resulted (following amendments in 1974) and its consequences. He wrote:

When one compares what the Freedom of Information Act was in contemplation with what it has turned out to be in reality, it is apparent that something went wrong. The act and its amendments were promoted as a means of finding out about the operations of government; they have been used largely as a means of obtaining data in the government’s hands concerning private institutions. They were promoted as a boon to the press, the public interest group, the little guy; they have been used most frequently by corporate lawyers. They were promoted as a minimal imposition on the operations of government; they have greatly burdened investigative agencies and the courts. … What happened in the 1974 amendments to the Freedom of Information Act is similar to what happened in much of the regulatory legislation and rulemaking of that era: an entirely desirable objective was pursued single-mindedly to the exclusion of equally valid competing interests. In the currently favored terminology, a lack of cost-benefit analysis; in more commonsensical terms, a loss of all sense of proportion.

Displaying his trademark wit, he concluded of FOIA, “It is the Taj Mahal of the Doctrine of Unanticipated Consequences, the Sistine Chapel of Cost-Benefit Analysis Ignored.”

These articles remind readers of Scalia’s skill in communicating complex and important issues to a general audience. Such talents are as important—and scarce—today as they were in the 1970s and 1980s.

For the past quarter century, the Cato Institute has published the magazine Regulation, which gives it the honor of carrying on part of the intellectual legacy of Antonin Scalia. Scalia served as editor-in-chief through much of the magazine’s early period, during which it was a project of the American Enterprise Institute. He shared those duties for a time with distinguished economist Murray Weidenbaum, who also went on to great things as chair of President Reagan’s Council of Economic Advisers, and with Anne Brunsdale, who went on to serve as a commissioner on the International Trade Commission. (I was there, too, as an associate editor for much of this period, and I share a few recollections from that era in a piece this morning at The Daily Beast: “My own anecdote about Justice Scalia is that he once hired me for my dream job because I wouldn’t stop arguing with him.”)

One of the magazine’s high points in its early years had an even more direct Cato connection, the classic debate between Scalia and his former Chicago faculty colleague Richard Epstein on the proper role of the courts in protecting economic liberty. The debate was itself based on an “Economic Liberties and the Constitution” conference sponsored by the Cato Institute, instigated by Roger Pilon and organized by Jim Dorn.  Scalia – by that point, of course, no longer editor and instead a judge on the D.C. Circuit – begins his piece thus:

I recall from the earliest days of my political awareness Dwight Eisenhower’s demonstrably successful slogan that he was “a conservative in economic affairs, but a liberal in human affairs.” I am sure he meant it to connote nothing more profound than that he represented the best of both Republican and Democratic tradition. But still, that seemed to me a peculiar way to put it — contrasting economic affairs with human affairs as though economics is a science developed for the benefit of dogs or trees; something that has nothing to do with human beings, with their welfare, aspirations, or freedoms.

Epstein’s side of that memorable debate is here, and he recalls it in this new appreciation. As a judge, Scalia was somewhat more willing to assert constitutional protection for economic liberties than one might have guessed from his stand in the debate, so perhaps he was influenced to some extent by the give-and-take.   

Cato has published Regulation since 1990 and Peter Van Doren has served as its editor since 1999; the late William Niskanen has more details on its history in this retrospective. You can browse the magazine’s archives here.  During his editorship, which lasted until 1982, Scalia wrote many pieces both signed and unsigned. His contributions to the unsigned front part of the magazine can often be identified once you know to look for his distinctive style (often there was one such piece per issue). As to his signed pieces, my colleagues will have more on those in an upcoming Cato at Liberty post.


In 2013, 53 percent of all employment-based green cards actually went to the family members of workers, while only 47 percent went to the workers themselves. Some of those family members are workers, but they should have a separate green card category or be exempted from the employment green card quota altogether.

If family members were exempted from the quota, or if there were a separate green card category for them, an additional 85,232 highly skilled immigrant workers could have entered in 2013 without increasing the quota.
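A rough back-of-the-envelope check of those figures, using only the numbers in this post: if the 85,232 green cards that went to family members represent 53 percent of the employment-based total, then exempting family members frees exactly that many slots for workers. The implied total below is derived from those two numbers, not taken from official statistics.

```python
# Back-of-the-envelope check using only the figures cited above.
family_green_cards = 85_232   # employment-based green cards that went to family members in 2013
family_share = 0.53           # their share of all employment-based green cards

implied_total = family_green_cards / family_share         # roughly 160,800 employment-based green cards
worker_green_cards = implied_total - family_green_cards   # the ~47 percent that went to workers

print(f"Implied 2013 employment-based total: {implied_total:,.0f}")
print(f"Green cards that went to workers:    {worker_green_cards:,.0f}")
print(f"Extra worker slots if family members were exempt: {family_green_cards:,}")
```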

As I explain in this video, President Obama has the power to improve this allocation of visas: 

For more on this issue, please read this work by immigration attorneys Cyrus Mehta and Gary Endelman and quotes here by renowned immigration attorney Matthew Kolken.

Two recent economic studies purporting to estimate the impact of the Trans-Pacific Partnership (TPP) agreement on the U.S. economy have sparked a kerfuffle between the deal’s advocates and detractors. One study, published by the Peterson Institute for International Economics, estimates increases to U.S. income of 0.5 percent by 2030, with gains to labor slightly exceeding gains to capital. The other, published by Tufts University’s Global Development and Environment Institute, estimates that the TPP would reduce U.S. income by 0.5 percent, reduce employment by almost half a million jobs, and increase income inequality. The findings of each study are being trumpeted as dispositive by their respective constituencies. Who’s right?

In a recent blog post, PIIE-affiliated economist Robert Lawrence wrote that to judge the credibility of these models, three questions should be asked: Is the model used appropriate for exploring trade policy? Does the model depict TPP sensibly? Are the results credible? Lawrence then goes on to explain why he answers “yes” to each question regarding the PIIE study and “no” to each regarding the Tufts study. Well sure, Bob, at a minimum, those criteria are important. And they help distinguish the PIIE model as relatively credible – that is, relative to the Tufts model. But what about relative to reality?  

A model might depict TPP sensibly, but incompletely and imprecisely.  How can we be sure those imperfections don’t have a large impact on the results?  And even if the results are credible, in that they don’t deviate dramatically from expectations, their purpose – or, at least, the weight assigned to these studies in the public’s mind – is to produce reasonable estimates, not to corroborate the model’s capacity to process reasonable expectations.

With apologies to my trade economist friends, anyone who treats the estimates produced by economic models as mathematical truths is, well, part of the problem. Lawrence doesn’t do that, but too many trade policy combatants do. Certainly, some models are more rigorous than others, but all rely on assumptions. The greater the number and complexity of exogenous policy changes being modeled, the greater the number of estimates and assumptions to incorporate, and the further removed from reality the results will be. Sometimes the estimates are merely best guesses and sometimes the assumptions have no better than a 50 percent probability of occurrence.  For example, many of the economic benefits of TPP will derive from reductions in non-tariff barriers to trade, such as regulatory opaqueness.  How does one model the increase in regulatory transparency?  How does one account for stricter environmental or labor or intellectual property regulations? How does one assign numeric values to rules limiting restrictions on cross-border data flows?

The PIIE study assumes that 75 percent of non-tariff barriers constitute trade restrictions (with 25 percent being regulations intended to promote or protect social goods, such as food quality and safety) and that about 75 percent of those NTBs affecting goods trade and 50 percent affecting services trade are actionable. So, the PIIE assumptions are that 56.25 percent and 37.50 percent of NTBs affecting goods and services, respectively, will be reformed under the TPP, and the effect of each reform is modeled as a trade cost reduction of 0 to 100 percent for each combination of industry and country. One other assumption is that 20 percent of these reforms also will benefit non-TPP members, which—through a feedback loop—will magnify the benefits to TPP-country economies. So, modeling the TPP requires numerous assumptions and estimates from the outset.
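The arithmetic behind those two headline percentages is simple enough to reproduce; the short sketch below just multiplies the quoted assumptions together.

```python
# Reproducing the PIIE study's NTB assumptions as described above.
share_trade_restricting = 0.75  # share of NTBs treated as trade restrictions rather than social-good regulation
actionable_goods = 0.75         # share of goods-trade NTBs assumed actionable under the TPP
actionable_services = 0.50      # share of services-trade NTBs assumed actionable under the TPP

reformed_goods = share_trade_restricting * actionable_goods        # 0.5625
reformed_services = share_trade_restricting * actionable_services  # 0.375

print(f"Goods NTBs assumed reformed under the TPP:    {reformed_goods:.2%}")    # 56.25%
print(f"Services NTBs assumed reformed under the TPP: {reformed_services:.2%}") # 37.50%
```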

With respect to the PIIE study, perhaps, we can accept that the sign of the coefficient on income is positive (“the TPP will produce benefits for the U.S. economy”), but the magnitude of 0.5 isn’t much better than a guess because it is the product of so many assumptions and estimates.  And despite the model’s relative rigor in using microeconomic behavioral equations to transmit the effects of TPP, predicting how economic agents will react to a confluence of policy changes is a dicey proposition – a fatal conceit. Economics is not science. The preferences, information, and knowledge that drive economic decisions are all personal, nuanced, and dispersed.  They cannot be represented mathematically with any precision.  Assumptions about how economic agents should react that are, at best, based on probabilities and expectations of utility- or profit-maximization don’t always hold. To their credit, the Tufts economists, responding to Lawrence’s legitimate indictment of their approach, concede that economic models involve a lot of guesswork and that economic modelers have a long way to go.

What the public and policymakers should be considering – what should be under the spotlight – are the rules of the TPP, not the projected outcomes. The outcomes cannot be known with certainty. The rules are objective and concrete. We should be able to draw conclusions about the desirability of the TPP from its language – from the rules it articulates – without guarantees of particular outcomes. The TPP should be judged by the degree of economic freedom it restores, not by a shouting match over highly contestable estimates.  Indeed, some chapters of the TPP are expressly about reducing trade barriers, including tariffs and other obstacles to competition. Those provisions should be universally embraced, as they will help restore our economic freedoms. (For those whose companies or industries are subsequently exposed to greater competition when barriers are removed, please realize that you have been benefitting at everyone else’s expense and that liberalization rights that wrong.)

Other chapters of the TPP are less about liberalization and more about crafting common rules about how governments treat foreign enterprises and how they enforce labor rights, environmental regulations, intellectual property provisions, and so on. It is less clear whether and how these “governance” chapters enhance or impair economic freedom. But each chapter can be assessed exhaustively on a qualitative basis, without need of highly malleable estimates of economic outcomes.

In the weeks and months ahead, look for a comprehensive, chapter-by-chapter assessment of the TPP from my Cato Institute trade policy colleagues and me.
