
Global Science Report is a weekly feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Second only to episodes of high temperature, supporters of government action to restrict energy choice like to cite "extreme" precipitation events (be they rain, sleet, snow, or hail falling from tropical cyclones, mid-latitude extratropical storms, or summer thunderstorm complexes) as evidence that greenhouse gas emissions from human activities make our climate and daily weather worse.

The federal government encourages and promotes such associations. Take, for example, the opening stanzas of its 2014 National Climate Assessment: Climate Change Impacts in the United States, a document regularly cited by President Obama in support of his climatic perseverations:

This National Climate Assessment concludes that the evidence of human-induced climate change continues to strengthen and that impacts are increasing across the country.

Americans are noticing changes all around them. Summers are longer and hotter, and extended periods of unusual heat last longer than any living American has ever experienced. Winters are generally shorter and warmer. Rain comes in heavier downpours.

President Obama often calls out the extreme rain meme when he is running through his list of climate change evils. His Executive Order "Preparing the United States for the Impacts of Climate Change" includes:

The impacts of climate change – including…more heavy downpours… – are already affecting communities, natural resources, ecosystems, economies, and public health across the Nation.

So, certainly the science must be settled demonstrating a strong greenhouse-gas altered climate signal in the observed patterns of extreme precipitation trends and variability across the United States in recent decades, right?

Wrong.

Here are the conclusions of a freshly minted study, titled "Characterizing Recent Trends in U.S. Heavy Precipitation," from a group of scientists led by Dr. Martin Hoerling of NOAA's Earth System Research Laboratory in Boulder, Colorado:

Analysis of the seasonality in heavy daily precipitation trends supports physical arguments that their changes during 1979-2013 have been intimately linked to internal decadal ocean variability, and less to human-induced climate change…Analysis of model ensemble spread reveals that appreciable 35-yr trends in heavy daily precipitation can occur in the absence of forcing, thereby limiting detection of the weak anthropogenic influence at regional scales [emphasis added].

Basically, after reviewing observations of heavy rains across the country and comparing them to climate model explanations/expectations, Hoerling and colleagues determined that natural variability acting through variations in sea surface temperature patterns, not global warming, is the main driver of the observed changes in heavy precipitation.

They summed up their efforts and findings this way (emphasis also added):

In conclusion, the paper sought to answer the question whether the recent observed trends in heavy daily precipitation constitute a strongly constrained outcome, either of external radiative forcing alone [i.e., greenhouse gas increase], or from a combination of radiative and internal ocean boundary forcing. We emphasized that the overall spatial pattern and seasonality of US trends has been more consistent with internally driven ocean-related forcing than with external radiative forcing. Yet, the magnitude of these forced changes since 1979 was at most equal to the magnitude of observed trends (e.g. over the Far West), and in areas such as the Far Northeast where especially large upward trends have occurred, the forced signals were several factors smaller. From the perspective of external forcing alone [i.e., changes in atmospheric carbon dioxide], the observed trends appear not to have been strongly constrained, and apparently much less so than the efficacy of an external driving mechanism surmised in the National Climate Assessment.

Hoerling's team tried to say it nicely, but basically they're saying that the federal government's assessment of the impacts of climate change greatly overstates the case for linking dreaded carbon dioxide emissions to extreme precipitation events across the United States. (Note: We weren't as nice, saying that the National Assessment Report overstates the case for linking carbon dioxide emissions to darn near everything.)

This is not to say that Hoerling and colleagues think an increasing atmospheric concentration of carbon dioxide won't enhance heavy precipitation over the course of the 21st century. (If they said that, they'd probably be exiled to the federal climatologist rubber room.) Rather, they think that folks (including the president and the authors of the National Climate Assessment) are premature in linking observed changes to date with our reliance on coal, oil, and natural gas as primary fuels for our energy production.

Whether or not at some later date a definitive and sizeable (actionable) anthropogenic signal is identifiable in the patterns and trends in heavy precipitation occurrence across the United States is a question whose answer will have to wait—most likely until much closer to the end of the century or beyond.

Reference:

Hoerling, M., J. Eischeid, J. Perlwitz, X. Quan, K. Wolter, and L. Cheng, 2016. Characterizing Recent Trends in U.S. Heavy Precipitation. Journal of Climate. doi:10.1175/JCLI-D-15-0441.1, in press.

 

In the 1990s, the Clinton administration proposed restructuring our air traffic control (ATC) system, creating a self-funded organization outside of the Federal Aviation Administration (FAA). The idea went nowhere in Congress at the time.

Since then, numerous countries have successfully privatized their ATC systems, including Britain and Canada. Meanwhile, our ATC is still trapped inside the FAA bureaucracy, and it continues to fall short on crucial technology upgrade projects.

The good news is that major restructuring is back on the agenda in Congress. House Transportation and Infrastructure Committee Chairman Bill Shuster is expected to soon unveil a major reform proposal, perhaps along the lines of Canada's non-profit ATC corporation, Nav Canada. The FAA must be reauthorized by the end of March, which gives some momentum to reform. If President Obama wants an important pro-growth legacy in his final year in office, he should get behind this effort.

Canada's ATC privatization has been a huge success. In a recent Wall Street Journal interview, the head of Nav Canada, John Crichton, said, "This business of ours has evolved long past the time when government should be in it … Governments are not suited to run … dynamic, high-tech, 24-hour businesses." Exactly—and for all the reasons I discuss here.

Please join us Thursday for a Capitol Hill forum to discuss these issues (Rayburn B-354, noon). We will hear from two top experts. Dorothy Robyn was a top economic advisor to both Presidents Clinton and Obama, and she wrote an excellent study on ATC reform for Brookings. Stephen Van Beek is a long-time aviation industry expert.  

A popular knock against vouchers and other school choice programs is that private schools do not serve many students with disabilities, whereas public schools serve everyone. If that’s true, then the vast majority of public schools in New York City must actually be private.

According to a federal investigation just rejected by the de Blasio administration, the large majority of New York City elementary schools – 83 percent – are not “fully accessible” to students with disabilities. That forces many disabled students to travel far afield from their local public schools, which are supposed to serve every zoned child. The U.S. Department of Justice’s letter to the city laying all this out contains this anecdote:

In the course of our investigation, we spoke to one family who went to extreme measures to keep their child enrolled in their zoned local school, rather than subject the child to a lengthy commute to the closest “accessible” school. A parent of this elementary school child was forced to travel to the school multiple times a day, every school day, in order to carry her child up and down stairs to her classroom, to the cafeteria, and to other areas of the school in which classes and programs were held.

Of course, it is unrealistic to expect that every school will be able to provide the best possible education for every child. All kids learn different things at different rates and have different strengths and weaknesses, and that is especially true of children with disabilities. Yet while the public schools often fall light-years short of that goal, it is the standard to which public schooling advocates love to hold schools in choice programs. And not only is the standard unrealistic; vouchers are usually a fraction of the funding public schools get, averaging around $7,000 versus New York City's nearly $19,000 per pupil.

The scope of NYC’s failure to live up to the ideal is sobering, but revelations of double standards on this front are not new. School districts often pay for kids with the most challenging disabilities to attend private institutions, and there are several choice programs that are, in fact, specifically designed for children with disabilities. But maybe now, before choice opponents attack private schools again, they’ll at least try to get their own house in order. Or in New York City, their hundreds of houses not fully serving disabled children.

Jeb Bush spent at least $14.9 million trying to win the Iowa Republican caucuses, the most of any candidate in either party. He finished sixth.

Will this persuade people that money does not buy elections? Probably not. The belief that “money buys elections” is not really falsifiable. It is a matter of faith.

But perhaps those who believe that money buys elections will now think it is somewhat less probable they are correct.

On Wednesday, February 3, the Senate Environment and Public Works Committee will hold a hearing on a new "Stream Protection Rule" proposed by the Department of the Interior's Office of Surface Mining (OSM), which looks to be another nail hammered into the coal industry's coffin by the Obama Administration.

Energy and mineral resource development in the U.S. is being thwarted by a wave of agenda-driven federal agency rulemakings being rushed through before the end of this administration. Oil, natural gas, and coal have been targeted for replacement by renewable energy sources. For the coal industry, that targeting has been fast-tracked via OSM's proposed new "Stream Protection Rule" (SPR).

The new SPR would supersede the existing Stream Buffer Zone Rule, enacted in 2008 to control the increasingly few negative effects of surface coal mining on aquatic environments in the nation's three largest coal mining areas: Appalachia, the Illinois Basin/Midwest, and the Rocky Mountains/Northern Great Plains. But, as is so often the case in the world of environmental regulation, that was not sufficient for OSM, and over the past seven years it has continued to press for more and stricter regulations on coal mining all across the United States. OSM seems to prefer a nationwide, one-size-fits-all regulatory enforcement scenario, even though local geology, geochemistry, and terrain vary widely between states and basins. As it is, these concerns are more efficiently addressed by the states and policed by the industry.

That aside, the real impacts of the SPR, openly acknowledged by OSM, leave tens of billions of dollars’ worth of coal in the ground with no chance of future development—“stranded reserves,” as OSM terms them in the rule. Those coal deposits, according to OSM, “…are technically and economically minable, but unavailable for production given new requirements and restrictions included in the proposed rule.”  Yet, OSM’s engineering analysis, cited by a Congressional Research Service study, states that there will be no increase in “stranded reserves” under the SPR. In other words, the same volume of coal will be mined under the proposed rule as under the current rule…an OSM oversight, no doubt.

The proposed rulemaking rests on questionable geoscience and mining engineering: it overemphasizes the importance of ephemeral streams in order to limit mining activities in all areas, requires needless increases in subsurface drilling and geologic sampling, redefines accepted technical terms such as "approximate original contour" and "material damage to hydrologic balance," and creates unfamiliar new terms such as "hydrological form" and "ecological function."

But OSM likely is not focused on technical issues so much as on its main concern: that the new rule be as much more stringent than the existing 2008 rule as possible, and that it apply nationally. Hence, the rule appears to be more for the benefit of regulators, and it places undue burden and expense on coal miners. Neither is OSM overly concerned with the big three tangible adverse impacts of its proposed rulemaking: lost jobs, lost resources, and lost tax revenue, with Appalachia being hit the hardest. Consensus estimates (not OSM's) of the number of mining-related jobs lost nationally due to the SPR: in excess of 100,000 to upwards of 300,000. The decrease in coal tonnage recovered: between roughly 30 and 65 percent. The annual value of coal left in the ground because of the rule: between $14 billion and $29 billion. The estimated decrease in federal and coal-state tax bases: between $3.1 billion and $6.4 billion. These are not very encouraging statistics for an industry that currently supplies 40 percent of U.S. electrical power generation.

Interior's Office of Surface Mining has failed to adequately justify its proposed Stream Protection Rule in light of the federal and state rules and regulations already in place. Rather, OSM has embarked on a seven-year odyssey of agenda-driven rulemaking that would force-fit the regional and local characteristics of coal mining operations to a nationwide template. Yet Congress and the courts had already established that a uniform nationwide federal standard for coal mining would not be workable, given the significant differences in regional and local geology, hydrology, topography, and environmental factors related to mining operations. On the non-technical side, OSM does not retreat from its admission in the preamble to the proposed rule that the SPR is politically motivated: press reports have quoted an OSM official as acknowledging that there was pressure to get the SPR done in this administration's last year.

Enacting the new SPR would be an ominous threat to a coal mining industry that deserves much better from this or any other future administration. This is one reason why OSM’s proposed SPR has been tagged by the National Mining Association as “a rule in search of a problem.” However, to paraphrase a more appropriate quote: the voluminous Stream Protection Rule is not the solution to the coal industry’s problems—rather the Stream Protection Rule is the problem.

It will be interesting to see how this all plays out in the Senate on Wednesday.

The U.S. Department of Labor's Occupational Safety and Health Administration (OSHA) is soon set to release new exposure limits for airborne silica dust. The rulemaking has been in the works for about three years, with a final rule scheduled to be announced this year. The silica industry is not enthused.

Silica dust is known to cause respiratory illnesses (e.g., silicosis, lung cancer, other airways diseases) that may contribute to or lead directly to death when it is breathed in high enough concentrations over long enough time periods.

OSHA explains that exposure to respirable silica “occurs in operations involving cutting, sawing, drilling and crushing of concrete, brick, block and other stone products and in operations using sand products, such as in glass manufacturing, foundries and sand blasting.”

OSHA's proposal, generally, is to lower the existing permissible exposure limits (adopted in 1971) by about 50%, dropping them from around 0.1 mg/m3 to 0.05 mg/m3 (specific details here). OSHA explains:

The agency currently enforces 40-year-old permissible exposure limits (PELs) for crystalline silica in general industry, construction and shipyards that are outdated, inconsistent between industries and do not adequately protect worker health. The proposed rule brings protections into the 21st century.

And, as the government likes to claim with all of its regulations, the added restrictions will save lots of lives, and in doing so, will save lots of money:

OSHA estimates that the proposed rule will save nearly 700 lives and prevent 1,600 new cases of silicosis per year once the full effects of the rule are realized.

The proposed rule is estimated to provide average net benefits of about $2.8 to $4.7 billion annually over the next 60 years.

Interestingly, a visit to the Centers for Disease Control in search of deaths from silica inhalation produces this chart graphing silicosis mortality over time. The numbers have dropped considerably over the past 40+ years, and by 2010 had fallen to about 100 or so deaths per year (U.S. residents over the age of 15) attributed to silicosis as either the underlying or contributing cause.

Figure 1. Silicosis: Number of deaths, crude and age-adjusted death rates, U.S. residents age 15 and over, 1968–2010 (Source: CDC).

The CDC data show that silicosis deaths have been declining; although the decline has slowed, the number continues to drop under the current OSHA guidelines. Further, the 100 or so deaths occurring annually are several times fewer than the number of deaths that OSHA predicts its new regulations will prevent each year. That's a pretty neat trick: the new regs are going to save several times more lives than are actually lost!

This means not only that the OSHA mortality benefits from the new regulations are questionable, but so too must be the economic benefits (as they are tied directly to the mortality savings).

The silica industry isn’t taking this lightly.

They contend that OSHA's mortality projections rest on dose-response relationships that are not truly indicative of what is really going on, primarily because those relationships are built upon poor and inadequate data and analyses.

Dose-response curves used by the federal government are notorious for forecasting much greater health benefits than actually occur. One reason is that the federal dose-response curves often aren't curves at all, but straight lines, which means the response is assumed to be the same for every dosage increment. This allows government regulatory agencies to claim that continually cranking down the exposure limits will continually produce positive health outcomes.

But more and more research is showing that this is not the case. As the dose gets lower, the response often flattens out (i.e., there is a threshold) or may in fact become beneficial (low dosages are actually good for you). While this sounds like common sense to most of us (consider sunshine or alcohol), it is a groundbreaking notion in the field. Much of the research on this theory is being done by Dr. Edward Calabrese at the University of Massachusetts, an adjunct scholar at Cato's Center for the Study of Science.
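The distinction between the two model shapes can be made concrete with a minimal numerical sketch. The code below is purely illustrative: the function names, slope, and threshold values are invented for demonstration and have nothing to do with OSHA's actual risk model.

```python
# Toy comparison of two dose-response model shapes (illustrative only;
# the slope and threshold numbers here are invented, not OSHA's).

def linear_no_threshold(dose, slope=10.0):
    """Linear model: every increment of dose adds the same excess risk."""
    return slope * dose

def threshold_model(dose, threshold=0.05, slope=10.0):
    """Threshold model: no excess risk at all below the threshold dose."""
    return max(0.0, slope * (dose - threshold))

# Compare predicted excess risk at a few hypothetical doses (mg/m3):
for dose in (0.025, 0.05, 0.10):
    print(f"dose {dose:.3f}: linear {linear_no_threshold(dose):.2f}, "
          f"threshold {threshold_model(dose):.2f}")
```

Under the linear model, halving the permissible limit always halves the predicted risk, so tightening the limit always "saves lives" on paper; under a threshold model, once the limit is below the threshold, further tightening yields no additional predicted benefit.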

A leading industry group (the National Industrial Sand Association, NISA) makes a strong case that the existing OSHA standards are quite effective at greatly reducing, or even eliminating, the occurrence of silicosis. So really, all OSHA needs to do is unify the existing regulations and better ensure that they are enforced.

NISA has commissioned a scientific study of the dose-response behavior that is wider in scope and draws on more detailed epidemiological data than the existing studies relied upon by OSHA. From a NISA report:

While the association between silicosis and exposure to respirable crystalline silica is indisputable, there is still considerable uncertainty regarding the dose/response relationship of this association, particularly in the case of chronic simple silicosis, which is the most common form of silicosis.  It is unclear, for example, whether there is an effect threshold, or whether instead the dose/response curve is linear at even the lowest doses.  The slope of that curve is also uncertain.  As a result, there is uncertainty regarding the degree of risk remaining at various 8 hour time-weighted average exposures, including, most importantly, the current PEL of [0.10 mg/m3].  

In NISA’s view, this degree of uncertainty is unacceptable for a rulemaking of this magnitude.

The industry is being up-front about the commission (involving scientists at major universities involved in silica research) and has taken steps to be open and transparent about its involvement—or rather lack of involvement—in the study's outcome.

But the results of the new study, initiated several years ago, are still forthcoming, and consequently the industry has asked OSHA to wait until they are available (expected later this year) before issuing its final rule.

It'll be interesting to see how this plays out. Preliminarily, though, this looks like another case of the government solving a problem that doesn't really exist: the existing regulations are sufficient to address the health concerns if better applied and enforced, and the promise of the new regulations is greatly overplayed, claiming to save many times more lives than the CDC says silicosis takes away.

But that’s the government at work—if some regulations are good, more must be better. Sadly, for the taxpayers and the regulated industries, it doesn’t always work out that way.

As I recall from my time in the Senate, there's nothing like an energy bill to attract misguided proposals. This week the Senate begins consideration of S.2012, the Energy Policy Modernization Act of 2015. Among the almost two hundred filed amendments is a proposal (Amendment #3042) from former real estate broker Senator Johnny Isakson to mandate that the Federal Housing Administration (FHA) reduce the quality of its loans in order to encourage more efficient energy use.

The two most concerning aspects of Amdt 3042 are that (1) it would allow "estimated energy savings" to be used to increase the allowable debt-to-income (DTI) ratio for the loan, and (2) it would require "that the estimated energy savings…be added to the appraised value…"

These changes might not be so bad in the abstract, but combined with existing FHA standards they set the borrower up for failure and leave the taxpayer holding the bag. Let's recall that borrowers can already get an FHA mortgage at a loan-to-value (LTV) ratio of 96.5%, and that's assuming an accurate appraisal. If borrowers were required to put 20 percent down, this amendment would be a minor problem, but under existing standards borrowers would most likely leave the table with an LTV over 100%, that is, already underwater before they've even moved in. Did Congress learn nothing from the crisis?

The increase in DTI might not matter if FHA did not already allow a DTI as high as 43% of income.  Under Amdt 3042 borrowers could easily leave the closing table devoting over half their income to their mortgage.  Again, did Congress learn nothing from the crisis?

To illustrate that the intent of the proposal is to have the taxpayer take more risk, Amdt 3042 actually prohibits FHA from imposing any standards that would offset this risk. If these new loans perform worse, as one would expect, FHA cannot put them back to the lenders. And let's not forget that FHA allows the borrower to have a credit history deep in the subprime range. So you could have a subprime borrower, say a FICO score down to 580, an LTV over 100%, and a DTI over 43%: what could go wrong?
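A back-of-the-envelope sketch shows how quickly the LTV arithmetic goes underwater. The only inputs taken from the text above are the 96.5% FHA LTV ceiling and the idea of adding "estimated energy savings" to the appraised value; the home price and savings figure below are hypothetical.

```python
# Hypothetical illustration of the amendment's effect on loan-to-value.
# Only the 96.5% FHA LTV ceiling and the "savings added to appraisal"
# mechanism come from the proposal; the dollar figures are invented.

def effective_ltv(price, ltv_limit=0.965, energy_savings_addon=0.0):
    """The loan is written against the inflated appraisal
    (price + claimed savings), but the home's market value
    is just the price, so true LTV = loan / price."""
    appraisal = price + energy_savings_addon
    loan = ltv_limit * appraisal
    return loan / price

# A $200,000 home, with and without $15,000 of claimed energy savings
# folded into the appraised value:
print(f"baseline: {effective_ltv(200_000):.1%}")
print(f"with addon: {effective_ltv(200_000, energy_savings_addon=15_000):.1%}")
```

With the addon, the ratio exceeds 100%: the borrower owes more than the home is worth on day one, which is exactly the scenario the paragraph above warns about.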

If energy savings actually increased the value of the home, that would be reflected in the price; there would be no need to mandate it. Not only does this proposal weaken FHA standards and expose the taxpayer to greater risk, it takes us further down the path of an already politicized housing policy, where instead of relying on market prices, values are dictated by Soviet-style bureaucratic guesswork.

Afghanistan is a bust. The Taliban is expanding its control. The number of “security incidents” was up a fifth in the last months of 2015 over the previous year. Popular confidence is at its lowest level in a decade. U.S. military officers now speak of a “goal line” defense of Kabul.

While the deadly geopolitical game is not yet over, it is hard to see how the current regime can survive without Washington's continued combat support. The nation-building mission always was quixotic.

Indeed, the latest report from the Special Inspector General for Afghanistan Reconstruction shows how far this Central Asian land was and remains from developed status. And how ineffective U.S. aid programs have been in transforming it.

While Afghanistan enjoyed some boom years in the flood of Western cash, the foreign money also inflamed the problem of corruption. The Stockholm International Peace Research Institute explained: “The significant amount of aid and vast international military spending post-2001 has re-ingrained a culture of aid-rentierism: the Afghan elite competes internally for political rents from the international community.”

Tougher times have not increased honesty. In its latest quarterly report, SIGAR noted that a recent Afghan task force “reportedly found that millions of dollars were being embezzled while Afghanistan pays for numerous nonexistent ‘ghost’ schools, ‘ghost’ teachers, and ‘ghost’ students.”

Even worse, the same practice apparently afflicts the security forces. SIGAR cited an Associated Press investigation: “In that report, a provincial council member estimated 40% of the security forces in Helmand do not exist, while a former provincial deputy police chief said the actual number was ‘nowhere near’ the 31,000 police on the registers, and an Afghan official estimated the total ANDSF number at around 120,000—less than half the reported 322,638.”

Security never has been good during the conflict. Today it is worse than ever.

Explained SIGAR: “The Taliban now controls more territory than at any time since 2001. Vicious and repeated attacks in Kabul this quarter shook confidence in the national-unity government. A year after the Coalition handed responsibility for Afghan security to the Afghan National Defense and Security Forces (ANDSF), American and British forces were compelled on several occasions to support ANDSF troops in combat against the Taliban.”

Yet the failure of U.S. aid programs reaches well beyond insecurity. Despite pouring $113.1 billion into Afghanistan, Washington has surprisingly few sustainable, long-term benefits to show for it.

Citing just a few of its earlier audits, SIGAR reported on Afghan government agencies suffering from “divergent approaches and a lack of overall strategy, poor coordination and limited information sharing,” and unable to “handle contract research, awards, and management.” U.S.-funded “power and water systems [were] inoperable for lack of fuel” while an industrial park had minimal occupancy.

Its latest audits have yielded scarcely better results.

USAID devoted $488 million to develop Afghanistan’s oil, gas, and minerals industries. SIGAR found “limited progress overall.” Afghan ministries weren’t committed to reforms, “many mining operations are still controlled by political elites, warlords, military personnel, and the police,” transportation networks were inadequate, and several projects showed no results.

Tens of millions of dollars went for training and equipping an Afghan National Engineer Brigade. The NEB was hampered by “army staff on leave for holidays, political events, low literacy levels, and security concerns.” The brigade “lacked initiative” and “was not capable of carrying out its mission.”

Some $2.3 billion in USAID money went for stability programs, yet, said SIGAR, “villages that received USAID assistance showed a marked decrease in their stability scores relative to the overall decrease in stability scores for both villages that did and those that did not receive USAID assistance.”

The official line remains positive. On one of my visits to Afghanistan a Marine Corps officer warned me that "everyone is selling something." Private reports were different from the glowing reviews offered by my NATO handlers.

As I point out on Forbes: “The U.S. has been fighting in Afghanistan for more than 14 years. It’s time to bring home the troops. No more Americans should die in Afghanistan for nothing.”

Secretary John Kerry went to Beijing to again lecture his hosts about the need for China to pressure North Korea over the latter’s nuclear program. As expected, his mission failed. The Xi government again proved unwilling to threaten the survival of the Kim dynasty.

Immediately after Pyongyang’s fourth nuclear test Kerry attacked Beijing’s policy: it “has not worked and we cannot continue business as usual.” Even before Kerry arrived the PRC made clear it disagreed. “The origin and crux of the nuclear issue on the Korean Peninsula has never been China,” said a Ministry of Foreign Affairs spokeswoman: “The key to solving the problem is not China.”

While he was in Beijing she cited the behavior of other parties as "one major reason why the denuclearization process on the peninsula has run into difficulties." Beijing officialdom has shown plenty of irritation with the Democratic People's Republic of Korea, but China evidently has yet to be convinced to destroy its own ally and strengthen America's position in Northeast Asia.

Kerry made the best of an embarrassing situation when he announced that the two sides agreed to an “accelerated effort” by the UN Security Council to approve a “strong resolution that introduces significant new measures” against the DPRK. No one should hold their breath as to the nature of those “measures,” however.

Foreign Minister Wang Yi echoed Kerry in supporting passage of “a new resolution,” but added the devastating caveat: “In the meantime, we must point out that the new resolution should not provoke new tensions in the situation, still less destabilize the Korean peninsula.” Wang explained that “Sanctions are not an end in themselves” but should encourage negotiation, not punish.

As I point out in National Interest: “If Kerry wants the Chinese to follow U.S. priorities, he must convince them that America’s proposals advance Chinese interests. Which means explain to them why they should risk destroying their one military ally in the region, with the possibility of creating chaos and conflict next door and adding the entire peninsula to America’s anti-China alliance network.”

Good luck.

In 1950, the PRC went to war with the U.S. to preserve the North Korean state and prevent American forces from advancing to the Yalu River. Even today Beijing wants to see a united Korea allied with Washington about as much as it desires to have a nuclear North Korea.

Indeed, even without a U.S. garrison, a more powerful ROK would pose a challenge to the PRC. Moreover, Beijing’s favored economic position in the North would disappear as South Korean money swept away Chinese concessions.

Worse, the process of getting to a reunified Korea could well be disastrous. Nothing in the DPRK's history suggests a willingness to gently yield to foreign dictates. In the late 1990s the regime allowed a half million or more people to starve to death. Whatever China threatens, Kim Jong-un might say no and hang on to power, irrespective of the human cost.

If China ended up breaking a recalcitrant Kim dynasty by sanctioning oil and food, the result could be extraordinary hardship and armed factional combat followed by mass refugee flows across the Yalu—multiply the desperation and number of Syrians heading to Europe. Then toss in loose nuclear weapons and a possible South Korean/U.S. military push across the Demilitarized Zone to force reunification.

The result would be a first-rate nightmare for Chinese President Xi Jinping. So, explain to me again, Secretary Kerry, why my country should ruin its geopolitical position to further Washington’s ends?

If John Kerry’s private message was the same as his public pronouncements, he had no hope of winning Chinese support for taking decisive action against the DPRK. Next time he visits he should employ the art of persuasion—or stay home.                      

Possibly the strangest foreign policy decision the Obama administration has made was its decision to support the Saudi-led war in Yemen. The White House has made quiet counterterrorism operations a key plank of its foreign policy agenda, and the administration includes a number of officials best known for their work on human rights issues, most notably Samantha Power. As such, the President’s decision to supply logistical, intelligence and targeting support for the Saudi-led coalition’s military campaign – a campaign which has been horrifically damaging to human rights inside Yemen, as well as detrimental to U.S. counterterrorism goals – was deeply surprising.

Less surprising was the fact that the conflict has turned into a disastrous quagmire. Yemen was already arguably a failed state when the intervention began in April 2015. The power transition negotiated in the aftermath of the Arab Spring was weak and failing, with Yemen’s perpetual insurgencies worsening the situation. Since the intervention began, the United Nations estimates that over 21 million Yemenis have been deprived of life’s basic necessities. Thousands have been killed. Even more concerning, United Nations monitors reported to the Security Council that they believed the Saudi-led coalition may be guilty of crimes against humanity for its indiscriminate air strikes on civilians.

Strategically, the coalition has made few gains. Despite the terrible loss of life, the coalition has stalled south of the capital, Sanaa. Further advances will be exceedingly difficult. At the same time, Al Qaeda inside Yemen has grown in strength and size, benefitting from the conflict, and even presenting itself as a viable partner for the Saudi coalition. It is hard to see how U.S. strategic interests - counterterrorism, human rights, or even regional stability – are being served by this conflict.

So what should the president do? In his last few months in office, President Obama should take advantage of his executive power to end U.S. support for the war in Yemen, and direct America’s diplomats to aggressively pursue a diplomatic settlement. This war is a humanitarian disaster and a strategic failure; ending our support for it should be a no-brainer.

What the President Should Do: U.S. Support in Yemen

Gabriel Roth, who turns 90 years young today, is a rock star among transportation economists, and a special inspiration for those of us who support reducing the federal government’s role in transportation. According to his C.V., Roth earned degrees in engineering from London’s Imperial College in 1948 and economics from Cambridge in 1954.

In 1959, he began research into improved road pricing systems. This led to his appointment to a Ministry of Transport commission that published a 1964 report advocating pricing congested roads in order to end that congestion.

In 1966, the Institute for Economic Affairs published his paper, A Self-Financing Road System, which argued that user fees should pay for all roads, and not just be used to relieve congestion. Roads should be expanded, Roth noted, wherever user fees exceeded the cost of providing a particular road, but not elsewhere.

In 1967, Roth moved to the United States to work for the World Bank, where he did road pricing studies for many developing countries and cities, including Bangkok, Manila, and Singapore. After leaving the World Bank in 1987, he continued to work as a consultant until 2000, among other things helping design the Dulles Toll Road and writing Roads in a Market Economy, a book published in 1996.

Since then, he has been a regular participant in transportation conferences, meetings, and hearings. He edited a 2006 book, Street Smart, co-authored a 2008 paper showing how electronic tolling could be done without invading people’s privacy, and made a presentation about tolling at the 2010 American Dream conference.

My home state of Oregon is now experimenting with mileage-based user fees, and I’m one of the volunteers in this experiment. If it goes well, we may see the realization of Roth’s ideas before he turns 100.

I hope to see Gabe on my next trip to DC. I know I’ll be able to find him by looking for the nearest transportation conference.

Quite a number of media fact-checkers tripped over Ted Cruz’s claim in last night’s debate that Barack Obama had “dramatically degraded our military,” and Marco Rubio’s related pledge to rebuild a U.S. military that is “being diminished.”

The Dallas Morning News noted that “amounts spent on weapons modernization are about the same as they were when Republican George W. Bush was president.” Meanwhile, to the extent that the military’s budget “is being squeezed,” they wrote, it is because of “the insistence of lawmakers in both parties that money be spent on bases and equipment that the Pentagon says it doesn’t need.”

Politico’s Bryan Bender (in a piece accessible to Politico Pro subscribers) concluded that while Cruz’s “facts may hold up to scrutiny…they are nonetheless misleading.” Bender pointed out that “Military technology has advanced significantly in the last quarter century and combat aircraft and warships are much more precise and pack a more powerful punch.” Politifact agreed, rating Cruz’s claim “Mostly False.”

Ultimately, alas, whether the U.S. military has been severely degraded is a judgment call. Relative to what? And when? And what does that mean for U.S. security?

But while the answers to such questions are subjective, the facts on spending are not. Undaunted by the realization that committed partisans are unlikely to be converted by them, I’m also doing my part to try to inject some facts into the debate over the Pentagon’s budget. A few weeks ago, I posed five sets of questions to the candidates at The National Interest’s Blog, The Skeptics, including, for those calling for more military spending:

Why would you spend more? What is the United States unable to do right now to preserve its security because it isn’t spending enough? To what extent is insufficient military strength the critical factor explaining America’s inability to achieve satisfactory results with respect to an array of challenges, from destroying ISIS, to repairing failed states, to halting North Korea’s nuclear program?

This morning at TNI, I offered my take on whether lower military spending as a share of GDP is to blame for the U.S. military’s supposed precipitous decline. I’m skeptical.

For one thing, the Pentagon’s base budget, excluding the costs of our recent wars, remains near historic highs. Under the bipartisan Budget Control Act passed in 2011, and as amended in 2013 and late 2015, U.S. taxpayers will spend more on the military in each of the next five years ($510 billion) than we spent, on average, during the Cold War ($460 billion). Those figures are adjusted for inflation. And the actual gap between what we spend now, and what we spent then, will be larger, because the BCA doesn’t cover war costs.

Meanwhile, it isn’t even true that spending under Barack Obama is lower than under George W. Bush. In inflation-adjusted dollars, military spending – both war and non-war – averaged $606 billion per year during Bush’s two terms in office; under Obama, it has averaged $668 billion. The United States will have spent nearly $500 billion more in the period 2009-2016 than from 2001-2008 ($5.3 trillion vs. $4.8 trillion).
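The eight-year totals implied by those averages can be verified with a few lines of arithmetic. A minimal sketch in Python, using only the figures quoted above:

```python
# Sanity check of the cited eight-year totals, using the per-year averages
# from the text (inflation-adjusted billions of dollars per year).
bush_avg_b = 606   # average annual military spending, FY2001-2008
obama_avg_b = 668  # average annual military spending, FY2009-2016
years = 8

bush_total_t = bush_avg_b * years / 1000    # convert to trillions
obama_total_t = obama_avg_b * years / 1000

print(f"2001-2008 total: ${bush_total_t:.2f} trillion")   # ~4.85
print(f"2009-2016 total: ${obama_total_t:.2f} trillion")  # ~5.34
print(f"difference: ~${(obama_total_t - bush_total_t) * 1000:.0f} billion")
```

The difference comes out to roughly $500 billion, matching the text's claim.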

So the most important question, it seems, is this: why, in the estimation of Cruz (and Rubio, Jeb Bush, and every other candidate who wants to spend more on the military), is more spending leading to less capability? A smaller Army. A smaller Navy. Fewer Air Force planes.

Do fewer troops and ships and planes imply that the military is dramatically degraded? Not necessarily. The troops are better trained than a generation ago. The ships are more capable. The weapons are more accurate.

We should not assume that less military spending – if spending did decline – would necessarily lead to a less capable military. Meanwhile, there are many possible explanations for why militaries degrade over time – for example, fighting foolish, unnecessary wars. Far fewer American troops are being killed and wounded in Iraq and Afghanistan now than in 2008.

I conclude at TNI:

it isn’t obvious that a more costly force is needed to preserve U.S. security and protect vital U.S. interests. That we are spending less as a share of GDP than at some points in U.S. history does not necessarily mean that we should spend more. It could also be true that we are spending less and getting more, or that we could safely get by with less. Once we get beyond the confusion over different ways to measure our spending, let’s examine what the U.S. military truly must do in order to keep Americans safe, and how much that will cost.

Read the whole thing here.

Did our message finally get through? (See “How ADA-for-the-Web Regulations Menace Online Freedom,” 2013). Or that of other commentators like Eric Goldman, who warned (of a related court case) that “all hell will break loose” if the law defines websites as public accommodations and makes them adopt “accessibility”? At any rate, the U.S. Department of Justice, after years of declaring that it was getting ready any day now to label your website and most others you encounter every day as out of compliance with the ADA, has suddenly turned around and done this:

In an astonishing move, the Department of Justice (DOJ) announced that it will not issue any regulations for public accommodations websites until fiscal year 2018 — eight years after it started the rulemaking process with an Advanced Notice of Proposed Rulemaking (ANPRM).

Yes, eight years is a very long time for a rulemaking, especially one pursuing issues that have been in play for many years (that link discusses testimony I gave in 2000). And predictably, some disabled interest-group advocates are already charging that the latest delay is “outrageous” and shows “indifference.” More likely, it shows that even an administration that has launched many audacious and super-costly initiatives in regulation has figured out that this one is so audacious and super-costly that it should be – well, not dropped, but left as a problem for a successor administration.

Besides, as so often happens, for regulated parties the issue is (to borrow a phrase) not freedom from obligation, but freedom from specification as to what that obligation might be. Court decisions, which for years ran mostly against ADA advocates’ “public accommodations” claim, now point confusingly in both directions. And in the meantime both private litigants and DoJ itself continue to sue online providers and fasten on them new settlements and decrees, as when Amazon lately agreed to caption more videos for the deaf; Harvard and MIT, meanwhile, were still being sued for the audacity of having offered uncaptioned online courses to the public. Minh Vu and Kristina Launey of Seyfarth Shaw:

…since issuing that [2010] ANPRM, DOJ’s enforcement attorneys have investigated numerous [entities claimed to be] public accommodations, pressuring them to make their websites accessible. DOJ even intervened in recent lawsuits (e.g., here, here, and here) taking the position that the obligation to have an accessible website has existed all this time in the absence of any new regulations.

The next administration – or better yet Congress – should summon the courage to give a firm and final No.

In recent years, politicians set impossibly high mandates for the amounts of ethanol motorists must buy in 2022 while also setting impossibly high standards for the fuel economy of cars sold in 2025. To accomplish these conflicting goals, motorists are now given tax credits to drive heavily subsidized electric cars, even as they will supposedly be required to buy more and more ethanol-laced fuel each year.

Why have such blatantly contradictory laws received so little criticism, if not outrage? Probably because ethanol mandates and electric car subsidies are lucrative sources of federal grants, loans, subsidies and tax credits for “alternative fuels” and electric cars.  Those on the receiving end lobby hard to keep the gravy train rolling while those paying the bills lack the same motivation to become informed, or to organize and lobby. 

With farmers, ethanol producers and oil companies all sharing the bounty, using subsidies and mandates to pour ever-increasing amounts of ethanol into motorists’ gas tanks has been a win-win deal for politicians and the interest groups that support them and a lose-lose deal for consumers and taxpayers.

The political advantage of advocating contradictory future mandates is that the goals usually prove ridiculous only after their promoters are out of office. This is a bipartisan affliction. In his 2007 State of the Union Address, for example, President Bush called for mandating 35 billion gallons of biofuels by 2017, an incredible target equal to one-fourth of all gasoline consumed in the United States in 2006. Not to be outdone, “President Obama said during the presidential campaign that he favored a 60 billion gallon-a-year target.”

The Energy Independence and Security Act of 2007 (EISA) did not go quite as far as Bush or Obama, at least in the short run. It required 15 billion gallons of corn-based ethanol by 2015 (about 2 billion more than were actually sold), but 36 billion gallons of all biofuels by 2022 (which would be more than double last year’s sales). The 2007 energy law also raised corporate average fuel economy (CAFE) standards for new cars to 35 miles per gallon by 2020, which President Obama in 2012 ostensibly raised to 54.5 mpg by 2025 (a comically precise guess, since requirements are based on the size of vehicles we buy).

The 36-billion-gallon biofuel mandate for 2022 is the mandate Iowa Governor Terry Branstad (and Donald Trump) now vigorously defend against the rather gutsy opposition of Sen. Ted Cruz. But it is impossible to defend the impossible: ethanol consumption can’t possibly double while fuel consumption falls.

From 2004 to 2013, cars and light trucks consumed 11% less fuel. The Energy Information Administration likewise predicts that fuel consumption of light vehicles will fall by another 10.1% from 2015 to 2022. So long as ethanol is capped at 10% of each gallon (already a higher blend limit than in Canada or Europe), ethanol use must fall as we use less gasoline, rather than rise as the mandates require. If we ever buy many electric cars or switch from corn to cellulosic sources of ethanol, as other impossible mandates pretend, then corn-based ethanol use must fall even faster.
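The blend-wall arithmetic is easy to make explicit. In the sketch below, the 10.1% decline is the EIA projection cited above; the baseline gasoline-consumption figure is a round illustrative assumption, not a number from the text:

```python
# Blend-wall sketch. The 10.1% decline is the EIA projection cited in the
# text; the 140-billion-gallon baseline is an illustrative assumption.
baseline_gasoline_bgal = 140.0   # assumed annual light-vehicle fuel use, billion gallons
eia_decline = 0.101              # projected drop in light-vehicle fuel use, 2015-2022
blend_cap = 0.10                 # ethanol limited to 10% of each gallon

gasoline_2022 = baseline_gasoline_bgal * (1 - eia_decline)
max_ethanol_2022 = gasoline_2022 * blend_cap
print(f"ceiling on ethanol use in 2022: {max_ethanol_2022:.1f} billion gallons")
```

Whatever baseline one assumes, a 10% cap on a shrinking pool of gasoline yields a ceiling far below the 36 billion gallons the 2022 mandate requires.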

If raising ethanol’s mandated share above 10% is any politician’s secret plan, nobody dares admit it. Most pre-2007 cars can’t handle more than 10 percent ethanol without damage, and drivers of older cars often lack the income or wealth to buy a new one. Since ethanol contains about a third less energy than gasoline, adding more ethanol would also make it even harder for car companies to comply with Obama’s wildly ambitious fuel economy standards (which, if they work, must also reduce ethanol use).

The 2007 law also mandated an astonishing 16 billion gallons of nonexistent “cellulosic” ethanol by 2022, made from corn husks or whatever. We were already supposed to be using a billion gallons of this marvelous snake oil by 2013. Despite lavish taxpayer subsidies, however, production of cellulosic biofuel was running at only about 7.8 million gallons a month as of April 2015 (about 94 million a year). The Environmental Protection Agency (EPA) mandate issued June 10, 2015 calls for 230 million gallons in 2016, which is more fantasy.

It doesn’t help that the Spanish firm Abengoa – which received $229 million from U.S. taxpayers to produce just 1.7 million gallons of ethanol – is trying to sell its plant in Kansas to avoid the bankruptcy fate of cellulosic producer KiOR. It also doesn’t help that a $500,000 federally funded study finds that biofuels made with corn residue release 7% more greenhouse gases than gasoline.

The contradictory, fantastic and often scandalous history of ethanol mandates illustrates the increasing absurdity of mandates from Congress and the EPA. 

The 2007 biofuel mandate was not just bad policy.  It was and remains an impossible, bizarre policy.

The Obama administration has been easing restrictions on travel, exports, and export financing. Commerce Secretary Penny Pritzker spoke of “building a more open and mutually beneficial relationship.”

However, the administration expressed concern over Havana’s dismal human rights practices. Despite the warm reception given Pope Francis last fall, the Castro regime has been on the attack against Cubans of faith.

In a new report the group Christian Solidarity Worldwide warned of “an unprecedented crackdown on churches across the denominational spectrum,” which has “fueled a spike in reported violations of freedom of religion or belief.” There were 220 specific violations of religious liberties in 2014, but 2,300 last year, many of which “involved entire churches or, in the cases of arrests, dozens of victims.”

Even in the best of times the Castros have never been friends of faith in anything other than themselves. The State Department’s 2014 report on religious liberty noted that “the government harassed outspoken religious leaders and their followers, including reports of beating, threats, detentions, and restrictions on travel. Religious leaders reported the government tightened controls on financial resources.”

Last year the U.S. Commission on International Religious Freedom was similarly critical. The Commission explained: “Serious religious freedom violations continue in Cuba, despite improvements for government-approved religious groups.” Never mind the papal visit, “the government continues to detain and harass religious leaders and laity, interfere in religious groups’ internal affairs, and prevent democracy and human rights activists from participating in religious activities.”

Now CSW has issued its own report. Last year’s increase in persecution “was largely due to the government declaring 2,000 Assemblies of God (AoG) churches illegal, ordering the closure or demolition of 100 AoG churches in three provinces, and expropriating the properties of a number of other denominations, including the Methodist and Baptist Conventions.”

This wide-ranging campaign was led by the Office of Religious Affairs. Noted CSW: “In 2015, the ORA continued to deny authorization for a number of religious activities and in cooperation with other government agencies, issued fines and threats of confiscation to dozens of churches and religious organizations.”

Through the ORA the Communist Party exercises control over religious activities. Indeed, reported CSW, the Office “exists solely to monitor, hinder and restrict the activities of religious groups.”

The regime also has increasingly targeted church leaders and congregants, jailing a church leader for the first time in years. In early January two churches were destroyed, church members were arrested, and three church leaders were held incommunicado. One of the government’s more odious practices, according to CSW, has been to threaten churches with closure if they “do not comply with government demands to expel and shun specific individuals.”

The regime’s destructive activities have been justified as enforcement of zoning laws. But in practice such measures are a subterfuge to shut down churches.

Other legislation threatens house churches. Though it has not been consistently enforced in the past, “church leaders have repeatedly expressed concern at its potential to close down a large percentage of house churches.”

CSW concluded that the ongoing crackdown was an attempt to limit calls for social reform which would complement ongoing, though limited, economic changes. Detentions initially were concentrated on “Cubans considered by the government to be political dissidents,” including a group of Catholic women called the Ladies in White. The regime crackdown later “expanded to include other individuals associated with independent civil society, including human rights and democracy activists.”

The Obama administration was right to engage Cuba. After more than 50 years, the embargo serves no useful purpose.

However, even lifting all economic restrictions won’t turn Cuba into a democracy. Only sustained pressure from within and without Cuba is likely to force the Castro regime to yield control to the Cuban people.

As I wrote in Forbes: “Americans should forthrightly encourage freedom in Cuba. Religious believers should be particularly vocal in supporting people seeking to live out their faith under Communist oppression. Some day autocracy will give way to liberty even in Cuba.”

In the past two decades, much scientific research has been conducted to examine the uniqueness (or non-uniqueness) of Earth’s current climate in an effort to discern whether or not rising atmospheric CO2 concentrations are having any measurable impact. Recent work by Thapa et al. (2015) adds to the growing list of such studies with respect to temperature.

According to this team of Nepalese and Indian researchers, the number of meteorological stations in Nepal are few (particularly in the mountain regions) and sparsely distributed across the country, making it “difficult to estimate the rate and geographic extent of recent warming” and to place it within a broader historical context. Thus, in an attempt to address this significant data void, Thapa et al. set out “to further extend the existing climate records of the region.”

The fruits of their labors are shown in the figure below, which presents a nearly four-century-long (AD 1640-2012) reconstruction of spring (Mar-May) temperatures based on tree-ring width chronologies acquired in the far-western Nepalese Himalaya. This temperature reconstruction identifies several periods of warming and cooling relative to its long-term mean (1897-2012). Of particular interest are the red and blue lines shown on the figure, which mark the peak warmth experienced during the past century and the temperature anomaly expressing the current warmth, respectively. As indicated by the red line, the warmest interval of the 20th century is not unique, having been eclipsed four times previously (see the shaded red circles) in the 373-year record – once in the 17th century, twice in the 18th century, and once in the 19th century. Furthermore, the blue line reveals that current temperatures are uncharacteristically cold: only twice in the past century have temperatures been colder than they are now!

Figure 1. Reconstructed spring (March-May) temperature anomalies of the far western Nepal Himalaya, filtered using a smoothing spline with a 50% frequency cutoff of 10 years. The red line indicates the peak temperature anomaly of the past century, the blue line indicates the current temperature anomaly, the shaded red circles indicate periods in which temperatures were warmer than the peak warmth of the past century, and the shaded blue circles indicate periods during the past century that were colder than present. Adapted from Thapa et al. (2015).

In light of the above facts, it is clear there is nothing unusual, unnatural or unprecedented about modern spring temperatures in the Nepalese Himalaya. If rising concentrations of atmospheric CO2 are having any impact at all, that impact is certainly not manifest in this record.


Reference

Thapa, U.K., Shah, S.K., Gaire, N.P. and Bhuju, D.R. 2015. Spring temperatures in the far-western Nepal Himalaya since AD 1640 reconstructed from Picea smithiana tree-ring widths. Climate Dynamics 45: 2069-2081.


An editorial in today’s New York Times calls for a financial transactions tax – a charge of a few tenths of a percent on the market value of every trade of a stock, bond, or derivative. My Working Papers column two years ago described the pitfalls of such a tax. While tax rates in the range of tenths of a percent sound small, they would have large effects on trading costs. Bid-ask spreads are now 1 cent for large-cap stocks; a 0.10 percent tax would add 5 cents to the spread for a $50 stock.
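The spread arithmetic works out as follows; a minimal sketch using the figures in the paragraph above:

```python
# Effect of a transactions tax on effective trading costs, using the
# figures cited above: a 0.10% tax, a $50 stock, a 1-cent bid-ask spread.
price = 50.00
tax_rate = 0.0010   # 0.10 percent
spread = 0.01       # current bid-ask spread for large-cap stocks, dollars

tax_per_share = price * tax_rate
print(f"tax per share: ${tax_per_share:.2f}")              # $0.05
print(f"effective spread: ${spread + tax_per_share:.2f}")  # 1 cent becomes 6 cents
```

A "small" tax thus multiplies the effective cost of a round trip in a liquid large-cap stock several times over.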

The alleged purpose of such a tax is to reduce the arms race among high-frequency traders, who exploit differences in the timing of bids and offers across exchanges at the level of thousandths of a second to engage in price arbitrage. In the Fall 2015 issue I review a paper demonstrating that this arms race is the result of stock exchanges’ use of a “continuous limit order book” design (that is, orders are taken continuously and executed whenever the asset reaches an order’s stipulated price). The authors use actual trading data to show that the prices of two securities that track the S&P 500 are perfectly correlated at hourly and minute intervals, but at 10-millisecond and 1-millisecond intervals the correlation breaks down, providing mechanical arbitrage opportunities even in a perfectly symmetrical information environment. In a “frequent batch” auction design (where trades are executed, by auction, at stipulated times that can be as little as a fraction of a second apart), the advantage of incremental speed improvements disappears. To end the arbitrage arms race, the authors propose that exchanges switch to batch auctions conducted every tenth of a second. No need for a tax.
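The mechanics of a frequent batch auction can be sketched in a few lines. This toy Python version (the function name and the midpoint clearing rule are illustrative choices of mine, not the paper's design) shows why a speed edge loses its value: every order that arrives within the same interval clears at one uniform price.

```python
# Toy sketch of a frequent batch auction. Orders accumulate during a short
# interval and are then cleared together at a single price, so arriving a
# millisecond earlier than a rival confers no advantage within the batch.
def batch_auction(bids, asks):
    """Clear crossed orders at a single uniform price.

    bids: list of (price, qty) buy orders collected during the interval.
    asks: list of (price, qty) sell orders collected during the interval.
    Returns (clearing_price, volume), or (None, 0) if the market doesn't cross.
    """
    bids = sorted(bids, key=lambda o: -o[0])   # highest bid first
    asks = sorted(asks, key=lambda o: o[0])    # lowest ask first
    volume, clearing = 0, None
    bi = ai = 0          # index of current bid/ask
    bq = aq = 0          # quantity already filled on the current bid/ask
    while bi < len(bids) and ai < len(asks):
        bp, bqty = bids[bi]
        ap, aqty = asks[ai]
        if bp < ap:
            break                        # remaining orders don't cross
        traded = min(bqty - bq, aqty - aq)
        volume += traded
        clearing = (bp + ap) / 2         # midpoint of the marginal pair (toy rule)
        bq += traded
        aq += traded
        if bq == bqty:
            bi, bq = bi + 1, 0
        if aq == aqty:
            ai, aq = ai + 1, 0
    return clearing, volume

print(batch_auction([(50.02, 100), (50.00, 50)], [(49.99, 80), (50.01, 100)]))
```

Real proposals use a uniform market-clearing price computed from the full supply and demand curves; the point of the sketch is only the batching itself, which removes the payoff to incremental speed.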

“Billions spent, but fewer people are using public transportation,” declares the Los Angeles Times. The headline might have been more accurate if it read, “Billions spent, so therefore fewer are using public transit,” as the billions were spent on the wrong things.

The L.A. Times article focuses on Los Angeles’ Metropolitan Transportation Authority (Metro), though the same story could be written for many other cities. In Los Angeles, ridership peaked in 1985, declined until 1995, grew again, and is now falling again. Unmentioned in the story: 1985 is just before Los Angeles transit shifted emphasis from providing low-cost bus service to building expensive rail lines, while 1995 is just before an NAACP lawsuit led to a court order requiring the agency, for ten years, to restore bus service lost since 1985.

The situation is actually worse than the numbers shown in the article, which are “unlinked trips.” If you take a bus, then transfer to another bus or train, you’ve taken two unlinked trips. Before building rail, more people could get to their destinations in one bus trip; after building rail, many bus lines were rerouted to funnel people to the rail lines. According to California transit expert Tom Rubin, survey data indicate that there were an average of 1.66 unlinked trips per trip in 1985, while today the average is closer to 2.20. That means today’s unlinked trip numbers must be reduced by nearly 25 percent to fairly compare them with 1985 numbers.
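The adjustment implied by those transfer rates is straightforward arithmetic; a quick sketch using the two figures cited:

```python
# Converting unlinked-trip counts to a comparable linked-trip basis,
# using the transfer rates cited in the text (from Tom Rubin's survey data).
unlinked_per_linked_1985 = 1.66
unlinked_per_linked_now = 2.20

adjustment = unlinked_per_linked_1985 / unlinked_per_linked_now
reduction_pct = (1 - adjustment) * 100
print(f"scale today's unlinked trips by {adjustment:.3f} "
      f"(about a {reduction_pct:.0f} percent reduction) "
      f"to compare fairly with 1985")
```

The ratio works out to about 0.755, i.e. the "nearly 25 percent" reduction mentioned above.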

Transit ridership is very sensitive to transit vehicle revenue miles. Metro’s predecessor, the Southern California Rapid Transit District, ran buses for 92.6 million revenue miles in 1985. By 1995, to help pay for rail cost overruns, this had fallen to 78.9 million. Thanks to the court order in the NAACP case, this climbed back up to 92.9 million in 2006. But after the court order lapsed, it declined to 75.7 million in 2014. The riders gained on the multi-billion-dollar rail lines don’t come close to making up for this loss in bus service.

The transit agency offers all kinds of excuses for its problems. Just wait until it finishes a “complete buildout” of the rail system, says general manager Phil Washington, a process (the Times observes) that could take decades. In other words, don’t criticize us until we have spent many more billions of your dollars. Besides, agency officials say wistfully, just wait until traffic congestion worsens, gas prices rise, everyone is living in transit-oriented developments, and transit vehicles are hauled by sparkly unicorns.

A more realistic assessment is provided by Brian Taylor, the director of UCLA’s Institute of Transportation Studies, who is quoted by the L.A. Times saying, “Lots of resources are being put into a few high-profile lines that often carry a smaller number of riders compared to bus routes.”

Los Angeles ridership trends are not unusual: transit agencies building expensive rail infrastructure often can’t afford to keep running the buses that carry the bulk of their riders, so ridership declines.

  • Ridership in Houston peaked at 102.5 million trips in 2006, falling to 85.9 million in 2014 thanks to cuts in bus service necessitated by the high cost of light rail;
  • Despite huge job growth, Washington ridership peaked at 494.2 million in 2009 and has since fallen to 470.4 million due at least in part to Metro’s inability to maintain the rail lines;
  • Atlanta ridership peaked at 170.0 million trips in 2000 and has since fallen nearly 20 percent to 137.5 million, while per capita ridership has fallen by two-thirds since 1985;
  • San Francisco Bay Area ridership reached 490.9 million in 1982, but was only 457.0 million in 2014 as BART expansions forced cutbacks in bus service, a one-third decline in per capita ridership;
  • Pittsburgh transit regularly carried more than 85 million riders per year in the 1980s but is now down to some 65 million;
  • Austin transit carried 38 million riders in 2000, but after opening a rail line in 2010, ridership is now down to 34 million.

Even where ridership is increasing, it’s decreasing. After building two light-rail lines, transit ridership in the Twin Cities has grown by 50 percent since 1990. However, bus ridership is declining and driving has grown faster than transit.

Transit in some cities was hurt by the 2008 financial crash. But in most cases, declines in ridership parallel declines in service. If transit agencies reduce bus service to pay for the high cost of the rail lines they want to build, transit riders and ridership will be hurt.

Whatever the service levels, transit just isn’t that relevant anymore to anyone. As I’ve pointed out before, more than 95 percent of American workers live in a household with at least one car, and of the 4.5 percent who don’t, less than half take transit to work, suggesting that transit isn’t even relevant to most people who don’t have cars. This will only get worse, of course, as self-driving cars change the urban landscape.

“It’s not the dream of every bus rider to arrive in a bus that was on time, air conditioned and clean, where a seat was available,” the L.A. Times quotes USC civil engineering professor James Moore as saying. “It’s the dream of every bus rider to own a car. And as soon as they can afford one, that’s the first purchase they’ll make.”

Cities that invest in expensive transit infrastructure are ignoring the reality that, long before that infrastructure is worn out, self-driving cars will replace most transit. The short-run issue is that transit agencies that spend billions on rail transit or bus-rapid transit with dedicated lanes are doing a disservice to their customers. Instead, they should focus on increasing bus revenue miles in corridors where they will do the most good.

Politicians pander. It’s what they do. But Christians seem especially susceptible to those claiming to be their spiritual brethren. It would be better if people of faith focused on candidates’ practical ability to perform the duties of what remains a secular office.

With the Iowa caucuses drawing near, it seems like every Republican tramping through the snow claims to be a Bible-believing, God-fearing, Jesus-loving Christian. A gaggle of church leaders are promoting their favorite presidential wannabe.

It’s a fruitless exercise. It’s rarely easy to judge whether a particular candidate’s faith claims are true. God told the prophet Samuel: “Man looks at the outward appearance, but the Lord looks at the heart.” (1 Samuel 16:7)

For instance, Ted Cruz appears to have done the best this year in presenting himself as a committed Christian. His religious tale, including the conversion story of his pastor father, is contained in an 18-minute documentary. By all accounts, Cruz is doing well among the most theologically conservative Republicans in Iowa.

Yet McKay Coppins of BuzzFeed reported on doubts about Cruz’s faithfulness. Moreover, in late 2014, Cruz used a conference on persecuted Christians from the Middle East, among the most vulnerable people on the planet, as a campaign prop.

Cruz also gave less than one percent of his income to charity between 2006 and 2010. Opposing candidate Mike Huckabee observed: “It’s hard to say God is first in your life if he’s last in your budget.”

Donald Trump has been doing his best to pander without a carefully crafted story. Running casinos with strip clubs is unusual “fruit” from a Christian walk. His style of campaigning doesn’t exactly advance the Christian faith.

How about the rest of the GOP candidates? What do they really believe about God? Do they have a personal relationship with Jesus?

The best response is: who cares? One’s theological views just don’t tell much about a person’s competence to perform a civil office. Voters should care most about how a candidate would confront Washington’s virtual fiscal insolvency, end America’s constant warring in the Middle East, address dependency as well as poverty among the poor, and deal with other serious policy issues.

Indeed, by the most public measures of behavior, President Barack Obama appears to be a more faithful Christian than Donald Trump. Yet many political activists who loudly assert their Christian faith are trending toward the Donald. Liberty University President Jerry Falwell Jr. even gave an effusive introduction to Trump, going so far as to compare Trump to Jesus in expressing unpopular opinions.

It actually would have been more reassuring had Liberty University invited Trump to speak and The Donald done so, with neither pandering to the other. Trump ain’t my cup of tea, but the argument for his candidacy is entirely secular. Nevertheless, Christians should vote for him if they believe him to be the best candidate—and not because they believe him to be a faithful Christian like themselves.

As I wrote for American Spectator: “After years of being manipulated by ambitious politicos, believers should check their credulity at the polling place door. Christians shouldn’t cast their ballots based on their perceptions of the contenders’ religious faith. Martin Luther was right when he declared that he preferred to be governed by a smart Turk than a stupid Christian.”

Goodness and faithfulness are important, but no substitute for competence. Believers and nonbelievers alike should choose the best candidate, not the best Christian, for president. 

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

The New York Times ran an op-ed last week extolling the virtues of a carbon tax by trying to pooh-pooh the idea that a tax on carbon emissions (which are produced by burning fossil fuels like coal, oil, and natural gas during the production of 80% of our nation’s energy supply) would have a negative impact on our economy. The Times’ editors attempt to do this by running through a couple of examples where, they claim, the imposition of a carbon tax has produced economic benefits (or at least been roughly neutral).

Economist Dr. Robert Murphy takes the Times to task on this in his post “The NYT Gets It Wrong on Carbon Tax.”

Murphy is a senior economist at the Institute for Energy Research, a research assistant professor at Texas Tech, and the lead author of our Working Paper (soon to be a Cato Policy Analysis) examining the costs of a carbon tax.

In his response to the Times, Murphy points out that the Times’ editors’ favorite example of a carbon tax done well—British Columbia—actually serves as a counter-example when examined a bit more carefully. Not only did British Columbia’s economy suffer after the carbon tax was established, but the revenue neutrality of the B.C. tax is also not a real-world possibility in the U.S.

Murphy writes:

It’s worth pointing out that these dire economic impacts on the B.C. economy occurred even though they did a surprisingly good job of implementing offsetting tax cuts. Were a carbon tax to be introduced in the United States, the politics would almost certainly result in a large net tax hike.

Ultimately, Murphy concludes his piece with:

The American public is being sold a bill of goods regarding a carbon tax. On the one hand, proponents tell progressive citizens about all the “green” goodies that can be funded with the trillions in revenue that such a tax will bring in. On the other hand, supporters assure conservative citizens not to worry, that the tax will be revenue neutral and will allow for huge cuts in the corporate income tax rate. As Dr. Evil would say, “Riiight.”

Check out Murphy’s full piece for the details of what’s wrong with the Times’ op-ed, and our “The Case Against a Carbon Tax” Working Paper for what’s wrong with the concept of a carbon tax at large.

One of the many (bad) ideas behind a carbon tax is that it would put more money in the federal government’s hands to do with as it wishes, including funding more climate change science to justify the very tax it wants to impose.

Dr. Roy Spencer frets over this in his recent blog post examining the observational science behind last year’s record high global temperatures. After reviewing the strength of the claims about 2015’s temperature (including those as measured by satellites), Dr. Spencer’s response is rather ho-hum: yes, the surface temperature data compilations do have 2015 as the warmest year in their record (thanks, in part, to a big El Niño); no, the same is not true for satellite observations, which show the year as ranking 3rd or 4th; and take everything with a grain of salt, because all the data compilations have “issues.”

What really gets Roy fired up is the government’s behavior in all of this. Roy writes:

And this brings up the elephant in the room that I have a difficult time ignoring

By now it has become a truism that government agencies will prefer whichever dataset supports the government’s desired policies. You might think that government agencies are only out to report the truth, but if that’s the case, why are these agencies run by political appointees?

Roy continues:

There indeed is a climate change problem to study…but I don’t think we know with any certainty how much is natural versus manmade. There is no way to know, because there is (contrary to the IPCC’s claims) no fingerprint of human versus natural warming. Even natural warming originating over the ocean will cause faster warming over land than over ocean, just as we already observe.

But since the government has framed virtually all of the research programs in terms of human-caused climate change, that’s what the funded scientists will dutifully report it to be, in terms of supposed causation.

And until the culture in the government funding agencies changes, I don’t see a new way of doing business materializing. It might require congress to direct the funding agencies to spend at least a small portion of their budgets to look for evidence of natural causes of climate change.

Because scientists, I have learned, will tend to find whatever they are paid to find in terms of causation…which is sometimes very difficult to pin down in science.

Be sure to check out Roy’s full post “On that Record Warmest 2015 Claim” for a more complete treatment of his concerns.

And, finally, in support of Roy’s comments about the inherent uncertainties in weather observing systems comes an amusing anecdote: a report on how snowfall during the weekend’s blizzard was measured at NOAA’s official weather station in our nation’s capital—Reagan National Airport.

Turns out it was snowing so hard at the height of the storm that the weather observers at the airport couldn’t find the official “snow board” on which snowfall was supposed to be measured. So they improvised—and in doing so apparently under-reported the snow total during the storm (to the great dismay of many snow nerds). We are not making this up. From the Capital Weather Gang’s investigation:

It’s not that 17.8 inches of snow wasn’t enough.

But the number that will go down in the history books as Washington’s official total — recorded at Reagan National Airport — is downright paltry compared with some other spots in the region, raising the question: Why the disparity?

The reason, it turns out, may be partly due to the improvised technique used by a small team of weather observers at the airport who lost their snow-measuring device to the elements midway through the blizzard. It was buried by the very snow it was supposed to measure.

Couple the problem measuring snow at Reagan National Airport with the problem measuring temperature there that we identified last summer (see here and here) and you have the makings of a Laurel and Hardy routine on how weather observations of even fairly straightforward variables are collected. And to think, these types of somewhat major problems were identified as occurring at what would have to be considered among the best observing stations in the world.

Now you know why the data must be “adjusted.” Not a pretty picture.
