Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Saying “I do” and calling someone your spouse who legally isn’t shouldn’t be a crime, but it can be in Utah. While polygamy—being lawfully married to multiple people—isn’t legal in any state, Utah, owing to its unique history, has some of the strictest anti-bigamy laws in the country. Which probably makes starring in a reality TV show based on your plural marriage not the best idea for Utahns.

Nevertheless, TLC’s Sister Wives revolves around Kody Brown, his four partners (Meri, Janelle, Christine, and Robyn), and their 17 children. While Kody is legally married to only one of the women, he claims to be in a “spiritual union” with each of the others and describes all four as his wives—and that puts the Browns on the wrong side of Utah’s bigamy law. The day after the show premiered in 2010, local authorities announced they were investigating the family.

Because the potential sentences are quite severe (five years for each of the women, and up to 20 years for Kody), the Browns took preemptive action, filing a federal lawsuit challenging the constitutionality of Utah’s law. The district court agreed. In granting the Browns’ motion for summary judgment, the court held that because the law criminalizes “spiritual cohabitation” (arrangements in which the participants claim to be part of multiple religious marriages but make no attempt to obtain state recognition), it violated the First and Fourteenth Amendments and was “facially unconstitutional.” The state has appealed that ruling.

Together with First Amendment scholar Eugene Volokh, Cato has filed an amicus brief urging the U.S. Court of Appeals for the Tenth Circuit to affirm the district court. Whether or not the Utah law violates the Browns’ religious liberty, it’s a clear affront to the First Amendment’s protection of free speech.

In Utah, it’s legal to have an “open” marriage, and any number of unmarried consenting adults can live together, have sex with each other, pool their finances, and describe themselves as being in a long-term polyamorous relationship. They just can’t use the “M” word. Kody Brown could have lived with all four women, legally. He could have lived with his “real” wife while carrying on long-term affairs with the other women, also legally. It’s only because the Browns took the symbolic step of solemnizing their relationships with religious ceremonies, and held themselves out to the world as a married quintet (even if only in a strictly spiritual sense), that they face prosecution.

In the Browns’ case, Utah isn’t really criminalizing bigamy—say, procuring multiple marriage licenses from county clerks while already being legally married—it’s criminalizing speech. The Supreme Court has made clear that there are only a handful of exceptions to the First Amendment’s protection. Only the most heinous and dangerous kinds of speech—things like child pornography and inciting violence—can be criminalized. Telling people you’re married, even if it isn’t legally true, isn’t the kind of harmful speech any government has the right to censor, let alone criminalize.

In fact, it isn’t harmful at all. The Browns and the TV show that brought them to national (and prosecutorial) attention aren’t hurting anyone—not themselves, not their children, and certainly not the public—by merely claiming to be spiritually wed. Whether plural marriages should be granted legal recognition has nothing to do with this case, which involves speech—not conduct—that the state doesn’t like.

The Tenth Circuit will hear oral argument in Brown v. Buhman this fall.

China is implementing its “toughest-ever” mobile phone real-name registration system, according to the Want China Times. The effort seeks to get all remaining unregistered mobile phones associated with the true identities of their owners in the records of telecommunications firms. Those who do not register their phones will soon see their telecommunications restricted.

This policy will have wonderful security benefits. It will make identity fraud, anonymous communication, and various conspiracies much easier to detect and punish—including conspiracies to dissent from government policy.

The United States is a very different place from China—on the same tracking-and-control continuum. We have no official policy of registering phones to their owners, but in practice phone companies collect our Social Security numbers when we initiate service, they know our home addresses, and they have our credit card numbers. All of these are functional unique identifiers, and there is some evidence that the government can readily access data held by our telecommunications firms.

We have no national ID that would be used for phone registration, of course. The Department of Homeland Security says it will begin denying travel rights to people from states that do not comply with the REAL ID Act beginning in 2016.

In today’s Manchester Union-Leader, I explain the eerie resemblance that the health care plans advanced by presidential candidates Gov. Scott Walker (R-WI) and Sen. Marco Rubio (R-FL) bear to ObamaCare:

The centerpiece of both “replace” plans is a refundable tax credit for health insurance. Yet such tax credits already exist, in Obamacare. Also like Obamacare, the Walker/Rubio tax credits would allow Washington to decide how much coverage you purchase, penalize you if you don’t buy that government-defined plan, and conceal massive redistribution of income under the rubric of tax cuts…

How would Walker and Rubio pay for their new spending? Would they keep Obamacare’s tax increases? Raise taxes elsewhere? Would they finance new health care spending by cutting existing health care programs? If so, chalk up yet another way their plans would resemble Obamacare.

I also provide an alternative for reformers who actually want better, more affordable, more secure health care.

Conservatives can offer a better “replace” plan that is politically feasible by expanding a bedrock conservative initiative: health savings accounts, or HSAs, which have already enabled 14.5 million Americans to save more than $28.4 billion for their medical expenses tax-free.

Expanding HSAs would give workers a $9 trillion effective tax cut, without cutting spending or increasing the deficit, and would drastically reduce government control over Americans’ health decisions. Most important, “large” HSAs would spur innovations that make health care better, cheaper, and more secure — particularly for the most vulnerable.

Conservatives need to get this right, lest they repeat the same mistake they made in 1993-94.

For decades, prominent conservatives advocated an individual mandate. The left then picked up the idea and gave us Obamacare. Before they once again fall into the same trap, conservatives should drop any support for the implicit mandate of health-insurance tax credits. Expanding HSAs is more compassionate and provides a direct route toward freedom and better health care.

For more on Large HSAs, see here, here, and here.

As the Trans-Pacific Partnership negotiations enter their final stage, one issue remaining to be resolved concerns rules of origin for automobiles.  Rules of origin determine how much of a product needs to be made within the free trade area in order for it to receive duty-free treatment.  This is a tricky issue for automobiles because automakers rely heavily on global value chains where different parts are made in different countries.
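To make the mechanics concrete, here is a minimal sketch of a regional value content (RVC) calculation, the kind of test automotive rules of origin typically impose. The 62.5 percent threshold below mirrors NAFTA’s net-cost rule for passenger vehicles; the cost figures are invented for illustration.

```python
# Minimal sketch of a regional value content (RVC) test of the kind used
# in automotive rules of origin. The 62.5% threshold mirrors NAFTA's
# net-cost rule for passenger vehicles; the cost figures are hypothetical.

def regional_value_content(net_cost, non_originating_materials):
    """RVC under the net-cost method: share of value originating in the region."""
    return (net_cost - non_originating_materials) / net_cost * 100

net_cost = 20_000        # total net cost of the vehicle (USD)
foreign_parts = 8_500    # value of parts sourced outside the free trade area

rvc = regional_value_content(net_cost, foreign_parts)
print(f"RVC = {rvc:.1f}%")  # 57.5% -> fails a 62.5% threshold, so no duty-free entry
```

The tighter the threshold, the more production must shift inside the bloc to qualify, which is precisely what the negotiating positions below are fighting over.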

Japan wants very liberal rules of origin because its industry’s supply chains include non-TPP countries like Thailand. Canada and Mexico want very strict rules of origin, because their industries benefit from preferential access to the U.S. market through the North American Free Trade Agreement. 

Canada and Mexico’s position in the TPP talks is protectionist. It’s also a consequence of negotiating trade liberalization through regional agreements. Just as industries that benefit from protectionism oppose the reduction of trade barriers, industries that benefit from preferential access through trade agreements oppose the reduction of other trade barriers.

Auto manufacturing investment in Canada and Mexico is driven in part by the fact that Canadian and Mexican content make it easier to import into the United States duty free under NAFTA.  If the United States drops trade barriers with other countries, Canada and Mexico become relatively less attractive places to invest.  Tight rules of origin in the TPP would reinforce Canada and Mexico’s advantage.

If negotiations at the World Trade Organization continue to flounder and regional agreements like the TPP proliferate, rules of origin will become a larger and larger problem for global trade.

Today the Court of Appeals for the DC Circuit issued a ruling in NSA v. Klayman that has almost no practical effect, but is a potent illustration of how excessive secrecy and stringent standing requirements effectively immunize intelligence programs from meaningful, adversarial constitutional review.

Contrary to some breathless headlines, today’s opinion does not “uphold” the NSA’s illicit bulk collection of telephone records—which, thanks to the recent passage of the USA Freedom Act, must end by November in any event. Rather, the court overturned an injunction that only ever applied specifically to the phone records of the plaintiffs. And it did so, not because the judges found the program substantially lawful, but because the plaintiffs could not specifically prove that their telephone records had been swept into the database, even though the ultimate aim of the program was to collect nearly all such records.

Together with other similar thwarted challenges to mass government surveillance—most notably the Supreme Court case Clapper v. Amnesty International—the decision sends the disturbing signal that mass-scale surveillance of millions of innocent people by our intelligence agencies is, for all practical purposes, immune from meaningful constitutional scrutiny. Even when we know about a mass surveillance program, as in the case of NSA’s bulk telephony program, stringent standing rules raise an impossibly high barrier to legal challenges. Perversely, the only people with a realistic chance of challenging such programs in court are actual terrorists whom the government chooses to prosecute. The vast, innocent majority of people affected by bulk surveillance—those with the strongest claim that their rights have been violated—are effectively barred from ever having those rights vindicated in court.

Given the routine refusal of courts to step in to protect our Fourth Amendment rights, it is fortunate that Congress has already acted to bring this intrusive and ineffective program to a halt.

One striking feature of the first debate featuring the top-tier GOP presidential candidates was how many of them described Saudi Arabia and its allies in the Persian Gulf as “friends” of the United States.  And clearly that is a bipartisan attitude.  Obama administration officials routinely refer to Saudi Arabia as a friend and ally, and one need only recall the infamous photo of President Obama bowing to Saudi King Abdullah to confirm Washington’s devotion to the relationship with Riyadh.

It is a spectacularly unwise attitude.  As Cato adjunct scholar Malou Innocent and I document in our new book, Perilous Partners: The Benefits and Pitfalls of America’s Alliances with Authoritarian Regimes, Saudi Arabia is not only an odious, totalitarian power, it has repeatedly undermined America’s security interests.

Saudi Arabia’s domestic behavior alone should probably disqualify the country as a friend of the United States.  Riyadh’s reputation as a chronic abuser of human rights is well deserved. Indeed, even as Americans and other civilized populations justifiably condemned ISIS for its barbaric practice of beheadings, America’s Saudi ally executed 83 people in 2014 by decapitation.

In addition to its awful domestic conduct, Riyadh has consistently worked to undermine America’s security.  As far back as the 1980s, when the United States and Saudi Arabia were supposedly on the same side, helping the Afghan mujahedeen resist the Soviet army of occupation, Saudi officials worked closely with Pakistan’s intelligence agency to direct the bulk of the aid to the most extreme Islamist forces.  Many of them became cadres in a variety of terrorist organizations around the world once the war in Afghanistan ended.

Saudi Arabia’s support for extremists in Afghanistan was consistent with its overall policy.  For decades, the Saudi government has funded the outreach program of the Wahhabi clergy and its fanatical message of hostility to secularism and Western values generally.  Religious schools (madrassas) have sprouted like poisonous ideological mushrooms throughout much of the Muslim world, thanks to Saudi largesse.  That campaign of indoctrination has had an enormous impact on at least the last two generations of Muslim youth.  Given the pervasive program of Saudi-sponsored radicalism, it is no coincidence that 15 of the 19 hijackers on 9/11 were Saudi nationals.

Riyadh also has shown itself to be a disruptive, rather than a stabilizing, force in the Middle East.  Not only has Saudi Arabia conducted military interventions in Bahrain and Yemen, thereby eliminating the possibility of peaceful solutions to the bitter domestic divisions in those countries, the Saudi government helped fund and equip the factions in Syria and Iraq that eventually coalesced to form ISIS.  Although Saudi officials may now realize that they created an out-of-control Frankenstein monster, that realization does not diminish their responsibility for the tragedy.

In light of such a lengthy, dismal track record, one wonders why any sensible American would regard Saudi Arabia as a friend of the United States.  We do not need and should not want such repressive and untrustworthy “friends.”

No, the “waters of the United States” subject to Clean Water Act regulation do not include things like dry land over which water occasionally flows. That’s the conclusion of a federal judge who just put on hold the Environmental Protection Agency’s latest power grab.

The Clean Water Act empowers EPA and the Army Corps of Engineers to regulate the use of private property that affects “navigable waters,” which the Act defines as “the waters of the United States.” In late June, EPA and the Corps finalized a rule defining that term. This was, they said, a boon to those potentially subject to CWA regulation, because “the rule will clarify and simplify implementation of the CWA consistent with its purposes through clearer definitions and increased use of bright-line boundaries…and limit the need for case-specific analysis.”

In reality, it was yet another step in what the Supreme Court called “the immense expansion of federal regulation of land use that has occurred under the Clean Water Act.” The rule extends federal regulation—and prohibitions on land use—to “tributaries,” which it defines as anything that directly or indirectly “contributes flow” to an actually navigable body of water or wetland and “is characterized by the presence of the physical indicators of a bed and banks and an ordinary high water mark.” The point of that legalese is to reach things like “perennial, intermittent, and ephemeral streams”—in other words, areas that aren’t really “waters” at all. The broader the definition, the more land that is subject to CWA permitting requirements and, ultimately, EPA control.

The problem for the federal government is that the Supreme Court rejected basically the same expansive approach in a 2006 case, Rapanos v. United States. In a separate opinion that some believe to be controlling, Justice Kennedy explained that, to be within the reach of the Act, a water must, at the least, “significantly affect the chemical, physical, and biological integrity of other covered waters more readily understood as ‘navigable.’”

Judge Ralph Erickson recognized that the new rule “suffers from the same fatal defect.” It “allows EPA regulation of waters that do not bear any effect on the ‘chemical, physical, and biological integrity’ of any navigable-in-fact water.” That includes “vast numbers of waters that are unlikely to have a nexus to navigable waters within any reasonable understanding of the term.” In other words, EPA is overreaching once again.

This result should not be surprising to the agency; a colleague and I (among many others) helpfully raised the same points in comments on the proposed rule last year.

Judge Erickson also identified other defects. For one, the rule is arbitrary and capricious because it “asserts jurisdiction over waters that are remote and intermittent,” despite there being “no evidence [that] actually points to how these intermittent and remote wetlands” affect the quality of navigable waters. It also “arbitrarily establishes the distances from a navigable water that are subject to regulation,” roping in any damp patch within 4,000 feet—a number that, it appears, was plucked out of thin air.

For the 13 states party to the lawsuit, the rule is now stayed. EPA has said it will apply the rule elsewhere beginning on August 28.

Judge Erickson’s decision will not, of course, be the final word on this matter. In other cases, EPA has argued (with some success) that district courts lack the power to decide this kind of dispute. But Judge Erickson’s decision is notable as an early preview of the way that courts are likely to look at the issues at play in challenges to the rule. And its even-handed application of Justice Kennedy’s “significant nexus” approach from Rapanos suggests that, in the end, the “waters of the United States” rule will be sunk.  

Cleaning up the government’s nuclear weapons sites has become a vast sinkhole for taxpayer dollars. The Department of Energy (DOE) spends about $6 billion a year on environmental cleanup of federal nuclear sites. These sites were despoiled in the decades following World War II with little notice taken by Congress. Then during the 1980s, a series of reports lambasted DOE for its lax safety and environmental standards, and federal policies began to change.

Since 1990, federal taxpayers have paid more than $150 billion to clean up the mess from the government’s nuclear sites, based on my calculations. Unfortunately, many more billions will likely be needed in coming years, partly because DOE management continues to be so poor.

A 2003 GAO report (GAO-03-593) found that “DOE’s past efforts to treat and dispose of high-level waste have been plagued with false starts and failures.” And a 2008 GAO report (GAO-08-1081) found that 9 out of 10 major cleanup projects “experienced cost increases and schedule delays in their life cycle baseline, ranging from $139 million for one project to more than $9 billion for another.”

The largest of the nuclear cleanup sites is Hanford in Washington State. One facility at the site has ballooned in cost from $4.3 billion in 2000 to $13.4 billion today (GAO-13-38). Overall, $19 billion has been spent cleaning up the Hanford site since 1989, and the effort continues to face huge problems (GAO-15-354).

The Washington Post reported yesterday:

A nearly completed government facility intended to treat the radioactive byproducts of nuclear weapons production is riddled with design flaws that could put the entire operation at risk of failure, according to a leaked internal report.

A technical review of the treatment plant on the grounds of the former Hanford nuclear site identified hundreds of “design vulnerabilities” and other weaknesses, some serious enough to lead to spills of radioactive material.

The draft report is the latest in a series of blows to the clean-up effort at Hanford, the once-secret government reservation in eastern Washington state where much of the nation’s plutonium stockpile originated. Engineers have struggled for years to come up with a safe method for disposing of Hanford’s millions of gallons of high-level radioactive waste, much of which is stored in leaky underground tanks.

Obviously this is a complex task, but Robert Alvarez, a former Clinton administration DOE official, told the newspaper that DOE:

“has proven to be incapable of managing a project of this magnitude and importance,” Alvarez said. “The agency has shown a long-standing intolerance for whistleblowers while conducting faith-based management of its contractors regardless of poor performance. This has bred a culture in which no safety misdeed goes unrewarded.”

In a series of unilateral moves, the Obama administration has been introducing an entirely new regime of labor law without benefit of legislation, upending decades’ worth of precedent so as to herd as many workers into unions as possible. The newest, yesterday, from the National Labor Relations Board, is probably the most drastic yet: in a case against waste hauler Browning-Ferris Industries, the Board declared that from now on, franchisors and companies that employ subcontractors and temporary staffing agencies will often be treated as if they were really direct employers of those other firms’ workforces. They will be held liable for alleged labor law violations at the other workplaces, and will be under legal compulsion to bargain with unions deemed to represent their staff. The new test, one of “industrial realities,” will ask whether the remote company has the power, even the potential power, to significantly influence working conditions or wages at the subcontractor or franchisee; a previous test sought to determine whether the remote company exercised “direct and immediate impact” on the worker’s terms and conditions — say, whether the remote company is involved in hiring and determining pay levels.

This is a really big deal; as our friend Iain Murray puts it at CEI, it has the potential to “set back the clock 40 years, to an era of corporate giants when few people had the option of being their own bosses while pursuing innovative employment arrangements.”

  • A tech start-up currently contracts out for janitorial, cafeteria, and landscaping services. It will now be at legal risk should its hired contractors be later found to have violated labor law in some way, as by improperly resisting unionization. If it wants to avoid this danger of vicarious liability, it may have to fire the outside firms and directly hire workers of its own.
  • A national fast-food chain currently employs only headquarters staff, with franchisees employing all the staff at local restaurants. Union organizers can now insist that it bargain centrally with local organizers, at risk for alleged infractions by the franchisees. To escape, it can either try to replace its franchise model with company-owned outlets – so that it can directly control compliance – or at least try to exert more control over franchisees, twisting their arms to recognize unions or requiring that an agent of the franchisor be on site at all times to monitor labor law compliance.

Writes management-side labor lawyer Jon Hyman:

If staffing agencies and franchisors are now equal under the National Labor Relations Act with their customers and franchisees, then we will see the end of staffing agencies and franchises as viable business models. Moreover, do not think for a second that this expansion of joint-employer liability will stop at the NLRB. The Department of Labor recently announced that it is exploring a similar expansion of liability for OSHA violations. And the EEOC is similarly exploring the issue for discrimination liability.

And Beth Milito, senior legal counsel at the National Federation of Independent Business, quoted at The Hill: “It will make it much harder for self-employed subcontractors to get jobs.” What will happen to the thriving white-van culture of small skilled contractors that now provides upward mobility to so many tradespeople? Trade it in for a company van, start punching someone’s clock, and just forget about building a business of your own.

What do advocates of these changes intend to accomplish by destroying the economics of business relationships under which millions of Americans are presently employed? For many, the aim is to force much more of the economy into the mold of large-payroll, unionized employers, a system for which the 1950s are often (wrongly) idealized.

One wonders whether many of the smart New Economy people who bought into the Obama administration’s promises really knew what they were buying.

New polling from Gallup finds that more Americans view the Internet industry favorably than at any time since Gallup began asking the question in 2001. Today, 60% of Americans have either a “very positive” or “somewhat positive” view of the industry, compared to 49% in 2014.

Favorability toward the Internet industry has ebbed and flowed during the 2000s, but today marks the most positive perception of the industry. Compared to other industries, Gallup found that the Internet industry ranks third behind the restaurant and computer industries.

Perceptions have improved across most demographic groups, with the greatest gains found among those with lower levels of education, Republicans, and independents. It is likely these groups are “late adopters” of technology and have grown more favorable as they’ve come to access it. Indeed, late adopters have been found to be older, less educated, and more conservative. Pew also finds that early users of the Internet were younger, more urban, higher-income Americans, and those with more education. As Internet usage has soared from 55% in 2001 to 84% in 2014, many of these new users have come from the ranks of conservative late adopters.

These data suggest that the more Americans learn about the Internet, the more they come to like it and appreciate the companies that use it as a tool to offer consumer goods and services.

Please find full results at Gallup.

Research assistant Nick Zaiac contributed to this post.

KHARTOUM, SUDAN—Like the dog that didn’t bark in Sir Arthur Conan Doyle’s tale, little advertising promotes American goods in Khartoum. Washington has banned most business with Sudan.

As I point out on Forbes: “Sanctions have become a tool of choice for Washington, yet severing commercial relations rarely has promoted America’s ends. Nothing obvious has been achieved in Sudan, where the U.S. stands alone. It is time for Washington to drop its embargo.

The Clinton administration first imposed restrictions in 1993, citing Khartoum as an official state sponsor of terrorism. The Bush administration imposed additional restrictions in response to continuing ethnic conflict.”

U.S. sanctions are not watertight, but America matters, especially to an underdeveloped nation like Sudan. At the Khartoum airport I spoke with an Egyptian businessman who said “sanctions have sucked the life out of the economy.” A Sudanese economics ministry official complained that “Sanctions create many obstacles to the development process.” In some areas the poverty rate runs 50 percent.

Ironically, among the strongest supporters of economic coercion have been American Christians, yet Sudanese Christians say they suffer from Washington’s restrictions. Explained Rev. Filotheos Farag of Khartoum’s El Shahidein Coptic Church: “We want to cancel all the sanctions.”

Washington obviously intends to cause economic hardship, but for what purpose? In the early 1990s Khartoum dallied with Islamic radicalism. However, that practice ended after 9/11. The administration’s latest terrorism report stated: “During the past year, the government of Sudan continued to support counterterrorism operations to counter threats to U.S. interests and personnel in Sudan.”

Today Washington’s main complaint is that Khartoum, like many other nations, has a relationship with Iran and Hamas. Yet Sudan has been moving closer to America’s alliance partners in the Middle East—Egypt, Saudi Arabia, and the other Gulf States. In Libya Khartoum has shifted its support from Islamist to Western-backed forces.

Economic penalties also were used to punish the government for its brutal conduct in the country’s long-standing ethnic wars. However, a peace agreement ultimately was reached, leading to the formation of the Republic of South Sudan (recently in the news for its own civil war).

A separate insurgency arose in Sudan’s west around Darfur starting in 2003. Also complex, this fighting led to the indictment of Sudanese President Omar al-Bashir by the International Criminal Court. But the Darfur conflict has subsided.

Some fighting persists along Sudan’s southern border, particularly in the provinces of Blue Nile and South Kordofan (containing the Nuba Mountains). Although still awful, this combat is far more limited and, indeed, hardly unusual for many Third World nations.

There’s no obvious reason to punish Khartoum and not many other conflict-ridden states. Nor have sanctions moderated Sudan’s policies.

Why do sanctions remain? A Sudanese businessman complained: “You said to release south of Sudan. We did so. What else is necessary to end sanctions?”

Is there any other reason to maintain sanctions? Politics today in Sudan is authoritarian, but that has never bothered Washington. After all, the U.S. is paying and arming Egypt, more repressive now than under the Mubarak dictatorship.

Khartoum also has been labeled a “Country of Particular Concern” by the U.S. Commission on International Religious Freedom. Yet persecution problems are worse in such U.S. allies as Pakistan and Saudi Arabia.

The only other CPCs under sanctions are Iran and North Korea—for their nuclear activities. Ironically, by making the penalties essentially permanent the U.S. has made dialogue over political and religious liberty more difficult.

Among the more perverse impacts of sanctions has been to encourage Khartoum to look for friends elsewhere. State Minister Yahia Hussein Babiker said that Sudan is “starting to get most of our heavy equipment through China.” Chinese nationals were a common sight, and my hotel’s restaurant offered Chinese dishes. Across the street was the “Panda Restaurant.”

Khartoum deserves continued criticism, but sanctions no longer serve American interests.  Washington should lift economic penalties against Sudan.

Back in 2011 I wrote several times about the failure of Solyndra, the solar panel company that was well connected to the Obama administration. Then, as with so many stories, the topic passed out of the headlines and I lost touch with it. Today, the Washington Post and other papers bring news of a newly released federal investigative report:

Top leaders of a troubled solar panel company that cost taxpayers a half-billion dollars repeatedly misled federal officials and omitted information about the firm’s financial prospects as they sought to win a major government loan, according to a newly-released federal investigative report.

Solyndra’s leaders engaged in a “pattern of false and misleading assertions” that drew a rosy picture of their company enjoying robust sales while they lobbied to win the first clean energy loan the new administration awarded in 2009, a lengthy investigation uncovered. The Silicon Valley start-up’s dramatic rise and then collapse into bankruptcy two years later became a rallying cry for critics of President Obama’s signature program to create jobs by injecting billions of dollars into clean energy firms.

And why would it become such a rallying cry for critics? Well, consider the hyperlink the Post inserted at that point in the article: “[Past coverage: Solyndra: Politics infused Obama energy programs]” And what did that article report?

Meant to create jobs and cut reliance on foreign oil, Obama’s green-technology program was infused with politics at every level, The Washington Post found in an analysis of thousands of memos, company records and internal ­e-mails. Political considerations were raised repeatedly by company investors, Energy Department bureaucrats and White House officials. 

The records, some previously unreported, show that when warned that financial disaster might lie ahead, the administration remained steadfast in its support for Solyndra.

The federal investigators “didn’t try to determine if political favoritism fueled the decision to award Solyndra a loan” – that was accommodating of them – “but heard some concerns about political pressure, the report said.”

“Employees acknowledged that they felt tremendous pressure, in general, to process loan guarantee applications,” the report said. “They suggested the pressure was based on the significant interest in the program from Department leadership, the Administration, Congress, and the applicants.”

As I wrote at the time, this story has all the hallmarks of government decision making:

  • officials spending other people’s money with little incentive to spend it prudently,
  • political pressure to make decisions without proper vetting,
  • the substitution of political judgment for the judgments of millions of investors,
  • the enthusiastic embrace of fads like “green energy,”
  • political officials ignoring warnings from civil servants,
  • crony capitalism,
  • close connections between politicians and the companies that benefit from government allocation of capital,
  • the appearance—at least—of favors for political supporters,
  • and the kind of promiscuous spending that has delivered us $18 trillion in national debt.

It may end up being a case study in political economy. And if you want government to guide the economy, to pick winners, to override market investments, then this is what you want. 

This week, I reported at the Daily Caller (and got a very nice write-up) about a minor milestone in the advance of government transparency: We recently finished adding computer-readable code to every version of every bill in the 113th Congress.

That’s an achievement. More than 10,000 bills were introduced in Congress’s last completed two-year meeting (2013-14). We marked up every one of them with additional information.

We’ve been calling the project “Deepbills” because it allows computers to see more deeply into the content of federal legislation. We added XML-format codes to the texts of bills, revealing each reference to federal agencies and bureaus, and to existing laws no matter how Congress cited them. Our markup also automatically reveals budget authorities, i.e., spending.

Want to see every bill that would have amended a particular title or section of the U.S. code? Deepbills data allows that.

Want to see all the bills that referred to the Administration on Aging at HHS? Now that can be done.

Want to see every member of Congress who proposed a new spending program and how much they wanted to spend? Combining Deepbills data with other data allows you to easily collect that important information.
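For the technically inclined, here is a minimal sketch of what such a query might look like in Python. The namespace URI, element name, and entity-type attribute below are assumptions made for illustration; they are not the project’s confirmed schema.

```python
# Minimal sketch: scan a Deepbills-style XML bill file for marked-up
# references to a federal agency. The namespace URI, the "entity" element,
# and the entity-type attribute are illustrative assumptions, not the
# confirmed Deepbills schema.
import xml.etree.ElementTree as ET

CATO = "{http://namespaces.cato.org/catoxml}"  # hypothetical namespace

def agency_references(bill_xml_path, agency_name):
    """Collect the text of every marked-up agency reference matching agency_name."""
    root = ET.parse(bill_xml_path).getroot()
    hits = []
    for el in root.iter(CATO + "entity"):
        text = "".join(el.itertext())
        if el.get("entity-type") == "federal-body" and agency_name.lower() in text.lower():
            hits.append(text)
    return hits

# Example: find references to the Administration on Aging in one bill file.
# print(agency_references("BILLS-113hr1234ih.xml", "Administration on Aging"))
```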

Now, data is just data. It doesn’t do all that stuff until people make web sites, information services, and apps with it. There have been some users, including the Washington Examiner, Cornell University’s Legal Information Institute, the New York Times web site, and my own site.

As importantly, this milestone is a proof of concept for Congress. Early this year, aware of our work, the House amended its rules, asking the Committee on House Administration, the House Clerk, and others to “broaden the availability of legislative documents in machine readable formats.” We’ve shown that it can be done, blazed a bit of a trail, and made some mistakes so Congress’s support agencies don’t have to! (They’ll make their own.) There are good folks on Capitol Hill making steady progress toward opening up the Congress to computer-aided oversight.

Deepbills has been a significant undertaking, and we’re not certain that we’ll do it again in the 114th Congress. If we do, we’ll add more data elements, so that the stories that the data can tell get richer.

In my debut policy paper on transparency, Publication Practices for Transparent Government, I drew an analogy between data flows and water. For government to be transparent, it must publish data in particular formats, just as water must be liquid and relatively pure.

Well-formed data does not automatically produce transparency. You must have a society that is prepared to consume it. As data flows about the government’s deliberations, management, and results widen, you’ll see web sites, information services, and apps expand the consumption of it. This will encourage further widening of the data flows, which will in turn draw more data consumers.

Right now, I’m looking for researchers, political scientists, and such to take the corpus of data we produced about the 113th Congress and use it to more closely examine our national legislature. There are some prominent theories about congressional behavior that could be tested a little more closely with the aid of Deepbills data. It’s there for the taking, and using Deepbills data will help show that there is a community of users and value to be gotten from better data about Congress.

If you’re not a data nerd, this achievement may seem pretty arcane. But if you are a data nerd, please join me in popping a magnum of Mountain Dew to celebrate. The Deepbills project has been supported by the Democracy Fund, which has proven itself a bastion of foresighted brilliance for doing so. They have our thanks, and deserve yours.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Proxy temperature records serve a significant purpose in the global warming debate – they provide a reality check against the claim that current temperatures are unprecedentedly warm in the context of the past one to two thousand years. If it can be shown that past temperatures were just as warm as, or warmer than, they are presently, the hypothesis of a large CO2-induced global warming is weakened. It would thus raise the possibility that current temperatures are influenced to a much greater degree by natural climate oscillations than they are by rising atmospheric CO2.

Tree ring data are one of the most commonly utilized sources of proxy temperatures. Yet, as with any substitute, proxy temperatures derived from tree ring data do not perfectly match standard thermometer-based measurements; and, therefore, the calculations and methods are not without challenge or controversy. For example, many historic proxies are based upon a dwindling number of trees the further the proxy extends back in time. Additionally, some proxies mix data from different trees and pool their data prior to mass spectrometer measurement, which limits the ability to discern long-term climate signals among individual trees. Though it has the potential to significantly influence a proxy record, this latter phenomenon has received little attention in the literature – until now.

In an intriguing new study, Esper et al. (2015) recognize this deficiency by noting “climate reconstructions derived from detrended tree-ring δ13C data, in which δ13C level differences and age-trends have been analyzed and, if detected, removed, are largely missing from the literature.” Thus, they set out to remedy this situation by developing “a millennial-scale reconstruction based on decadally resolved, detrended, δ13C measurements, with the climate signal attributed to the comparison of annually resolved δ13C measurements with instrumental data.” Then, they compared their new proxy with proxies derived from a more common, but presumably inferior, method based on maximum latewood density (MXD) data. The study site was near Lake Gerber (42.63°N, 1.1°E) in the Spanish Pyrenees, at the upper treeline (2,400 m).
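To illustrate the general idea of detrending, here is a minimal sketch with invented numbers: fit the systematic age trend in a δ13C series and subtract it, leaving the anomalies that carry the climate signal. This is a simplified illustration of the technique, not Esper et al.’s actual procedure.

```python
# Minimal sketch of age-trend detrending for a tree-ring delta-13C series.
# The data are invented and the linear-in-age model is an illustrative
# simplification; it is not Esper et al.'s actual procedure.
import numpy as np

ring_age = np.array([50, 100, 150, 200, 250, 300])           # cambial age of rings (years)
d13c = np.array([-24.8, -24.3, -24.0, -23.8, -23.7, -23.6])  # raw delta-13C (per mil)

# Fit the systematic age trend (young rings tend to show lower delta-13C values).
slope, intercept = np.polyfit(ring_age, d13c, 1)
age_trend = slope * ring_age + intercept

# Remove the trend so only the climate signal (anomalies) remains.
detrended = d13c - age_trend
print(np.round(detrended, 3))
```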

The resultant proxy temperature series is presented in the figure below along with two MXD-based reconstructions from the same region. As illustrated there, and as indicated by Esper et al., the new δ13C-based reconstruction “shows warmer and more variable growing season temperatures during the Little Ice Age than previously described [in the two MXD data sets] (Büntgen et al., 2008; Dorado Liñán et al., 2012).” In discussing why this is the case, they state that “developing this reconstruction required systematically removing lower δ13C values inherent to tree rings younger than 200 years, that would otherwise lower the mean chronology levels during earlier periods of the past millennium, where these younger rings dominate the reconstruction.” In other words, the new methodology allowed the researchers to capture the low-frequency climatic signals that were systematically eliminated in the MXD data sets. As a consequence, earlier warm periods during the late 14th, 15th, and 17th centuries “appear warmer” and “have been retained” by this new method, leading the team of six researchers to conclude that “late 20th century warming has not been unique within the context of the past 750 years.”

Figure 1. δ13C based mean June, July and August (JJA) temperature reconstruction of Esper et al. (2015), compared with the MXD-derived JJA maximum temperature reconstruction of Büntgen et al. (2008) and May-September mean temperature MXD reconstruction of Dorado Liñán et al. (2012)

These results are significant for two reasons. First, they weaken the claim that the modern increase in CO2 is the primary driver of current temperatures in the study region. Second, and with much wider implications, if this new technique of deriving proxy temperatures holds up as the more precise method, and if the relationships shown here are maintained, then it is likely that most, if not all, MXD-derived reconstructions understate the warmth of historic temperatures. For as noted by Esper et al. in the final sentence of their abstract, “the overall reduced variance in earlier studies points to an underestimation of pre-instrumental summer temperature variability derived from traditional tree-ring parameters.” And that is a very big blow, indeed, to climate alarmists who insist that current temperatures are unprecedentedly warm.  As this study shows, there is plenty of evidence suggesting that it may just not be so.


Büntgen, U., Frank, D.C., Grudd, H. and Esper, J. 2008. Long-term summer temperature variations in the Pyrenees. Climate Dynamics 31: 615–631.

Dorado Liñán, I., Büntgen, U., González-Rouco, F., Zorita, E., Montávez, J.P., Gómez-Navarro, J.J., Brunet, M., Heinrich, I., Helle, G. and Gutiérrez, E. 2012. Estimating 750 years of temperature variations and uncertainties in the Pyrenees by tree-ring reconstructions and climate simulations. Climate of the Past 8: 919-933.

Esper, J., Konter, O., Krusic, P.J., Saurer, M., Holzkämper, S. and Büntgen, U. 2015. Long-term summer temperature variations in the Pyrenees from detrended stable carbon isotopes. Geochronometria 42: 53-59.

Ten years ago this week, Hurricane Katrina made landfall on the Gulf Coast and generated a huge disaster. The storm flooded New Orleans, killed more than 1,800 people, and caused $100 billion in property damage. The storm’s damage was greatly exacerbated by the failures of Congress, the Bush administration, the Federal Emergency Management Agency (FEMA), and the Army Corps of Engineers.

Weather forecasters warned government officials about Katrina’s approach, so they should have been ready for it. But they were not, and Katrina exposed major failures in America’s disaster preparedness and response systems.

Here are some of the federal failures:

  • Confusion. Key federal officials were not proactive, they gave faulty information to the public, and they were not adequately trained. The 2006 bipartisan House report on the disaster, A Failure of Initiative, said, “federal agencies … had varying degrees of unfamiliarity with their roles and responsibilities under the National Response Plan and National Incident Management System.” The report found that there was “general confusion over mission assignments, deployments, and command structure.” One reason was that FEMA’s executive suites were full of political appointees with little disaster experience.
  • Failure to Learn. The government was unprepared for Katrina even though it was widely known that such a hurricane was probable, and weather forecasters had accurately predicted the advance of Katrina before landfall. A year prior to Katrina, government agencies had performed a simulation exercise—“Hurricane Pam”—for a hurricane of similar strength hitting New Orleans, but governments “failed to learn important lessons” from the exercise.
  • Communications Breakdown. The House report found that there was “a complete breakdown in communications that paralyzed command and control and made situational awareness murky at best.” Agencies could not communicate with each other due to equipment failures and a lack of system interoperability. These problems occurred despite the fact that FEMA and predecessor agencies have been giving grants to state and local governments for emergency communication systems since the beginning of the Cold War.
  • Supply Failures. Some emergency supplies were prepositioned before the storm, but there was nowhere near enough. In places that desperately needed help, such as the New Orleans Superdome, it took days to deliver medical supplies. FEMA also wasted huge amounts of supplies. It delivered millions of pounds of ice to holding centers in cities far away from the Gulf Coast. FEMA sent truckers carrying ice on wild goose chases across the country. Two years after the storm, the agency ended up throwing out $100 million of unused ice. FEMA also paid for 25,000 mobile homes costing $900 million, but they went virtually unused because of FEMA’s own regulations that such homes cannot be used on flood plains, which is where most Katrina victims lived.
  • Indecision. Indecision plagued government leaders in the deployment of supplies, in medical personnel decisions, and in other areas. Even the grisly task of body recovery after Katrina was slow and confused. Bodies went uncollected for days “as state and federal officials remained indecisive on a body recovery plan.” FEMA waited for Louisiana to make decisions about bodies, but the governor of Louisiana blamed FEMA’s tardiness in making a deal with a contractor. Similar problems of too many bureaucratic cooks in the kitchen hampered decisionmaking in areas such as organizing evacuations and providing law enforcement resources to Louisiana.
  • Fraud and Abuse. Free-flowing Katrina aid unleashed a torrent of fraud and abuse. Federal auditors estimated that $1 billion or more in aid payments for individuals were invalid. Other estimates put the waste at $2 billion. An Associated Press analysis found that “people claiming to live in as many as 162,750 homes that did not exist before the storms may have improperly received as much as $1 billion in tax money.” The New York Times concluded: “Among the many superlatives associated with Hurricane Katrina can now be added this one: it produced one of the most extraordinary displays of scams, schemes and stupefying bureaucratic bungles in modern history, costing taxpayers up to $2 billion.”

Perhaps the most appalling aspect of the federal response to Katrina was that officials obstructed private relief efforts, as these examples illustrate:

  • FEMA repeatedly blocked the delivery of emergency supplies ordered by the Methodist Hospital in New Orleans from its out-of-state headquarters.
  • FEMA turned away doctors volunteering their services at emergency facilities. Methodist’s sister hospital, Chalmette, for example, sent doctors to the emergency facility set up at New Orleans Airport to offer their services, but they were turned away because their names were not in a government database.
  • Private medical air transport companies played an important role in evacuations after Katrina. But FEMA officials provided no help in coordinating these services, and they actively blocked some of the flights.
  • FEMA “refused Amtrak’s offer to evacuate victims, and wouldn’t return calls from the American Bus Association.” Indeed, both the Motorcoach Association and the American Bus Association could not get through to anyone at FEMA to offer help for evacuations.   
  • The Red Cross was denied access to the Superdome in New Orleans to deliver emergency supplies.
  • FEMA turned away trucks from Walmart loaded with water for New Orleans, and it prevented the Coast Guard from delivering diesel fuel.
  • Offers of emergency supplies, vehicles, and specialized equipment from other nations were caught in federal red tape and shipments were delayed.

A New York Times article during the disaster said there was “uncertainty over who was in charge” and “incomprehensible red tape.” Katrina made clear that the government’s emergency response system is far too complex. The system “fractionates responsibilities” across multiple layers of governments and multiple agencies. There are 29 different federal agencies that have a role in disaster relief under the National Response Framework. These agencies are involved in 15 different cross-agency “Emergency Support Functions.” There is also a National Incident Management System, a National Disaster Recovery Framework, and numerous other “national” structures that are supposed to coordinate action.

But such centralization is a giant mistake—you don’t get efficiency, learning, innovation, and quality performance from top-down command. Indeed, increased centralization and complexity are a disease of modern American government that is causing endemic failure. I discuss this problem in my new study, Why the Federal Government Fails.

All that said, a few government agencies performed very well during Katrina. The Coast Guard rapidly deployed 4,000 service members, 37 aircraft, and 78 boats to the area. The agency rescued more than 30,000 people in the days following the storm. Unlike FEMA, the Coast Guard has decentralized operations and relies much more on local decisionmaking. Coast Guard employees live in local communities, and so they were able to make decisions rapidly during the crisis. Coast Guard officers have an “ethos of independent action,” which was crucial during Katrina when communications systems were down.

The National Guard under state command also played a crucial role during Katrina. The Guard helped reestablish law and order in New Orleans after the local police force was devastated. A key strength of the National Guard is the existence of cross-state agreements for sharing personnel and assets. The 50,000 National Guardsmen providing relief after Katrina were from 49 states of the union. They “participated in every aspect of emergency response, from medical care to law enforcement and debris removal, and were considered invaluable by Louisiana and Mississippi officials.”

The private sector also played a large and effective role during Katrina. The Red Cross had 239 shelters ready to house 40,000 evacuees on the day Katrina made landfall. The shelters expanded to hold a peak of 146,000 evacuees, and the organization served 52 million meals and snacks to hurricane survivors. The Salvation Army housed a peak of 30,000 evacuees in 225 shelters.

For-profit businesses were also very important in the Katrina response. Insurance companies sent teams to affected areas to accelerate pay-outs to covered homeowners and to offer loans. Electric utilities rushed extra crews to disaster areas. During disasters, utilities have standing agreements with nearby utilities for mutual aid. Southern Company was well-prepared for Katrina based on its disaster plans and a large-scale prepositioning of people and assets.

Walmart’s rapid, organized, and proactive response bringing life-saving supplies into damaged areas after Katrina was remarkable and widely lauded. Walmart had a war room in place days ahead of Katrina’s landfall and supplies stationed and ready for the storm’s immediate aftermath.

Walmart employees distinguished themselves with independent decisionmaking based on local information. Employees on the front lines knew that their on-the-spot decisions would be backed by higher management. The Washington Post reported that within days, Walmart delivered “an unrivaled $20 million in cash donations, 1,500 truckloads of free merchandise, food for 100,000 meals and the promise of a job for every one of its displaced workers.”

Home Depot also earned praise for its rapid and efficient relief efforts during Katrina. Such companies provided many supplies free to needy people in the affected region. Businesses have strong incentives to aid the public when disasters strike, both from a charitable desire and in order to gain respect and loyal customers over the long term.

In a study on FEMA, I concluded that state and local governments and the private sector are in a much better position than the federal government to handle most disasters. Federal bureaucracies are poor at trying to centrally manage large and complex problems. FEMA is no exception: it is often slow, risk averse, subservient to politics, and does not have the needed local knowledge. First responders and their assets are mainly owned and managed locally, and so a bottom-up structure makes sense. FEMA intervention slows down state, local, and private responses because of all the extra bureaucracy. 

By cutting the federal role, we would reduce the ambiguity in the disaster response system. As we saw with Katrina, decisionmaking was hampered by the uncertainty over bureaucratic rules and responsibilities. When you read homeland security reports, it is striking the huge number of goals, plans, strategies, frameworks, agencies, systems, directives, offices, and other structures that are supposed to come together during disasters. A better approach than top-down planning would be to cut the federal role and let state, local, and private institutions perform their specialized functions and coordinate among themselves.

For cites to all facts and quotes used in this piece, see this study.

For more about federal government failure, see this study.

The American Civil Liberties Union announced today that it is filing a legal challenge against Nevada’s new education savings account program. The ACLU argues that using the ESA funds at religious institutions would violate the state’s historically anti-Catholic Blaine Amendment, which states “No public funds of any kind or character whatever…shall be used for sectarian purposes.”  

What “for sectarian purposes” actually means (beyond thinly veiled code for “Catholic schools”) is a matter of dispute. Would that prohibit holding Bible studies at one’s publicly subsidized apartment? Using food stamps to purchase Passover matzah? Using Medicaid at a Catholic hospital with a crucifix in every room and priests on the payroll? Would it prohibit the state from issuing college vouchers akin to the Pell Grant? Or pre-school vouchers? If not, why are K-12 subsidies different?

While the legal eagles mull those questions over, let’s consider what’s at stake. Children in Nevada–particularly Las Vegas–are trapped in overcrowded and underperforming schools. Nevada’s ESA offers families much greater freedom to customize their children’s education–a freedom they appear to appreciate. Here is how Arizona ESA parents responded when asked about their level of satisfaction with the ESA program:


And here’s how those same parents rated their level of satisfaction with the public schools that their children previously attended:


Note that the lowest-income families were the least satisfied with their previous public school and most satisfied with the providers they chose with their ESA funds.

Similar results are not guaranteed in Nevada, and there are important differences between the programs–when the survey was administered, eligibility for Arizona’s ESA was limited to families of students with special needs, who received significantly more funding than the average student (though still less than the state would have spent on them at a public school). By contrast, Nevada’s ESA program is open to all public school students, but payments to low-income families are capped at the average state funding per pupil ($5,700). Nevertheless, it is low-income students who have the most to gain from the ESA–and therefore the most to lose from the ACLU’s ill-considered lawsuit.

Last month, our friends at the Competitive Enterprise Institute filed suit against the TSA because the agency failed to follow basic administrative procedures when it deployed its notorious “strip-search machines” for use in primary screening at our nation’s airports. Four years after being ordered to do so by the U.S. Court of Appeals for the D.C. Circuit, TSA still hasn’t completed the process of taking comments from the public and finalizing a regulation setting this policy. Here’s hoping CEI’s effort helps make TSA obey the law.

Federal law requires agencies to hear from the public so that they can craft the best possible rules. Nobody believes in agency omniscience. Public input is essential to gathering the information needed to set good policies.

But an agency can’t get good information if it doesn’t share the evidence, facts, and inferences that underlie its proposals and rules. That’s why this week I’ve sent TSA a request for mandatory declassification review relating to a study that it says supports its strip-search machine policy. The TSA is keeping its study secret.

In its woefully inadequate (and still unfinished) policy proposal on strip-search machines, TSA summarily asserted: “[R]isk reduction analysis shows that the chance of a successful terrorist attack on aviation targets generally decreases as TSA deploys AIT. However, the results of TSA’s risk-reduction analysis are classified.”

Since then, we’ve learned that TSA’s security measures fail 95% of the time when undercover agents try to defeat them.

By its nature, risk management requires analysts to make assumptions and to work with data that are often imprecise. It is crucial that analyses of this type be open and transparent, so that assumptions and data can be tested and challenged. Our comments on the proposal discussed risk management, as well as many other aspects of the proposed policy. Making the TSA’s “risk reduction analysis” available for public perusal would undoubtedly help the agency come up with a better rule. Hopefully, they’ll have the sense to declassify and publish it.

Though we remain uninformed by TSA’s incomplete administrative processes, next month CEI’s Marc Scribner and I will be on Capitol Hill discussing the sorry state of airline security, a product of TSA’s lawlessness and ill-advised secrecy.

(From time to time, critics of my work will suggest—not without reason—that working to bring TSA within the law is futile and that the agency should be shuttered. It should be. That is a goal we can pursue at the same time as we pursue the alternative: an agency that follows the law and manages risks more intelligently.)

According to a report I have before me, straight from the U.S. Senate, prominent Federal Reserve officials, including the presidents of the Federal Reserve Banks of New York and Philadelphia, have publicly endorsed legislation that would establish a bipartisan Monetary Commission authorized “to make a thorough study of the country’s entire banking and monetary set-up,” and to evaluate various alternative reforms, including a “return to the gold coin standard.” The proposed commission would be the first such undertaking since the Aldrich-Vreeland Act established the original National Monetary Commission in 1908.

Surprised? It gets better. The same Senate document includes a letter from the Fed’s Chairman, addressed to the Senate Banking Committee, indicating that the Board of Governors itself welcomes the proposed commission. Such a commission, the letter says, “would be desirable and could be expected to form the basis for conservative legislation in this field.”

Can it be? Have Fed officials had a sudden change of heart? Have they really decided to welcome the proposed “Centennial Monetary Commission” with open arms? Is it time to break out the Dom Pérignon, or have I just been daydreaming?

Neither, actually. Who said anything about the Centennial Monetary Commission? The Senate report to which I refer isn’t about that commission. It concerns neither S. 1786, the Centennial Monetary Commission bill just introduced in the Senate, nor its House companion, H.R. 2912. Instead, the report refers to S. 1559, calling for the establishment of a National Monetary Commission. That’s S. 1559, not of the 114th Congress, but of the 81st Congress – the one that sat from 1949 to 1951, when Harry Truman was president.

It turns out, you see, that the Centennial Monetary Commission legislation isn’t Congress’s first attempt to launch a new monetary commission.

Things were, evidently, rather different in 1949 than they are now. Back then, the Fed was thoroughly under the Treasury’s thumb, where it had been throughout World War II. In particular, it found its powers of monetary control severely diminished both by the vast wartime increase in the Federal debt and by the Treasury’s insistence that the Fed intervene to support the market for that debt. Fed officials hoped to reestablish the Fed’s powers of monetary control by having it acquire the ability to set reserve requirements for non-member banks. In short, Fed officials, including then-Federal Reserve Chairman Thomas McCabe (who would later lose his job for standing up to the Treasury), favored a new Monetary Commission because they anticipated that such a commission would end up recommending reforms that would enhance the Fed’s then-truncated powers.

S. 1559 ended up being killed by the Subcommittee on Monetary, Credit, and Fiscal Policies of the Joint Committee on the Economic Report. Interestingly, that body argued that the proposed, comprehensive study of the U.S. monetary system should instead “be made by a committee composed exclusively of Members of Congress rather than, as proposed in S. 1559, by a mixed commission composed of Members of Congress, members of the executive department, and members drawn from private life.” As it happens, the currently proposed Centennial Monetary Commission is to have 12 voting members, all of whom are to be members of Congress.

As for any possibility that the Centennial Monetary Commission bill might itself garner support from highly placed Fed officials: fuhgeddaboudit. Those officials now have all the power they could possibly desire. Why should they look kindly upon legislation that’s far more likely to lessen that power than to enhance it?

Although the fact that the Fed welcomed a new National Monetary Commission in 1949 is no cause for celebration today, supporters of the new reform may still have reason to be cheered by the Fed’s earlier stance. After all, should Fed officials declare themselves against the new proposal, they can be reminded of their predecessors’ stance, and asked to explain why they should oppose the same sort of inquiry that those predecessors considered a jolly good idea. If they are good for nothing else, their answers should at least be good for a chuckle.


Donald Trump has wrecked the best-laid plans of nearly a score of “serious” Republican presidential candidates. Yet what may be most extraordinary about his campaign is that, on foreign policy at least, he may be the most sensible Republican in the race. It is the “mainstream” and “acceptable” Republicans who are most extreme, dangerous, and unrealistic.

First, the Republicans scream that the world has never been so dangerous. Yet when in history has a country been as secure as America from existential and even substantial threats?

Hyperbole is Trump’s stock in trade, but he has used it only sparingly on foreign policy. Referring to North Korea, for instance, he claimed: “this world is just blowing up around us.” But he used that as a justification for talking to North Korea, not going to war.

Second, the Republicans generally refuse to criticize George W. Bush’s misadventure in Iraq. In contrast, Trump said, “I was not a fan of going to Iraq.”

Third, the Republican candidates blame the rise of the Islamic State on President Obama. This claim is false at every level. The Islamic State grew out of the Iraq invasion and succeeded with the aid of former Baathists and Sunni tribes who came to prefer an Islamist Dark Age to murderous Shia rule. And there were no U.S. troops left in Iraq to stop its advance because George W. Bush had planned their withdrawal.

Trump understands that the basic mistake was invading Iraq. He said: “They went into Iraq. They destabilized the Middle East. It was a big mistake. Okay, now we’re there. And you have ISIS.”

Fourth, Republicans see other waiting enemies, such as China. But Trump apparently doesn’t view war as an option against Beijing. Rather, he sees China primarily as an economic competitor: he declared that he would “get tough with” and “out-negotiate” the Chinese, not bomb them.

Fifth, all the other Republicans apparently view Iran as an unspeakable enemy. All would block the Obama nuclear deal and most appear ready to tear it up. Trump criticized the agreement, but announced: “I will police that deal,” a far more realistic response.

Sixth, the GOP candidates almost uniformly treat handing out security guarantees as similar to accumulating Facebook friends: the more the merrier. Yet as I point out on Forbes online: “most of America’s major allies could defend themselves. The Europeans, for instance, have a combined population and GDP greater than America and much greater than Russia. South Korea has twice the population and around 40 times the GDP of the North.”

Some potential allies are security black holes. Ukraine, for instance, would set the United States against nuclear-armed Russia. America has nothing at stake there warranting that kind of risky confrontation.

Many of America’s official friends are more oppressive than Washington’s enemies. Saudi Arabia, for instance, is a totalitarian state. Egypt today is more repressive than under Mubarak.

Here Trump is at his refreshing best. Decades ago he called on the United States to “stop paying to defend countries that can afford to defend themselves.” He then pointed to Japan and Saudi Arabia.

A couple years ago he said: “I keep asking, how long will we go on defending South Korea from North Korea without payment?” Similarly, Trump recently explained: “Pulling back from Europe would save this country millions of dollars annually. The cost of stationing NATO troops in Europe is enormous.” Regarding Ukraine, he asked: “Where’s Germany? Where are the countries of Europe?”

As I wrote in the Forbes article: “Trump obviously is not a deep thinker on foreign policy or anything else. Nevertheless, on these issues he exhibits a degree of common sense lacked by virtually every other Republican candidate. The GOP needs to have serious debate over foreign policy.”

Last year in this space, I wrote about a case in which a New Jersey appeals court found that a mother could be put on the state’s child abuse registry, with life-changing consequences, for having left her sleeping toddler alone in the back seat of her locked, running car while she ran briefly into a store. No harm came to the child during the ten minutes she was gone, and an investigation found nothing else wrong with the family.

Now a unanimous New Jersey Supreme Court has reversed that decision. Not only does the mother deserve a hearing before being put on the registry, it said, but such a hearing should not find neglect unless her conduct is found to have placed the child at “imminent risk of harm.” 

The battle is by no means over. The New Jersey Department of Children and Families vowed to continue its efforts to hold the mother responsible for gross neglect, its spokesperson saying that “leaving a child alone in a vehicle – even for just a minute – is a dangerous and risky decision.” That’s one view. Another view is the one I expressed last year: 

When the law behaves this way, is it really protecting children? What about the risks children face when their parent is pulled into the police or Child Protective Services system because of overblown fears about what conceivably might have happened, but never did?

For much more on this subject, check out the speech at Cato last year (with me moderating) by the founder of the Free-Range Kids movement, Lenore Skenazy, who has written extensively on the New Jersey case. She also contributed the lead essay at a Cato Unbound symposium on children’s safety and liberty. We’ve also covered the celebrated case of the Meitiv family of Silver Spring, Md., who have faced extensive hassles from Montgomery County, Md., Child Protective Services for letting their children walk home alone from a local park.

This post was adapted and expanded from Overlawyered.