Cato Op-Eds

Individual Liberty, Free Markets, and Peace

New polling from Gallup finds that more Americans view the internet industry favorably than any time since Gallup began asking the question in 2001. Today, 60% of Americans have either a “very positive” or “somewhat positive” view of the industry, compared to 49% in 2014.

Favorability toward the Internet industry has ebbed and flowed during the 2000s, but today marks the most positive perception of the industry. Compared to other industries, Gallup found that the Internet industry ranks third behind the restaurant and computer industries.

Perceptions have improved across most demographic groups, with the greatest gains found among those with lower levels of education, Republicans, and independents. These groups are likely “late adopters” of technology who have grown more favorable as they’ve come to access it. Indeed, late adopters have been found to be older, less educated, and more conservative, and Pew finds that early users of the Internet were younger, more urban, higher-income Americans with more education. As Internet usage has soared from 55% in 2001 to 84% in 2014, many of these new users have come from the ranks of conservative late adopters.

These data suggest that the more Americans learn about the Internet, the more they come to like it and appreciate the companies that use it as a tool to offer consumer goods and services.

Please find full results at Gallup.

Research assistant Nick Zaiac contributed to this post.

KHARTOUM, SUDAN—Like the dog that didn’t bark in Sir Arthur Conan Doyle’s tale, little advertising promotes American goods in Khartoum. Washington has banned most business with Sudan.

As I point out on Forbes: “Sanctions have become a tool of choice for Washington, yet severing commercial relations rarely has promoted America’s ends. Nothing obvious has been achieved in Sudan, where the U.S. stands alone. It is time for Washington to drop its embargo.

The Clinton administration first imposed restrictions in 1993, citing Khartoum as an official state sponsor of terrorism. The Bush administration imposed additional restrictions in response to continuing ethnic conflict.”

U.S. sanctions are not watertight, but America matters, especially to an underdeveloped nation like Sudan. At the Khartoum airport I spoke with an Egyptian businessman who said “sanctions have sucked the life out of the economy.” A Sudanese economics ministry official complained that “Sanctions create many obstacles to the development process.” In some areas the poverty rate runs 50 percent.

Ironically, among the strongest supporters of economic coercion have been American Christians, yet Sudanese Christians say they suffer from Washington’s restrictions. Explained Rev. Filotheos Farag of Khartoum’s El Shahidein Coptic Church, “we want to cancel all the sanctions.”

Washington obviously intends to cause economic hardship, but for what purpose? In the early 1990s Khartoum dallied with Islamic radicalism. However, that practice ended after 9/11. The administration’s latest terrorism report stated: “During the past year, the government of Sudan continued to support counterterrorism operations to counter threats to U.S. interests and personnel in Sudan.”

Today Washington’s main complaint is that Khartoum, like many other nations, has a relationship with Iran and Hamas. Yet Sudan has been moving closer to America’s alliance partners in the Middle East—Egypt, Saudi Arabia, and the other Gulf States. In Libya Khartoum has shifted its support from Islamist to Western-backed forces.

Economic penalties also were used to punish the government for its brutal conduct in the country’s long-standing ethnic wars. However, a peace agreement ultimately was reached, leading to the formation of the Republic of South Sudan (recently in the news for its own civil war).

A separate insurgency arose in Sudan’s west around Darfur starting in 2003. Also complex, this fighting led to the indictment of Sudanese President Omar al-Bashir by the International Criminal Court. But the Darfur conflict has subsided.

Some fighting persists along Sudan’s southern border, particularly in the provinces of Blue Nile and South Kordofan (containing the Nuba Mountains). Although still awful, this combat is far more limited and, indeed, hardly unusual for Third World nations.

There’s no obvious reason to punish Khartoum and not many other conflict-ridden states. Nor have sanctions moderated Sudan’s policies.

Why do sanctions remain? A Sudanese businessman complained: “You said to release south of Sudan. We did so. What else is necessary to end sanctions?”

Is there any other reason to maintain sanctions? Politics today in Sudan is authoritarian, but that has never bothered Washington. After all, the U.S. is funding and arming Egypt, which is more repressive now than under the Mubarak dictatorship.

Khartoum also has been labeled a “Country of Particular Concern” by the U.S. Commission on International Religious Freedom. Yet persecution problems are worse in such U.S. allies as Pakistan and Saudi Arabia.

The only other CPCs under sanctions are Iran and North Korea—for their nuclear activities. Ironically, by making the penalties essentially permanent the U.S. has made dialogue over political and religious liberty more difficult.

Among the more perverse impacts of sanctions has been to encourage Khartoum to look for friends elsewhere. State Minister Yahia Hussein Babiker said that “we are starting to get most of our heavy equipment through China.” Chinese nationals were a common sight, and my hotel’s restaurant offered Chinese dishes. Across the street was the “Panda Restaurant.”

Khartoum deserves continued criticism, but sanctions no longer serve American interests.  Washington should lift economic penalties against Sudan.

Back in 2011 I wrote several times about the failure of Solyndra, the solar panel company that was well connected to the Obama administration. Then, as with so many stories, the topic passed out of the headlines and I lost touch with it. Today, the Washington Post and other papers bring news of a newly released federal investigative report:

Top leaders of a troubled solar panel company that cost taxpayers a half-billion dollars repeatedly misled federal officials and omitted information about the firm’s financial prospects as they sought to win a major government loan, according to a newly-released federal investigative report.

Solyndra’s leaders engaged in a “pattern of false and misleading assertions” that drew a rosy picture of their company enjoying robust sales while they lobbied to win the first clean energy loan the new administration awarded in 2009, a lengthy investigation uncovered. The Silicon Valley start-up’s dramatic rise and then collapse into bankruptcy two years later became a rallying cry for critics of President Obama’s signature program to create jobs by injecting billions of dollars into clean energy firms.

And why would it become such a rallying cry for critics? Well, consider the hyperlink the Post inserted at that point in the article: “[Past coverage: Solyndra: Politics infused Obama energy programs]” And what did that article report?

Meant to create jobs and cut reliance on foreign oil, Obama’s green-technology program was infused with politics at every level, The Washington Post found in an analysis of thousands of memos, company records and internal ­e-mails. Political considerations were raised repeatedly by company investors, Energy Department bureaucrats and White House officials. 

The records, some previously unreported, show that when warned that financial disaster might lie ahead, the administration remained steadfast in its support for Solyndra.

The federal investigators “didn’t try to determine if political favoritism fueled the decision to award Solyndra a loan” – that was accommodating of them – “but heard some concerns about political pressure, the report said.”

“Employees acknowledged that they felt tremendous pressure, in general, to process loan guarantee applications,” the report said. “They suggested the pressure was based on the significant interest in the program from Department leadership, the Administration, Congress, and the applicants.”

As I wrote at the time, this story has all the hallmarks of government decision making:

  • officials spending other people’s money with little incentive to spend it prudently,
  • political pressure to make decisions without proper vetting,
  • the substitution of political judgment for the judgments of millions of investors,
  • the enthusiastic embrace of fads like “green energy,”
  • political officials ignoring warnings from civil servants,
  • crony capitalism,
  • close connections between politicians and the companies that benefit from government allocation of capital,
  • the appearance—at least—of favors for political supporters,
  • and the kind of promiscuous spending that has delivered us $18 trillion in national debt.

It may end up being a case study in political economy. And if you want government to guide the economy, to pick winners, to override market investments, then this is what you want. 

This week, I reported at the Daily Caller (and got a very nice write-up) about a minor milestone in the advance of government transparency: We recently finished adding computer-readable code to every version of every bill in the 113th Congress.

That’s an achievement. More than 10,000 bills were introduced in Congress’s last-completed two-year meeting (2013-14). We marked up every one of them with additional information.

We’ve been calling the project “Deepbills” because it allows computers to see more deeply into the content of federal legislation. We added XML-format codes to the texts of bills, revealing each reference to federal agencies and bureaus, and to existing laws no matter how Congress cited them. Our markup also automatically reveals budget authorities, i.e., spending.

Want to see every bill that would have amended a particular title or section of the U.S. code? Deepbills data allows that.

Want to see all the bills that referred to the Administration on Aging at HHS? Now that can be done.

Want to see every member of Congress who proposed a new spending program and how much they wanted to spend? Combining Deepbills data with other data allows you to easily collect that important information.
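For readers who want to experiment, here is a minimal sketch of the kind of query this markup makes possible: scanning an XML bill file and tallying entity references by type. The element name (“entity”), attribute name (“entity-type”), and file name are illustrative assumptions for the sketch, not the actual Deepbills schema.

```python
# Minimal sketch: tally entity markup in a Deepbills-style XML bill file.
# The "entity" tag and "entity-type" attribute are assumed names for
# illustration; consult the real Deepbills documentation for the true schema.
import xml.etree.ElementTree as ET
from collections import Counter

def count_entity_references(path):
    """Count marked-up references in one bill file, grouped by entity type."""
    counts = Counter()
    for node in ET.parse(path).iter():
        tag = node.tag.rsplit("}", 1)[-1]  # drop any XML namespace prefix
        if tag == "entity":
            counts[node.get("entity-type", "unknown")] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical file name; any marked-up bill XML would work here.
    for kind, n in count_entity_references("hr1234.xml").items():
        print(f"{kind}: {n} references")
```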

Now, data is just data. It doesn’t do all that stuff until people make web sites, information services, and apps with it. There have been some users, including the Washington Examiner, Cornell University’s Legal Information Institute, the New York Times web site, and my own.

As importantly, this milestone is a proof of concept for Congress. Early this year, aware of our work, the House amended its rules, asking the Committee on House Administration, the House Clerk, and others to “broaden the availability of legislative documents in machine readable formats.” We’ve shown that it can be done, blazed a bit of a trail, and made some mistakes so Congress’s support agencies don’t have to! (They’ll make their own.) There are good folks on Capitol Hill making steady progress toward opening up the Congress to computer-aided oversight.

Deepbills has been a significant undertaking, and we’re not certain that we’ll do it again in the 114th Congress. If we do, we’ll add more data elements, so that the stories that the data can tell get richer.

In my debut policy paper on transparency, Publication Practices for Transparent Government, I drew an analogy between data flows and water. For government to be transparent, it must publish data in particular formats, just as water must be liquid and relatively pure.

Well-formed data does not automatically produce transparency. You must have a society that is equipped to consume it. As data flows about the government’s deliberations, management, and results widen, you’ll see web sites, information services, and apps expand the consumption of it. This will encourage further widening of the data flows, which will in turn draw more data consumers.

Right now, I’m looking for researchers, political scientists, and such to take the corpus of data we produced about the 113th Congress and use it to more closely examine our national legislature. There are some prominent theories about congressional behavior that could be tested a little more closely with the aid of Deepbills data. It’s there for the taking, and using Deepbills data will help show that there is a community of users and value to be gotten from better data about Congress.

If you’re not a data nerd, this achievement may seem pretty arcane. But if you are a data nerd, please join me in popping a magnum of Mountain Dew to celebrate. The Deepbills project has been supported by the Democracy Fund, which has proven itself a bastion of foresighted brilliance for doing so. They have our thanks, and deserve yours.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Proxy temperature records serve a significant purpose in the global warming debate – they provide a reality check against the claim that current temperatures are unprecedentedly warm in the context of the past one to two thousand years. If it can be shown that past temperatures were just as warm as, or warmer than, they are presently, the hypothesis of a large CO2-induced global warming is weakened. It would thus raise the possibility that current temperatures are influenced to a much greater degree by natural climate oscillations than they are by rising atmospheric CO2.

Tree ring data account for one of the most commonly utilized sources of proxy temperatures. Yet, as with any substitute, proxy temperatures derived from tree ring data do not perfectly match with standard thermometer-based measurements; and, therefore, the calculations and methods are not without challenge or controversy. For example, many historic proxies are based upon a dwindling number of trees the further the proxy extends back in time. Additionally, some proxies mix data from different trees and pool their data prior to mass spectrometer measurement, which limits the ability to discern long-term climate signals among individual trees. Though it has the potential to significantly influence a proxy record, this latter phenomenon has received little attention in the literature – until now.

In an intriguing new study, Esper et al. (2015) recognize this deficiency, noting that “climate reconstructions derived from detrended tree-ring δ13C data, in which δ13C level differences and age-trends have been analyzed and, if detected, removed, are largely missing from the literature.” Thus, they set out to remedy this situation by developing “a millennial-scale reconstruction based on decadally resolved, detrended, δ13C measurements, with the climate signal attributed to the comparison of annually resolved δ13C measurements with instrumental data.” They then compared their new proxy with proxies derived from a more common, but presumably inferior, method based on maximum latewood density (MXD) data. The sampling site was near Lake Gerber (42.63°N, 1.1°E) in the Spanish Pyrenees, at the upper treeline (2,400 m).

The resultant proxy temperature series is presented in the figure below along with two MXD-based reconstructions from the same region. As illustrated there, and as indicated by Esper et al., the new δ13C-based reconstruction “shows warmer and more variable growing season temperatures during the Little Ice Age than previously described [in the two MXD data sets] (Büntgen et al., 2008; Dorado Liñán et al., 2012).” In discussing why this is the case, they state that “developing this reconstruction required systematically removing lower δ13C values inherent to tree rings younger than 200 years, that would otherwise lower the mean chronology levels during earlier periods of the past millennium, where these younger rings dominate the reconstruction.” In other words, the new methodology allowed the researchers to capture the low frequency climatic signals that were systematically eliminated in the MXD data sets. Thus, as a consequence, earlier warm periods during the late 14th and 15th, and 17th centuries “appear warmer” and “have been retained” by this new method, leading the team of six researchers to conclude that “late 20th century warming has not been unique within the context of the past 750 years.”
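To make the age-detrending idea more concrete, the schematic sketch below estimates how much lower δ13C runs in rings younger than a cutoff age and removes that offset before the series is averaged into a chronology. It illustrates the general concept only; the 200-year cutoff applied as a simple mean-offset correction and the sample values are assumptions, not Esper et al.’s actual procedure.

```python
# Schematic sketch of age-detrending delta-13C tree-ring data: estimate the
# average offset of "juvenile" rings relative to mature rings and remove it.
# Illustrative only; not the procedure used by Esper et al. (2015).
import statistics

def detrend_by_ring_age(samples, juvenile_cutoff=200):
    """samples: list of (ring_age_in_years, d13c_value) pairs pooled from many trees."""
    young = [v for age, v in samples if age < juvenile_cutoff]
    mature = [v for age, v in samples if age >= juvenile_cutoff]
    offset = statistics.mean(mature) - statistics.mean(young)
    # Shift juvenile rings by the estimated offset; leave mature rings unchanged.
    return [(age, v + offset if age < juvenile_cutoff else v) for age, v in samples]

# Hypothetical data in which young rings run about 0.5 per mil lower.
data = [(50, -24.5), (120, -24.4), (250, -23.9), (400, -24.0), (600, -23.8)]
print(detrend_by_ring_age(data))
```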

Figure 1. δ13C based mean June, July and August (JJA) temperature reconstruction of Esper et al. (2015), compared with the MXD-derived JJA maximum temperature reconstruction of Büntgen et al. (2008) and May-September mean temperature MXD reconstruction of Dorado Liñán et al. (2012)

These results are significant for two reasons. First, they weaken the claim that the modern increase in CO2 is the primary driver of current temperatures in the study region. Second, and with much wider implications, if this new technique of deriving proxy temperatures holds up as the more precise method, and if the relationships shown here are maintained, then it is likely that most, if not all, MXD-derived reconstructions understate the warmth of historic temperatures. For as noted by Esper et al. in the final sentence of their abstract, “the overall reduced variance in earlier studies points to an underestimation of pre-instrumental summer temperature variability derived from traditional tree-ring parameters.” And that is a very big blow, indeed, to climate alarmists who insist that current temperatures are unprecedentedly warm. As this study shows, there is plenty of evidence suggesting that may just not be so.


Büntgen, U., Frank, D.C., Grudd, H. and Esper, J. 2008. Long-term summer temperature variations in the Pyrenees. Climate Dynamics 31: 615–631.

Dorado Liñán, I., Büntgen, U., González-Rouco, F., Zorita, E., Montávez, J.P., Gómez-Navarro, J.J., Brunet, M., Heinrich, I., Helle, G. and Gutiérrez, E. 2012. Estimating 750 years of temperature variations and uncertainties in the Pyrenees by tree-ring reconstructions and climate simulations. Climate of the Past 8: 919-933.

Esper, J., Konter, O., Krusic, P.J., Saurer, M., Holzkämper, S. and Büntgen, U. 2015. Long-term summer temperature variations in the Pyrenees from detrended stable carbon isotopes. Geochronometria 42: 53-59.

Ten years ago this week, Hurricane Katrina made landfall on the Gulf Coast and generated a huge disaster. The storm flooded New Orleans, killed more than 1,800 people, and caused $100 billion in property damage. The storm’s damage was greatly exacerbated by the failures of Congress, the Bush administration, the Federal Emergency Management Agency (FEMA), and the Army Corps of Engineers.

Weather forecasters warned government officials about Katrina’s approach, so they should have been ready for it. But they were not, and Katrina exposed major failures in America’s disaster preparedness and response systems.

Here are some of the federal failures:

  • Confusion. Key federal officials were not proactive, they gave faulty information to the public, and they were not adequately trained. The 2006 bipartisan House report on the disaster, A Failure of Initiative, said, “federal agencies … had varying degrees of unfamiliarity with their roles and responsibilities under the National Response Plan and National Incident Management System.” The report found that there was “general confusion over mission assignments, deployments, and command structure.” One reason was that FEMA’s executive suites were full of political appointees with little disaster experience.
  • Failure to Learn. The government was unprepared for Katrina even though it was widely known that such a hurricane was probable, and weather forecasters had accurately predicted the advance of Katrina before landfall. A year prior to Katrina, government agencies had performed a simulation exercise—“Hurricane Pam”—for a hurricane of similar strength hitting New Orleans, but governments “failed to learn important lessons” from the exercise.
  • Communications Breakdown. The House report found that there was “a complete breakdown in communications that paralyzed command and control and made situational awareness murky at best.” Agencies could not communicate with each other due to equipment failures and a lack of system interoperability. These problems occurred despite the fact that FEMA and predecessor agencies have been giving grants to state and local governments for emergency communication systems since the beginning of the Cold War.
  • Supply Failures. Some emergency supplies were prepositioned before the storm, but there was nowhere near enough. In places that desperately needed help, such as the New Orleans Superdome, it took days to deliver medical supplies. FEMA also wasted huge amounts of supplies. It delivered millions of pounds of ice to holding centers in cities far away from the Gulf Coast. FEMA sent truckers carrying ice on wild goose chases across the country. Two years after the storm, the agency ended up throwing out $100 million of unused ice. FEMA also paid for 25,000 mobile homes costing $900 million, but they went virtually unused because of FEMA’s own regulations that such homes cannot be used on flood plains, which is where most Katrina victims lived.
  • Indecision. Indecision plagued government leaders in the deployment of supplies, in medical personnel decisions, and in other areas. Even the grisly task of body recovery after Katrina was slow and confused. Bodies went uncollected for days “as state and federal officials remained indecisive on a body recovery plan.” FEMA waited for Louisiana to make decisions about bodies, but the governor of Louisiana blamed FEMA’s tardiness in making a deal with a contractor. Similar problems of too many bureaucratic cooks in the kitchen hampered decisionmaking in areas such as organizing evacuations and providing law enforcement resources to Louisiana.
  • Fraud and Abuse. Free-flowing Katrina aid unleashed a torrent of fraud and abuse. Federal auditors estimated that $1 billion or more in aid payments for individuals were invalid. Other estimates put the waste at $2 billion. An Associated Press analysis found that “people claiming to live in as many as 162,750 homes that did not exist before the storms may have improperly received as much as $1 billion in tax money.” The New York Times concluded: “Among the many superlatives associated with Hurricane Katrina can now be added this one: it produced one of the most extraordinary displays of scams, schemes and stupefying bureaucratic bungles in modern history, costing taxpayers up to $2 billion.”

 Perhaps the most appalling aspect of the federal response to Katrina was that officials obstructed private relief efforts, as these examples illustrate:

  • FEMA repeatedly blocked the delivery of emergency supplies ordered by the Methodist Hospital in New Orleans from its out-of-state headquarters.
  • FEMA turned away doctors volunteering their services at emergency facilities. Methodist’s sister hospital, Chalmette, for example, sent doctors to the emergency facility set up at New Orleans Airport to offer their services, but they were turned away because their names were not in a government database.
  • Private medical air transport companies played an important role in evacuations after Katrina. But FEMA officials provided no help in coordinating these services, and they actively blocked some of the flights.
  • FEMA “refused Amtrak’s offer to evacuate victims, and wouldn’t return calls from the American Bus Association.” Indeed, neither the Motorcoach Association nor the American Bus Association could get through to anyone at FEMA to offer help with evacuations.
  • The Red Cross was denied access to the Superdome in New Orleans to deliver emergency supplies.
  • FEMA turned away trucks from Walmart loaded with water for New Orleans, and it prevented the Coast Guard from delivering diesel fuel.
  • Offers of emergency supplies, vehicles, and specialized equipment from other nations were caught in federal red tape and shipments were delayed.

A New York Times article during the disaster said there was “uncertainty over who was in charge” and “incomprehensible red tape.” Katrina made clear that the government’s emergency response system is far too complex. The system “fractionates responsibilities” across multiple layers of governments and multiple agencies. There are 29 different federal agencies that have a role in disaster relief under the National Response Framework. These agencies are involved in 15 different cross-agency “Emergency Support Functions.” There is also a National Incident Management System, a National Disaster Recovery Framework, and numerous other “national” structures that are supposed to coordinate action.

But such centralization is a giant mistake—you don’t get efficiency, learning, innovation, and quality performance from top-down command. Indeed, increased centralization and complexity are a disease of modern American government, one that is causing endemic failure. I discuss this problem in my new study, Why the Federal Government Fails.

All that said, a few government agencies performed very well during Katrina. The Coast Guard rapidly deployed 4,000 service members, 37 aircraft, and 78 boats to the area. The agency rescued more than 30,000 people in the days following the storm. Unlike FEMA, the Coast Guard has decentralized operations and relies much more on local decisionmaking. Coast Guard employees live in local communities, and so they were able to make decisions rapidly during the crisis. Coast Guard officers have an “ethos of independent action,” which was crucial during Katrina when communications systems were down.

The National Guard under state command also played a crucial role during Katrina. The Guard helped reestablish law and order in New Orleans after the local police force was devastated. A key strength of the National Guard is the existence of cross-state agreements for sharing personnel and assets. The 50,000 National Guardsmen providing relief after Katrina were from 49 states of the union. They “participated in every aspect of emergency response, from medical care to law enforcement and debris removal, and were considered invaluable by Louisiana and Mississippi officials.”

The private sector also played a large and effective role during Katrina. The Red Cross had 239 shelters ready to house 40,000 evacuees on the day Katrina made landfall. The shelters expanded to hold a peak of 146,000 evacuees, and the organization served 52 million meals and snacks to hurricane survivors. The Salvation Army housed a peak of 30,000 evacuees in 225 shelters.

For-profit businesses were also very important in the Katrina response. Insurance companies sent teams to affected areas to accelerate pay-outs to covered homeowners and to offer loans. Electric utilities rushed extra crews to disaster areas. During disasters, utilities have standing agreements with nearby utilities for mutual aid. Southern Company was well-prepared for Katrina based on its disaster plans and a large-scale prepositioning of people and assets.

Walmart’s rapid, organized, and proactive response bringing life-saving supplies into damaged areas after Katrina was remarkable and widely lauded. Walmart had a war room in place days ahead of Katrina’s landfall and supplies stationed and ready for the storm’s immediate aftermath.

Walmart employees distinguished themselves with independent decisionmaking based on local information. Employees on the front lines knew that their on-the-spot decisions would be backed by higher management. The Washington Post reported that within days, Walmart delivered “an unrivaled $20 million in cash donations, 1,500 truckloads of free merchandise, food for 100,000 meals and the promise of a job for every one of its displaced workers.”

Home Depot also earned praise for its rapid and efficient relief efforts during Katrina. Such companies provided many supplies free to needy people in the affected region. Businesses have strong incentives to aid the public when disasters strike, both from a charitable desire and in order to gain respect and loyal customers over the long term.

In a study on FEMA, I concluded that state and local governments and the private sector are in a much better position than the federal government to handle most disasters. Federal bureaucracies are poor at trying to centrally manage large and complex problems. FEMA is no exception: it is often slow, risk averse, subservient to politics, and does not have the needed local knowledge. First responders and their assets are mainly owned and managed locally, and so a bottom-up structure makes sense. FEMA intervention slows down state, local, and private responses because of all the extra bureaucracy. 

By cutting the federal role, we would reduce the ambiguity in the disaster response system. As we saw with Katrina, decisionmaking was hampered by the uncertainty over bureaucratic rules and responsibilities. When you read homeland security reports, it is striking the huge number of goals, plans, strategies, frameworks, agencies, systems, directives, offices, and other structures that are supposed to come together during disasters. A better approach than top-down planning would be to cut the federal role and let state, local, and private institutions perform their specialized functions and coordinate among themselves.

For cites to all facts and quotes used in this piece, see this study.

For more about federal government failure, see this study.

The American Civil Liberties Union announced today that it is filing a legal challenge against Nevada’s new education savings account program. The ACLU argues that using the ESA funds at religious institutions would violate the state’s historically anti-Catholic Blaine Amendment, which states “No public funds of any kind or character whatever…shall be used for sectarian purposes.”  

What “for sectarian purposes” actually means (beyond thinly veiled code for “Catholic schools”) is a matter of dispute. Would that prohibit holding Bible studies at one’s publicly subsidized apartment? Using food stamps to purchase Passover matzah? Using Medicaid at a Catholic hospital with a crucifix in every room and priests on the payroll? Would it prohibit the state from issuing college vouchers akin to the Pell Grant? Or pre-school vouchers? If not, why are K-12 subsidies different?

While the legal eagles mull those questions over, let’s consider what’s at stake. Children in Nevada–particularly Las Vegas–are trapped in overcrowded and underperforming schools. Nevada’s ESA offers families much greater freedom to customize their children’s education–a freedom they appear to appreciate. Here is how Arizona ESA parents responded when asked about their level of satisfaction with the ESA program:


And here’s how those same parents rated their level of satisfaction with the public schools that their children previously attended:


Note that the lowest-income families were the least satisfied with their previous public school and most satisfied with the providers they chose with their ESA funds.

Similar results are not guaranteed in Nevada and there are important differences between the programs–when the survey was administered, eligibility for Arizona’s ESA was limited only to families of students with special needs who received significantly more funding than the average student (though still less than the state would have spent on them at a public school). By contrast, Nevada’s ESA program is open to all public school students, but payments to low-income families are capped at the average state funding per pupil ($5,700). Nevertheless, it is the low-income students who have the most to gain from the ESA–and therefore the most to lose from the ACLU’s ill-considered lawsuit.

Last month, our friends at the Competitive Enterprise Institute filed suit against the TSA because the agency failed to follow basic administrative procedures when it deployed its notorious “strip-search machines” for use in primary screening at our nation’s airports. Four years after being ordered to do so by the U.S. Court of Appeals for the D.C. Circuit, TSA still hasn’t completed the process of taking comments from the public and finalizing a regulation setting this policy. Here’s hoping CEI’s effort helps make TSA obey the law.

Federal law requires agencies to hear from the public so that they can craft the best possible rules. Nobody believes in agency omniscience. Public input is essential for gathering the information needed to set good policies.

But an agency can’t get good information if it doesn’t share the evidence, facts, and inferences that underlie its proposals and rules. That’s why this week I’ve sent TSA a request for mandatory declassification review relating to a study that it says supports its strip-search machine policy. The TSA is keeping its study secret.

In its woefully inadequate (and still unfinished) policy proposal on strip-search machines, TSA summarily asserted: “[R]isk reduction analysis shows that the chance of a successful terrorist attack on aviation targets generally decreases as TSA deploys AIT. However, the results of TSA’s risk-reduction analysis are classified.”

Since then, we’ve learned that TSA’s security measures fail 95% of the time when undercover agents try to defeat them.

By its nature, risk management requires analysts to make assumptions and to work with data that are often imprecise. It is crucial that analyses of this type be open and transparent, so that assumptions and data can be tested and challenged. Our comments on the proposal discussed risk management, as well as many other aspects of the proposed policy. Making the TSA’s “risk reduction analysis” available for public perusal would undoubtedly help the agency come up with a better rule. Hopefully, they’ll have the sense to declassify and publish it.

Though we remain uninformed by TSA’s incomplete administrative processes, next month CEI’s Marc Scribner and I will be on Capitol Hill discussing the sorry state of airline security, a product of TSA’s lawlessness and ill-advised secrecy.

(From time to time, critics of my work will suggest—not without reason—that working to bring TSA within the law is futile and that the agency should be shuttered. It should be. That is a goal that we can pursue at the same time as we pursue one alternative: an agency that follows the law and manages risks more intelligently.)

According to a report I have before me, straight from the U.S. Senate, prominent Federal Reserve officials, including the presidents of the Federal Reserve Banks of New York and Philadelphia, have publicly endorsed legislation that would establish a bipartisan Monetary Commission authorized “to make a thorough study of the country’s entire banking and monetary set-up,” and to evaluate various alternative reforms, including a “return to the gold coin standard.” The proposed commission would be the first such undertaking since the Aldrich-Vreeland Act established the original National Monetary Commission in 1908.

Surprised? It gets better. The same Senate document includes a letter from the Fed’s Chairman, addressed to the Senate Banking Committee, indicating that the Board of Governors itself welcomes the proposed commission. Such a commission, the letter says, “would be desirable and could be expected to form the basis for conservative legislation in this field.”

Can it be? Have Fed officials had a sudden change of heart? Have they really decided to welcome the proposed “Centennial Monetary Commission” with open arms? Is it time to break out the Dom Pérignon, or have I just been daydreaming?

Neither, actually. Who said anything about the Centennial Monetary Commission? The Senate report to which I refer isn’t about that commission. It concerns neither S. 1786, the Centennial Monetary Commission bill just introduced in the Senate, nor its House companion, H.R. 2912. Instead, the report refers to S. 1559, calling for the establishment of a National Monetary Commission. That’s S. 1559, not of the 114th Congress, but of the 81st Congress – the one that sat from 1949 to 1951, when Harry Truman was president.

It turns out, you see, that the Centennial Monetary Commission legislation isn’t the first time that Congress has tried to launch a new monetary commission.

Things were, evidently, rather different in 1949 than they are now. Back then, the Fed was thoroughly under the Treasury’s thumb, where it had been throughout World War II. In particular, it found its powers of monetary control severely diminished by both the vast wartime increase in the Federal debt and by the Treasury’s insistence that it intervene to support the market for that debt. Fed officials hoped to reestablish the Fed’s powers of monetary control by having it acquire the ability to set reserve requirements for non-member banks. In short, Fed officials, including then Federal Reserve Chairman Thomas McCabe (who would later lose his job for standing up to the Treasury), favored a new Monetary Commission because they anticipated that such a commission would end up recommending reforms that would enhance the Fed’s then truncated powers.

S. 1559 ended up being killed by the Subcommittee on Monetary, Credit, and Fiscal Policies of the Joint Committee on the Economic Report. Interestingly, that body argued that the proposed, comprehensive study of the U.S. monetary system should instead “be made by a committee composed exclusively of Members of Congress rather than, as proposed in S. 1559, by a mixed commission composed of Members of Congress, members of the executive department, and members drawn from private life.” As it happens, the currently proposed Centennial Monetary Commission is to have 12 voting members, all of whom are to be members of Congress.

As for any possibility that the Centennial Monetary Commission bill might itself garner support from highly-placed Fed officials: fuhgeddaboudit. Those officials now have all the power they could possibly desire. Why should they look kindly upon legislation that’s far more likely to lessen that power than to enhance it?

Although the fact that the Fed welcomed a new National Monetary Commission in 1949 is no cause for celebration today, supporters of the new reform may still have reason to be cheered by the Fed’s earlier stance. After all, should Fed officials declare themselves against the new proposal, they can be reminded of their predecessors’ stance, and asked to explain why they should oppose the same sort of inquiry that those predecessors considered a jolly good idea. If they are good for nothing else, their answers should at least be good for a chuckle.


Donald Trump has wrecked the best plans of nearly a score of “serious” Republican presidential candidates. Yet, what may be most extraordinary about his campaign is that, on foreign policy at least, he may be the most sensible Republican in the race. It is the “mainstream” and “acceptable” Republicans who are most extreme, dangerous, and unrealistic.

First, the Republicans scream that the world has never been so dangerous. Yet when in history has a country been as secure as America from existential and even substantial threats?

Hyperbole is Trump’s stock in trade, but he has used it only sparingly on foreign policy. Referring to North Korea, for instance, he claimed: “this world is just blowing up around us.” But he used that as a justification for talking to North Korea, not going to war.

Second, the Republicans generally refuse to criticize George W. Bush’s misadventure in Iraq. In contrast, Trump said, “I was not a fan of going to Iraq.”

Third, the Republican candidates blame the rise of the Islamic State on President Obama. This claim is false at every level. The Islamic State grew out of the Iraq invasion and succeeded with the aid of former Baathists and Sunni tribes who came to prefer an Islamist Dark Age to murderous Shia rule. There were no U.S. troops in Iraq because George W. Bush had planned their withdrawal.

Trump understands that the basic mistake was invading Iraq. He said: “They went into Iraq. They destabilized the Middle East. It was a big mistake. Okay, now we’re there. And you have ISIS.”

Fourth, Republicans see other waiting enemies, such as China. But Trump apparently doesn’t view war as an option against Beijing. Rather, he sees China primarily as an economic competitor: he declared that he would “get tough with” and “out-negotiate” the Chinese, not bomb them.

Fifth, all the other Republicans apparently view Iran as an unspeakable enemy. All would block the Obama nuclear deal and most appear ready to tear it up. Trump criticized the agreement, but announced: “I will police that deal,” a far more realistic response.

Sixth, the GOP candidates almost uniformly treat handing out security guarantees as similar to accumulating Facebook friends: the more the merrier. Yet as I point out on Forbes online: “most of America’s major allies could defend themselves. The Europeans, for instance, have a combined population and GDP greater than America and much greater than Russia. South Korea has twice the population and around 40 times the GDP of the North.”

Some potential allies, such as Ukraine, are security black holes. Taking on Ukraine as an ally would set the United States against nuclear-armed Russia. America has nothing at stake warranting that kind of risky confrontation.

Many of America’s official friends are more oppressive than Washington’s enemies. Saudi Arabia, for instance, is a totalitarian state. Egypt today is more repressive than under Mubarak.

Here Trump is at his refreshing best. Decades ago he called on the United States to “stop paying to defend countries that can afford to defend themselves.” He then pointed to Japan and Saudi Arabia.

A couple years ago he said: “I keep asking, how long will we go on defending South Korea from North Korea without payment?” Similarly, Trump recently explained: “Pulling back from Europe would save this country millions of dollars annually. The cost of stationing NATO troops in Europe is enormous.” Regarding Ukraine, he asked: “Where’s Germany? Where are the countries of Europe?”

As I wrote in the Forbes article: “Trump obviously is not a deep thinker on foreign policy or anything else. Nevertheless, on these issues he exhibits a degree of common sense lacked by virtually every other Republican candidate. The GOP needs to have serious debate over foreign policy.”

Last year in this space, I wrote about a case in which a New Jersey appeals court found that a mother could be put on the state’s child abuse registry, with life-changing consequences, for having left her sleeping toddler alone in the back seat of her locked, running car while she ran into a store briefly. No harm had come to the child during the ten minutes and an investigation found nothing else wrong with the family. 

Now a unanimous New Jersey Supreme Court has reversed that decision. Not only does the mother deserve a hearing before being put on the registry, it said, but such a hearing should not find neglect unless her conduct is found to have placed the child at “imminent risk of harm.” 

The battle is by no means over. The New Jersey Department of Children and Families vowed to continue its efforts to hold the mother responsible for gross neglect, its spokesperson saying that “leaving a child alone in a vehicle – even for just a minute – is a dangerous and risky decision.” That’s one view. Another view is the one I expressed last year: 

When the law behaves this way, is it really protecting children? What about the risks children face when their parent is pulled into the police or Child Protective Services system because of overblown fears about what conceivably might have happened, but never did?

For much more on this subject, check out the speech at Cato last year (with me moderating) by the founder of the Free-Range Kids movement, Lenore Skenazy, who has written extensively on the New Jersey case. She also contributed the lead essay at a Cato Unbound symposium on children’s safety and liberty. We’ve also covered the celebrated case of the Meitiv family of Silver Spring, Md., who have faced extensive hassles from Montgomery County, Md., Child Protective Services for letting their children walk home alone from a local park.

This post was adapted and expanded from Overlawyered

Ending extreme poverty may sound like a remote dream voiced by idealists and beauty pageant contestants, but that goal’s attainment is actually closer than you think. The share of people living in absolute poverty (i.e., living on less than $1 a day) has dwindled to around five percent of the world’s population. Much of this progress can be attributed to massive poverty reduction in China that elevated hundreds of millions of people out of destitution.

Not only has the share of the global population living on less than $1 a day fallen, but so has the total number of people living on less than $1 a day. This is incredible when one takes into account population growth. Consider the graph below, which shows the total number of absolute poor decreasing by more than 700 million between 1981 and 2008, even as the world population rose by 48 percent (i.e., by over 2 billion). Again, a large part of this improvement can be explained by China. Even if one excludes China, close to 200 million people escaped absolute poverty over this time period.

If one takes a slightly broader definition of extreme poverty and considers persons living on less than $1.25 a day, instead of on less than $1 per day, then the share of people living in extreme poverty appears larger. However, the overall trend is still one of dramatic decline. According to estimates by the Brookings Institution, the portion of the world population living on less than $1.25 a day will most likely fall to five percent by 2030.

The vast majority of this poverty reduction came about not because of international aid programs, but because of economic development spurred by capitalism and globalization. Even Bono acknowledges this. (Please stay tuned to that video after Bono’s comments conclude to hear HumanProgress.org editor Marian Tupy discuss the issue in more detail with John Stossel.) Consider the graph below showing China’s economic freedom increasing as the percent of its population living on less than $1.25 a day decreases.

The end of extreme poverty is in sight, quite possibly within our lifetimes, thanks to the free market. To learn more about poverty’s decline, explore the data for yourself at HumanProgress.org, and check out Cato’s recent report on sustaining growth in Africa, the world’s poorest continent.

There have been many good, if ultimately unconvincing, arguments against allowing younger workers to privately invest a portion of their Social Security taxes through personal accounts.  There have been even more silly ones.  One of the silliest is the one regurgitated Monday by ThinkProgress, that this week’s stock market decline proves that “If Social Security Had Been In Private Accounts The Stock Market Drop Could Have Been A Disaster.”

Few personal account plans would require a retiree to cash out their entire account on the day that the market crashed.  But what if they did?  It is important to understand that someone retiring Monday would have begun paying into their account 40 years ago when the Dow was at 835.34.  After yesterday’s decline, it opened at 15,676 today.  Over those 40 years, the worker would have made roughly 1,040 contributions to their account.  Only 48 of them would have been at a time when the market was higher than today’s open.

Yep, even after Monday’s crash, the worker would have made a tidy profit.  In fact, his return would have been substantially higher than what he could expect to receive from Social Security. 
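The arithmetic behind that claim is ordinary dollar-cost averaging, sketched below. The per-paycheck contribution and the smoothly rising index series are placeholders for illustration; a real calculation would use actual Dow history and wage-based contributions.

```python
# Rough sketch of the dollar-cost-averaging point: count contributions made
# above today's index level and value the accumulated "shares" at today's level.
# The contribution amount and index path are illustrative placeholders.
def account_summary(index_levels, level_today, contribution=100.0):
    shares = sum(contribution / level for level in index_levels)
    return {
        "contributions": len(index_levels),
        "made_above_today": sum(1 for level in index_levels if level > level_today),
        "paid_in": contribution * len(index_levels),
        "value_today": round(shares * level_today, 2),
    }

# Placeholder path: an index rising from 835 to 15,676 over 1,040 pay periods.
# Real Dow history would dip and spike rather than rise smoothly.
levels = [835 + i * (15676 - 835) / 1039 for i in range(1040)]
print(account_summary(levels, level_today=15676))
```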

The last time defenders of the status quo made this argument was in 2009, during the market crash that led into the Great Recession. At that time the market hit a low of 6,547. Obviously, if workers had been allowed to start investing then, they would have done pretty well. But more importantly, retirees in 2009 would have done well too, once again better than Social Security.

Cato published this comprehensive study of that downturn and its impact on personal accounts.

Social Security is running nearly $26 trillion in future unfunded liabilities.  It cannot pay promised future benefits to young workers without substantial tax hikes.  We should begin a discussion of how to reform this troubled program.  A start to such a discussion would be to retire the canard about market crashes and personal accounts.

Cross-posted at TannerOnPolicy

KHARTOUM, SUDAN—Ubiquitous American advertising is absent in Sudan. Washington bans most imports from and exports to the country. Among the strongest supporters of economic coercion have been American Christians, seeking to punish the Muslim-dominated central government for its brutal conduct in past ethnic conflicts.

While the combat has largely ceased, the embargo remains. And Sudanese Christians with whom I recently spoke said that they suffer when Washington penalizes the Sudanese people for Khartoum’s sins. Rev. Filotheos Farag of Khartoum’s El Shahidein Coptic Church explained, “We want to cancel all the sanctions.”

The Clinton administration first imposed restrictions two decades ago for Sudan’s alleged sponsorship of terrorism. But the Obama administration admits that Khartoum cooperates with the United States today.

Penalties were later strengthened to punish Sudan for its tactics in the civil war in Sudan’s south and subsequent fighting around Darfur. But the first was resolved with the independence of South Sudan, which itself has tragically descended into its own civil war. And the large-scale killings in Darfur have also ended. While some fighting continues elsewhere, it is no worse than in many other Third World countries.

Khartoum also is criticized for religious repression, but American allies such as Saudi Arabia are worse. Moreover, Sudanese Christians say they are among those most hurt by sanctions.

I visited a number of churches of different denominations which appeared to operate freely. A consistent message from Christian clerics is that they suffer disproportionately from U.S. sanctions.

Farag said, “Everybody here is affected. From America, we stopped importing necessities we need.” Moreover, “Many businesses here are closed. Taxation is much higher.” In his view “the government is not punished. If officials ask about anything, they can bring it from outside. But we can’t.”

Isaiah Kanani of the Presbyterian Nile Theological College reported that “sanctions are affecting everyone in the community in every corner of the country.” Unfortunately, “the grassroots feel it very harshly.” He pointed to lost jobs and people relocating for work. Moreover, while people believe the government is not responsible for these problems, their “eyes fix on the government to find a solution.”

Hafiz Fassha, an Evangelical Presbyterian pastor at the Evangelical Church of Khartoum North, said the harm is felt in “medical services, even education.” He prays for the lifting of controls, which “are like putting oil on a fire.”

Sanctions “make life very difficult for Christians and their jobs,” reported George Banna, the Patriarchal Vicar in the Greek Melkite Catholic Patriarchate, who heads the Oriental Catholic church. A number of his parishioners are in businesses or professions and “they find difficulties importing what they need.” Over the years “many have left the country for financial reasons.”

As for the church, “we depend on donations. If members don’t work, they don’t have anything to give.” He suffered from prostate cancer and had to go abroad for treatment. “We all oppose sanctions,” he said.

I spoke with two Catholic priests in Port Sudan, E. Luigi Cignolini and Antonio Manganhe Meej. Cignolini said because of the sanctions “we don’t get offerings. Even Europe can’t send them. Of course this hampers our work.”

Meej emphasized that “Poor people feel it more.” When they aren’t able to pay their school fees “it is becoming impossible to run these schools.” In school, he said, they have trouble getting the latest information and can’t upgrade computer programs. “While the U.S. might believe it is punishing the government,” it is “only punishing the people.”

As I point out in Forbes: “There was and remains much about which to criticize Sudan’s government. However, U.S. sanctions have lost any purpose they once may have had.”

Most important for American Christians, the sanctions hurt believers already living and worshipping in difficult circumstances. Fassha said, “We love America. We need America to help Sudan.”

The world has changed since sanctions were first imposed. Washington’s policy toward Sudan should change as well.

Criminal asset forfeiture has the taste of Old Testament justice: an eye for an eye, a tooth for a tooth. The bank robber stole $100,000, so the government takes $100,000 from him. That seems right and fair, but only if we know that the defendant’s guilty.

If the government took $100,000 from someone who was innocent, or whose guilt was ambiguous, it wouldn’t merely be an “unjust” forfeiture, it would be theft—or, to be more politic, an uncompensated and unwarranted taking.

Consider the case of Sila Luis. For several years, Luis ran a healthcare company that provided home nursing services to patients enrolled in Medicare. In 2012, the government accused Luis of fraud, claiming that her company billed Medicare for unnecessary services. In addition to criminal charges, the grand jury indictment included a forfeiture finding, stipulating that if Luis is convicted, up to $45 million of her personal assets would be forfeited, to make up for all of the money her company ever received from Medicare.

Leaving aside the validity of that number—the government hasn’t alleged that all, or even most, of the claims submitted to Medicare were false—the questionable fairness of holding an individual personally responsible for a company’s liabilities, and the fact that Luis doesn’t have anywhere near $45 million, the indictment got one thing right: the government should only be able to confiscate Luis’s property after she’s been convicted. Of course, the government found a loophole: a statute providing that when the government thinks a defendant is going to spend or hide assets before they can be forfeited, prosecutors can ask for a court order “freezing” the assets.

Freezing property is practically the same as confiscating it; the defendant technically remains the owner, but is no longer allowed to sell or even use the property. The government applied for such an order on the same day that Luis was indicted—and three years later, it’s still in effect. Because Luis’s net worth is only a fraction of the $45 million the government claims, the freezing order applies to all of her assets. As a result, Luis can’t afford to pay a lawyer to defend her at trial.

Last year in Kaley v. United States (another case where Cato filed a brief), the Supreme Court ruled that there is no Sixth Amendment right to hire counsel with “tainted” money (and even the process due is less before freezing); the robber can’t use the money he stole from the bank to pay his lawyer, because he never actually owned it in the first place. But Luis is arguing that Kaley doesn’t apply to cases like this, where the government admits that none of the frozen property is in any way connected to illegal activity (only that it may be necessary to satisfy an eventual judgment).

While Cato agrees with Luis’s Sixth Amendment argument, we’ve filed an amicus brief, joined by the DKT Liberty Project, challenging the legitimacy of freezing orders on an institutional level. Fifteen years ago, the Supreme Court stopped federal courts from freezing a defendant’s assets simply because the plaintiff fears the defendant will go bankrupt, because the whole point of a trial is to determine what, if anything, the defendant owes. The Court has been adamant that unless and until the plaintiff wins that trial, judges should not interfere with the defendant’s existing rights—unless a court order is the only way to prevent a significant and incurable injury to the plaintiff that substantially outweighs any harm this action would do to the defendant.

The freezing order here isn’t protecting the government from any new loss, while the burden it continues to impose on Luis is grievous, especially since she still faces criminal charges. The Supreme Court was right when it condemned freezing orders as “nuclear weapons.” They are powerful and destructive—and their use is rarely if ever justified.

Oral arguments in Luis v. United States will take place this fall.

Earlier this month, Americans for Prosperity held a “Road to Reform” event in Las Vegas.

I got to be the warm-up speaker and made two simple points.

First, we made a lot of fiscal progress between 2009 and 2014 because various battles over debt limits, shutdowns, and sequestration actually did result in real spending discipline.

Second, I used January’s 10-year forecast from the Congressional Budget Office to explain how easy it would be to balance the budget with a modest amount of future spending restraint.

Here’s my speech:

I realize I sound uncharacteristically optimistic in these remarks, but it is amazing how easy it is to make progress with even semi-effective limits on the growth of government.

Genuine spending cuts would be very desirable, of course, but we move in the right direction so long as government spending grows slower than the private sector.

The challenge, needless to say, is convincing politicians to limit spending.

Well, we now have some new data in that battle. The CBO released its Update this morning, which means the numbers I shared in Nevada are now slightly out of date and that I need to re-do all my calculations based on the new 10-year forecast.

But it doesn’t really make a difference.  As you can see from the chart, we can balance the budget by 2021 if spending is capped so that it grows by 2 percent annually. And even if spending is allowed to grow by 3 percent per year (about 50 percent faster than projected inflation), the budget is balanced by 2024.
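For readers who want to check the arithmetic, here is a minimal sketch of the calculation, using round placeholder figures rather than the actual year-by-year CBO baseline (roughly $3.25 trillion of revenue, $3.7 trillion of outlays, and revenue growth of about 4.5 percent per year are assumptions for illustration only; the exact crossover years depend on the real CBO numbers):

    # Minimal sketch of the spending-cap arithmetic; the starting levels and the
    # revenue growth rate are illustrative placeholders, not CBO figures.
    def year_of_balance(revenue, spending, revenue_growth, spending_cap, start_year=2015):
        """Return the first year in which outlays no longer exceed revenues."""
        year = start_year
        while spending > revenue:
            year += 1
            revenue *= 1 + revenue_growth   # revenues grow with the economy
            spending *= 1 + spending_cap    # outlays limited to the annual cap
        return year

    # Trillions of dollars; assumed values for illustration only.
    print(year_of_balance(3.25, 3.70, 0.045, 0.02))  # 2 percent cap -> balance around 2021
    print(year_of_balance(3.25, 3.70, 0.045, 0.03))  # 3 percent cap -> balance around 2024

Under these placeholder assumptions the crossover years happen to line up with the chart, but the mechanism is the point: any cap below the growth rate of revenues eventually closes the gap.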

At this point, I feel compelled to point out that the goal should be smaller government, not fiscal balance.

But since fiscal policy debates tend to focus on how to eliminate red ink and balance the budget, I may as well take advantage of this misplaced focus to push a policy (spending restraint) that would be desirable even if we had a budget surplus.

And that’s the purpose of this video I narrated for the Center for Freedom and Prosperity back in 2010. The numbers obviously have changed over the past five years, but the underlying argument about the merits and efficacy of spending restraint is exactly the same today.

For more information on the merits of smaller government, here’s my tutorial on government spending.

A new poll by the Peter G. Peterson Foundation finds that 80 percent of Americans think that rising federal debt should be a top priority of policymakers. The poll also finds that:

… an overwhelming majority of voters (85%) are now calling for the President and Congress to spend more time addressing our nation’s long-term fiscal future. More than two-thirds (68%) say their concern about this vital issue has increased over the last few years, including nearly one-half (46%) who say it has increased “a lot.” Majorities of voters across party lines, including 53% of Democrats, 69% of Independents, and 84% of Republicans, say that their concerns about the debt have deepened in recent years.

The spokesman for the Peterson Foundation said that Americans “…want candidates to put forward plans to address our nation’s long-term fiscal challenges … Americans understand that putting our fiscal house in order is vital to ensure a growing, prosperous economy and are calling for more action from their leaders.” I agree with that, and so does the centrist group “First Budget,” which is trying to pin down candidates on fiscal specifics.

Presidential candidates should propose specific plans to cut spending. Spending cuts are good economics, and they are also good politics, at least for Republican candidates. So it is disappointing how few of them have proposed budget plans so far, let alone discussed overspending and rising debt in any detail.

Candidates interested in fiscal reform can start with these proposed cuts. A particularly good target for reforms would be terminations of state aid programs, as Emily Ekins and I discuss here.

The full Peterson poll results are here.

SHENYANG, CHINA—Public space is shrinking in China for discussion of “Western” views. But “contrary to the general crackdown, North Korea policy seems to be an exception,” a U.S. diplomat told me on my recent trip to China. One hears plenty of criticism of Pyongyang.

Even official Beijing’s unhappiness with the Democratic People’s Republic of Korea is evident, though China continues to bankroll the Kim Jong-un regime. It’s a position some Chinese would like to change, including a scholar in Shenyang, a couple hours away from the Yalu by car. My colleague was careful not to directly criticize Beijing policy but advocated a much different approach. He noted that the two nations “still care about each other,” but now there are a “lot of problems between the countries.”

The most important issue, no surprise, is nuclear weapons. China supports denuclearization of the Korean peninsula. This is the “worst disagreement between them.”

Second is economic development. “China insists on reform of the whole economic and political system,” explained my friend. Beijing’s objective is to “transform North Korea.” The DPRK government fears such change.

Issue number three involves bilateral commerce. “China wants to have normal trade with North Korea,” but the DPRK expects to receive goods even if it does not pay. This has “caused great loss for China and for companies in China.”

The fourth concern is refugees. “Many North Koreans have fled to this part of China,” he said, forcing Beijing to “think about how to deal with the issue.” So far, the People’s Republic of China has returned refugees when caught, sparking sharp international criticism.

Coming in at fifth is the Six Party Talks. My interlocutor explained that “China insists on peaceful dialogue among the respective countries” over “the nuclear problem.” On this question the PRC “has had some issues with the U.S. and North Korea not cooperating.”

In his view the U.S. and PRC should focus on solving these matters: “China and the USA have a common understanding greater than their disagreement.” He hoped the PRC and U.S. together would pressure the North. He admitted that “China has its own interests and cares about its national security.” But “China would be a direct victim of” a nuclear North.

What to do? There should be dialogue “among the three countries.” Beijing and Washington need to demonstrate that there isn’t “any gap, any room” between them that would allow Pyongyang to develop nuclear weapons.

More controversially, he wanted “to stop giving foreign aid to the North and impose additional sanctions on North Korea.” Penalties should be applied if the Kim government “does not listen to the U.S. and China.”

If these steps failed, at that point “we should know the way.” When asked to explain further, he said “I wouldn’t say it out loud, but Israel would know how to do it.” That is, military action. He didn’t explain whether he envisioned Chinese cooperation in or merely acquiescence to a U.S. strike. But he believed extreme measures could be justified. What would the Chinese government think of his idea? He admitted that the perspective was his own, not that of Beijing. However, if the three steps were followed, beginning with negotiation, then he believed Beijing might “follow his suggestions.” He believed the severity of the threat would drive policy.

In the end, he expected Pyongyang either to reform, open the country, or resist, destroying itself. He hoped for the first.

As I wrote for China-US Focus: “Although there’s no evidence that the Beijing government is about to adopt my friend’s proposals, Chinese attachment to North Korea evidently continues to drain away. While Beijing continues to prop up the Kim dynasty, it does so without enthusiasm. That creates an opportunity for Washington to persuade the PRC to change policy.”

My friend’s proposal offers a possible blueprint: talk with Beijing, address its problems, and suggest a common program. This strategy would offer at least a hope of change.

Whitney Ball was always outraged for the right reasons and could be counted on to add the choicest comments to the latest political or cultural atrocity. She was bright, opinionated, well-informed, and dedicated to human liberty. She also was a great friend. Those who knew her, and the liberty movement as a whole, are much worse off for her recent passing.

Whitney is one of the largely unknown political activists who did far more than her share to help others. She got into the movement early, working in Washington, D.C. at the National Journalism Center for the late M. Stanton Evans—a grand figure who linked the older, more traditional and newer, more assertive conservative movements. But Whitney never hesitated to take the lone road. She was rare, a libertarian and Christian. We met when she worked at Cato a couple decades ago. She exuded kindness and wit and was impossible to dislike.

She moved on to the Philanthropy Roundtable, a conservative counterpart to the liberal Council on Foundations. Then in 1999 Whitney launched her own venture, Donors Trust. From very modest beginnings—one account—DT turned into a major success. It hit roughly 200 contributors in 2013 and to date has channeled $740 million to the cause of liberty. There is a long history of freedom-minded donors’ money being effectively hijacked by left-wing activists and causes. Money created from the inspiration and sweat of past entrepreneurs now funds some of the organizations most determined to stifle a free economy. It turns out that those most adept at creating wealth often aren’t very good at controlling how it is distributed.

As Whitney later explained: “charitable capital that’s held in a vehicle like a private foundation often drifts away from the intent of its founding donor over time.” This sort of adverse capture almost always works against advocates of a free society, she added. Organizations dedicated to liberty gained enormously from Whitney’s efforts.

Despite her serious endeavors, she had a whimsical streak. She liked cows, for instance, and incorporated them in her home décor. She demonstrated notable self-control in halting at cow kitchenware rather than making the jump, as I did in other areas, to pricier and more serious antiques.

Unfortunately, the breast cancer that struck in 2001 was all too serious. She accepted the pain, inconvenience, and uncertainty with extraordinary grace and good humor. She joked about her loss of hair and wearing a wig and dispassionately described the side effects of chemotherapy. But she never quit and emerged victorious. At least, as victorious as one ever can be against that horrid disease.

The cancer returned, more virulent than ever, and she fought it with as much tenacity and cheerfulness as before. She was upbeat, funny, and determined to triumph again.

Alas, it was not to be. She fought to the end, dying at the far, far too young age of 52.

As I wrote in American Spectator online: “Whitney’s life is one that truly mattered. The freedom movement was more vibrant because of her efforts. The lives of her friends and family were much enriched because of her presence.”

Her death reminds us how easy it is to take those around us for granted. We only realize how badly we miss them when they leave us. So it is with Whitney.

In its “Free Exchange” column, the Economist recently took up the issue of monetary rules. Provocatively titled “Rule It Out,” the column announced that “setting interest rates according to a fixed formula is a bad idea.”

Reading the column, one quickly learns that the author doesn’t understand what constitutes a rule, or what the argument for a rule is. The column moves from a general consideration of monetary rules to considering specifically the Taylor Rule. I leave it to Professor Taylor to defend his rule, which he did on his blog. I will, however, consider the general case for monetary rules.
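For readers who have never seen one written down, here is a minimal sketch of what a formula-based rule looks like, using the coefficients from Taylor’s original 1993 formulation (a 2 percent equilibrium real rate and a 2 percent inflation target); the sample inputs below are arbitrary and purely illustrative:

    # Taylor's 1993 rule: a formula mapping observed inflation and the output gap
    # to a recommended policy interest rate (all values in percent).
    def taylor_rule(inflation, output_gap, real_rate=2.0, inflation_target=2.0):
        return (inflation + real_rate
                + 0.5 * (inflation - inflation_target)
                + 0.5 * output_gap)

    # Arbitrary illustration: 1.5 percent inflation, output 1 percent below potential.
    print(taylor_rule(inflation=1.5, output_gap=-1.0))  # -> 2.75 percent

Reproducing it here is meant only to make the object of the debate concrete; whether such a mechanical prescription is a vice or a virtue is precisely the question taken up below.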

“Free Exchange” links the case for rules to the 1977 article by Finn Kydland and Edward Prescott, “Rules Rather than Discretion: The Inconsistency of Optimal Plans.” Yet the column never quotes the article’s title, much less addresses its central argument: discretionary economic policy cannot be optimal. Their article undermines the very case for discretion over rules that “Free Exchange” attempts to make. “Free Exchange” lamely says that Kydland and Prescott’s argument “helps to explain the high inflation of the 1970s.” It did far more than that. It explains why policymakers cannot credibly commit to future policies. That is known as the time inconsistency problem.

The argument for rules versus discretion in monetary policy goes back at least to Henry C. Simons’ 1936 article, “Rules versus Authorities in Monetary Policy.” The case for a monetary rule was re-argued by Milton Friedman in the 1960s, and he anticipated the dynamics later developed by Kydland and Prescott.

“Free Exchange” states that “monetary policy based on rules has one major advantage: transparency.” Certainly a rule will likely be more transparent than policy discretion, but transparency has never been the central argument for a monetary rule. The central argument rests on what is known as the knowledge problem in economics and social affairs.

Policymakers cannot in principle possess the knowledge required to devise an optimal (or time consistent) monetary policy. The information required for centralized policymaking is dispersed among the millions of actors in society. It cannot be aggregated or concentrated in one mind. No expert or set of experts can ever know as much as the totality of individuals in society.

Rules are a response to the knowledge problem. Uncertainty generates reliance on rules. Rules are constructed, or evolve, from experience accumulated over long periods of time. They encapsulate the knowledge and experience not only of everyone alive today, but also of those who preceded them.

Rules can be formal or informal, and could – but need not – be a formula. (Most rules are not a formula.) When people do not possess the information required to optimize, they rely on rules.

The knowledge problem arises in any attempt to set centralized policy or planning. Even before Simons, F. A. Hayek articulated the knowledge problem in monetary policy and in the debate over centralized economic planning.

“Free Exchange” turns the knowledge problem upside down: “Until the day the economy is fully understood, human judgment has a crucial role to play.” No, actually, it is just the opposite. There would be no need for reliance on a rule if the economy were fully understood. The less we know about the specifics of a situation, the more we must rely on rules. A good rule incorporates the general features of a class of situations, in which the specific features vary unpredictably. If we possess full information, why would we want to rely on a rule?

In a paper that I will present at Cato’s annual monetary conference on November 12th, I develop in more depth the case for rules in monetary policy. I attribute the central argument to Hayek. But I note that Friedman also adduced an argument based on the knowledge problem in support of his monetary rule.

Monetary rules, and policy rules more generally, are a subset of behavioral rules. The case for a monetary rule is ultimately the same as the case for the rule of law in society. For those who would like to see that argument in print, along with the philosophical tradition undergirding it, see my recent article in the Journal of Private Enterprise, “Hayek and the Scots on Liberty.”
