Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The Department of Health and Human Services (HHS) is America’s first $1 trillion bureaucracy. HHS will spend $1.1 trillion in 2016, which is $1 million repeated more than one million times.

You are paying for it, so you might want to know that:

  • The department spends more than $8,800 a year for every household in the United States.
  • It runs 528 different subsidy programs for state and local governments, businesses, and individuals.
  • The largest HHS subsidy program is Medicare at $589 billion in 2016, followed by Medicaid at $367 billion.
  • HHS has 73,000 employees.
  • Real, or inflation-adjusted, HHS spending has exploded ten-fold since 1970, as shown in the chart below.
  • At the 1965 signing ceremony for Medicare, President Lyndon Johnson said “No longer will young families see their own incomes, and their own hopes, eaten away simply because they are carrying out their deep moral obligations to their parents.” But since there is no Santa Claus, that is exactly what is happening today as government health spending is imposing huge debt burdens on young families.
  • HHS programs have huge fraud and abuse, which costs taxpayers tens of billions of dollars a year. The programs also distort the health care industry in serious ways because of their top-down structure and masses of regulations. The way to fix the mess is to slash federal spending and move toward a consumer-directed health care system.

By the next president’s fourth budget, annual HHS spending will have grown another $300 billion or so to $1.4 trillion. That is another $300 billion the government will have to borrow from Wall Street, China, and other places that presidential candidates are abusing on the campaign trail. Along with exploding Social Security spending, rising HHS spending is not making America great again, but pushing us into a financial crisis. What are the candidates proposing to do about it?  

The Consumer Financial Protection Bureau (CFPB) recently announced that it would start accepting consumer complaints about marketplace lending.  Marketplace lending, previously known as “peer to peer” or “P2P” lending, emerged in the aftermath of the financial crisis.  A combination of tightening credit markets and low interest rates created a perfect marriage between consumers looking for loans and investors looking for profit.  In its first incarnation, peer to peer lending served as an online matchmaking service, allowing prospective borrowers to post requests for loans to be reviewed by individuals willing to make those loans.  “Peer to peer” referred to the fact that the lenders were ordinary people, just like the borrowers.  The loans are non-recourse, meaning that if the borrower fails to repay, the lender is simply out of luck.  Although these would appear to be risky loans, in fact, the default rate has been surprisingly low: 4.9 percent at market leader Prosper as of the end of 2014, and 5.3 percent at the other leader, Lending Club, during the period between Q1 2007 and Q1 2015.

The loans have performed so well that the market quickly attracted institutional investors and more sophisticated business models.  As the two leading providers of marketplace loans today, Prosper and Lending Club use the same (somewhat complex) model.  The companies issue notes to investors that are obligations of the issuing company. Simultaneously, WebBank, a Utah-based FDIC-insured bank, originates a loan which is sold to the company. The company pays for the loan with the proceeds from the sale of notes to investors. The loan is disbursed to the borrower. The borrower repays the funds in accordance with the terms of the loan. And the payments from the borrower are used to pay the purchasers of the company’s notes. The payment of the notes is explicitly dependent on the borrower’s repayment of the loan.

Since marketplace lending has gained momentum, there have been concerns about its regulation – expressed both by those who worry that it’s completely unregulated (not true, but there have been no new regulations specifically targeting the industry), and by those–like me–who worry that its innovation will be smothered while the industry is still in its infancy.

Although the CFPB has not announced any plans (yet) to write new regulations specifically aimed at marketplace lending, and although there is an argument that much of the industry actually falls under the SEC’s jurisdiction, the move to solicit complaints certainly seems to signal an interest in regulation down the road.  Aside from the general concern about regulating an industry still in the process of defining itself (how do you know which problems will work themselves out on their own, through better solutions than regulation could provide?), there is a specific problem with using customer complaints as a foundation for regulation.  It is the same problem that undermines using complaints to support the general argument in favor of regulation: there is clear self-selection at play.

Who is more likely to seek out the CFPB’s complaint portal: the happy borrower who has secured a loan at a favorable rate, or the disgruntled borrower?  While the American public has always been free to “petition the government for a redress of grievances,” actively seeking out those unhappy with an industry smacks of a regulator looking for a reason to regulate.  If the CFPB is concerned about marketplace lending, a sounder approach would be to hold old-fashioned hearings which, while sometimes more a performance than an inquiry, at least tend to include representatives from both sides of an issue.

One law firm has dubbed the CFPB’s announcement the establishment of  a “beachhead” in the marketplace lending industry, planting a flag signaling new regulation ahead.  I, unfortunately, tend to agree.

Many in the Bitcoin community seek increased financial privacy. As I wrote in a 2014 study of the Bitcoin ecosystem, “Bitcoin can facilitate more private transactions, which, when legal in the jurisdictions where they occur, are the business of nobody but the parties to them.” That study identified “algorithmic monitoring of Bitcoin transactions” as a rather likely and somewhat consequential threat to the goal of financial privacy (pg. 18). It was part of a cluster of similar threats.

Good news: The Bitcoin community is doing something about it.

The Open Bitcoin Privacy Project recently issued the second edition of its Bitcoin Wallet Privacy Rating Report. It’s a systematic, comparative study of the privacy qualities of Bitcoin wallets. The report is based on a detailed threat model and published criteria for measuring the “privacy strength” of wallets. (I’ve not studied either in detail, but the look of them is well-thought-out.)

Reports like this are an essential, ecosystem-building market function. The OBPP is at once informing Bitcoin users about the quality of various wallets out there, and at the same time challenging wallet providers to up their privacy game. It’s notable that the wallet with the highest number of users, Blockchain, is 17th in the rankings, and one of the most prominent U.S. providers of exchange, payment processing, and wallet services, Coinbase, is 20th. Those kinds of numbers should be a welcome spur to improvement and change. Blockchain is updating its wallet apps. Coinbase, which has offended some users with intensive scrutiny of their financial behavior, appears wisely to be turning away from wallet services.

Bitcoin guru Andreas Antonopoulos rightly advises transferring bitcoins to a wallet you control so that you don’t have to trust a Bitcoin company not to lose it. The folks at the Open Bitcoin Privacy Project are working to make wallets more privacy protective. Kudos, OBPP.

There’s more to do, of course, and if there is a recommendation I’d offer for the next OBPP report, it’s to explain in a more newbie-friendly way what the privacy threats are and how to perceive and weigh them. Another threat to the financial privacy outcome goal—ranked slightly more likely and somewhat more consequential than algorithmic monitoring—was: “Users don’t understand how Bitcoin transactions affect privacy.”

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary. 

More and more, harsh reality is stacking up against our ability to achieve the cuts in our national emissions of greenhouse gases that President Obama promised the international community gathered in Paris last December at the UN’s climate conference. In that regard, here are some items we think you ought to have a look at.

A couple of weeks ago, we reported that it was looking as if the EPA’s methane emission numbers were a bit, how should we say it, rosy. We suggested that emissions of methane (a strong greenhouse gas) from the U.S. were quite a bit higher than EPA estimates, and that they have been increasing over the past 10 years or so, whereas the EPA reports that they have been in decline. Factoring in this new science meant that the recent decline in total greenhouse gas emissions from the US was about one-third less than being advertised by the EPA and President Obama— imperiling our promise made at the UN’s December 2015 Paris Climate Conference.

Goings-on during the intervening weeks have only acted to further cement our assessment.

EPA has come around to admitting its error—to at least some degree. Wall Street Journal’s energy policy reporter Amy Harder tweeted this from EPA Chief Gina McCarthy:

  

The details behind McCarthy’s statement can be found in a new report from the EPA—a draft of its 2016 edition of the annual US Greenhouse Gas Inventory Report.  In the new draft, the EPA reports that they are in the process of reworking their previous estimates of methane emissions from “natural gas systems” and “petroleum systems.”  They put out a call for public input on their new methodology, which, in the example provided, results in 27% more emissions from those sources in 2013 than the EPA had previously determined.  EPA promises to apply the new methodology to all of its methane emissions estimates from 1990 to the present and notes that:

Trend information has not yet been calculated, but it is expected that across the 1990-2013 time series, compared to the previous (2015) Inventory, in the current (2016) Inventory, the total CH4 emissions estimate will increase, with the largest increases in the estimate occurring in later years of the time series.

Larger increases later in the time series will act to lessen the decline or perhaps even switch the sign of the overall trend.

And even without including the new calculations for natural gas and petroleum systems, the EPA requantified the reported decline in US methane emissions. In last year’s report, they wrote “[m]ethane (CH4) emissions in the United States decreased by almost 15% between 1990 and 2013.” This year, it’s “[o]verall, from 1990 to 2014…total emissions of CH4 decreased by 37.4 MMT CO2 Eq. (5.0 percent).” The changes arise largely as a result of new examinations and recalculations involving methane release from landfills.

More and more, the EPA’s methane picture is looking, how should we say it, less rosy.

It seems the closer folks look, the more it appears that Obama’s proud accomplishments and promises are proving to be little more than smoke and mirrors.

Take the Clean Power Plan. Almost every analyst alive knew that the plan was a big stretch of the Clean Air Act and that it was going to face legal challenges that were not going to be resolved until the Supreme Court had its say in June 2017.  A 5-4 decision is almost certain, with the outcome hinging on November’s election, after which the President will nominate a replacement for Antonin Scalia whom the Senate will actually consider.  Knowing his Plan was in legal hot water, Obama nonetheless told the Paris assembly “we’ve said yes to the first-ever set of national standards limiting the amount of carbon pollution our power plants can release into the sky.” Barely two months later, the Supreme Court said “not so fast” and stayed the Clean Power Plan pending the outcome of all the challenges.

And then, as we mentioned, there’s the methane issue. The EPA said emissions were declining, when in fact they are almost certainly rising. So much so that the total decline in greenhouse gas emissions from the U.S. has likely been overestimated by as much as a third. This situation is a bit grimmer than what President Obama said in Paris: “Over the last seven years, we’ve made…ambitious reductions in our carbon emissions.”

Also, it looks as if the pathway to our promise was rigged.  In a series of recent reports by David Bailey and David Bookbinder for the Niskanen Center, the authors show that the Obama Administration is employing some creative accounting to work the numbers to make it look like there is a clear path towards meeting our Paris target.

From their January report “The Administration’s Climate Confession … and New Deception” comes this assessment:

In the little-noted Second Biennial Report of the United States of America Under the United Nations Framework submitted to the U.N. climate process on December 31, the Administration impliedly admitted that the measures it listed in the INDC would leave us short, by about 500-800 MMT. The Report itself is a masterpiece of obfuscation in the name of transparency. It includes emission reductions dating back to the 1990s in its list of current measures, and for the majority of measures does not list any reductions numbers. But, not to fear, because “additional measures” of up to 700 MMT, plus a new, secret ingredient [a rapid expansion of US carbon sinks from forestry] worth about another 300 MMT, will still get us to the 2025 target.

And, after having a look at the new EPA draft report, Bailey and Bookbinder responded with “New EPA Data Casts More Doubt on Obama’s Climate Promises,” where they concluded:

The new estimates of carbon sinks are particularly significant. We discussed before how the Administration’s Second Biennial Report to the IPCC indicated that the U.S. is relying on an implausibly large increase in absorption of GHGs in sinks to meet the Paris target, from 912 MMT absorbed in 2005 to over 1,200 MMT absorbed by 2025. The revised estimate for 2005 sinks is now 636 MMT, or less than 70% of what the Biennial Report stated only two months ago. Thus, one of the Administration’s main compliance tools now requires not a 30+% increase to 2025, but nearer to a 100% increase.

The biggest impact of these revisions will be (once again) on the credibility of our Paris commitment to reduce 2005 emissions by 26% by 2025. 
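The arithmetic behind the quoted sink figures is easy to verify. Here is a minimal back-of-the-envelope check in Python; the values are the ones quoted above from the Niskanen Center reports, and the variable names are ours, chosen purely for illustration:

```python
# Back-of-the-envelope check of the carbon-sink figures quoted above.
# Values are as quoted from the Niskanen Center reports; names are illustrative.
sink_2005_biennial = 912    # MMT absorbed in 2005, per the Second Biennial Report
sink_2025_target = 1200     # MMT of absorption assumed by 2025 to meet the Paris target
sink_2005_revised = 636     # MMT, revised 2005 estimate in the new EPA draft

print(f"Revised 2005 estimate vs. Biennial Report: {sink_2005_revised / sink_2005_biennial:.0%}")   # ~70%
print(f"Required increase under old estimate:      {sink_2025_target / sink_2005_biennial - 1:.0%}")  # ~32%
print(f"Required increase under revised estimate:  {sink_2025_target / sink_2005_revised - 1:.0%}")   # ~89%
```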

The effect of all of the above (and more) on Obama’s Paris promise is well summed up in this story from Inside Climate News:

New data this week showing how little progress the United States has made in cutting greenhouse gas emissions since President Obama took office is the latest evidence to undercut the pledges the United States made in negotiating the Paris climate treaty.

The Clean Power Plan’s crackdown on coal-fired power plants is on hold, thanks to the Supreme Court. Methane emissions are turning out to be higher than previously thought, as natural gas booms. People are buying more gas-guzzling cars, thanks to low prices at the pump.

And now, in a draft of its annual greenhouse gas emissions tally, the EPA reported that emissions in the year 2014 climbed almost 1 percent from 2013 to 2014. That brought emissions back above the level of Obama’s first year in office, 2009.

In negotiating the Paris treaty, signed in December, the U.S. pledged to cut emissions 26 to 28 percent by 2025, below the level of 2005.

The new data shows that from 2005 to 2014 emissions went down just 7.5 percent, leaving most of those promised reductions off in the distance, like a hazy mirage.

Most of that decline is due to the nosedive in emissions that came with the Great Recession of 2008 and 2009.

In a quarter-century, through Democratic and Republican administrations alike, U.S. greenhouse gas emissions have marched mostly in the wrong direction.

Ouch.

All the while, President Obama is leading the push to get countries to sign the Paris Agreement  at a big press event to be held at the United Nations headquarters in New York City on April 22—Earth Day.  The Agreement must be ratified by at least 55 countries representing at least 55 percent of global greenhouse gas emissions before coming into effect. 

Lest some countries become worried that Obama’s Paris emissions pledge was but a well-orchestrated sham and start to get cold feet about signing the Agreement, the President, this week, did manage to slip $500 million into the U.N.’s Green Climate Fund.  Perhaps that’ll be enough hush money to keep the complaints muted. A rich-to-poor money transfer, more so than climate change mitigation, is, after all, arguably the most attractive part of the Paris Agreement for most countries.

An article in the March 14th issue of the New Yorker describes the negative effects of sex offender laws on juveniles who get caught up in a legal system designed to protect children from adult sexual predators.  Adolescent sexual experimentation, especially when accompanied by age mismatch, and child misbehavior have become criminalized in ways that those interviewed in the article see as unintended, mistaken, and counterproductive.

The unanalyzed premise of the article, however, is that the public labeling of adult sex offenders is good public policy.  The logic underlying public notification laws for adults would seem to be sound: if a known sex offender is looking for a new victim, isn’t it useful if the offender’s neighbors know the person is a threat and can take measures to reduce their own risk of victimization?

In an article in Regulation, Professor J. J. Prescott of the University of Michigan Law School examines the separate effects of police registration and public notification requirements on the incidence of sexual attacks.  He concludes that “each additional sex offender registered per 10,000 people reduces the annual number of sex offenses reported per 10,000 people on average by 0.098 crimes (from a starting point of 9.17 crimes). This sizeable reduction (1.07 percent) buttresses the idea that we may be able to use law enforcement supervision to combat sex offender recidivism.”  But the reduction is confined to friends and neighbors and has no effect on sex offenses against strangers.
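The 1.07 percent figure follows directly from the two numbers Prescott reports. A one-line check (the variable names are illustrative, not Prescott’s):

```python
# Percentage reduction implied by Prescott's estimates, as quoted above.
baseline_offenses_per_10k = 9.17   # annual sex offenses reported per 10,000 people
reduction_per_registrant = 0.098   # reduction per additional registered offender per 10,000 people

print(f"{reduction_per_registrant / baseline_offenses_per_10k:.2%}")  # about 1.07%
```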

In contrast, public notification deters those who are not already registered but increases recidivism among those who are.  “… for a registry of average size, instituting a notification regime has the aggregate effect in these data of increasing the number of sex offenses by more than 1.57 percent, with all deterrence gains more than offset.”  “… the more difficult, lonely, and unstable our laws make a registered sex offender’s life, the more likely he is to return to crime—and the less he has to lose by committing these new crimes.”  “…if these laws impose significant burdens on a large share of former offenders, and if only a limited number of potential victims benefit from knowing who and where sex offenders are, then we should not be surprised to observe more recidivism under notification, with recidivism rates rising as notification expands.”

In the Republican debate last night, CNN’s Dana Bash pressed the candidates on how they would deal with Social Security. Senators Marco Rubio and Ted Cruz gave solid answers, explaining that the system was headed toward insolvency, suggesting ways to slow spending growth, and scolding candidates who denied the need for cost-saving reforms.  

One of the candidates in denial is Donald Trump. He said, “And it’s my absolute intention to leave Social Security the way it is. Not increase the age and to leave it as is.” Trump is a smart man, who presumably understands accounting, so either he hasn’t bothered to examine the finances of the government’s largest program, or he is willfully providing a false narrative about it.

The chart below compares Social Security and defense spending in real 2016 dollars, including Congressional Budget Office (CBO) projections going forward. For decades, the two programs have vied for the title of the government’s largest, but the battle is now over. Social Security spending has soared far above defense spending, and it will keep on soaring without reforms.

Defense is a “normal” program, with spending fluctuating up and down over the years in real, or inflation-adjusted, dollars. But Social Security has taken off like a rocket, and it is consuming more taxpayer resources every year. The government spent the same amount on defense and Social Security in 2008, but it will be spending twice as much on the latter program by 2023.

When the next president enters office in 2017, he will start planning his 2018 budget. In that year, Social Security will become the first trillion-dollar program, and it will be gobbling up an additional $60 billion or so every single year. Where will all the money come from? Pointing only to “waste, fraud, and abuse,” as Trump does, wastes our time, abuses our intelligence, and is a fraudulent story line to peddle.

 

Data notes: CBO baseline projections to 2026, then real defense spending assumed fixed after that, while real Social Security spending is assumed to increase at the same rate as CBO projects for 2026 (3.8 percent). For ways to cut Social Security, see here.
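To make the extrapolation described in the data notes concrete, here is a minimal sketch of how the post-2026 portion of such a chart could be generated. The 2026 starting values below are placeholders chosen for illustration, not CBO figures.

```python
# Sketch of the extrapolation described in the data notes: hold real defense
# spending flat after 2026 and grow real Social Security spending 3.8% per year.
# The 2026 starting values are placeholders, not CBO numbers.
social_security_2026 = 1.5   # trillions of 2016 dollars (placeholder)
defense_2026 = 0.6           # trillions of 2016 dollars (placeholder)
growth_rate = 0.038          # assumed real growth rate for Social Security after 2026

for year in range(2026, 2031):
    social_security = social_security_2026 * (1 + growth_rate) ** (year - 2026)
    print(f"{year}: Social Security ${social_security:.2f}T, defense ${defense_2026:.2f}T")
```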

In today’s Washington Post, the Seventh Circuit’s Richard Posner, the most prolific judge the country has ever seen, has again gone to print to tell us that the Republican Senate majority’s decision not to consider any nominee to fill Justice Antonin Scalia’s empty seat until after the fall elections reminds us “that the Supreme Court is not an ordinary court but a political court, or more precisely a politicized court, which is to say a court strongly influenced in making its decisions by the political beliefs of the judges.” Say this for Judge Posner: From his earliest days as a font of law and economics wisdom through his many phases since, he has never ceased to interest us. Whether those iterations have accurately grasped the issue at hand is something else.

Here, as a descriptive matter, Posner is certainly right in noting that justices seem often to be strongly influenced by their political beliefs, however much they may invoke the self-protective “the law made me do it” pose, as he notes. But his claim is deeper, bordering on the normative: “This is not a usurpation of power,” he writes, “but an inevitability.”

Most of what the Supreme Court does—or says it does—is “interpret” the Constitution and federal statutes, but I put the word in scare quotes because interpretation implies understanding a writer’s or speaker’s meaning, and most of the issues that the court takes up cannot be resolved by interpretation because the drafters and ratifiers of the constitutional or statutory provision in question had not foreseen the issue that has arisen. (emphasis added)

By way of example, Posner continues, the drafters “did not foresee or make provision for regulating electronic surveillance, sound trucks, flash-bang grenades, gerrymandering, child pornography, flag-burning or corporate donations to political candidates.”

True, there’s a vast world that the Framers did not foresee, everything from the telephone to the Internet and far beyond. But their purpose was not to anticipate such particulars but to invoke the immutable principles by which future controversies concerning those unforeseen matters might be resolved. And that, precisely, is what Posner calls into question:

When judges are not interpreting, they’re creating, and to understand judicial creation one must understand first of all the concept of “priors.” Priors are what we bring to a new question before we’ve had a chance to do research on it. They are attitudes, presuppositions derived from upbringing, from training, from personal and career experience, from religion and national origin and character and ideology and politics. They are unavoidable tools of decision-making in nontechnical fields, such as law, which is both nontechnical and analytically weak, in the sense that there are no settled principles for resolving the most difficult and consequential legal controversies. (emphasis added)

And Posner adds that “the priors that seem to exert the strongest influence on present-day Supreme Court justices are political ideology and attitudes toward religion.”

To be sure, there are cases in which such “priors” seem dispositive—the abortion issue leaps to mind, yet even there, federalism principles would seem to be in order. More broadly, however, the question remains: Has Posner overstated the matter—and misstated it? As for overstatement, notice that he has moved from “most of the issues that the court takes up cannot be resolved by interpretation” to “there are no settled principles for resolving the most difficult and consequential legal controversies.” Which is it—“most” or “the most difficult”? Truth to tell, the Court has shown itself quite capable of resolving a large number of its cases unanimously or at least with only one or two dissents. In the term before last, for example, it resolved nearly two-thirds of its cases unanimously.

Yet even in the “difficult” cases, one should pause before claiming that there are no “settled principles” for resolving them. First, there are cases in which the principles are clear but their application affords reasonable justices room for reasonable differences. Take simply the first two of Posner’s examples: The Fourth Amendment’s prohibition of “unreasonable” searches (electronic surveillance), and the principles of common law nuisance that stand behind the First Amendment’s speech protections (sound trucks) afford justices ample room to reasonably differ—not about principles but about application.

But second, and more important, there is the question whether “settled” saves Posner. Not that there was ever a period in which every constitutional principle was settled, but prior to the rise of Progressivism our understanding of our Constitution of limited government was far more settled than it has been since the Constitution was upended during the New Deal. With the modern “living Constitution” there is far more room for saying that “there are no settled principles for resolving the most difficult and consequential legal controversies.” But that is the subject for another day. For the present it is enough to question whether the Supreme Court is “inevitably” a politicized Court or whether instead it has been made into a politicized Court by political forces beyond its chambers.

In his weekly address last Saturday, President Obama touted the importance of technology and innovation, and his plans to visit the popular South by Southwest festival in Austin, Texas. He said he would ask for “ideas and technologies that could help update our government and our democracy.” He doesn’t need to go to Texas. Simple technical ideas with revolutionary potential continue to await action in Washington, D.C.

Last fall, the White House’s Third Open Government National Action Plan for the United States of America included a commitment to develop and publish a machine-readable government organization chart. It’s a simple, but brilliant step forward, and the plan spoke of executing on it in a matter of months.

What the President Should Do: Transparent Government

Having access to data that represents the organizational units of government is essential to effective computer-aided oversight and effective internal management. Presently, there is no authoritative list of what entities make up the federal government, much less one that could be used by computers. Differing versions of what the government is appear in different PDF documents scattered around Washington, D.C.’s bureaucracies. Opacity in the organization of government is nothing if not a barrier to outsiders that preserves the power of insiders—at a huge cost in efficiency.
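To illustrate what we mean, here is a sketch of what a single record in a machine-readable government organization chart might look like. The field names and identifiers are hypothetical; no such federal standard exists yet, which is precisely the point.

```python
# Hypothetical sketch of one record in a machine-readable government org chart.
# Field names and identifiers are invented for illustration; there is no
# published federal standard for this data today.
org_unit = {
    "id": "hhs-cms",                       # stable machine-readable identifier (hypothetical)
    "name": "Centers for Medicare & Medicaid Services",
    "parent": "hhs",                       # id of the parent department
    "type": "agency",
    "authorizing_law": "42 U.S.C. ch. 7",  # illustrative citation
}

# With shared identifiers like these, spending records, regulations, and bills
# could all reference the same entity unambiguously.
print(f"{org_unit['name']} reports to {org_unit['parent']}")
```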

One of the most important ideas and technologies that could help update our government and democracy is already a White House promise. In fact, it’s essentially required by law.

Publication of spending data in organized, consistent formats is required under the terms of the DATA Act—the Digital Accountability and Transparency Act—which the president signed in May 2014. To organize spending data, you must have data reflecting the governmental entities that do the spending.

We’ve studied the availability of data from the federal government that reflect deliberations, management, and results, and we reported in November 2012 on the somewhat better progress on transparency in Congress compared to the administration.

Our Deepbills project added computer-readable code to every version of every bill in the 113th Congress, showing where Congress mentioned agencies and bureaus, proposed spending money, or referred to existing law. It would have been that much better were there an authoritative list of what the units of government are.

President Obama noted in his weekly address that improving the government along these lines has been a goal of his since before he was elected. Given the need and the potential, the achievements he cites wouldn’t get a victory lap out of the starting blocks. But there is still time to deliver on a transparency promise by publishing an authoritative, machine-readable organization chart as the administration promised just last October.

Those who like policies tend to extol the politics that produced them. Praise for the marketplace of ideas or the wisdom of crowds rarely comes from serial losers of policy debates. They are more likely to consider systemic problems that mar debate, like informational asymmetries, special interests, and elite bias.

It shouldn’t then come as a great surprise that we in Cato’s foreign policy department, who oppose most U.S. wars, hosted a panel last fall at the American Political Science Association conference to consider the question of why there isn’t more scholarly evaluation of U.S. wars. Underlying that question is a sense that U.S. wars, at least lately, follow from rationales that offend political science, or even economics, and that more scholars, whether in the academy or think tanks, should say so. Call it a cry for help in making our case.

Upon invitation, several panelists, myself included, recast their remarks in the most recent International Security Studies Forum, a publication of H-Diplo. The contributors agree that scholarly evaluation of war is flawed, though not in short supply. Christopher Preble, in his introduction, argues that journalists and defense experts considering wars defer too much to those who served in the military. But military officers, even once retired, stick to a professional ethos that prompts them to leave strategic issues–why to fight–to civilians and to focus on operational questions of how. Jon Lindsay points to the difficulties scholars face in understanding modern military technologies and the dearth of publicly available information about military operations. Alan Kuperman questions academics’ objectivity, seeing them as captives of dovish or hawkish biases.

My take, which follows from an as yet unpublished essay I wrote with Justin Logan, focuses on Washington’s analysts, as opposed to academia. I argue that defense analysis here generally serves a hawkish, bipartisan consensus. Professional incentives encourage analysts to avoid questioning the consensus’ key tenets, including war rationales. Analysts adopt an “operational mind-set.” Washington’s analysis of its wars is voluminous but shallow.

The underlying problem, to me, isn’t that politics affects analysis. That’s the nature, even the virtue, of pluralistic debate.  The problem is insufficient politics—a lack of competing interests. Because U.S. military power makes war feel cheap, the public and their representatives are often indifferent to the wisdom of wars. The historical exercise of national power meanwhile entrenched a belief among foreign policy elites that U.S security depends on global military exertions. As long as costs stay diffuse, the disinterested majority lets the elite minority have its wars without much fuss about costs, benefits, checks, or balances. Debate improves when costs gather, as in Vietnam or Iraq. That’s a limited consolation. When it comes to U.S. wars, the wisdom of crowds comes late and infrequently.

The world’s forests provide a number of vital ecosystem services that benefit both society and nature alike. However, in recent years many have opined that the future of forests is in doubt. Deforestation, drought, fire, insect outbreaks and global warming represent only a handful of the many challenges that are claimed to be causing a near-term demise in forest health that is predicted to become only worse in the years and decades to come. But how valid are these fears? Are Earth’s forests truly on the eve of destruction?

Though there are indeed some locations that are suffering from a variety of maladies, there are many that are not. In fact, multiple studies reveal forests that are thriving, with many increasing in productivity and expanding their ranges (see, for example, the many reviews posted on the CO2 Science website under the heading Greening of the Earth and Forests). And they are typically accomplishing these things despite all the real and imagined assaults on Earth’s vegetation that have occurred over the past several decades. In fact, forests have more than compensated for any of the negative effects these phenomena may have inflicted upon them.

A recent example of this phenomenon is presented in the work of Poulsen and Hoffman (2015), who examined aerial and ground-based photographs to estimate long-term changes in the distribution of forests on the Cape Peninsula of South Africa. Specifically, the pair of researchers analyzed a series of forest-related characteristics from aerial photographs taken in 1944 and 2008, along with 50 historical ground-based repeat photographs that were initially imaged between 1888 and 1980 and then repeated in 2011 or 2012.

As shown in the table below, examination of the aerial photographs revealed there was an overall increase in forest cover of 65% between 1944 and 2008. And with respect to the ground-based repeat photographs, Poulsen and Hoffman report finding “an overall decrease in cover of more than 5% of visible rock and sand” (indicating more vegetative cover).

Table 1. Changes in forest cover of Western Cape Afrotemperate Forest and Western Cape Milkwood Forest on Cape Peninsula from 1944 to 2008, based on an analysis of aerial photographs. Source: Poulsen and Hoffman (2015).

 

In discussing their findings, Poulsen and Hoffman state “the aerial and repeat ground-based photograph datasets have shown that there has been a significant increase in the number of patches of forest as well as in forest cover on the Cape Peninsula since 1888 when the earliest repeat photos were taken.” In fact, as revealed in Table 1, overall forest cover has increased by more than 65 percent since 1944. And in areas where coverage has not increased, the two authors say they “are primarily situated along the coast where developments have expanded and replaced [the forest].”

As for the cause of the observed forest increase, Poulsen and Hoffman note that “increases in woody vegetation cover have increasingly been attributed to increases in elevated atmospheric CO2 levels,” though they say it is difficult to establish that link here because “there has been no research on the effects of elevated CO2 on South African indigenous forest taxa.” A more likely cause, in their view, is fire exclusion; yet that conclusion may be somewhat shaky, considering the fact that they report mean fire return intervals have declined from 31.6 to 13.5 years since 1975, which decline should not have favored forest growth.

Whatever the cause, or causes, one thing is clear: Cape Peninsula forests are far from approaching any tipping point leading to their destruction. In fact, we find that in many other locations throughout the world (see the many references cited in the links presented above), forests are defying alarmists’ projections of their demise, as they successfully cope with and adapt to the many challenges humanity and nature force upon them. Now that’s good news worth sharing!

 

Reference

Poulsen, Z.C. and Hoffman, M.T. 2015. Changes in the distribution of indigenous forest in Table Mountain National Park during the 20th Century. South African Journal of Botany 101: 49-56.

Last month, the Treasury Department announced new steps to boost the market for private mortgage bonds, not backed by the government or any federal entity, in order to increase homeownership and improve access to credit for working-class Americans who might be having trouble borrowing money to buy a house.  The Administration’s latest effort to boost the market for private mortgage lending raises an essential question:  What are the societal benefits to homeownership, and would more investment in homeownership help the economy?

It’s a long-discussed question, of course.  The pro-home-building folks aver that homeownership fosters civic involvement and helps people become more tied to their community, which encourages other behavior beneficial for the economy.  And for a good proportion of homeowners the majority of their net wealth is in their home, so it can be an important source of savings.

But another way to look at it is that correlation is not causation:  The reason that homeowners are more civic-minded and involved in the community is because such people are much more likely to have the wherewithal to save enough to make a downpayment on a house.  Ed Glaeser, the renowned housing economist from Harvard, puts little stock in the notion that homeownership has significant positive societal externalities.

What’s more, there’s some evidence that high homeownership rates have downsides as well.  In the last four decades Americans’ propensity to move has declined significantly:  only half as many people moved across state or county lines in any year this decade as was the case in the 1950s, for instance.  This is problematic because it means that our economy is worse at matching up workers with where the available jobs are.  The lingering unemployment in many rust-belt states would be lower if some of their unemployed could be persuaded to move to another community where there are jobs.  There has been a decades-long move of people from the midwest to the Sunbelt, of course, but the data suggest there’s ample room for more.  This hasn’t happened in part because people are tied down by the homes that they own and are reluctant to sell while they are underwater.  That people are unable to ignore sunk costs isn’t economically rational, of course, but it nevertheless governs how many people consider whether to move.

In other words, an argument could be made that instead of taking measures to boost homeownership, a better approach to jumpstarting the economy might be to reduce incentives to homeownership and let the proportion of people who own homes fall.  There’s no reason to think that lower homeownership rates would reduce spending on housing:  people have to live somewhere, and fewer home owners would simply mean more renters.  If the average size of a family’s home shrinks slightly because of it, it’s hard to see what the harm would be in that — home sizes increased by one-third from the 1980s to the early 2000s, so it’s not like we’re returning to the world of tenements.  The net result of pulling back on homeownership incentives would be that new families would wait another year or two before buying the home that becomes their family home, and fewer singles would buy — salutary developments, I would argue.

And I’m morally obligated here to point out that the costliest incentive for homeownership — the mortgage interest deduction — does absolutely nothing to increase homeownership rates, since only the wealthiest third of all households can avail themselves of its benefits.  The amount of the tax subsidy from the deduction that goes to homeowners in Greenwich, Connecticut, is an order of magnitude greater than the benefits for people in Mossville, Illinois.

Above all else we need to help policymakers get away from this mindset that our ample housing subsidies benefit the economy by creating jobs building homes.  Demand-side fiscal incentives — and that’s 90% of the current political arguments for housing subsidies — are a chimera.  If we spent less on housing we’d spend more somewhere else in the economy.  This notion that the economy consists of various silos — like housing and autos — and that a reduction in any of these is an unmitigated bad thing is a lousy way to approach how an economy works.  The more we spend on building new houses the less money is available for investments in things that might actually boost the productive capacity of an economy.  In other words, the demand-side incentives of housing may reduce the productive capacity of the economy (the supply side of the economy) and with it long-term economic growth.

There’s no disputing that our capital markets aren’t working efficiently at the moment.  Some of this has to do with the collective shell shock many financial institutions still have over the financial market implosion in 2008.  However, government activities like the passage of Dodd-Frank, the management of Fannie Mae and Freddie Mac, the attempt by the CFPB to wipe out title and payday loan companies (with not a few installment loan companies caught in the crossfire), and the punitive fines assessed on various banks for their alleged misdoings (or in the case of the Bank of America, for simply doing what it was asked to do by the government) have left banks extremely hesitant to make anything but the safest loans.  It’s hard to see what the government can do to convince lenders they won’t be accused of exploiting borrowers with poor credit risks again if there’s another recession in the near future.

Capital markets need better and smarter regulation, but the fact that homeownership rates are falling is not a reason to act.

[Cross-posted from Alt-M.org]

Giancarlo Ibarguen, the former president of Francisco Marroquin University (UFM) in Guatemala, passed away today.

Giancarlo was a friend and teacher to many of us in the international freedom movement, and especially in Latin America. His influence at the University, the center of classical-liberal thought in the region, was large. He was an advocate of innovative and age-old techniques to promote ideas and learning. As Argentine scholar Martin Krause notes, he was an enthusiastic proponent of the University’s “New Media” program and of the Socratic method of teaching. As its chairman and founder, he was the proud backer of the Antigua Forum, a novel way of bringing together distinguished thinkers, entrepreneurs and others to solve real world problems. Giancarlo played no small role in making UFM among the most modern universities in the region, something to which thousands of UFM alums and countless visiting professors and other scholars from the Americas can attest. I was proud that, under Giancarlo’s encouragement, we began the first of our successful series of Cato University seminars for Latin Americans at UFM seven years ago.

In addition to strengthening classical liberalism through UFM, Giancarlo did so as a member of the board of directors of Liberty Fund, as a president and vice president of the Association of Private Enterprise Education, and as secretary of the Mont Pelerin Society. His interest in making the world of ideas relevant to improving the way people lived led him to advocate both the importance of liberal principles and of public policy reform. In terms of the latter, Giancarlo was an architect, along with Tom Hazlett, of Guatemala’s successful telecommunications privatization, putting the country on the vanguard in that policy area.

Most of us who knew Gianca, as his friends called him, will remember him for his commitment to the “principles of a society of free and responsible persons,” which was also UFM’s mission. Like his mentor Muso Ayau, the founder of the university, Gianca embodied the spirit of liberalism. He was tolerant, curious, modest about his own knowledge and accomplishments, courteous, open-minded and confident about the human potential. He urged students to question everything and to always question themselves. When Muso Ayau died, he told me that one of the things that most impressed him about Muso was that he had “a very strong sense of right and wrong.” The same could be said about Giancarlo.

Giancarlo died of a debilitating disease that he had been battling for several years. To those of us who interacted with him during this time mostly from afar, there was never any indication that anything was wrong, though his condition was no secret and we of course knew better. He kept extremely engaged, responding quickly to emails, sending personal notes and suggestions, recommending readings or events on Twitter, etc. He was a constant source of optimism and inspiration. To the end, he was a model of dignity.

The late publisher of the National Review, Bill Rusher, used to urge the new hires at his magazine to remain on guard. “Politicians will always disappoint you,” he warned. True enough, though sometimes disappointment gives way to disgust. For once, I am not talking about the Republican and Democratic frontrunners, but the socialist Senator from Vermont.

During the recent Democratic debate in economically distressed and racially diverse Flint, Mich., Sen. Sanders pandered to the black electorate in an attempt to outflank Clinton on the issue of race. In an answer to a question, “What racial blind spot do you have?” Sanders responded, “When you’re white… [you] don’t know what it’s like to be poor.” Well… There are some 4,000,000 Americans who came to the United States from Eastern Europe after the fall of the Berlin Wall and they do remember poverty as well as the economic system (oh, the irony) that produced it – socialism. So, below is a HumanProgress chart comparing average incomes in the United States and Eastern Europe over the last 65 years, as well as a telling video of empty shelves in a Soviet grocery store circa 1990.

   


Here in America, you’d be forgiven for believing that things are on a downward spiral, as Donald Trump’s disturbing success in various primaries raises the real, and terrifying, prospect that he will be the Republican nominee. So if constant media coverage of the primary season depresses you, you could do worse than consider recent developments in the Middle East, where something truly unusual has been happening in the last few weeks. With a fragile ceasefire in Syria and diplomatic negotiations in Yemen, things actually appear to be improving.

Though these developments are tenuous – and each has many problems – they show the value of diplomatic and even incremental approaches to resolving the region’s ongoing conflicts.

It’s technically incorrect to refer to the current situation in Syria as a ceasefire. For starters, it doesn’t actually prohibit attacks by any party against the conflict’s most extreme groups, ISIS and Jabhat al Nusra. And unlike a true ceasefire, there is no official on-the-ground monitoring and compliance system. Instead, that role is filled in a more ad hoc way by a communications hotline between Russia and the United States as members of the International Syria Support Group.

There are other problems with the agreement too, particularly its role in freezing the conflict in a way which is extremely advantageous to the Syrian government and its Russian backers. While this was perhaps unavoidable – Russia would probably not have agreed otherwise – it will reduce the bargaining power of the Syrian opposition in peace talks when they restart on March 14th.

Nonetheless, it’s estimated that the cessation of hostilities – which has held for almost two weeks – has dropped the level of violence and death toll inside Syria by at least 80 percent. Violence has dropped so much that anti-regime protestors were able to engage in peaceful protest marches in several towns. Likewise, despite delivery problems and delays, humanitarian aid is flowing into some areas of Syria for the first time in years.  These small advances are all the more astounding given how unthinkable they seemed even a few months ago.

Progress in Yemen is less spectacular, but still encouraging. Following negotiations mediated by northern Yemeni tribal leaders, the combatants arranged a swap of Jaber al-Kaabi, a Saudi soldier, for the release of seven Yemeni prisoners. At the same time, a truce along the Saudi-Yemeni border is allowing much-needed humanitarian aid to flow into the country.

Again, these are at best tiny steps towards resolving the conflict, which has lasted almost a year and produced extremely high levels of civilian casualties. The truce is temporary and confined to the border region; Saudi airstrikes continue near the contested town of Ta’iz. Yet the negotiations mark the first direct talks between Houthi rebels and the Saudi-led coalition, which had previously insisted that they would deal with the Houthis only through the exiled Hadi government.

In both Syria and Yemen, observers are quick to point out the tenuous nature of these developments, and it is certainly true that any political settlement in either conflict remains an uphill battle. But I prefer to view these developments in a more positive light. As numerous post-Soviet frozen conflicts have demonstrated, ceasefires do not necessarily resolve the major disputes which precipitated the conflict originally. Yet even if the end result is not a more comprehensive peace deal, the lower levels of violence and improved access to humanitarian aid can dramatically improve life for civilians. In Syria in particular, this represents a small - but notable - victory for diplomacy. 

Last week, the Cato Institute held a policy forum on school choice regulations. Two of our panelists, Dr. Patrick Wolf and Dr. Douglas Harris, were part of a team that authored one of the recent studies finding that Louisiana’s voucher program had a negative impact on participating students’ test scores. Why that was the case – especially given the nearly unanimously positive previous findings – was the main topic of our discussion. Wolf and I argued that there is reason to believe that the voucher program’s regulations might have played a role in causing the negative results, while Harris and Michael Petrilli of the Fordham Institute pointed to other factors. 

The debate continued after the forum, including a blog post in which Harris raises four “problems” with my arguments. I respond to his criticisms below.

The Infamous Education Productivity Chart

Problem #1: Trying to discredit traditional public schools by placing test score trends and expenditure changes on one graph. These graphs have been floating around for years. They purport to show that spending has increased much faster than expenditures [sic], but it’s obvious that these comparisons make no sense. The two things are on different scales. Bedrick tried to solve this problem by putting everything in percentage terms, but this only gives the appearance of a common scale, not the reality. You simply can’t talk about test scores in terms of percentage changes.

The more reasonable question is this: Have we gotten as much from this spending as we could have? This one we can actually answer and I think libertarians and I would probably agree: No, we could be doing much better than we are with current spending. But let’s be clear about what we can and cannot say with these data.

Harris offers a reasonable objection to the late, great Andrew Coulson’s infamous chart (shown below). Coulson already addressed critics of his chart at length, but Harris is correct that the test scores and expenditures do not really have a common scale. That said, the most important test of a visual representation of data is whether the story it tells is accurate. In this case, it is, as even Harris seems to agree. Adjusted for inflation, spending per pupil in public schools has nearly tripled in the last four decades while the performance of 17-year-olds on the NAEP has been flat. 

Producing a similar chart with data from the scores of younger students on the NAEP would be misleading because the scale would mask their improvement. But for 17-year-olds, whose performance has been flat on the NAEP and the SAT, the story the chart tells is accurate.
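For readers unfamiliar with the indexing at issue, this is all the “percentage terms” approach does: express each series as its cumulative percentage change from a base year so that both can be drawn on a single axis. A minimal sketch, using invented numbers rather than the actual spending or NAEP data:

```python
# Index two series to their base-year value so both can share one axis.
# The numbers below are invented for illustration; they are not the real data.
spending = [5000, 9000, 13000, 14500]  # per-pupil spending, constant dollars (illustrative)
scores = [285, 286, 287, 287]          # NAEP-style scale scores (illustrative)

def percent_change_from_base(series):
    base = series[0]
    return [round((x / base - 1) * 100, 1) for x in series]

print(percent_change_from_base(spending))  # [0.0, 80.0, 160.0, 190.0] - large cumulative increase
print(percent_change_from_base(scores))    # [0.0, 0.4, 0.7, 0.7] - nearly flat
```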

Voucher Regulations Are Keeping Private Schools Away

Problem #2: Repeating arguments that have already been refuted. Bedrick’s presentation repeated arguments about the Louisiana voucher case that I already refuted in a prior post. Neither the NBER study nor the survey by Pat Wolf and his colleagues provide compelling evidence that existing regulations are driving out potentially more effective private schools in the Louisiana voucher program, which was a big focus of the panel.

Here Harris attacks a claim I did not make. He is correct that there is no compelling evidence that regulations are driving out higher-quality private schools, but no one claimed that there was. Rather, I have repeatedly argued that the evidence was “suggestive but not conclusive” and speculated in my presentation that “if the enrollment trends are a rough proxy [for quality], though we can’t prove this, then it would suggest that the higher-quality schools chose not to participate” while lower-quality schools did.

Moreover, what Harris claims he refuted he actually merely disputed – and not very persuasively. In the previous post he mentions, he minimized the role that regulation played in driving away private schools:

As I wrote previously, the study he cites, by Patrick Wolf and colleagues, actually says that what private schools nationally most want changed is the voucher’s dollar value. In Louisiana, the authors reported that “the top concern was possible future regulations, followed by concerns about the amount of paperwork and reports. When asked about their concerns relating to student testing requirements, a number of school leaders expressed a strong preference for nationally normed tests” (italics added). These quotes give a very different impression that [sic] Bedrick states. The supposedly burdensome current regulations seem like less of a concern than funding levels and future additional regulations–and no voucher policy can ever insure against future changes in policy.

Actually, the results give a very different impression than Harris states. The quote Harris cites from the report is regarding the concerns of participating schools, but the question at hand is why the nonparticipating schools opted out of the voucher program. Future regulations were still the top concern for nonparticipating schools, but current regulations were also major concerns. Indeed, the study found that 9 of the 11 concerns that a majority of nonparticipating private schools said played a role in their decision not to participate in the voucher program related to current regulations, particularly around admissions and the state test.

Source: ”Views from Private Schools,” by Brian Kisida, Patrick J. Wolf, and Evan Rhinesmith, American Enterprise Institute (page 19)

Nearly all of the nonparticipating schools’ top concerns related to the voucher program’s ban on private schools using their own admissions criteria (concerns 2, 3, 5, 7, 8 and 11) or requiring schools to administer the state test (concerns 6, 9, 10, and possibly 7 again). It is clear that these regulations played a significant role in keeping private schools away from the voucher program. The open question is whether the regulations were more likely to drive away higher-quality private schools. I explained why that might be the case, but I have never once claimed that we know it is the case.

Market vs. Government Regulations in Education

Problem #3: Saying that unregulated free markets are good in education because they have been shown to work in other non-education markets. […] For example, the education market suffers from perhaps the worst information problem of any market–many complex hard-to-measure outcomes most of which consumers (parents) cannot directly observe even after they’ve chosen a school for their child. Also, since students can realistically only attend schools near their homes, and there are economies of scale in running schools, that means there will generally be few practical options (unless you happen to live in a large city with great public transportation–very rare in the U.S.). And the transaction costs are very high to switch schools. And there are equity considerations. And … I could go on.

Harris claims that a free market in education wouldn’t work because education is uniquely different from other markets. However, the challenges he lists – information asymmetry, difficulty measuring intangible outcomes, difficulties providing options in rural areas, transaction costs for switching schools – aren’t unique to K-12 education at all. Moreover, there is no such thing as an “unregulated” free market because market forces regulate. As I describe below, while not perfect, these market forces are better suited than the government to address the challenges Harris raises. 

Information asymmetry and hard-to-measure/intangible outcomes

Parents need information in order to select quality education providers for their children. But are government regulations necessary to provide that information? Harris has provided zero evidence that they are, but there is much evidence to the contrary. Here the disparity between K-12 and higher education is instructive. Compared to K-12, colleges and universities operate in a relatively free market. Certainly, there are massive public subsidies, but they are mostly attached to students, and colleges have maintained meaningful independence. Even Pell vouchers do not require colleges to administer particular tests or set a single standard that all colleges must follow.

So how do families determine if a college is a good fit or not? There are three primary mechanisms they use: expert reviews, user reviews, and private certification.

The first category includes the numerous organizations that rate colleges, including U.S. News & World Report, the Princeton Review, Forbes, the Economist, and numerous others like them. These are similar to the sorts of expert reviews, like Consumer Reports, that consumers regularly consult when buying cars, computers, or electronics, or even when hiring lawyers – all industries where the non-expert consumer faces a significant information asymmetry problem.

The second category includes the dozens of websites that allow current students and alumni to rate and review their schools. These are similar to Yelp, Amazon.com, Urban Spoon and numerous other platforms for end-users to describe their personal experience with a given product or service.

Finally, there are numerous national and regional accreditation agencies that certify that colleges meet a certain standard, similar to Underwriters Laboratories for consumer goods. This last category used to be private and voluntary, although now it is de facto mandatory because accreditation is needed to get access to federal funds. 

None of these are perfect, but then again, neither are government regulations. Moreover, the market-based regulators have at least four major advantages over the government. First, they provide more comprehensive information about all those hard-to-measure and intangible outcomes that Harris was concerned about. State regulators tend to measure only narrow and more objective outcomes, like standardized test scores in math and English or graduation rates. By contrast, the expert and user reviews consider return-on-investment, campus life, how much time students spend studying, teaching quality, professor accessibility, career services assistance, financial aid, science lab facilities, study abroad options, and much more. 

Second, the diversity of options means parents and students can better identify the best fit for them. As Malcolm Gladwell observed, different people give different weights to different criteria. A family’s preferences might align better with the Forbes rankings than the U.S. News rankings, for example. Alternatively, perhaps no single expert reviewer captures a particular family’s preferences, in which case they’re still better off consulting several different reviews and then coming to their own conclusion. A single government-imposed standard would only make sense if there was a single best way to provide (or at least measure) education, we knew what it was, and there was a high degree of certainty that the government would actually implement it well. However, that is not the case.

Third, a plethora of private certifiers and expert and user reviews are less likely to create systemic perverse incentives than a single, government standard. As it is, the hegemony of U.S. News & World Report’s rankings created perverse incentives for colleges to focus on inputs rather than outputs, monkey around with class sizes, send applications to students who didn’t qualify to increase their “selectivity” rating, etc. If the government imposed a single standard and then rewarded or punished schools based on their performance according to that standard, the perverse incentives would be exponentially worse. The solution here is more competing standards, not a single standard. 

Fourth, as Dr. Howard Baetjer Jr. describes in a recent edition of Cato Journal, whereas “government regulations have to be designed based on the limited, centralized knowledge of legislators and bureaucrats, the standards imposed by market forces are free to evolve through a constant process of evaluation and adjustment based on the dispersed knowledge, values, and judgment of everyone operating in the marketplace.” As Baetjer describes, the incentives to provide superior standards are better aligned in the market than for the government: 

Incentives and accountability also play a central role in the superiority of regulation by market forces. First, government regulatory agencies face no competition from alternative suppliers of quality and safety assurance, because the regulated have no right of exit from government regulation: they cannot choose a better supplier of regulation, even if they want to. Second, government regulators are paid out of tax revenue, so their budget, job security, and status have little to do with the quality of the “service” they provide. Third, the public can only hold regulators to account indirectly, via the votes they cast in legislative elections, and such accountability is so distant as to be almost entirely ineffectual. These factors add up to a very weak set of incentives for government regulators to do a good job. Where market forces regulate, by contrast, both goods and service providers and quality-assurance enterprises must continuously prove their value to consumers if they are to be successful. In this way, regulation by market forces is itself regulated by market forces; it is spontaneously self-improving, without the need for a central, organizing authority. 

In K-12, there are many fewer private certifiers, expert reviewers, or websites for user reviews, despite a significantly larger number of students and schools. Why? Well, first of all, the vast majority of students attend their assigned district school. To the extent that those schools’ outcomes are measured, it’s by the state. In other words, the government is crowding out private regulators. Even still, there is a small but growing number of organizations like GreatSchools, Private School Review, School Digger, and Niche that are providing parents with the information they desire.

Options in rural areas

First, it should be noted that, as James Tooley has amply documented, private schools regularly operate – and outperform their government-run counterparts – even in the most remote and impoverished areas in the world, including those areas that lack basic sanitation or electricity, let alone public transportation. (For that matter, even the numerous urban slums where Tooley found a plethora of private schools for the poor lack the “great public transportation” that Harris claims is necessary for a vibrant education market.) Moreover, to the extent rural areas do, indeed, present challenges to providing education, such challenges are far from unique. Providers of other goods and services also must contend with reduced economies of scale, transportation issues, etc.

That said, innovations in communication and transportation mean these obstacles are less difficult to overcome than ever before. Blended learning and course access are already expanding educational opportunities for students in rural areas, and the rise of “tiny schools” and emerging ride-sharing operations like Shuddle (“Uber for kids”) may soon expand those opportunities even further. These innovations are more likely to be adopted in a free-market system than a highly government-regulated one.

Test Scores Matter But Parents Should Decide 

Problem #4: Using all this evidence in support of the free market argument, but then concluding that the evidence is irrelevant. For libertarians, free market economics is mainly a matter of philosophy. They believe individuals should be free to make choices almost regardless of the consequences. In that case, it’s true, as Bedrick acknowledged, that the evidence is irrelevant. But in that case, you can’t then proceed to argue that we should avoid regulation because it hasn’t worked in other sectors, especially when those sectors have greater prospects for free market benefits (see problem #3 above). And it’s not clear why we should spend a whole panel talking about evidence if, in the end, you are going to conclude that the evidence doesn’t matter.

Once again, Harris misconstrues what I actually said. In response to a question from Petrilli regarding whether I would support “kicking schools out of the [voucher] program” if they performed badly on the state test, I answered:

No, because I don’t think it’s a wise move to eliminate a school that parents chose, which may be their least bad option. We don’t know why a parent chose that school. Maybe their kid was being bullied at their local public school. Maybe their local public school that they were assigned to was not as good. Maybe there was a crime problem or a drug problem.

We’re never going to have a perfect system. Libertarians are not under the illusion that all private schools are good and all public schools are bad… Given the fact that we’ll never have a perfect system, what sort of mechanism is more likely to produce a wide diversity of options, and foster quality and innovation? We believe that the market – free choice among parents and schools having the ability to operate as they see best – has proven over and over again in a variety of industries to have better outcomes than Mike Petrilli sitting in an office deciding what quality is… as opposed to what individual parents think [quality] is.

Harris then responded by claiming that I was saying the evidence was “irrelevant,” to which I replied:

It’s irrelevant in terms of how we should design the policy, in terms of whether we should kick [schools] out or not, but I think it’s very important that we know how well these programs are working. Test scores do measure something. They are important. They’re not everything, but I think they’re a pretty decent proxy for quality…

In other words, yes, test scores matter. But they are far from the only things that matter. Test scores should be one of many factors that inform parents so that they can make the final decision about what’s best for their children, rather than having the government eliminate what might well be their least bad option based on a single performance measure. 

I am grateful that Dr. Harris took the time both to attend our policy forum and to continue the debate on his blog afterward. I look forward to continued dialogue regarding our shared goal of expanding educational opportunity for all children.

According to many politicians and pundits, new financial regulation adopted since 2008 means that financial crises are now less likely than before. President Barack Obama, for example, has suggested that 

Wall Street Reform now allows us to crack down on some of the worst types of recklessness that brought our economy to its knees, from big banks making huge, risky bets using borrowed money, to paying executives in a way that rewarded irresponsible behavior.

Similarly, Paul Krugman writes that 

financial reform is working a lot better than anyone listening to the news media would imagine…Did reform go far enough? No. In particular, while banks are being forced to hold more capital, a key force for stability, they really should be holding much more. But Wall Street and its allies wouldn’t be screaming so loudly, and spending so much money in an effort to gut the law, if it weren’t an important step in the right direction. For all its limitations, financial reform is a success story.

Krugman is right that, other things equal, forcing banks to issue more capital should reduce the risk of crises.

But other things have not remained equal. According to Liz Marshall, Sabrina Pellerin, and John Walter of the Richmond Federal Reserve Bank, the federal government is now protecting a much higher share of private financial sector liabilities than before the crisis.

If more private liabilities are explicitly or implicitly guaranteed, private parties will at some point take even greater risks than in earlier periods. And experience from 2008 suggests that government will always bail out major financial intermediaries if risky bets turn south.

So, some of the new regulation may have reduced the risk of financial crises; but other government actions have done the opposite. Time will tell which effect dominates. 

It may not seem necessary to say these two things, but here goes: (1) no person or group of people is omniscient, and (2) all people are different. Why do I state these realities? Because Common Core supporters sometimes seem to need reminders.

Writing on his New York Times blog, the New America Foundation’s Kevin Carey takes Donald Trump to task for saying that if elected he would eliminate the Common Core. Fair enough, though just as Washington strongly coerced adoption of the Core – a reality Carey deceptively sidesteps by saying states “voluntarily” adopted it – the feds could potentially attach money to dropping it. But that would be no more constitutional than the initial coercion, and the primary coercive mechanism – the Race to the Top – was basically a one-shot deal (though reinforced to an appreciable extent by No Child Left Behind waivers).

Carey is also reasonably suspicious of Trump’s suggestion that local control of education works best. Contrary to what Carey suggests, we don’t have good evidence that state or federal control is better than local control – meaningful local control has been withering away for probably over a century, and some research does support local control – but it is certainly the case that lots of districts have performed poorly and suffer from waste, paralysis, etc. But then we get this:

But states and localities, in a sense, don’t actually have the ability to set educational standards, even if they choose to. The world around us ultimately determines what students need to learn — the demands of highly competitive and increasingly global labor markets, the admissions requirements of colleges and universities, and the march of scientific progress.

The only choice local schools have is whether they will try to meet those expectations. The Common Core is simply a way of organizing and articulating standards that already exist, for the benefit of students, parents and teachers, so that schooling makes sense when children move between different grades, schools, districts and states.

Oh, the Core hubris! While it is true that all people have to respond to the world around them – no man nor district is an island – it is confidence to a fault to suggest that the Common Core has captured exactly what labor markets, colleges, and “the march of scientific progress” demand. At the very least, proof of that would be greatly appreciated – some content experts certainly disagree – but even heaps of evidence about what exists now cannot demonstrate that the Core also anticipates the demands made by future progress. And is it truly realistic to imply that all people face the same demands? The student who wants to become a physicist? A welder? An accountant? A manicurist? A park ranger? A…you get the point.

The irony is that this sort of argument for the Core is perfectly in line with what a lot of people seem to like about Trump: He tells them he’ll just make stuff happen, no need to go deeper! Indeed, Carey even invokes “American greatness” in arguing for the Core. Sound familiar?

While I have my concerns about the content of the Core, I am not an expert on curriculum and think there may well be excellent components to it. I also, however, know enough about humanity to know that no one is omniscient, all people are unique individuals, and a single solution in a complex world is rarely as perfect as supporters would have us believe.

Today we are pleased to launch the Spanish-language Library of Liberty, a project of the Cato Institute—through our website in Spanish, Elcato.org—and Liberty Fund. The library will allow people in Latin America, Spain and beyond to have access to classic works on liberty in Spanish and in various online formats covering a range of topics including economics, law, history, philosophy and political theory.

The first books in the collection include:

  • Bases and Starting Points for the Political Organization of the Argentine Republic by Juan Bautista Alberdi
  • The Road to Serfdom by Friedrich A. Hayek
  • Essay on the Nature of Trade in General by Richard Cantillon
  • Essays on Freedom and Power by John Emerich Edward Dalberg-Acton
  • The Declaration of Independence and the Constitution of the United States of America by Thomas Jefferson, James Madison and others
  • Freedom and the Law by Bruno Leoni
  • Selected Works by Frédéric Bastiat
  • Planning for Freedom by Ludwig von Mises
  • On Power by Bertrand de Jouvenel
  • Theory of the Cortes or the Great National Congresses of the Kingdoms of Leon and Castile by Francisco Martínez Marina

This project is especially important in the Spanish-speaking world, where the predominant texts on the market and in academia promote ideas and interpretations of history that are hostile to free societies. The vast majority of students and lay persons in Latin America or Spain, for example, have never been exposed to classics such as The Road to Serfdom by Nobel laureate F. A. Hayek, much less had the ability to access the book. Indeed, even if one knew and wished to read these books, it is typically hard to find them in a Spanish-language bookstore. 

It should be no surprise that Spain and its former colonies, burdened with a centuries-long legacy of mercantilism and absolutism, would prove a difficult terrain for the dissemination of classic works on liberty. This is the case, for example, of the Essay on the Nature of Trade in General by Richard Cantillon and of many other texts that demolished arguments for mercantilism. Even the writings of many Latin American founding fathers are still unknown within the Spanish-speaking world, like those by the Argentinean Juan Bautista Alberdi. In his book Bases and Starting Points for the Political Organization of the Argentine Republic, Alberdi states:

“The Spanish colonies were formed for the Treasury, not the Treasury for the colonies. Their legislation was consistent with their fate: they were created to increase tax revenues. In the face of the fiscal interest, the interest of the individual was non-existent. Upon beginning the revolution, we wrote the inviolability of private law into our constitutions; but we left the enduring presence of the ancient cult of the fiscal interest. So, despite the revolution and independence, we have continued to be republics made for the Treasury.”

That text and others in this collection examine the challenge of liberty against power. As David Boaz states in his introduction to Libertarianism: A Primer:

“In a sense there have always been but two political philosophies: liberty and power. Either people should be free to live their lives as they see fit, as long as they respect the equal rights of others, or some people should be able to use force to make other people act in ways they wouldn’t choose.” 

We hope that this Library of Liberty, to which we will continue to add works, will contribute to the spread of the ideas of liberty in the Spanish-speaking world so that societies pursue, in the words of Lord Acton, freedom as “the highest political end. It is not for the sake of a good public administration that it is required, but for the security in the pursuit of the highest objects of civil society, and of private life.”  

The Spanish-language Library of Liberty can be accessed through Elcato.org and LibertyFund.org.

On April 1st, the federal government will begin accepting petitions for hiring workers on the H-1B visa – a temporary visa for skilled workers.  H-1B visas for highly skilled workers are annually limited to 85,000 for private firms.  There is no numerical limit for H-1Bs employed at non-profit research institutions affiliated with universities.  The numerical cap for private firms was reached in fiscal years 2015 and 2016 within seven days of applications opening.  During poor economic times the cap can take months to fill, but it has filled without fail every year except 2001 to 2003, when the cap was increased to 195,000 annually amid a weak economy.[i] 

There are obvious economic benefits from adding more skilled workers, so the numbers should be expanded greatly, preferably without government-created limits. Taking a page from the Senate’s 2013 immigration reform bill (S. 744), one way to expand the numbers and adjust them annually would be through a formula that takes labor market conditions into account.  Such a formula could be a big improvement over the current system, but it also carries several risks. 

There are some rules of thumb the government should follow if it chooses to create such a formula.  It should be simple and based on publicly available economic data like the unemployment rate.  The formula should not include variables such as the opinions of various stakeholders or appointed officials.  For example, unions or technology firms should not be able to pull a number from their respective black boxes to influence the outcome: any decision should be based purely on publicly available economic data.  Finally, if guest worker visas are assigned to sector- or occupation-specific areas of the economy, the economic data applicable to that sector or occupation should be the only data relevant in calculating the number of visas issued. 
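
To make that recommendation concrete, here is a minimal sketch, in Python, of what a simple, data-driven adjustment formula might look like. It is purely illustrative and not drawn from S. 744 or any actual proposal; the baseline cap, target unemployment rate, sensitivity, floor, and ceiling are hypothetical parameters chosen only for the example.

    # Purely illustrative sketch of a simple, transparent visa-cap formula.
    # All parameters (baseline cap, target unemployment rate, sensitivity,
    # floor, ceiling) are hypothetical; they are not drawn from S. 744 or
    # any actual proposal.
    def adjusted_visa_cap(unemployment_rate: float,
                          baseline_cap: int = 85_000,
                          target_rate: float = 5.0,
                          sensitivity: float = 0.10,
                          floor: int = 20_000,
                          ceiling: int = 200_000) -> int:
        """Scale the cap up when unemployment is below the target and down
        when it is above, using only publicly available data as input."""
        # Each percentage point of unemployment below (above) the target
        # raises (lowers) the cap by sensitivity * baseline_cap.
        gap = target_rate - unemployment_rate
        cap = baseline_cap * (1 + sensitivity * gap)
        return int(round(min(max(cap, floor), ceiling)))

    # A tight labor market raises the cap; a weak one lowers it.
    print(adjusted_visa_cap(4.0))  # 93500
    print(adjusted_visa_cap(8.0))  # 59500

Because every input is a published statistic, anyone could recompute the resulting cap and verify it independently, which is precisely the property these rules of thumb are meant to secure.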

Learning from S. 744

S. 744 included a formula that would adjust the number of W-visas issued each year beginning four years after it was created.  The formula was complex and included:

  • The number of applications for the W-visa in the previous year,
  • The number of job positions open for the visa,
  • The number of currently unemployed Americans,
  • The number of unemployed Americans last year,
  • The current Bureau of Labor Statistics (BLS) job openings,
  • The number of BLS job openings from last year, and
  • Numerical recommendations of the Commissioner, or Migration Czar, of the newly-created Bureau of Immigration and Labor Market Research. 

The formula is a manageable improvement over the current restrictive system except for the last two points.

Migration Czar

A Migration Czar’s decisions would be very important in setting the number of W-visas because his decision is heavily weighted.  His input increases economic uncertainty because it could change without regard to any external factor. In a Democratic administration, the appointed Migration Czar could be a supporter of organized labor and, thus, likely recommend lower numbers.  In a Republican administration, the Migration Czar could be a supporter of employers and, thus, likely recommend higher visa numbers.  Far from creating an objective means of determining future visa flows, the Migration Czar could skew the number of visas toward political considerations and away from economic ones. 

Under the formula, if the number of current BLS job openings were as high as its January 2001 peak (according to historical data) and the number of unemployed Americans fell to 5.5 million (the lowest number recorded on an annualized basis since 1980), the formula would still not grant the maximum number of W-visas unless the Migration Czar recommended that maximum for two consecutive years.  If the Migration Czar instead recommended that only 100,000 visas be issued annually under those same prosperous economic conditions, then the number of W-visas issued would actually decline for two years before climbing (assuming the number of applications increased by 25 percent per year). 

Previous Years Have Too Much Influence

The formula is too dependent upon the previous year’s number of W-visa positions and applications.  Including the previous year’s numbers slows the growth in the number of visas available during times of economic expansion, precisely when the number of visas needs to increase rapidly in response to a growing economy.  If the visa numbers don’t expand fast enough, illegal immigration will likely increase to fill the gap, undermining one of the best arguments for a large market-based guest worker visa program.  
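
To see why this matters, consider a toy simulation (again hypothetical, and not the actual S. 744 formula) in which each year’s cap is a weighted average of the previous year’s cap and current demand. The weight and the demand path below are invented for illustration; the point is simply that the heavier the weight on the previous year, the longer the cap lags behind a sudden increase in demand.

    # Toy illustration (not the actual S. 744 formula) of how anchoring the
    # cap to the previous year's figures slows its response to rising demand.
    # The weight and the demand path are hypothetical.
    def simulate_caps(initial_cap: float, demand_by_year: list,
                      weight_on_previous_year: float = 0.7) -> list:
        """Each year's cap is a weighted average of last year's cap and
        current demand, so a higher weight means slower adjustment."""
        caps = [initial_cap]
        for demand in demand_by_year:
            caps.append(weight_on_previous_year * caps[-1]
                        + (1 - weight_on_previous_year) * demand)
        return caps[1:]

    # Demand jumps from 100,000 to 200,000 positions and stays there.
    for year, cap in enumerate(simulate_caps(100_000, [200_000] * 5), start=1):
        print(f"Year {year}: cap is roughly {cap:,.0f}")
    # Year 1: ~130,000; Year 2: ~151,000; Year 5: ~183,000; the cap is
    # still short of demand years later, a gap illegal hiring could fill.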

Conclusion

The W-visa’s method of adjusting the number of visas was a good start and probably the best that could have arisen from the acrimonious negotiations that birthed it.  The coming petition storm for H-1B visas will likely exhaust all of the available visas (for businesses) within a week.  The S. 744 W-visa provides some helpful ways to think about redesigning the quota system for the H-1B or other visas along the lines of a formula.  However, before any similar visa adjustment program is created in the future, two big changes should be made:

  • The Migration Czar position at the Bureau of Immigration and Labor Market Research should be eliminated.  The formula is complex enough without adding in the possibility of further regulatory capture and rent-seeking.  The immigration system is already political enough.  Adding a political appointee whose recommendations have an enormous amount of power to set the W-visa quota will only intensify the partisan political influence on our migration system. 
  • The previous year’s number of W-visa applications and positions should not be counted as variables.  The future growth of the program during periods of expanding economic activity should not be constrained by the previous year’s applications.   

If the goal is to create a market-based migration system, then that system should rely on the market to set the number in an uncapped program where workers can switch jobs without seeking ex ante government permission. Prices are a far better regulator of the numbers than quotas set by macro-level economic indicators.  It’s far better to rely on the market than on a clunky formula that masquerades as a market-based mechanism.


 

A new piece of scientific research hit the presses last week. It reported finding more warming in one of the (several) satellite-observed temperature histories of the earth’s lower atmosphere than had been previously reported. As these satellite-measured temperatures were the recent subject of comments made by presidential candidate Ted Cruz, a lot of scrutiny and interest surrounds these new findings—findings which seemed to refute some of Cruz’s assertions.

In researching his story on the new study, the Associated Press’s Seth Borenstein solicited my opinion about them and how they may alter climate change skeptics’ way of thinking about the satellite-observed temperatures—temperature datasets which had previously shown precious little warming over the past nearly two decades.

I was happy to offer my thoughts, and equally happy to see some of them reflected in Seth’s AP story. Given topical and length constraints, understandably, Seth had to be selective.

But I do have a bit more to say about the new research finding besides that it “shows ‘how messy the procedures are in putting the satellite data together.’”

Many of my additional thoughts were included in my broader email response to Seth’s initial inquiry and, with his permission, I am reproducing our correspondence below.

To Seth’s summary of my thoughts, I’d add “but even considering the new findings, the complete collection of satellite- and weather-balloon-observed temperature histories of the earth’s atmosphere indicates that climate models are projecting too much warming in this important region.”

Again, my thanks to Seth for reaching out to me in the first place. Here is our question-and-answer exchange:

Chip, 

Seeing that the climate doubter community has hinged so much on RSS and saying there has been no warming post 1997 – despite NOAA heat records in 1998, 2005, 2010, 2014 and 2015 – you’ve seen the RSS update that shows there has been warming in the last 18 years. I’m wondering what your thoughts are on it. Will you and those in your community keep using RSS, even if it shows no warming. Add to that the UAH record warming in February. Are satellites now contradicting the climate doubter community?

Thanks,
Seth

 

Seth,

Thanks for soliciting my opinion.

I can’t speak for the climate doubter community, however that is defined.

Personally, my doubts are not that human-caused climate change as a result of greenhouse gas emissions is not occurring and that a temperature rise as a result is not detectable in large spatial averages, but I have doubts that the change is taking place at the rate projected by the collection of climate models and that its effects are currently detectable on most smaller scale climate/weather metrics.

So with that out of the way, I’ll give some opinions as to the new RSS results and their importance to my way of thinking…

First off, as I have tweeted (https://twitter.com/PCKnappenberger/status/705515578325270529), the overall 1979-2014 trend in the RSS v4 MT data is still pretty far beneath the climate model expectations…far enough to continue to indicate a sizable discrepancy that needs further scientific attention.

Second, the trend in the new RSS v4 MT data is now the largest of any mid-tropospheric (MT) dataset (including the other satellite-based and weather-balloon-based compilations) over the 1979-2014 period (see the same tweet mentioned above, as well as this one, https://twitter.com/PCKnappenberger/status/705472903458914305, which shows the old and new RSS data in comparison to weather-balloon compilations).

Given these two things, I don’t think it helps settle any questions regarding the temperature behavior of the mid-troposphere.

But what it does do is shed more light on just how messy the procedures are in putting the satellite data together.  Decisions, guided by science but not specifically defined by it, occur at many points in the procedure. The new RSS paper again highlights how sensitive the final results are to those decisions. It is good that we have many different groups involved in assembling both the satellite history and the weather-balloon history. That these different groups provide answers that are pretty close to each other helps not so much to lower the uncertainty in any single result as to suggest that the general result is indicative of what is going on in the MT.  The new RSS v4 now lies outside the old envelope of these collective findings. It’ll either prove to move the science in a bit of a different direction, or prove to be an erroneous result.  Time will tell. 

As to the impact on the “pause,” IMO there was too much being made about the “pause” in the first place. No serious student of climate science thought that it would last forever.  The important thing about it was that it provided a challenge to climate science and prompted enhanced research into natural climate variability, climate sensitivity, and other important aspects of climate science. So that it’s now over comes as no surprise.  But, once the El Nino warming subsides, I think we’ll probably see a continuation of the modest (below model mean) rate of warming.

I hope this is useful.  If you have any further questions, I’d be more than happy to try to answer them.

-Chip

In addition to Seth’s story for the AP, more reactions to the new satellite study can be found at Watts Up With That, Climate Etc., and at Roy Spencer’s blog, among others.
