
We’ve been fighting over the Common Core national curriculum standards for years now, and at this point the people who “fact check” ought to know the facts. Also, at this point, I should be doing many other things than laying out basic truths about the Core. Yet here I am, about to fact-check fact-checking by The Seventy Four, an education news and analysis site set up by former television journalist Campbell Brown. Thankfully, I am not alone in having to repeat this Sisyphean chore; AEI’s Rick Hess did the same thing addressing Washington Post fact-checkers yesterday.

Because I have done this so many times before, what follows are relatively quick clarifications beneath the “facts” the “fact check” missed.

FACT: It was the states — more specifically the Council of Chief State School Officers and National Governors Association — that developed the standards. During the Obama administration, the Education Department has played no specific role in the implementation of those standards, and the classroom curriculum used to meet the broad goals set out in Common Core is created by districts and states, as it always has been. Further, states have made tweaks to the Common Core standards since their initial adoption and, in some cases, have decided to drop the standards entirely.

  • The Council of Chief State School Officers (CCSSO) and National Governors Association (NGA) are not states. They are, essentially, professional associations of governors and state superintendents. And they are definitely not legislatures, which, much more than governors, represent “the people” of their states. So no, it was not “states” that developed the standards.
  • The CCSSO and NGA explicitly called for federal influence to move states onto common, internationally benchmarked standards – what the Core is supposed to be – writing in the 2008 report Benchmarking for Success that the role of the federal government is to offer “incentives” to get states to use common standards, including offering funding and regulatory relief. See page 7 of the report, and note that the same information was once on the Common Core website but has since been removed.
  • The Common Core was dropped into a federally dictated system under the No Child Left Behind Act that required accountability based on state standards and tests, so Washington did have a role in overseeing “implementation” of the standards. And since what is tested for accountability purposes is what is supposed to get taught, it is very deceptive to say, simply, curriculum “is created by districts and states.” The curricula states create are supposed to be heavily influenced by the Core, and the math section in particular pushes specific content. Indeed, the Core calls specifically for instructional “shifts.” Oh, and the federal government selected and funded two consortia of states to create national tests – the Partnership for the Assessment of Readiness for College and Career (PARCC) and the Smarter Balanced Assessment Consortium (SBAC) – which the Department of Education, to at least some extent, oversaw.
FACT: States competing for Race to the Top funds in 2009 got more points on their application for the adoption of “internationally benchmarked standards and assessments that prepare students for success in college and the workplace.” Adopting those standards won a state 40 points out of 500 possible, according to the National Conference of State Legislatures.

Congress has not funded Race to the Top grants in the annual appropriations process for several years, and several states – notably Oklahoma and Indiana – have dropped the Common Core.

  • The $4.35 billion Race to the Top was the primary lever to coerce states into Core adoption, and it did far more than give 40 out of 500 points for adopting any ol’ “internationally benchmarked standards and assessments.” It came as close to saying Common Core as possible without actually saying Common Core, which, by the way, reporting by the Washington Post’s Lyndsey Layton suggested the Obama administration wanted to do, but was asked not to because the optics would be bad. So instead the regulations said that to get maximum points states would have to adopt standards common to a “majority” of states – a definition only met by the Core – and went to pains to make sure the adoption timelines suited the Core. Read all about it here, with special attention to page 59689. And note that maximum points were 50 for adopting standards and aligned tests, and 70 for doing that and supporting transition to the new standards and tests.
  • It is true that the Race to the Top pushing Core implementation only happened once – though in multiple phases – but the Obama administration later cemented it by only giving two choices of standards to get waivers out of the most dreaded parts of the No Child Left Behind Act: either have standards common to a “significant number of states,” or a public university system certify a state’s own standards as “college- and career-ready.” And all of this happened after states had promised to use the Core in Race to the Top; it would have been tough for state officials to suddenly say they would not use the Core because, well, they only promised to do so for the federal money.

FACT: Federal law already prohibits the government from forcing states to adopt Common Core. 

The Every Student Succeeds Act, which Obama signed into law in December, includes 13 references to the Common Core – all limitations on federal power to meddle in curriculum.

Specifically from the law: “No officer or employee of the federal government shall, through grants, contracts, or other cooperative agreements, mandate, direct, or control a state, local education agency, or school’s specific instructional content, academic standards and assessments, curricula, or other program of instruction…including any requirement, direction, or mandate to adopt the Common Core State Standards.”

To the contrary, ESSA specifically protects states’ rights to “enter into a voluntary partnership with another state to develop and implement” challenging academic standards.

  • This “fact” was invoked to counter promises by Republican presidential candidates to end Common Core if elected. And it is correct that the ESSA singles out the Core as something that cannot be specifically coerced. But, of course, that has already essentially happened, and it is worth noting that federal law has had paper prohibitions against federal influence over curriculum for decades. Precious good they did, not that forcing states to dump the Core would be any more constitutional than the original coercion.

FACT: The federal government already has a limited role in K-12 education. Particularly in the wake of the passage of the Every Student Succeeds Act, the primary federal roles are providing supplemental funds for the education of children in poverty (the Title I program), setting standards for the education of children with disabilities and helping fund those services (the Individuals with Disabilities Education Act), and ensuring children don’t go hungry (the school lunch program, which is run through the Agriculture Department).

The monetary role is small, too. According to federal data, between 1980 and 2011, between 7 and 13 percent of total annual education funding came from federal sources. And only about half of that funding in 2011 came from the Education Department. Another quarter of that funding came from the Department of Agriculture for the school lunch program. The Defense Department (junior reserve officers’ training program and their own school system for students of military members), Health and Human Services (Head Start pre-school) and about a half-dozen other departments for smaller programs made up the rest. 

  • The federal government has taken on a largely unlimited role in education – everything from funding to coercing curriculum standards – which is why we saw anger on both the left and right spur passage of the ESSA. But it is not clear that the ESSA reduces the federal role to simply providing supplemental funds, standards for children with disabilities, and stopping hunger. The new law requires that states send standards, testing, and accountability plans to Washington for approval; requires uniform statewide testing; and demands interventions in the worst performing schools, among other things. And this is before the regulations – which some groups are pushing to be very prescriptive – have been written.
  • Oh, the school lunch program? It is also about pushing what Washington deems to be proper nutrition and balanced diets on schools, not just “ensuring children don’t go hungry.”
  • It is true that the monetary role as a percentage of total spending is kind of small, but roughly ten percent of funding isn’t nothing, and federal funding was in much demand during the nadir of the Great Recession, when Race to the Top was in effect. And it is very hard to be a politician in any state and say, “I’m going to turn down this $100 million, or that $1 billion, because it’s not that big a percentage of our funding.” This is something of which federal politicians are well aware, and spending roughly $80 billion on K-12 is not chump change, even by federal standards.

FACT: Abolishing the federal Education Department would also wipe out the Office of Innovation and Improvement, which oversees the very initiatives Cruz wants to promote: federal efforts to spur more charter schools and magnet schools; the DC Opportunity Scholarship Program, the only federal school voucher program; and the Office of Non-Public Education.

  • Ironically, after downplaying federal influence in education, The Seventy Four tweaks presidential candidate Ted Cruz for saying he wants to get rid of the U.S. Department of Education and expand school choice. So while suggesting overall federal spending is kind of puny at 7 to 13 percent of the total, apparently Department of Education spending on charter school grants is too big to kill. But it is only $333 million, or about $147 per charter student. That is a princely 1 percent of what the U.S. spends, on average, per K-12 student. Meanwhile, the DC voucher program is constantly under threat of destruction. And none of these justify keeping the Department, which does way, WAY more than these few things.
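The per-student arithmetic above is easy to reproduce. In the sketch below, the $333 million grant total is from the text, while the charter enrollment (roughly 2.27 million students) and the average per-pupil spending figure (roughly $12,600) are my own ballpark assumptions for the period, not sourced numbers:

```python
# Back-of-the-envelope check of the charter-grant arithmetic.
# The $333 million grant figure comes from the text; the enrollment and
# per-pupil spending numbers below are assumed ballpark values.
grant_total = 333_000_000
charter_students = 2_270_000      # assumed charter enrollment (~2015)
per_pupil_spending = 12_600       # assumed average U.S. K-12 spending per student

per_charter_student = grant_total / charter_students
share_of_per_pupil = per_charter_student / per_pupil_spending

print(f"~${per_charter_student:.0f} per charter student")
print(f"~{share_of_per_pupil:.0%} of average per-pupil spending")
```

With those assumptions the grant works out to about $147 per charter student, on the order of 1 percent of average per-pupil spending, matching the figures in the bullet.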

So there it is, as fast as I could get it out. No doubt I missed some things. But hopefully this is enough for the fact checkers to get things closer to accurate next time. And now, on to other things…

As the number of people enrolling in ObamaCare Exchanges is falling below the Obama administration’s targets, Hillary Clinton faced a tough question at a town hall meeting in Ohio on Sunday night. Theresa O’Donnell, a Democratic-leaning voter, complained that ObamaCare caused her family’s health insurance premiums to double from $5,880 per year to $12,972 per year. “I would like to vote Democratic, but it’s costing me a lot of money,” O’Donnell pleaded. “I am just wondering if Democrats really realize how difficult it’s been on working-class Americans to finance ObamaCare.” The audience applauded O’Donnell, showing once again that, really, not even Democrats like ObamaCare.
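Taking the reported dollar amounts as given, a quick arithmetic check shows the premiums in fact more than doubled:

```python
# Check of the premium figures quoted above, taking the reported
# annual amounts ($5,880 and $12,972) as given.
old_premium = 5_880
new_premium = 12_972

ratio = new_premium / old_premium          # growth factor
increase = new_premium - old_premium       # extra dollars per year

print(f"growth factor: {ratio:.2f}x")      # more than double
print(f"added cost: ${increase:,} per year")
```

The growth factor comes out to about 2.21x, or roughly $7,000 in added premiums per year.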

CNN Ohio Town Hall 3/13/16: Hillary Clinton asked about ACA Premiums

Clinton’s answer was confident, lucid, and totally incoherent. It amounted to this: (1) shop around on the Exchange for a better deal, which might not make a difference; then elect me and I’ll (2) reduce your copays and your deductibles and your premiums; and (3) encourage more non-profit health insurance companies to compete in the Exchanges.

Clinton acknowledged her first solution might not help. She implicitly recognized that ObamaCare may indeed be just as bad as O’Donnell described. Worse, under ObamaCare’s perverse rules, the more people shop around, the worse the coverage gets.

Her second solution was no better. As anyone who knows anything about health insurance can tell you, reducing deductibles and copayments increases the cost of health insurance. In all likelihood, it would also increase the cost of medical services, including the prescription drugs whose prices Clinton decries.

Clinton also seemed to acknowledge that her third solution is no solution, either. ObamaCare tried to inject competition into the Exchanges by giving billions of taxpayer dollars to help launch non-profit “co-ops” run by people with no experience running a health-insurance company. Unsurprisingly, most of the ObamaCare co-ops failed. Many patients lost their coverage all over again. Taxpayers will never get that money back.

The party that gave us ObamaCare doesn’t even like it. Still, they seem to have learned nothing from its failures.

So far as some people are concerned, when it comes to bashing economists, any old stick will do.

That, at least, seems to be true of those anthropologists and fellow-travelers who imagine that, in demonstrating that certain forms of credit must be older than either monetary exchange or barter, they’ve got some of the leading lights of our profession by the short hairs.

The stick in this case consists of anthropological evidence that’s supposed to contradict the theory that monetary exchange is an outgrowth of barter, with credit coming afterwards.  That view is a staple of economics textbooks.  Were it nothing more than that, the attacks would hardly matter, since finding nonsense in textbooks is easier than falling off a log.  But these critics have mostly directed their ire at a more heavyweight target: Adam Smith.

In The Wealth of Nations, Smith observes that

When the division of labour has been once thoroughly established, it is but a very small part of a man’s wants which the produce of his own labour can supply.  He supplies the far greater part of them by exchanging that surplus part of the produce of his own labour, which is over and above his own consumption, for such parts of the produce of other men’s labour as he has occasion for.  Every man thus lives by exchanging, or becomes in some measure a merchant, and the society itself grows to be what is properly a commercial society.

But when the division of labour first began to take place, this power of exchanging must frequently have been very much clogged and embarrassed in its operations.  One man, we shall suppose, has more of a certain commodity than he himself has occasion for, while another has less.  The former consequently would be glad to dispose of, and the latter to purchase, a part of this superfluity.  But if this latter should chance to have nothing that the former stands in need of, no exchange can be made between them.  The butcher has more meat in his shop than he himself can consume, and the brewer and the baker would each of them be willing to purchase a part of it.  But they have nothing to offer in exchange, except the different productions of their respective trades, and the butcher is already provided with all the bread and beer which he has immediate occasion for.  No exchange can, in this case, be made between them.  He cannot be their merchant, nor they his customers; and they are all of them thus mutually less serviceable to one another.  In order to avoid the inconveniency of such situations, every prudent man in every period of society, after the first establishment of the division of labour, must naturally have endeavoured to manage his affairs in such a manner as to have at all times by him, besides the peculiar produce of his own industry, a certain quantity of some one commodity or other, such as he imagined few people would be likely to refuse in exchange for the produce of their industry.

What’s wrong with that?  In the words of Cambridge anthropologist Caroline Humphrey, as quoted in a recent article on the subject in The Atlantic  (the appearance of which inspired the present post), what’s wrong is that “No example of a barter economy, pure and simple, has ever been described, let alone the emergence of money… . All available ethnography suggests that there has never been such a thing.”

Now, the mere lack of historical or anthropological evidence of past barter economies is itself no more evidence against Smith’s account than it is evidence in favor of it:  after all, if barter tends to get as “clogged and embarrassed” as Smith maintains, we should not be surprised to find no evidence of societies that relied on it.  That lack might only mean that societies either came up with money quickly, or perished equally quickly.  Instead of refuting Smith’s theory, in other words, the lack of evidence of barter may simply reflect survivorship bias.  Julio Huato, in his astute review of David Graeber’s Debt: The First 5000 Years, makes the point most cogently: “Graeber’s attitude,” he writes,

is like that of a chemist rejecting the idea that unstable radioactive isotopes of a certain chemical element exist and tend to evolve into stable isotopes because the former are only exceptionally found in nature, while the latter are common.

But the problem with Smith’s understanding, according to Graeber, isn’t merely that anthropologists can find no evidence of barter societies.  It is, rather, that those same anthropologists have plenty of evidence of societies that subsisted, if they didn’t thrive, despite neither having money nor relying upon barter.  Instead of relying on “quid-pro-quo” exchanges, whether direct or indirect, they managed by resorting to subtle forms of credit, if not outright gift-giving.

As our Atlantic correspondent explains:

If you were a baker and needed meat, you didn’t offer your bagels for the butcher’s steaks.  Instead, you got your wife to hint to the butcher’s wife that you two were low on iron, and she’d say something like, “Oh really?  Have a hamburger, we’ve got plenty!”  Down the line, the butcher might want a birthday cake, or help moving to a new apartment, and you’d help him out.

Far be it from me to deny that trade of this sort happens, even in modern societies, or even that entire communities have at various times depended on it.  Heck, I once taught a short course on economic anthropology an entire section of which was devoted to gift giving and other sorts of “ceremonial exchange.”  What I do deny, and vigorously, is anthropologist David Graeber’s claim that the existence of gift economies undermines, not just Adam Smith’s account of money’s origins, but “the entire discourse of economics.”

Hear our correspondent once again:

According to Graeber, once one assigns specific values to objects, as one does in a money-based economy, it becomes all too easy to assign value to people, perhaps not creating but at least enabling institutions such as slavery…and imperialism… .

There you have it.  By claiming that societies could thrive only by means of monetary exchange, Adam Smith is supposed to have given shape to an “economic discourse” according to which all things, including people, are bound to be valued in terms of money, thereby “enabling” slavery and imperialism and…well, the whole capitalist catastrophe.

That nothing could be more grotesquely unjust to Adam Smith than Graeber’s attempt to paint him as an enabler of slavery and imperialism is (or ought to be) painfully obvious.  But if fair play is not Professor Graeber’s forte, neither is a solid, or even a more than exceedingly superficial, understanding of the tenets of modern economics.  Had Graeber’s purpose been, not to document  economists’ ignorance of anthropology, but to show that at least one anthropologist doesn’t know the first thing about economics, I dare say that he could have done no better than to write Debt: The First 5000 Years.

Consider the opening passage of “The Myth of Barter,” Graeber’s second chapter, and the one in which he sets out his central claim that Smith, by getting the story of money wrong, took a fateful wrong turn:

What is the difference between a mere obligation, a sense that one ought to behave in a certain way, or even that one owes something to someone, and a debt, properly speaking?  The answer is simple: money.  The difference between a debt and an obligation is that a debt can be precisely quantified.  This requires money.

“A history of debt,” Graeber observes two paragraphs later, “is thus necessarily a history of money.”

This is simple, all right.  But a moment’s thought reveals that it is also simply wrong.  One can incur a debt by borrowing some non-monetary good or goods, just as well as by borrowing money, where repayment is also to be made in goods, and is no less precisely quantified than a monetary obligation might be.  To say, “Give me a hamburger today and I’ll repay you two hamburgers on Tuesday,” is to offer to go into debt to the tune of (precisely) two hamburgers.  That money is both fungible and relatively (though in practice not infinitely) divisible makes it an especially convenient object of debt contracts.  But that is a difference in degree rather than in kind.

Far from being innocuous, the error with which Graeber’s chapter opens is but one crack in the severely flawed foundation upon which his entire critique of both modern economics and commercial society rests.  That foundation consists of the view that money is, not only uniquely (and precisely) quantifiable, but something capable of precisely measuring the value of other things:

What we call “money” isn’t a “thing” at all; it’s a way of comparing things mathematically, as proportions: of saying one of X is equivalent to six of Y.

Monetary exchange, in turn,

is all about equivalence.  It’s a back-and-forth process involving two sides in which each side gives as good as it gets. …[E]ach side in each case is trying to outdo the other, but, unless one side is utterly put to rout, it’s easiest to break the whole thing off when both consider the outcome to be more or less even.

In other words, monetary exchange, being but an “impersonal” matter of mathematics, is a contest that must result either in a stalemate, with neither side winning, or in a bargain by which one side rips the other off.  Gift exchange, on the other hand, “is likely to work precisely the other way around — to become a matter of contests of generosity, of people showing off who can give more away.”

I leave it to the reader to imagine how, by means of repeated appeals to this sort of reasoning, Graeber manages to paint Adam Smith (and most economists since) as an apologist for slavery, imperialism, and pretty much every ungenerous and unkind activity under the sun.

There’s just one problem.  Just as money is in truth no more “quantifiable” than hamburgers, so, too, is it the case that money is no more a “measure” of value than a hamburger is.  By that I mean, not that a hamburger is also capable of measuring the value of other things, but that neither it nor any sort of money is capable of doing so.

The idea that money is a “measure of value,” like the related idea that exchanges are necessarily exchanges of equivalents, is among the hoariest of economic fallacies.  It plays a prominent part in Aristotle’s economics — and, not coincidentally, in Aristotle’s condemnation of all sorts of “capitalist” activity.  Smith himself, in subscribing to a modified labor theory of value,  was unable to break free of it.  It is more than a little ironic that Graeber, in flinging all sorts of undeserved criticism at Smith, cleaves to him when it comes to his one indisputable mistake.

The notion that money is a “measure of value” is but a particular instance — albeit one that has managed to linger on in some economics textbooks — of the mistaken belief that economic exchanges are exchanges of equivalents.  In his book Money: The Authorized Biography, Felix Martin, like Graeber, takes the “measure of value” notion seriously, and attempts to build from it a critique of both modern economics and modern monetary economies.  In reviewing that work, I explained Martin’s mistake by observing that when a diner sells me bacon and eggs for $4.99, “that doesn’t mean that bacon and eggs are worth $4.99, ‘universally’ or otherwise.  It means that to the diner they are worth less, and to me, more.”

Grasp this little strand of truth.  Pull on it.  Keep on pulling.  And watch Martin’s critique unravel. Graeber’s critique, with its fatuous dichotomy of generous credit transactions on one hand and antagonistic monetary transactions on the other, rests on the same fallacy, and is no less gimcrack.

My concern, though, isn’t with Graeber’s sweeping condemnation of modern economics, or of the  economic arrangements for which modern economists are supposedly to blame.  It’s with his particular claim that there’s no merit in Smith’s account of the origin of money, or in the later  accounts of other economists, including Carl Menger.  Despite what these economists have argued, money couldn’t have grown out of barter, Graeber insists, because the “fabled land of barter” that these accounts posit never existed.  Instead, credit came first, sometimes in subtle and elaborate forms that made it indistinguishable from gift-giving; then came money, in the form of coins.  Barter, finally,

appears to be largely a kind of accidental byproduct of the use of coinage or paper money: historically it has mainly been what people who are used to cash transactions do when for one reason or another they have no access to currency (my emphasis).

So, how true is Graeber’s account, and just how fatal is it to the “fable” that economists like to tell?  For answers, we need look no further than the evidence Graeber himself supplies.  For on close inspection, that evidence itself suffices to show that, notwithstanding the fact that credit is older than barter, Smith’s theory is, after all, not all that far removed from the truth.

A paradox?  Nothing of the sort.  The simple explanation is that, while subtle forms of credit or outright gift giving may suffice for effecting exchanges within tightly-knit communities, exchange within such communities hardly begins to take advantage of opportunities for specialization and division of labor that arise once one allows for trade, not just within such communities, but between them, that is, for trade between or among strangers.  One need only recognize this simple truth to resuscitate Smith’s theory from Graeber’s seemingly fatal blow.  Simple forms of credit may come first; but such credit only goes so far, because it depends on repeated interaction, and the trust that such interaction both allows and sustains.  That affection and other such “moral sentiments,” to use Smith’s own term, also play a large part is evident from the fact that, within families even today,  monetary exchange and barter play hardly any role: every family is, if you like, a vestigial “gift” economy.

It’s absurd to suppose that Smith himself failed to recognize that credit (or something like it) functions in place of either barter or money in families; and hardly more so to suppose that he denied that it might do the same in somewhat larger but still tightly-knit communities.  Little Adam Smith did not, presumably, bargain with his mother over bed and board, or find his efforts to secure those and other necessities “clogged and embarrassed” for want of either double coincidences or cash.  Nor could anyone aware of passages like the following, from Smith’s Theory of Moral Sentiments, suppose that he considered mutual aid unimportant except within nuclear families:

In pastoral countries, and in all countries where the authority of law is not alone sufficient to give perfect security to every member of the state, all the different branches of the same family commonly chuse to live in the neighbourhood of one another.  Their association is frequently necessary for their common defence.  They are all, from the highest to the lowest, of more or less importance to one another.  Their concord strengthens their necessary association; their discord always weakens, and might destroy it.  They have more intercourse with one another, than with the members of any other tribe.  The remotest members of the same tribe claim some connection with one another; and, where all other circumstances are equal, expect to be treated with more distinguished attention than is due to those who have no such pretensions.  It is not many years ago that, in the Highlands of Scotland, the Chieftain used to consider the poorest man of his clan, as his cousin and relation.  The same extensive regard to kindred is said to take place among the Tartars, the Arabs, the Turkomans, and, I believe, among all other nations who are nearly in the same state of society in which the Scots Highlanders were about the beginning of the present century.

If Smith recognized, at least implicitly, that, in families and other tight-knit communities, “credit” serves in place of either barter or money, Graeber for his part is forced to admit that, when it comes to trade between strangers, credit won’t serve:

Now, all this (meaning the lack of evidence of a “fabled land of barter”) hardly means that barter does not exist — or even that it’s never practiced by the sort of people Smith would have referred to as “savages.”  It just means that it’s almost never employed, as Smith imagined, between fellow villagers.  Ordinarily, it takes place between strangers, even enemies (my emphasis).

Later Graeber writes,

What all … cases of trade through barter have in common is that they are meetings with strangers who will, likely or not, never meet again, and with whom one certainly will not enter into any ongoing relations. …

…Barter is what you do with those to whom you are not bound by ties of hospitality (or kinship, or much of anything else).

No doubt.  But how big a problem is this for Smith?  Let pass the silly remark about “savages.”  (An anthropologist ought, one would think, to be capable of resisting the temptation to pass judgement on an 18th-century Scotsman’s choice of words according to 21st-century notions of political correctness.)  The question is, what did Smith really “imagine”?  His story of the butcher and the baker notwithstanding, his reference to pastoral societies makes it perfectly evident that he understood the difference between conduct among “villagers” and conduct among strangers.  His theory of the origins of money ought to be understood accordingly.  It is a theory of how, when opportunities for trade arise among strangers, bringing with them further scope for the division of labor, trade will be “clogged and embarrassed” if it must occur by means of barter, but will cease to be so once barter gives way to the employment of money.  In portraying such cases as exceptions to the rule that “credit” precedes barter, Graeber simply fails to understand that such “exceptions” are all that matters in assessing Smith’s theory.

Nor will it do to suggest that Smith’s understanding of money’s origins confuses what happens within societies or communities with what happens between them.  Such a view depends on arbitrarily rigid definitions of “community” and “society” that overlook these concepts’ inherently elastic nature:  formerly separate communities cease to be so precisely to the extent that commerce takes place between them.  Smith, for his part, recognizes this.  Moreover he understands that the rise of commerce, meaning commerce among strangers, serves in turn to reduce the relative importance of ties of kinship and such, further increasing thereby the importance of monetary exchange.  Here is the passage from the Theory of Moral Sentiments that immediately follows the previously-quoted one on pastoral societies:

In commercial countries, where the authority of law is always perfectly sufficient to protect the meanest man in the state, the descendants of the same family, having no such motive for keeping together, naturally separate and disperse, as interest or inclination may direct.  They soon cease to be of importance to one another;  and, in a few generations, not only lose all care about one another, but all remembrance of their common origin, and of the connection which took place among their ancestors.  Regard for remote relations becomes, in every country, less and less, according as this state of civilization has been longer and more completely established.  It has been longer and more completely established in England than in Scotland;  and remote relations are, accordingly, more considered in the latter country than in the former, though, in this respect, the difference between the two countries is growing less and less every day.  Great lords, indeed, are, in every country, proud of remembering and acknowledging their connection with one another, however remote.  The remembrance of such illustrious relations flatters not a little the family pride of them all; and it is neither from affection, nor from any thing which resembles affection, but from the most frivolous and childish of all vanities, that this remembrance is so carefully kept up.  Should some more humble, though, perhaps, much nearer kinsman, presume to put such great men in mind of his relation to their family, they seldom fail to tell him that they are bad genealogists, and miserably ill-informed concerning their own family history.  It is not in that order, I am afraid, that we are to expect any extraordinary extension of, what is called, natural affection.

In short, a generous reading of Smith, far from making him out to be a right bungler when it comes to matters ethnographic, yields a relatively sophisticated view, according to which kinship and “credit” first predominate, but then give way, as strangers meet, first to barter, but eventually to monetary exchange, which in turn allows for the growth of commerce, which ends up reducing the role of kinship and kin-based credit relationships.

If Graeber’s reading of Smith is ungenerous, his reading of Carl Menger is…well, it’s obvious that Graeber hadn’t read Menger at all, for if he had he could not possibly have written that Menger improved upon Smith’s theory mostly “by adding various mathematical equations” to it, or that Menger “assumed that in all communities without money, economic life could only have taken the form of barter.”  (Nor, for that matter, could he have failed to note that the senior Menger, unlike his mathematician son, spelled Carl with a “C.”)  Instead, Graeber would have had to admit that Menger understood perfectly well that “credit,” in Graeber’s loose sense of the term, is older than either monetary exchange or barter.

Menger’s appreciation of the importance of what he sometimes referred to as “no-exchange” economies is especially evident in his 1892 article, “Geld,” in the Handwörterbuch der Staatswissenschaften, from which his better-known article “On the Origins of Money” is extracted.  According to Menger,

Voluntary as well as compulsory unilateral transfers of assets (that is, transfers arising neither from a ‘reciprocal contract’ in general nor from an exchange transaction in particular, although occasionally based on tacitly recognized reciprocity), are among the oldest forms of human relationships as far as we can go back in the history of man’s economizing.  Long before the exchange of goods appears in history, or becomes of more than negligible importance…we already find a variety of unilateral transfers: voluntary gifts and gifts made more or less under compulsion, compulsory contributions, damages or fines, compensation for killing someone, unilateral transfers within families, etc.*

Far from exemplifying Graeber’s claim that economists “begin the story of money in an imaginary world from which credit and debt have been entirely erased,” Menger explicitly recognizes that

people had probably tried to satisfy their wants, over immeasurable periods of time, essentially in tribal and family no-exchange economies until, aided by the emergence of private property, especially personal property, there gradually appeared multifarious forms of trade in preparation for the exchange proper of goods. …Only then, and hardly before the extent of barter and its importance for the population or for certain segments of the population had made it a necessity, was the objective basis and precondition for the emergence of money established.

In light of such evidence — which, bear in mind, comes from a work published several decades before Mauss’s pathbreaking work on gift exchange — the attention given to Graeber’s critique, and the fact that even some economists saw merit in it (if only temporarily), tells us that there is, after all, at least one impulse among humans that’s more deep-seated than their “propensity to truck, barter, and exchange.”  I mean, of course, their propensity to let themselves be thoroughly bamboozled.

____________________________________

*From the English translation, “Money,” by Leland Yeager (with Monika Streissler), in Michael Latzer and Stefan W. Schmitz, eds., Carl Menger and the Evolution of Payments Systems: From Barter to Electronic Money (Cheltenham, UK: Edward Elgar), pp. 25-108.

[Cross-posted from Alt-M.org]

The Department of Health and Human Services (HHS) is America’s first $1 trillion bureaucracy. HHS will spend $1.1 trillion in 2016, which is $1 million repeated one million times.

You are paying for it, so you might want to know that:

  • The department spends more than $8,800 a year for every household in the United States.
  • It runs 528 different subsidy programs for state and local governments, businesses, and individuals.
  • The largest HHS subsidy program is Medicare at $589 billion in 2016, followed by Medicaid at $367 billion.
  • HHS has 73,000 employees.
  • Real, or inflation-adjusted, HHS spending has exploded ten-fold since 1970, as shown in the chart below.
  • At the 1965 signing ceremony for Medicare, President Lyndon Johnson said “No longer will young families see their own incomes, and their own hopes, eaten away simply because they are carrying out their deep moral obligations to their parents.” But since there is no Santa Claus, that is exactly what is happening today as government health spending is imposing huge debt burdens on young families.
  • HHS programs are plagued by fraud and abuse, which cost taxpayers tens of billions of dollars a year. The programs also distort the health care industry in serious ways because of their top-down structure and masses of regulations. The way to fix the mess is to slash federal spending and move toward a consumer-directed health care system.
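The per-household figure above can be sanity-checked with back-of-the-envelope arithmetic. The household count below is our assumption for illustration (roughly 125 million U.S. households), not a figure from the text:

```python
# Back-of-the-envelope check of the per-household HHS spending figure.
# The household count is an assumption for illustration, not an official figure.
hhs_spending = 1.1e12        # HHS spending in 2016, dollars
households = 125_000_000     # approximate number of U.S. households (assumed)
per_household = hhs_spending / households
print(round(per_household))  # 8800
```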

By the next president’s fourth budget, annual HHS spending will have grown another $300 billion or so to $1.4 trillion. That is another $300 billion the government will have to borrow from Wall Street, China, and other places that presidential candidates are abusing on the campaign trail. Along with exploding Social Security spending, rising HHS spending is not making America great again, but pushing us into a financial crisis. What are the candidates proposing to do about it?  

The Consumer Financial Protection Bureau (CFPB) recently announced that it would start accepting consumer complaints about marketplace lending.  Marketplace lending, previously known as “peer to peer” or “P2P” lending, emerged in the aftermath of the financial crisis.  A combination of tightening credit markets and low interest rates created a perfect marriage between consumers looking for loans and investors looking for profit.  In its first incarnation, peer to peer lending served as an online matchmaking service, allowing prospective borrowers to post requests for loans to be reviewed by individuals willing to make those loans.  “Peer to peer” referred to the fact that the lenders were ordinary people, just like the borrowers.  The loans are non-recourse, meaning that if the borrower fails to repay, the lender is simply out of luck.  Although these would appear to be risky loans, in fact, the default rate has been surprisingly low: 4.9 percent at market-leader Prosper as of the end of 2014, and 5.3 percent at the other leader, Lending Club, during the period between Q1 2007 and Q1 2015.

The loans have performed so well that the market quickly attracted institutional investors and more sophisticated business models.  As the two leading providers of marketplace loans today, Prosper and Lending Club use the same (somewhat complex) model.  The companies issue notes to investors that are obligations of the issuing company. Simultaneously, WebBank, a Utah-based FDIC-insured bank, originates a loan which is sold to the company. The company pays for the loan with the proceeds from the sale of notes to investors. The loan is disbursed to the borrower. The borrower repays the funds in accordance with the terms of the loan. And the payments from the borrower are used to pay the purchasers of the company’s notes. The payment of the notes is explicitly dependent on the borrower’s repayment of the loan.
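The pass-through structure described above lends itself to a short sketch. The code below is a stylized illustration of the note/loan flow; the names, amounts, and simplifications (no fees, no servicing, no bank intermediary step) are ours, not Prosper’s or Lending Club’s actual mechanics:

```python
# Stylized sketch of the marketplace-lending pass-through model described above.
# All names and figures are hypothetical; real platforms add fees and servicing.

def fund_loan(loan_amount, investors):
    """Investors buy notes whose proceeds pay for the bank-originated loan."""
    total = sum(investors.values())
    assert total == loan_amount, "note proceeds must cover the loan purchase"
    # Each note entitles its holder to a pro-rata share of borrower payments.
    return {name: amount / loan_amount for name, amount in investors.items()}

def distribute_payment(payment, shares):
    """Borrower payments flow through to note holders; no payment, no payout."""
    return {name: payment * share for name, share in shares.items()}

shares = fund_loan(10_000, {"alice": 6_000, "bob": 4_000})
monthly = distribute_payment(500, shares)  # one monthly installment
# alice receives 300.0 and bob 200.0; distribute_payment(0, shares) pays
# nothing at all, reflecting the non-recourse nature of the notes.
```

The key point the sketch captures is the last sentence of the paragraph above: payment on the notes is explicitly dependent on the borrower’s repayment of the underlying loan.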

Since marketplace lending has gained momentum, there have been concerns about its regulation – expressed both by those who worry that it’s completely unregulated (not true, but there have been no new regulations specifically targeting the industry), and by those–like me–who worry that its innovation will be smothered while the industry is still in its infancy.

Although the CFPB has not announced any plans (yet) to write new regulations specifically aimed at marketplace lending, and although there is an argument that much of the industry actually falls under the SEC’s jurisdiction, the move to solicit complaints certainly seems to signal an interest in regulation down the road.  Aside from the general concern about regulating an industry still in the process of defining itself (how do you know which problems may work themselves out through better solutions than regulation could provide?), there is a specific problem with using customer complaints as a foundation for regulation.  It’s the same problem that undermines complaints as support for the general argument in favor of regulation: there is clear self-selection at play.

Who is more likely to seek out the CFPB’s complaint portal: the happy borrower who has secured a loan at a favorable rate, or the disgruntled borrower?  While the American public has always been free to “petition the government for a redress of grievances,” actively seeking out those unhappy with an industry smacks of a regulator looking for a reason to regulate.  If the CFPB is concerned about marketplace lending, a sounder approach would be to hold old-fashioned hearings which, while sometimes more performance than inquiry, at least tend to include representatives from both sides of an issue.

One law firm has dubbed the CFPB’s announcement the establishment of  a “beachhead” in the marketplace lending industry, planting a flag signaling new regulation ahead.  I, unfortunately, tend to agree.

Many in the Bitcoin community seek increased financial privacy. As I wrote in a 2014 study of the Bitcoin ecosystem, “Bitcoin can facilitate more private transactions, which, when legal in the jurisdictions where they occur, are the business of nobody but the parties to them.” That study identified “algorithmic monitoring of Bitcoin transactions” as a rather likely and somewhat consequential threat to the goal of financial privacy (pg. 18). It was part of a cluster of similar threats.

Good news: The Bitcoin community is doing something about it.

The Open Bitcoin Privacy Project recently issued the second edition of its Bitcoin Wallet Privacy Rating Report. It’s a systematic, comparative study of the privacy qualities of Bitcoin wallets. The report is based on a detailed threat model and published criteria for measuring the “privacy strength” of wallets. (I’ve not studied either in detail, but the look of them is well-thought-out.)

Reports like this are an essential, ecosystem-building market function. The OBPP is at once informing Bitcoin users about the quality of various wallets out there, and at the same time challenging wallet providers to up their privacy game. It’s notable that the wallet with the highest number of users, Blockchain, is 17th in the rankings, and one of the most prominent U.S. providers of exchange, payment processing, and wallet services, Coinbase, is 20th. Those kinds of numbers should be a welcome spur to improvement and change. Blockchain is updating its wallet apps. Coinbase, which has offended some users with intensive scrutiny of their financial behavior, appears wisely to be turning away from wallet services.

Bitcoin guru Andreas Antonopoulos rightly advises transferring bitcoins to a wallet you control so that you don’t have to trust a Bitcoin company not to lose them. The folks at the Open Bitcoin Privacy Project are working to make wallets more privacy protective. Kudos, OBPP.

There’s more to do, of course, and if there is a recommendation I’d offer for the next OBPP report, it’s to explain in a more newbie-friendly way what the privacy threats are and how to perceive and weigh them. Another threat to the financial privacy outcome goal—ranked slightly more likely and somewhat more consequential than algorithmic monitoring—was: “Users don’t understand how Bitcoin transactions affect privacy.”

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary. 

More and more, harsh reality is stacking up against our ability to achieve the cuts in our national emissions of greenhouse gases that President Obama promised the international community gathered in Paris last December at the UN’s climate conference. In that regard, here are some items we think you ought to have a look at.

A couple of weeks ago, we reported that it was looking as if the EPA’s methane emission numbers were a bit, how should we say it, rosy. We suggested that emissions of methane (a strong greenhouse gas) from the U.S. were quite a bit higher than EPA estimates, and that they have been increasing over the past 10 years or so, whereas the EPA reports that they have been in decline. Factoring in this new science meant that the recent decline in total greenhouse gas emissions from the US was about one-third less than being advertised by the EPA and President Obama— imperiling our promise made at the UN’s December 2015 Paris Climate Conference.

Goings-on during the intervening weeks have only acted to further cement our assessment.

EPA has come around to admitting its error—to at least some degree. The Wall Street Journal’s energy policy reporter Amy Harder tweeted this statement from EPA Chief Gina McCarthy:

  

The details behind McCarthy’s statement can be found in a new report from the EPA—a draft of the 2016 edition of its annual US Greenhouse Gas Inventory Report.  In the new draft, the EPA reports that it is in the process of reworking its previous estimates of methane emissions from “natural gas systems” and “petroleum systems.”  It has put out a call for public input on its new methodology which, in one example provided, results in 27% more emissions from those sources in 2013 than the EPA had determined previously.  EPA promises to apply the new methodology to all of its methane estimates from 1990 to the present and notes that:

Trend information has not yet been calculated, but it is expected that across the 1990-2013 time series, compared to the previous (2015) Inventory, in the current (2016) Inventory, the total CH4 emissions estimate will increase, with the largest increases in the estimate occurring in later years of the time series.

Larger increases later in the time series will act to lessen the decline or perhaps even switch the sign of the overall trend.

And even without including the new calculations for natural gas and petroleum systems, the EPA has requantified the reported decline in US methane emissions. In last year’s report, it wrote “[m]ethane (CH4) emissions in the United States decreased by almost 15% between 1990 and 2013.” This year’s draft says “[o]verall, from 1990 to 2014…total emissions of CH4 decreased by 37.4 MMT CO2 Eq. (5.0 percent).” The changes arise largely from new examinations and recalculations of methane release from landfills.
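For a sense of scale, the quoted figures pin down a baseline: a 37.4 MMT CO2-eq. decline said to equal 5.0 percent implies the starting level. A quick check (our arithmetic, not the EPA’s):

```python
# The EPA figures quoted above: a 37.4 MMT CO2-eq. decline equals 5.0 percent,
# implying a 1990 methane baseline of roughly 748 MMT CO2-eq.
decline_mmt = 37.4
decline_fraction = 0.05
implied_baseline = decline_mmt / decline_fraction
print(round(implied_baseline))  # 748
```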

More and more, the EPA’s methane picture is looking, how should we say it, less rosy.

It seems the closer folks look, the more it appears that Obama’s proud accomplishments and promises are proving to be little more than smoke and mirrors.

Take the Clean Power Plan. Almost every analyst alive knew that the plan was a big stretch of the Clean Air Act and that it was going to face legal challenges that were not going to be resolved until the Supreme Court had its say in June 2017.  A 5-4 decision is almost certain, with the outcome hinging on November’s election, after which the President will nominate a justice to replace Antonin Scalia who will actually be reviewed by the Senate.  Knowing his Plan was in legal hot water, Obama nonetheless told the Paris assembly “we’ve said yes to the first-ever set of national standards limiting the amount of carbon pollution our power plants can release into the sky.” Barely two months later, the Supreme Court said “not so fast” and stayed the Clean Power Plan pending the outcome of all the challenges.

And then, as we mentioned, there’s the methane issue. The EPA said emissions were declining, when in fact they are almost certainly rising. So much so, that the total decline in greenhouse gas emission from the U.S. has likely been overestimated by as much as a third. This situation is a bit grimmer than what President Obama said in Paris: “Over the last seven years, we’ve made…ambitious reductions in our carbon emissions.”

Also, it looks as if the pathway to our promise was rigged.  In a series of recent reports by David Bailey and David Bookbinder for the Niskanen Center, the authors show that the Obama Administration is employing some creative accounting to work the numbers to make it look like there is a clear path towards meeting our Paris target.

From their January report “The Administration’s Climate Confession … and New Deception” comes this assessment:

In the little-noted Second Biennial Report of the United States of America Under the United Nations Framework submitted to the U.N. climate process on December 31, the Administration impliedly admitted that the measures it listed in the INDC would leave us short, by about 500 -800 MMT. The Report itself is a masterpiece of obfuscation in the name of transparency. It includes emission reductions dating back to the 1990s in its list of current measures, and for the majority of measures does not list any reductions numbers. But, not to fear, because “additional measures” of up to 700 MMT, plus a new, secret ingredient [a rapid expansion of US carbon sinks from forestry] worth about another 300 MMT, will still get us to the 2025 target.

And, after having a look at the new EPA draft report, Bailey and Bookbinder responded with “New EPA Data Casts More Doubt on Obama’s Climate Promises,” where they concluded:

The new estimates of carbon sinks are particularly significant. We discussed before how the Administration’s Second Biennial Report to the IPCC indicated that the U.S. is relying on an implausibly large increase in absorption of GHGs in sinks to meet the Paris target, from 912 MMT absorbed in 2005 to over 1,200 MMT absorbed by 2025. The revised estimate for 2005 sinks is now 636 MMT, or less than 70% of what the Biennial Report stated only two months ago. Thus, one of the Administration’s main compliance tools now requires not a 30+% increase to 2025, but nearer to a 100% increase.

The biggest impact of these revisions will be (once again) on the credibility of our Paris commitment to reduce 2005 emissions by 26% by 2025. 
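The percentage claims in the quoted passage check out. Here is the arithmetic (ours, using the MMT figures from the quote):

```python
# Checking the carbon-sink arithmetic quoted above (all figures in MMT absorbed).
target_2025 = 1200          # "over 1,200 MMT absorbed by 2025"
old_2005 = 912              # Second Biennial Report estimate for 2005
revised_2005 = 636          # revised estimate, under 70% of the earlier figure

old_increase_needed = (target_2025 - old_2005) / old_2005
new_increase_needed = (target_2025 - revised_2005) / revised_2005
print(f"{old_increase_needed:.0%}")  # 32% -- the "30+%" increase
print(f"{new_increase_needed:.0%}")  # 89% -- "nearer to a 100% increase"
```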

The effect on Obama’s Paris promise of all of the above (and more) is well-summed in this story from Inside Climate News:

New data this week showing how little progress the United States has made in cutting greenhouse gas emissions since President Obama took office is the latest evidence to undercut the pledges the United States made in negotiating the Paris climate treaty.

The Clean Power Plan’s crackdown on coal-fired power plants is on hold, thanks to the Supreme Court. Methane emissions are turning out to be higher than previously thought, as natural gas booms. People are buying more gas-guzzling cars, thanks to low prices at the pump.

And now, in a draft of its annual greenhouse gas emissions tally, the EPA reported that emissions in the year 2014 climbed almost 1 percent from 2013 to 2014. That brought emissions back above the level of Obama’s first year in office, 2009.

In negotiating the Paris treaty, signed in December, the U.S. pledged to cut emissions 26 to 28 percent by 2025, below the level of 2005.

The new data shows that from 2005 to 2014 emissions went down just 7.5 percent, leaving most of those promised reductions off in the distance, like a hazy mirage.

Most of that decline is due to the nosedive in emissions that came with the Great Recession of 2008 and 2009.

In a quarter-century, through Democratic and Republican administrations alike, U.S. greenhouse gas emissions have marched mostly in the wrong direction.

Ouch.

All the while, President Obama is leading the push to get countries to sign the Paris Agreement  at a big press event to be held at the United Nations headquarters in New York City on April 22—Earth Day.  The Agreement must be ratified by at least 55 countries representing at least 55 percent of global greenhouse gas emissions before coming into effect. 

Lest some countries become worried that Obama’s Paris emissions pledge was but a well-orchestrated sham and start to get cold feet about signing the Agreement, the President, this week, did manage to slip $500 million into the U.N.’s  Green Climate Fund.  Perhaps that’ll be enough hush money to keep the complaints muted. A rich-to-poor money transfer more so than climate change mitigation is, after all, arguably the most attractive part of the Paris Agreement for most countries.

An article in the March 14th issue of the New Yorker describes the negative effects of sex offender laws on juveniles who get caught up in a legal system designed to protect children from adult sexual predators.  Adolescent sexual experimentation, especially when accompanied by age mismatch, and child misbehavior have become criminalized in ways that those interviewed in the article see as unintended, mistaken, and counterproductive.

The unanalyzed premise of the article, however, is that the public labeling of adult sex offenders is good public policy.  The logic underlying public notification laws for adults would seem to be sound: if a known sex offender is looking for a new victim, isn’t it useful if the offender’s neighbors know the person is a threat and can take measures to reduce their own risk of victimization?

In an article in Regulation, Professor J. J. Prescott of the University of Michigan Law School examines the separate effects of police registration and public notification requirements on the incidence of sexual attacks.  He concludes that “each additional sex offender registered per 10,000 people reduces the annual number of sex offenses reported per 10,000 people on average by 0.098 crimes (from a starting point of 9.17 crimes). This sizeable reduction (1.07 percent) buttresses the idea that we may be able to use law enforcement supervision to combat sex offender recidivism.”  But the reduction is confined to friends and neighbors and has no effect on sex offenses against strangers.
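Prescott’s effect size is easy to verify from the figures just quoted (our arithmetic):

```python
# Prescott's registration effect, from the figures quoted above.
base_rate = 9.17    # annual sex offenses reported per 10,000 people
reduction = 0.098   # fewer offenses per additional registrant per 10,000 people
print(f"{reduction / base_rate:.2%}")  # 1.07% -- the "sizeable reduction"
```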

In contrast, public notification deters those who are not already registered but increases recidivism among those who are.  “… for a registry of average size, instituting a notification regime has the aggregate effect in these data of increasing the number of sex offenses by more than 1.57 percent, with all deterrence gains more than offset.”  “… the more difficult, lonely, and unstable our laws make a registered sex offender’s life, the more likely he is to return to crime—and the less he has to lose by committing these new crimes.”  “…if these laws impose significant burdens on a large share of former offenders, and if only a limited number of potential victims benefit from knowing who and where sex offenders are, then we should not be surprised to observe more recidivism under notification, with recidivism rates rising as notification expands.”

In the Republican debate last night, CNN’s Dana Bash pressed the candidates on how they would deal with Social Security. Senators Marco Rubio and Ted Cruz gave solid answers, explaining that the system was headed toward insolvency, suggesting ways to slow spending growth, and scolding candidates who denied the need for cost-saving reforms.  

One of the candidates in denial is Donald Trump. He said, “And it’s my absolute intention to leave Social Security the way it is. Not increase the age and to leave it as is.” Trump is a smart man, who presumably understands accounting, so either he hasn’t bothered to examine the finances of the government’s largest program, or he is willfully providing a false narrative about it.

The chart below compares Social Security and defense spending in real 2016 dollars, including Congressional Budget Office (CBO) projections going forward. For decades, the two programs have vied for the title of the government’s largest, but the battle is now over. Social Security spending has soared far above defense spending, and it will keep on soaring without reforms.

Defense is a “normal” program, with spending fluctuating up and down over the years in real, or inflation-adjusted, dollars. But Social Security has taken off like a rocket, and it is consuming more taxpayer resources every year. The government spent the same amount on defense and Social Security in 2008, but it will be spending twice as much on the latter program by 2023.

When the next president enters office in 2017, he will start planning his 2018 budget. In that year, Social Security will become the first trillion-dollar program, and it will be gobbling up an additional $60 billion or so every single year. Where will all the money come from? Pointing only to “waste, fraud, and abuse,” as Trump does, wastes our time, abuses our intelligence, and is a fraudulent story line to peddle.

 

Data notes: CBO baseline projections to 2026, then real defense spending assumed fixed after that, while real Social Security spending is assumed to increase at the same rate as CBO projects for 2026 (3.8 percent). For ways to cut Social Security, see here.
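The data-note extrapolation is simple to reproduce in outline. In the sketch below, the 2026 starting level is a placeholder of ours (the actual CBO figure is not given in the text); the 3.8 percent growth rate is the one stated in the data notes:

```python
# Reproducing the data-note extrapolation in outline: real Social Security
# spending grows 3.8 percent a year after 2026, real defense spending stays flat.
# The 2026 starting level below is a placeholder, not CBO's actual figure.

def extrapolate(start, growth, years):
    """Compound a real spending level forward at a constant annual rate."""
    return [start * (1 + growth) ** t for t in range(years + 1)]

social_security_2026 = 1_600  # hypothetical, in billions of 2016 dollars
path = extrapolate(social_security_2026, 0.038, 10)  # 2026 through 2036
growth_over_decade = path[-1] / path[0] - 1
print(f"{growth_over_decade:.0%}")  # 45% -- compounding at 3.8% for ten years
```

Whatever the starting level, compounding at 3.8 percent a year adds roughly 45 percent to real spending over a decade, which is why the program pulls so far away from a flat defense baseline.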

In today’s Washington Post, the Seventh Circuit’s Richard Posner, the most prolific judge the country has ever seen, has again gone to print to tell us that the Republican Senate majority’s decision not to consider any nominee to fill Justice Antonin Scalia’s empty seat until after the fall elections reminds us “that the Supreme Court is not an ordinary court but a political court, or more precisely a politicized court, which is to say a court strongly influenced in making its decisions by the political beliefs of the judges.” Say this for Judge Posner: From his earliest days as a font of law and economics wisdom through his many phases since, he has never ceased to interest us. Whether those iterations have accurately grasped the issue at hand is something else.

Here, as a descriptive matter, Posner is certainly right in noting that justices seem often to be strongly influenced by their political beliefs, however much they may invoke the self-protective “the law made me do it” pose, as he notes. But his claim is deeper, bordering on the normative: “This is not a usurpation of power,” he writes, “but an inevitability.”

Most of what the Supreme Court does—or says it does—is “interpret” the Constitution and federal statutes, but I put the word in scare quotes because interpretation implies understanding a writer’s or speaker’s meaning, and most of the issues that the court takes up cannot be resolved by interpretation because the drafters and ratifiers of the constitutional or statutory provision in question had not foreseen the issue that has arisen. (emphasis added)

By way of example, Posner continues, the drafters “did not foresee or make provision for regulating electronic surveillance, sound trucks, flash-bang grenades, gerrymandering, child pornography, flag-burning or corporate donations to political candidates.”

True, there’s a vast world that the Framers did not foresee, everything from the telephone to the Internet and far beyond. But their purpose was not to anticipate such particulars but to invoke the immutable principles by which future controversies concerning those unforeseen matters might be resolved. And that, precisely, is what Posner calls into question:

When judges are not interpreting, they’re creating, and to understand judicial creation one must understand first of all the concept of “priors.” Priors are what we bring to a new question before we’ve had a chance to do research on it. They are attitudes, presuppositions derived from upbringing, from training, from personal and career experience, from religion and national origin and character and ideology and politics. They are unavoidable tools of decision-making in nontechnical fields, such as law, which is both nontechnical and analytically weak, in the sense that there are no settled principles for resolving the most difficult and consequential legal controversies. (emphasis added)

And Posner adds that “the priors that seem to exert the strongest influence on present-day Supreme Court justices are political ideology and attitudes toward religion.”

To be sure, there are cases in which such “priors” seem dispositive—the abortion issue leaps to mind, yet even there, federalism principles would seem to be in order. More broadly, however, the question remains: Has Posner overstated the matter—and misstated it? As for overstatement, notice that he has moved from “most of the issues that the court takes up cannot be resolved by interpretation” to “there are no settled principles for resolving the most difficult and consequential legal controversies.” Which is it—“most” or “the most difficult”? Truth to tell, the Court has shown itself quite capable of resolving a large number of its cases unanimously or at least with only one or two dissents. In the term before last, for example, it resolved nearly two-thirds of its cases unanimously.

Yet even in the “difficult” cases, one should pause before claiming that there are no “settled principles” for resolving them. First, there are cases in which the principles are clear but their application affords reasonable justices room for reasonable differences. Take simply the first two of Posner’s examples: The Fourth Amendment’s prohibition of “unreasonable” searches (electronic surveillance), and the principles of common law nuisance that stand behind the First Amendment’s speech protections (sound trucks) afford justices ample room to reasonably differ—not about principles but about application.

But second, and more important, there is no question that “settled” may save Posner. Not that there was ever a period in which every constitutional principle was settled, but prior to the rise of Progressivism our understanding of our Constitution of limited government was far more settled than it has been since the Constitution was upended during the New Deal. With the modern “living Constitution” there is far more room for saying that “there are no settled principles for resolving the most difficult and consequential legal controversies.” But that is the subject for another day. For the present it is enough to question whether the Supreme Court is “inevitably” a politicized Court or whether instead it has been made into a politicized Court by political forces beyond its chambers.

In his weekly address last Saturday, President Obama touted the importance of technology and innovation, and his plans to visit the popular South by Southwest festival in Austin, Texas. He said he would ask for “ideas and technologies that could help update our government and our democracy.” He doesn’t need to go to Texas. Simple technical ideas with revolutionary potential continue to await action in Washington, D.C.

Last fall, the White House’s Third Open Government National Action Plan for the United States of America included a commitment to develop and publish a machine-readable government organization chart. It’s a simple, but brilliant step forward, and the plan spoke of executing on it in a matter of months.

What the President Should Do: Transparent Government

Having access to data that represents the organizational units of government is essential to effective computer-aided oversight and effective internal management. Presently, there is no authoritative list of what entities make up the federal government, much less one that could be used by computers. Differing versions of what the government is appear in different PDF documents scattered around Washington, D.C.’s bureaucracies. Opacity in the organization of government is nothing if not a barrier to outsiders that preserves the power of insiders—at a huge cost in efficiency.

One of the most important ideas and technologies that could help update our government and democracy is already a White House promise. In fact, it’s essentially required by law.

Publication of spending data in organized, consistent formats is required under the terms of the DATA Act—the Digital Accountability and Transparency Act—which the president signed in May 2014. To organize spending data, you must have data reflecting the governmental entities that do the spending.

We’ve studied the availability of data from the federal government that reflect deliberations, management, and results, and we reported in November 2012 on the somewhat better progress on transparency in Congress compared to the administration.

Our Deepbills project added computer-readable code to every version of every bill in the 113th Congress, showing where Congress mentioned agencies and bureaus, proposed spending money, or referred to existing law. It would have been that much better were there an authoritative list of what the units of government are.
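
For illustration only, the kind of annotation involved can be sketched in a few lines. The `<entity>` tag, the agency table, and the matching logic below are invented for this sketch; they are not the actual Deepbills vocabulary:

```python
import re

# Invented example in the spirit of the Deepbills project: wrap known
# agency names in a machine-readable tag so software can find every
# place a bill mentions an agency. The <entity> tag and agency table
# are hypothetical, not the real Deepbills schema.
AGENCIES = {
    "Department of Energy": "DOE",
    "Internal Revenue Service": "IRS",
}

def annotate(text):
    """Mark up mentions of known agencies in a snippet of bill text."""
    for name, code in AGENCIES.items():
        text = re.sub(re.escape(name),
                      '<entity id="%s">%s</entity>' % (code, name), text)
    return text

print(annotate("Funds are appropriated to the Department of Energy."))
```

An authoritative machine-readable organization chart would supply the agency table here, rather than each project compiling its own.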

President Obama noted in his weekly address that improving the government along these lines has been a goal of his since before he was elected. Given the need and the potential, the achievements he cites wouldn’t get a victory lap out of the starting blocks. But there is still time to deliver on a transparency promise by publishing an authoritative, machine-readable organization chart as the administration promised just last October.

Those who like policies tend to extol the politics that produced them. Praise for the marketplace of ideas or the wisdom of crowds rarely comes from serial losers of policy debates. They are more likely to consider systemic problems that mar debate, like informational asymmetries, special interests, and elite bias.

It shouldn’t then come as a great surprise that we in Cato’s foreign policy department, who oppose most U.S. wars, hosted a panel last fall at the American Political Science Association conference to consider the question of why there isn’t more scholarly evaluation of U.S. wars. Underlying that question is a sense that U.S. wars, at least lately, follow from rationales that offend political science, or even economics, and that more scholars, whether in the academy or think tanks, should say so. Call it a cry for help in making our case.

Upon invitation, several panelists, myself included, recast their remarks in the most recent International Security Studies Forum, a publication of H-Diplo. The contributors agree that scholarly evaluation of war is flawed, though not in short supply. Christopher Preble, in his introduction, argues that journalists and defense experts considering wars defer too much to those who served in the military. But military officers, even once retired, stick to a professional ethos that prompts them to leave strategic issues – why to fight – to civilians and to focus on operational questions of how. Jon Lindsay points to the difficulties scholars face in understanding modern military technologies and the dearth of publicly available information about military operations. Alan Kuperman questions academics’ objectivity, seeing them as captives of dovish or hawkish biases.

My take, which follows from an as yet unpublished essay I wrote with Justin Logan, focuses on Washington’s analysts, as opposed to academia. I argue that defense analysis here generally serves a hawkish, bipartisan consensus. Professional incentives encourage analysts to avoid questioning the consensus’ key tenets, including war rationales. Analysts adopt an “operational mind-set.” Washington’s analysis of its wars is voluminous but shallow.

The underlying problem, to me, isn’t that politics affects analysis. That’s the nature, even the virtue, of pluralistic debate. The problem is insufficient politics—a lack of competing interests. Because U.S. military power makes war feel cheap, the public and their representatives are often indifferent to the wisdom of wars. The historical exercise of national power meanwhile entrenched a belief among foreign policy elites that U.S. security depends on global military exertions. As long as costs stay diffuse, the disinterested majority lets the elite minority have its wars without much fuss about costs, benefits, checks, or balances. Debate improves when costs gather, as in Vietnam or Iraq. That’s a limited consolation. When it comes to U.S. wars, the wisdom of crowds comes late and infrequently.

The world’s forests provide a number of vital ecosystem services that benefit both society and nature alike. However, in recent years many have opined that the future of forests is in doubt. Deforestation, drought, fire, insect outbreaks and global warming represent only a handful of the many challenges that are claimed to be causing a near-term demise in forest health that is predicted to become only worse in the years and decades to come. But how valid are these fears? Are Earth’s forests truly on the eve of destruction?

Though there are indeed some locations that are suffering from a variety of maladies, there are many that are not. In fact, multiple studies reveal forests that are thriving, with many increasing in productivity and expanding their ranges (see, for example, the many reviews posted on the CO2 Science website under the heading Greening of the Earth and Forests). And they are typically accomplishing these things despite all the real and imagined assaults on Earth’s vegetation that have occurred over the past several decades. Indeed, forests have more than compensated for any of the negative effects these phenomena may have inflicted upon them.

A recent example of this phenomenon is presented in the work of Poulsen and Hoffman (2015), who examined aerial and ground-based photographs to estimate long-term changes in the distribution of forests on the Cape Peninsula of South Africa. Specifically, the pair of researchers analyzed a series of forest-related characteristics from aerial photographs taken in 1944 and 2008, along with 50 historical ground-based repeat photographs that were initially imaged between 1888 and 1980 and then repeated in 2011 or 2012.

As shown in the table below, examination of the aerial photographs revealed there was an overall increase in forest cover of 65% between 1944 and 2008. And with respect to the ground-based repeat photographs, Poulsen and Hoffman report finding “an overall decrease in cover of more than 5% of visible rock and sand” (indicating more vegetative cover).

Table 1. Changes in forest cover of Western Cape Afrotemperate Forest and Western Cape Milkwood Forest on Cape Peninsula between 1944 and 2008, based on an analysis of aerial photographs. Source: Poulsen and Hoffman (2015).

 

In discussing their findings, Poulsen and Hoffman state “the aerial and repeat ground-based photograph datasets have shown that there has been a significant increase in the number of patches of forest as well as in forest cover on the Cape Peninsula since 1888 when the earliest repeat photos were taken.” In fact, as revealed in Table 1, overall forest cover has increased by more than 65 percent since 1944. And in areas where coverage has not increased, the two authors say they “are primarily situated along the coast where developments have expanded and replaced [the forest].”

As for the cause of the observed forest increase, Poulsen and Hoffman note that “increases in woody vegetation cover have increasingly been attributed to increases in elevated atmospheric CO2 levels,” though they say it is difficult to establish that link here because “there has been no research on the effects of elevated CO2 on South African indigenous forest taxa.” A more likely cause, in their view, is fire exclusion; yet that conclusion may be somewhat shaky, considering that they report mean fire return intervals have declined from 31.6 to 13.5 years since 1975, a decline that should not have favored forest growth.

Whatever the cause, or causes, one thing is clear: Cape Peninsula forests are far from approaching any tipping point leading to their destruction. In fact, we find that in many other locations throughout the world (see the many references cited in the links presented above), forests are defying alarmists’ projections of their demise, as they successfully cope with and adapt to the many challenges humanity and nature force upon them. Now that’s good news worth sharing!

 

Reference

Poulsen, Z.C. and Hoffman, M.T. 2015. Changes in the distribution of indigenous forest in Table Mountain National Park during the 20th Century. South African Journal of Botany 101: 49-56.

Last month, the Treasury Department announced new steps to boost the market for private mortgage bonds, not backed by the government or any federal entity, in order to increase homeownership and improve access to credit for working-class Americans who might be having trouble borrowing money to buy a house.  The Administration’s latest effort to boost the market for private mortgage lending raises an essential question:  What are the societal benefits to homeownership, and would more investment in homeownership help the economy?

It’s a long-discussed question, of course.  The pro-home-building folks aver that homeownership fosters civic involvement and helps people become more tied to their community, which encourages other behavior beneficial for the economy.  And for a good proportion of homeowners the majority of their net wealth is in their home, so it can be an important source of savings.

But another way to look at it is that correlation is not causation:  The reason that homeowners are more civic-minded and involved in the community is because such people are much more likely to have the wherewithal to save enough to make a downpayment on a house.  Ed Glaeser, the renowned housing economist from Harvard, puts little stock in the notion that homeownership has significant positive societal externalities.

What’s more, there’s some evidence that high homeownership rates have downsides as well.  In the last four decades the predilection for moving has slowed significantly:  only half as many people moved across state or county lines in any year this decade as was the case in the 1950s, for instance.  This is problematic because it means that our economy is worse at matching up workers with where the available jobs are.  The lingering unemployment in many rust-belt states would be less if some of their unemployed could be persuaded to move to another community where there are jobs.  There has been a decades-long move of people from the midwest to the Sunbelt, of course, but the data suggest there’s ample room for more.  This hasn’t happened in part because people are tied down by the homes that they own and are reluctant to sell while they are underwater.  That people are unable to ignore sunk costs isn’t economically rational, of course, but it nevertheless governs how many people consider whether to move.

In other words, an argument could be made that instead of taking measures to boost homeownership, a better approach to jumpstarting the economy might be to reduce incentives to homeownership and let the proportion of people who own homes fall.  There’s no reason to think that lower homeownership rates would reduce spending on housing:  people have to live somewhere, and fewer homeowners would simply mean more renters.  If the average size of a family’s home shrinks slightly because of it, it’s hard to see what the harm would be in that — home sizes increased by one-third from the 1980s to the early 2000s, so it’s not like we’re returning to the world of tenements.  The net result of pulling back on homeownership incentives would be that new families would wait another year or two before buying the home that becomes their family home, and fewer singles would buy — salutary developments, I would argue.

And I’m morally obligated here to point out that the costliest incentive for homeownership — the mortgage interest deduction — does absolutely nothing to increase homeownership rates, since only the wealthiest third of all households can avail themselves of its benefits.  The amount of the tax subsidy from the deduction that goes to homeowners in Greenwich, Connecticut, is an order of magnitude greater than the benefits for people in Mossville, Illinois.

Above all else we need to help policymakers get away from this mindset that our ample housing subsidies benefit the economy by creating jobs building homes.  Demand-side fiscal incentives — and that’s 90% of the current political arguments for housing subsidies — are a chimera.  If we spent less on housing we’d spend more somewhere else in the economy.  This notion that the economy consists of various silos — like housing and autos — and that a reduction in any of these is an unmitigated bad thing is a lousy way to approach how an economy works.  The more we spend on building new houses the less money is available for investments in things that might actually boost the productive capacity of an economy.  In other words, the demand-side incentives of housing may reduce the productive capacity of the economy (the supply side of the economy) and with it long-term economic growth.

There’s no disputing that our capital markets aren’t working efficiently at the moment.  Some of this has to do with the collective shell shock many financial institutions still have over the financial market implosion in 2008.  However, government activities like the passage of Dodd-Frank, the management of Fannie Mae and Freddie Mac, the attempt by the CFPB to wipe out title and payday loan companies (with not a few installment loan companies caught in the crossfire), and the punitive fines assessed on various banks for their alleged misdoings (or in the case of the Bank of America, for simply doing what it was asked to do by the government) have left banks extremely hesitant to make anything but the safest loans.  It’s hard to see what the government can do to convince lenders they won’t be accused of exploiting borrowers with poor credit risks again if there’s another recession in the near future.

Capital markets need better and smarter regulation, but the fact that homeownership rates are falling is not a reason to act.

[Cross-posted from Alt-M.org]

Giancarlo Ibarguen, the former president of Francisco Marroquin University (UFM) in Guatemala, passed away today.

Giancarlo was a friend and teacher to many of us in the international freedom movement, and especially in Latin America. His influence at the University, the center of classical-liberal thought in the region, was large. He was an advocate of innovative and age-old techniques to promote ideas and learning. As Argentine scholar Martin Krause notes, he was an enthusiastic proponent of the University’s “New Media” program and of the Socratic method of teaching. As its chairman and founder, he was the proud backer of the Antigua Forum, a novel way of bringing together distinguished thinkers, entrepreneurs and others to solve real world problems. Giancarlo played no small role in making UFM among the most modern universities in the region, something to which thousands of UFM alums and countless visiting professors and other scholars from the Americas can attest. I was proud that, under Giancarlo’s encouragement, we began the first of our successful series of Cato University seminars for Latin Americans at UFM seven years ago.

In addition to strengthening classical liberalism through UFM, Giancarlo did so as a member of the board of directors of Liberty Fund, as a president and vice president of the Association of Private Enterprise Education, and as secretary of the Mont Pelerin Society. His interest in making the world of ideas relevant to improving the way people lived led him to advocate both liberal principles and public policy reform. In terms of the latter, Giancarlo was an architect, along with Tom Hazlett, of Guatemala’s successful telecommunications privatization, putting the country on the vanguard in that policy area.

Most of us who knew Gianca, as his friends called him, will remember him for his commitment to the “principles of a society of free and responsible persons,” which was also UFM’s mission. Like his mentor Muso Ayau, the founder of the university, Gianca embodied the spirit of liberalism. He was tolerant, curious, modest about his own knowledge and accomplishments, courteous, open-minded and confident about the human potential. He urged students to question everything and to always question themselves. When Muso Ayau died, he told me that one of the things that most impressed him about Muso was that he had “a very strong sense of right and wrong.” The same could be said about Giancarlo.

Giancarlo died of a debilitating disease that he had been battling for several years. To those of us who interacted with him during this time mostly from afar, there was never any indication that anything was wrong, though his condition was no secret and we of course knew better. He kept extremely engaged, responding quickly to emails, sending personal notes and suggestions, recommending readings or events on Twitter, etc. He was a constant source of optimism and inspiration. To the end, he was a model of dignity.

The late publisher of the National Review, Bill Rusher, used to urge the new hires at his magazine to remain on guard. “Politicians will always disappoint you,” he warned. True enough, though sometimes disappointment gives way to disgust. For once, I am not talking about the Republican and Democratic frontrunners, but the socialist Senator from Vermont.

During the recent Democratic debate in economically distressed and racially diverse Flint, Mich., Sen. Sanders pandered to the black electorate in an attempt to outflank Clinton on the issue of race. In an answer to a question, “What racial blind spot do you have?” Sanders responded, “When you’re white… [you] don’t know what it’s like to be poor.” Well… There are some 4,000,000 Americans who came to the United States from Eastern Europe after the fall of the Berlin Wall and they do remember poverty as well as the economic system (oh, the irony) that produced it – socialism. So, below is a HumanProgress chart comparing average incomes in the United States and Eastern Europe over the last 65 years, as well as a telling video of empty shelves in a Soviet grocery store circa 1990.

[Chart and video embedded in the original post: average incomes in the United States vs. Eastern Europe over the last 65 years; empty shelves in a Soviet grocery store, circa 1990.]
Here in America, you’d be forgiven for believing that things are on a downward spiral, as Donald Trump’s disturbing success in various primaries raises the real and terrifying prospect that he will be the Republican nominee. So if constant media coverage of the primary season depresses you, you could do worse than consider recent developments in the Middle East, where something truly unusual has been happening in the last few weeks. With a fragile ceasefire in Syria and diplomatic negotiations in Yemen, things actually appear to be improving.

Though these developments are tenuous – and each has many problems – they show the value of diplomatic and even incremental approaches to resolving the region’s ongoing conflicts.

It’s technically incorrect to refer to the current situation in Syria as a ceasefire. For starters, it doesn’t actually prohibit attacks by any party against the conflict’s most extreme groups, ISIS and Jabhat al Nusra. And unlike a true ceasefire, there is no official on-the-ground monitoring and compliance system. Instead, that role is filled in a more ad-hoc way by a communications hotline between Russia and the United States as members of the International Syria Support Group.

There are other problems with the agreement too, particularly its role in freezing the conflict in a way which is extremely advantageous to the Syrian government and its Russian backers. While this was perhaps unavoidable – Russia would probably not have agreed otherwise – it will reduce the bargaining power of the Syrian opposition in peace talks when they restart on March 14th.

Nonetheless, it’s estimated that the cessation of hostilities – which has held for almost two weeks – has dropped the level of violence and death toll inside Syria by at least 80 percent. Violence has dropped so much that anti-regime protestors were able to engage in peaceful protest marches in several towns. Likewise, despite delivery problems and delays, humanitarian aid is flowing into some areas of Syria for the first time in years.  These small advances are all the more astounding given how unthinkable they seemed even a few months ago.

Progress in Yemen is less spectacular, but still encouraging. Following negotiations mediated by northern Yemeni tribal leaders, the combatants arranged a swap of Jaber al-Kaabi, a Saudi soldier, for seven Yemeni prisoners. At the same time, a truce along the Saudi-Yemeni border is allowing much-needed humanitarian aid to flow into the country.

Again, these are at best a tiny step towards resolving the conflict, which has lasted almost a year and produced extremely high levels of civilian casualties. The truce is temporary and confined to the border region; Saudi airstrikes continue near the contested town of Ta’iz. Yet the negotiations mark the first direct talks between Houthi rebels and the Saudi-led coalition, which had previously insisted that they would deal with the Houthis only through the exiled Hadi government.

In both Syria and Yemen, observers are quick to point out the tenuous nature of these developments, and it is certainly true that any political settlement in either conflict remains an uphill battle. But I prefer to view these developments in a more positive light. As numerous post-Soviet frozen conflicts have demonstrated, ceasefires do not necessarily resolve the major disputes which precipitated the conflict originally. Yet even if the end result is not a more comprehensive peace deal, the lower levels of violence and improved access to humanitarian aid can dramatically improve life for civilians. In Syria in particular, this represents a small – but notable – victory for diplomacy.

Last week, the Cato Institute held a policy forum on school choice regulations. Two of our panelists, Dr. Patrick Wolf and Dr. Douglas Harris, were part of a team that authored one of the recent studies finding that Louisiana’s voucher program had a negative impact on participating students’ test scores. Why that was the case – especially given the nearly unanimously positive previous findings – was the main topic of our discussion. Wolf and I argued that there is reason to believe that the voucher program’s regulations might have played a role in causing the negative results, while Harris and Michael Petrilli of the Fordham Institute pointed to other factors. 

The debate continued after the forum, including a blog post in which Harris raises four “problems” with my arguments. I respond to his criticisms below.

The Infamous Education Productivity Chart

Problem #1: Trying to discredit traditional public schools by placing test score trends and expenditure changes on one graph. These graphs have been floating around for years. They purport to show that spending has increased much faster than expenditures [sic], but it’s obvious that these comparisons make no sense. The two things are on different scales. Bedrick tried to solve this problem by putting everything in percentage terms, but this only gives the appearance of a common scale, not the reality. You simply can’t talk about test scores in terms of percentage changes.

The more reasonable question is this: Have we gotten as much from this spending as we could have? This one we can actually answer and I think libertarians and I would probably agree: No, we could be doing much better than we are with current spending. But let’s be clear about what we can and cannot say with these data.

Harris offers a reasonable objection to the late, great Andrew Coulson’s infamous chart (shown below). Coulson already addressed critics of his chart at length, but Harris is correct that the test scores and expenditures do not really have a common scale. That said, the most important test of a visual representation of data is whether the story it tells is accurate. In this case, it is, as even Harris seems to agree. Adjusted for inflation, spending per pupil in public schools has nearly tripled in the last four decades while the performance of 17-year-olds on the NAEP has been flat. 

Producing a similar chart with data from the scores of younger students on the NAEP would be misleading because the scale would mask their improvement. But for 17-year-olds, whose performance has been flat on the NAEP and the SAT, the story the chart tells is accurate.
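
Harris’s scale objection, which I concede above, can be made concrete with invented numbers: percent change presupposes a true zero, and a test-score scale has none, so the same point gain yields very different “percent changes” depending on where the scale happens to be anchored.

```python
# Percent change assumes a ratio scale with a true zero. Test scores
# are interval-scaled, so shifting the scale's anchor changes the
# "percent change" even though the actual gain is identical.
# (The numbers below are invented for illustration.)
def pct_change(old, new):
    return 100.0 * (new - old) / old

# The same 5-point gain on two different anchorings of a scale:
print(round(pct_change(300, 305), 1))  # on a NAEP-like scale anchored near 300
print(round(pct_change(50, 55), 1))    # on a scale anchored near 50
```

This is why the chart’s defense rests on the flatness of the score trend rather than on its percentage change.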

Voucher Regulations Are Keeping Private Schools Away

Problem #2: Repeating arguments that have already been refuted. Bedrick’s presentation repeated arguments about the Louisiana voucher case that I already refuted in a prior post. Neither the NBER study nor the survey by Pat Wolf and his colleagues provide compelling evidence that existing regulations are driving out potentially more effective private schools in the Louisiana voucher program, which was a big focus of the panel.

Here Harris attacks a claim I did not make. He is correct that there is no compelling evidence that regulations are driving out higher-quality private schools, but no one claimed that there was. Rather, I have repeatedly argued that the evidence was “suggestive but not conclusive” and speculated in my presentation that “if the enrollment trends are a rough proxy [for quality], though we can’t prove this, then it would suggest that the higher-quality schools chose not to participate” while lower-quality schools did.

Moreover, what Harris claims he refuted he actually merely disputed – and not very persuasively. In the previous post he mentions, he minimized the role that regulation played in driving away private schools:

As I wrote previously, the study he cites, by Patrick Wolf and colleagues, actually says that what private schools nationally most want changed is the voucher’s dollar value. In Louisiana, the authors reported that “the top concern was possible future regulations, followed by concerns about the amount of paperwork and reports. When asked about their concerns relating to student testing requirements, a number of school leaders expressed a strong preference for nationally normed tests” (italics added). These quotes give a very different impression that [sic] Bedrick states. The supposedly burdensome current regulations seem like less of a concern than funding levels and future additional regulations–and no voucher policy can ever insure against future changes in policy.

Actually, the results give a very different impression than Harris states. The quote Harris cites from the report is regarding the concerns of participating schools, but the question at hand is why the nonparticipating schools opted out of the voucher program. Future regulations were still the top concern for nonparticipating schools, but current regulations were also major concerns. Indeed, the study found that 9 of the 11 concerns that a majority of nonparticipating private schools said played a role in their decision not to participate in the voucher program related to current regulations, particularly around admissions and the state test.

Source: “Views from Private Schools,” by Brian Kisida, Patrick J. Wolf, and Evan Rhinesmith, American Enterprise Institute (page 19)

Nearly all of the nonparticipating schools’ top concerns related to the voucher program’s ban on private schools using their own admissions criteria (concerns 2, 3, 5, 7, 8 and 11) or requiring schools to administer the state test (concerns 6, 9, 10, and possibly 7 again). It is clear that these regulations played a significant role in keeping private schools away from the voucher program. The open question is whether the regulations were more likely to drive away higher-quality private schools. I explained why that might be the case, but I have never once claimed that we know it is the case.

Market vs. Government Regulations in Education

Problem #3: Saying that unregulated free markets are good in education because they have been shown to work in other non-education markets. […] For example, the education market suffers from perhaps the worst information problem of any market–many complex hard-to-measure outcomes most of which consumers (parents) cannot directly observe even after they’ve chosen a school for their child. Also, since students can realistically only attend schools near their homes, and there are economies of scale in running schools, that means there will generally be few practical options (unless you happen to live in a large city with great public transportation–very rare in the U.S.). And the transaction costs are very high to switch schools. And there are equity considerations. And … I could go on.

Harris claims that a free market in education wouldn’t work because education is uniquely different from other markets. However, the challenges he lists – information asymmetry, difficulty measuring intangible outcomes, difficulties providing options in rural areas, transaction costs for switching schools – aren’t unique to K-12 education at all. Moreover, there is no such thing as an “unregulated” free market because market forces regulate. As I describe below, while not perfect, these market forces are better suited than the government to address the challenges Harris raises. 

Information asymmetry and hard-to-measure/intangible outcomes

Parents need information in order to select quality education providers for their children. But are government regulations necessary to provide that information? Harris has provided zero evidence that they are, but there is much evidence to the contrary. Here the disparity between K-12 and higher education is instructive. Compared to K-12, colleges and universities operate in a relatively free market. Certainly, there are massive public subsidies, but they are mostly attached to students, and colleges have maintained meaningful independence. Even Pell vouchers do not require colleges to administer particular tests or set a single standard that all colleges must follow.

So how do families determine if a college is a good fit or not? There are three primary mechanisms they use: expert reviews, user reviews, and private certification.

The first category includes the numerous organizations that rate colleges, including U.S. News & World Report, the Princeton Review, Forbes, the Economist, and numerous others like them. These are similar to the sorts of expert reviews, like Consumer Reports, that consumers regularly consult when buying cars, computers, or electronics, or when hiring lawyers – all industries where the non-expert consumer faces a significant information asymmetry problem.

The second category includes the dozens of websites that allow current students and alumni to rate and review their schools. These are similar to Yelp, Amazon.com, Urban Spoon and numerous other platforms for end-users to describe their personal experience with a given product or service.

Finally, there are numerous national and regional accreditation agencies that certify that colleges meet a certain standard, similar to Underwriters Laboratories for consumer goods. This last category used to be private and voluntary, although now it is de facto mandatory because accreditation is needed to get access to federal funds. 

None of these are perfect, but then again, neither are government regulations. Moreover, the market-based regulators have at least four major advantages over the government. First, they provide more comprehensive information about all those hard-to-measure and intangible outcomes that Harris was concerned about. State regulators tend to measure only narrow and more objective outcomes, like standardized test scores in math and English or graduation rates. By contrast, the expert and user reviews consider return-on-investment, campus life, how much time students spend studying, teaching quality, professor accessibility, career services assistance, financial aid, science lab facilities, study abroad options, and much more. 

Second, the diversity of options means parents and students can better identify the best fit for them. As Malcolm Gladwell observed, different people give different weights to different criteria. A family’s preferences might align better with the Forbes rankings than the U.S. News rankings, for example. Alternatively, perhaps no single expert reviewer captures a particular family’s preferences, in which case they’re still better off consulting several different reviews and then coming to their own conclusion. A single government-imposed standard would only make sense if there were a single best way to provide (or at least measure) education, we knew what it was, and there were a high degree of certainty that the government would actually implement it well. However, that is not the case.

Third, a plethora of private certifiers and expert and user reviews is less likely to create systemic perverse incentives than a single government standard. As it is, the hegemony of U.S. News & World Report’s rankings created perverse incentives for colleges to focus on inputs rather than outputs, monkey around with class sizes, send applications to students who didn’t qualify in order to increase their “selectivity” rating, etc. If the government imposed a single standard and then rewarded or punished schools based on their performance according to that standard, the perverse incentives would be exponentially worse. The solution here is more competing standards, not a single standard.

Fourth, as Dr. Howard Baetjer Jr. describes in a recent edition of Cato Journal, whereas “government regulations have to be designed based on the limited, centralized knowledge of legislators and bureaucrats, the standards imposed by market forces are free to evolve through a constant process of evaluation and adjustment based on the dispersed knowledge, values, and judgment of everyone operating in the marketplace.” As Baetjer describes, the incentives to provide superior standards are better aligned in the market than for the government: 

Incentives and accountability also play a central role in the superiority of regulation by market forces. First, government regulatory agencies face no competition from alternative suppliers of quality and safety assurance, because the regulated have no right of exit from government regulation: they cannot choose a better supplier of regulation, even if they want to. Second, government regulators are paid out of tax revenue, so their budget, job security, and status have little to do with the quality of the “service” they provide. Third, the public can only hold regulators to account indirectly, via the votes they cast in legislative elections, and such accountability is so distant as to be almost entirely ineffectual. These factors add up to a very weak set of incentives for government regulators to do a good job. Where market forces regulate, by contrast, both goods and service providers and quality-assurance enterprises must continuously prove their value to consumers if they are to be successful. In this way, regulation by market forces is itself regulated by market forces; it is spontaneously self-improving, without the need for a central, organizing authority. 

In K-12, there are far fewer private certifiers, expert reviewers, or websites for user reviews, despite a significantly larger number of students and schools. Why? Well, first of all, the vast majority of students attend their assigned district school. To the extent that those schools’ outcomes are measured, it’s by the state. In other words, the government is crowding out private regulators. Even so, there is a small but growing number of organizations like GreatSchools, Private School Review, School Digger, and Niche that are providing parents with the information they desire.

Options in rural areas

First, it should be noted that, as James Tooley has amply documented, private schools regularly operate – and outperform their government-run counterparts – even in the most remote and impoverished areas in the world, including those areas that lack basic sanitation or electricity, let alone public transportation. (For that matter, even the numerous urban slums where Tooley found a plethora of private schools for the poor lack the “great public transportation” that Harris claims is necessary for a vibrant education market.) Moreover, to the extent rural areas do, indeed, present challenges to providing education, such challenges are far from unique. Providers of other goods and services also must contend with reduced economies of scale, transportation issues, etc.

That said, innovations in communication and transportation mean these obstacles are less difficult to overcome than ever before. Blended learning and course access are already expanding educational opportunities for students in rural areas, and the rise of “tiny schools” and emerging ride-sharing operations like Shuddle (“Uber for kids”) may soon expand those opportunities even further. These innovations are more likely to be adopted in a free-market system than a highly government-regulated one.

Test Scores Matter But Parents Should Decide 

Problem #4: Using all this evidence in support of the free market argument, but then concluding that the evidence is irrelevant. For libertarians, free market economics is mainly a matter of philosophy. They believe individuals should be free to make choices almost regardless of the consequences. In that case, it’s true, as Bedrick acknowledged, that the evidence is irrelevant. But in that case, you can’t then proceed to argue that we should avoid regulation because it hasn’t worked in other sectors, especially when those sectors have greater prospects for free market benefits (see problem #3 above). And it’s not clear why we should spend a whole panel talking about evidence if, in the end, you are going to conclude that the evidence doesn’t matter.

Once again, Harris misconstrues what I actually said. In response to a question from Petrilli regarding whether I would support “kicking schools out of the [voucher] program” if they performed badly on the state test, I answered:

No, because I don’t think it’s a wise move to eliminate a school that parents chose, which may be their least bad option. We don’t know why a parent chose that school. Maybe their kid was being bullied at their local public school. Maybe their local public school that they were assigned to was not as good. Maybe there was a crime problem or a drug problem.

We’re never going to have a perfect system. Libertarians are not under the illusion that all private schools are good and all public schools are bad… Given the fact that we’ll never have a perfect system, what sort of mechanism is more likely to produce a wide diversity of options, and foster quality and innovation? We believe that the market – free choice among parents and schools having the ability to operate as they see best – has proven over and over again in a variety of industries to have better outcomes than Mike Petrilli sitting in an office deciding what quality is… as opposed to what individual parents think [quality] is.

Harris then responded by claiming that I was saying the evidence was “irrelevant,” to which I replied:

It’s irrelevant in terms of how we should design the policy, in terms of whether we should kick [schools] out or not, but I think it’s very important that we know how well these programs are working. Test scores do measure something. They are important. They’re not everything, but I think they’re a pretty decent proxy for quality…

In other words, yes, test scores matter. But they are far from the only things that matter. Test scores should be one of many factors that inform parents so that they can make the final decision about what’s best for their children, rather than having the government eliminate what might well be their least bad option based on a single performance measure. 

I am grateful that Dr. Harris took the time both to attend our policy forum and to continue the debate on his blog afterward. I look forward to continued dialogue regarding our shared goal of expanding educational opportunity for all children.

According to many politicians and pundits, new financial regulation adopted since 2008 means that financial crises are now less likely than before. President Barack Obama, for example, has suggested that 

Wall Street Reform now allows us to crack down on some of the worst types of recklessness that brought our economy to its knees, from big banks making huge, risky bets using borrowed money, to paying executives in a way that rewarded irresponsible behavior.

Similarly, Paul Krugman writes that

financial reform is working a lot better than anyone listening to the news media would imagine…Did reform go far enough? No. In particular, while banks are being forced to hold more capital, a key force for stability, they really should be holding much more. But Wall Street and its allies wouldn’t be screaming so loudly, and spending so much money in an effort to gut the law, if it weren’t an important step in the right direction. For all its limitations, financial reform is a success story.

Krugman is right that, other things equal, forcing banks to issue more capital should reduce the risk of crises.

But other things have not remained equal. According to Liz Marshall, Sabrina Pellerin, and John Walter of the Richmond Federal Reserve Bank, the federal government is now protecting a much higher share of private financial sector liabilities than before the crisis.

If more private liabilities are explicitly or implicitly guaranteed, private parties will at some point take even greater risks than in earlier periods. And experience from 2008 suggests that the government will always bail out major financial intermediaries if risky bets turn south.

So, some of the new regulation may have reduced the risk of financial crises; but other government actions have done the opposite. Time will tell which effect dominates.

It may not seem necessary to say these two things, but here goes: (1) No person or group of people is omniscient, and (2) all people are different. Why do I state these realities? Because Common Core supporters sometimes seem to need reminders.

Writing on his New York Times blog, the New America Foundation’s Kevin Carey takes Donald Trump to task for saying that if elected he would eliminate the Common Core. Fair enough, though just as Washington strongly coerced adoption of the Core – a reality Carey deceptively sidesteps by saying states “voluntarily” adopted it – the feds could potentially attach money to dropping it. But that would be no more constitutional than the initial coercion, and the primary coercive mechanism – the Race to the Top – was basically a one-shot deal (though reinforced to an appreciable extent by No Child Left Behind waivers).

Carey is also reasonably suspicious of Trump’s suggestion that local control of education works best. Contrary to what Carey suggests, we don’t have good evidence that state or federal control is better than local – meaningful local control has been withering away for probably over a century, and some research does support it – but it is certainly the case that lots of districts have performed poorly and suffer from waste, paralysis, etc. But then we get this:

But states and localities, in a sense, don’t actually have the ability to set educational standards, even if they choose to. The world around us ultimately determines what students need to learn — the demands of highly competitive and increasingly global labor markets, the admissions requirements of colleges and universities, and the march of scientific progress.

The only choice local schools have is whether they will try to meet those expectations. The Common Core is simply a way of organizing and articulating standards that already exist, for the benefit of students, parents and teachers, so that schooling makes sense when children move between different grades, schools, districts and states.

Oh, the Core hubris! While it is true that all people have to respond to the world around them – no man, nor district, is an island – it is confidence to a fault to suggest that the Common Core has captured exactly what labor markets, colleges, and “the march of scientific progress” demand. At the very least, proof of that would be greatly appreciated – some content experts certainly disagree – but even heaps of evidence about what exists now cannot demonstrate that the Core also anticipates the demands made by future progress. And is it truly realistic to imply that all people face the same demands? The student who wants to become a physicist? A welder? An accountant? A manicurist? A park ranger? A… you get the point.

The irony is that this sort of argument for the Core is perfectly in line with what a lot of people seem to like about Trump: He tells them he’ll just make stuff happen, no need to go deeper! Indeed, Carey even invokes “American greatness” in arguing for the Core. Sound familiar?

While I have my concerns about the content of the Core, I am not an expert on curriculum and think there may well be excellent components to it. I also, however, know enough about humanity to know that no one is omniscient, all people are unique individuals, and a single solution in a complex world is rarely as perfect as supporters would have us believe.
