Policy Institutes

The academic year now closing has seen more than its normal share of student, professorial, and administrative moral posturing, so much so that we’re seeing signs of a healthy backlash. Two recent invitations came to me to speak on the subject, for example, one on academic freedom, the other more broadly on tolerance. And very recently the campus dispute over naming the George Mason University Law School after the late Justice Antonin Scalia was settled when Virginia’s State Council of Higher Education declined to block the name change.

But don’t think the battle against leftist academic intolerance has been won. Witness Nicholas Kristof’s op-ed in today’s New York Times, “The Liberal Blind Spot.” In a column a few weeks ago, Kristof offered “a confession of liberal intolerance” in which he criticized his fellow progressives for their hypocrisy in promoting all kinds of diversity on campuses—except ideological. The reader reaction? 

It’s rare for a column to inspire widespread agreement, but that one led to a consensus: Almost every liberal agreed that I was dead wrong.

“You don’t diversify with idiots,” asserted the reader comment on The Times’s website that was most recommended by readers (1,099 of them). Another: Conservatives “are narrow-minded and are sure they have the right answers.”

NYT readers aside, how skewed are the numbers in academia? Well, at Princeton during the 2012 presidential election, 157 faculty and staff donated to Barack Obama’s campaign, 2 to Mitt Romney’s—a visiting engineering professor and a janitor. From 2011 to 2014 at Cornell, 96 percent of the funds the faculty donated to political candidates or parties went to Democratic campaigns; only 15 of 323 donors gave to conservative causes—perhaps a product of Cornell’s agricultural school. And that same ratio, 96 percent, describes the contributions of Harvard’s Faculty of Arts and Sciences to Democratic candidates during that same period. For a broad picture of the ideological complexion of American law schools, see the splendid article by Northwestern University Law School’s Jim Lindgren in the 2016 Harvard Journal of Law & Public Policy, published by the law school’s Federalist Society chapter.

Numbers that skewed don’t come about by accident. As Kristof notes, “When a survey finds that more than half of academics in some fields would discriminate against a job seeker who they learned was an evangelical, that feels to me like bigotry.” Fortunately, a noted progressive has had the courage to call this what it is. Kristof’s piece is worth reading.

One of the problems with big government is that it stimulates the worst sort of behavior from people and attracts legions of cheaters on the inside and outside.

On the outside, the more than 2,300 federal subsidy programs are under constant assault by dishonest individuals, businesses, and criminal gangs. The improper payment rates for the earned income tax credit and school breakfast programs, for example, are more than 20 percent. Medicare and Medicaid are ripped off by tens of billions of dollars a year. It’s a sad reality that when the government dangles free money, millions of people will falsify application forms to try to get some of it.

On the inside, the bad behavior of some federal bureaucrats never fails to amaze me. The official responsible for recent security failings at the TSA apparently bent the rules to line his own pockets and bullied his subordinates to silence any dissent. Kelly Hoggan was the person in charge when “undercover agents from the inspector general’s office … were able to penetrate security checkpoints at U.S. airports while carrying illegal weapons or simulated bombs, 95 percent of the time.”

The Washington Post describes how Hoggan filled his pockets with an extra $90,000 on top of his regular salary of $181,500:

The downfall of a top official in the Transportation Security Administration this week came amid allegations of under-the-radar bonuses and targeted retribution at the highest levels of the agency.

One of the practices that led to Kelly Hoggan’s removal as head of the TSA’s crucial security division is common enough to have a name: smurfing. … Hoggan received bonuses of $10,000 on six different occasions, and three others just above or below that amount, over a 13-month period…

The inspector general, in a report last year, outlined a convoluted process through which Hoggan received the bonus pay. His boss, then-TSA Deputy Administrator John Halinski, told one of Hoggan’s subordinates to recommend Hoggan for the bonus money. That subordinate, Deputy Assistant Administrator Joseph Salvator, recommended that Hoggan receive bonuses. Halinski then approved them.

As for the bullying, the Post reports:

Hoggan also was identified as one of the senior TSA officials who used forced transfers to punish agency employees who spoke out about security lapses or general mismanagement. Those allegations, first raised by TSA whistleblowers, caused considerable anger among members of Congress at three hearings held this month and last. Three of the whistleblowers appeared before the House Committee on Oversight and Government Reform on April 27.

“Many of the people who broke our agency remain in key positions,” testified Jay Brainard, the TSA security director in Kansas. “These leaders are some of the biggest bullies in government.”

Mark Livingston, a manager in the Office of the Chief Risk Officer at TSA headquarters, told the committee that his pay was reduced by two grades after he reported misconduct by TSA officials and security violations.

“If you tell the truth in TSA you will be targeted,” Livingston said.

“Directed reassignments have been punitively used by TSA senior leadership as a means to silence dissent, force early retirements or resignations,” said [Andrew] Rhoades, a TSA manager at Minneapolis-St. Paul International Airport.

The solution to the TSA mess is to demonopolize and decentralize aviation security, as I discuss here and here.

For more on the federal government’s failing bureaucracies, see here.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

—-

In this week’s YOTHAL edition, we’ll focus on some recent climate science findings that deserve further mention and are worthy of a deeper dive. If and when you have the time and/or inclination, you ought to have a look.

First up is a collection of papers that describe the results of several experiments looking into cloud formation—or rather, into the availability and development of the aerosol particles that aid in cloud formation. The tiny aerosols are called cloud condensation nuclei (CCN), and without them, it is very difficult for clouds to form.

It’s well known that sulfate particles, formed as a by-product of fossil fuel burning (primarily coal and oil), make for a good source of CCN. In fact, the changes in cloud characteristics resulting from this form of air pollution are thought to have exerted a cooling pressure on the earth’s surface temperature—a cooling that has acted to offset a certain portion of the warming caused by the coincidental emissions of carbon dioxide and other greenhouse gases.

Just how much warming has been offset by human-induced changes in cloud characteristics is one of the great unknowns in climate science today. Which is unfortunate, as it is a key to understanding how sensitive the earth’s climate is to increasing atmospheric concentrations of greenhouse gases. The less warming offset by enhanced cloud cooling, the less warming caused by greenhouse gas increases.
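The offset logic can be made concrete with a back-of-envelope sketch. All numbers below are hypothetical illustrations chosen for clarity, not values from the papers under discussion:

```python
# Back-of-envelope version of the aerosol-offset argument.
# All numbers are hypothetical illustrations, not results from the papers.

observed_warming = 0.9   # deg C of observed surface warming (hypothetical)
cooling_old = 0.5        # assumed aerosol/cloud cooling offset, deg C (hypothetical)
cooling_new = 0.25       # a smaller offset, as the new CCN findings imply (hypothetical)

# Greenhouse-gas warming is what we observe plus whatever cooling masked it.
ghg_old = observed_warming + cooling_old   # roughly 1.4 deg C
ghg_new = observed_warming + cooling_new   # roughly 1.15 deg C

print(round(ghg_old, 2), round(ghg_new, 2))
```

With a smaller masked offset, the same observed warming implies less underlying greenhouse-gas warming, and hence a lower inferred climate sensitivity.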

What the new research found was that even in the absence of sulfate aerosols, there are plenty of other sources of potential CCN—a primary one being chemical emissions (known as volatile organic compounds, or VOCs) from plants. Through various processes, which the researchers found involve galactic cosmic rays, the plant VOCs can pretty efficiently transform and grow into CCN.

The bottom line from the new research findings is that the world was probably a cloudier place in the pre-industrial period than has been generally realized. The implication is that human sulfate emissions haven’t altered cloud characteristics to the degree currently assumed—which means that current assumptions overestimate the magnitude of the anthropogenic cooling enhancement and thus overestimate the warming influence of greenhouse gas emissions (that is, the earth’s climate sensitivity is less than previously determined).

A good review of these three new experimental results (two of which were published in Nature and the other, simultaneously, in Science) and their implications is found in this news piece in Science that accompanied the papers’ publication. Here’s a teaser:

In other words, Earth is less sensitive to greenhouse gases than previously thought, and it may warm up less in response to future carbon emissions, says Urs Baltensperger of the Paul Scherrer Institute, who was an author on all three papers. He says that the current best estimates of future temperature rises are still feasible, but “the highest values become improbable.” The researchers are currently working toward more precise estimates of how the newly discovered process affects predictions of the Earth’s future climate.

At the very least, the Science overview article is worth a read. If you are interested further, you can have a look at the papers themselves (see links in reference list)—although, fair warning, they are quite technical.

Next up is an excellent review paper on wildfire occurrence in a warming world. The article, jointly authored by Stefan Doerr and Cristina Santín of Swansea University, is part of a special issue of the Philosophical Transactions of the Royal Society B dedicated to “The interaction of fire and mankind.” Doerr and Santín take us through the extant literature on the trends and variability of fire occurrence and the factors influencing them. What they find is in stark opposition to the conclusion that you’d come to by reading the mainstream press. To hear the authors tell it:

Wildfire has been an important process affecting the Earth’s surface and atmosphere for over 350 million years and human societies have coexisted with fire since their emergence. Yet many consider wildfire as an accelerating problem, with widely held perceptions both in the media and scientific papers of increasing fire occurrence, severity and resulting losses. However, important exceptions aside, the quantitative evidence available does not support these perceived overall trends. Instead, global area burned appears to have overall declined over past decades, and there is increasing evidence that there is less fire in the global landscape today than centuries ago.

This is an eye-opening read in light of the hype surrounding the Ft. McMurray fires of recent weeks and the general warming-is-causing-more-fires trope that is paraded out every time there is a fire burning somewhere in the US.  The authors go on to note that “[t]he media still promote perceptions of wildfire as the enemy even in very fire-prone regions, such as the western USA…”

And finally, there is a paper examining what the paleo-history of Greenland tells us about the relationship between higher temperatures and snowfall there. A research team led by University at Buffalo’s Elizabeth Thomas analyzed “aquatic leaf wax” records from sediment cores extracted from a lakebed in western Greenland to reconstruct a temperature and precipitation profile there over the past 8,000 years. Thomas and colleagues found that winter precipitation (snowfall) during a multi-millennial period of warmer-than-current temperatures in Greenland (extending from about 4,000 to 6,000 years ago) was substantially increased.

The proposed mechanism is that the warmer temperatures resulted in reduced sea ice in the nearby Baffin Bay and Labrador Sea which raised the regional moisture availability and increased snowfall.  The enhanced snowfall acted to offset some of the summer ice sheet melting that occurred with the higher temperatures, thereby slowing sea level rise. The authors suggest that a similar mechanism should accompany the current period of rising temperatures. They summarize:

The response of the western GrIS [Greenland Ice Sheet] to higher summer temperatures may have been muted due to increased accumulation in the middle Holocene. Our results suggest that in the future, as Arctic seas warm and sea ice retreats, increased winter precipitation may enhance accumulation on parts of the GrIS and partly offset summer ablation, particularly in areas close to modern winter sea ice fronts.

This result would seem to temper the scare stories of several meters of sea level rise in the coming century that have been circulating around the press—but, predictably, it’s been crickets from those press outlets.

Read more about it in this press release, or in the paper itself.

 

References:

Bianchi, F., et al., 2016. New particle formation in the free troposphere: A question of chemistry and timing. Science, doi: 10.1126/science.aad5456.

Doerr, S. and C. Santín, 2016. Global trends in wildfire and its impacts: perceptions versus realities in a changing world. Philosophical Transactions of the Royal Society B, doi: 10.1098/rstb.2015.0345.

Kirkby, J., et al., 2016. Ion-induced nucleation of pure biogenic particles. Nature, 533, 521–526, doi:10.1038/nature17953.

Thomas, E., et al., 2016. A major increase in winter snowfall during the middle Holocene on western Greenland caused by reduced sea ice in Baffin Bay and the Labrador Sea. Geophysical Research Letters, doi: 10.1002/2016GL068513.

Tröstl, J., et al., 2016. The role of low-volatility organic compounds in initial particle growth in the atmosphere. Nature, 533, 527–531, doi:10.1038/nature18271.

 

California has a “mandatory mediation and conciliation process” whereby unions can force agricultural employers into collective bargaining and also bind the employers to the terms of a collective-bargaining agreement drawn up by a “neutral” mediator. This is the only such compulsory-bargaining law in the country. 

One employer successfully challenged the process in the California court of appeal on the grounds of “class of one” discrimination (treating this employer differently from others) and a separation-of-powers violation. That ruling is now on appeal to the California Supreme Court.

Cato has joined the National Federation of Independent Business and four agricultural associations on an amicus brief supporting the farming company. We argue that the compulsion regime is unconstitutional for two reasons.

First, it imposes mini-labor codes to govern the relations of individual employers and their employees’ unions. It doesn’t provide any safeguard to ensure that similarly situated employers or unions will be treated similarly. It allows mediators to wield legislative authority irrationally and arbitrarily. It therefore denies affected parties the equal protection of the laws, in violation of the U.S. and California Constitutions.

Second, the compulsion regime delegates substantial legislative authority to private-party mediators. It doesn’t provide these mediators with any goal or purpose that they must achieve in drafting collective bargaining agreements. It doesn’t give them any standard or rule by which to achieve any goal or purpose. It fails to establish any adequate safeguards against the abusive exercise of the power delegated. The compulsion regime therefore violates the non-delegation doctrine—delegating legislative powers to an executive agency—and the separation of powers.

In the case of Gerawan Farming, Inc. v. Agricultural Labor Relations Board, the California high court should affirm the judgment below.

There are two ways to become an illegal immigrant in the United States.  The first is to enter illegally, usually across the Southwest border.  Those folks are sometimes called EWIs, short for entered without inspection.  The second way to become an illegal immigrant is to enter legally and then lose legal status, often by overstaying a temporary visa.  

The majority of new illegal immigrants were EWIs until recently.  A recent paper by Robert Warren and Donald Kerwin at the Center for Migration Studies found that overstays accounted for 58 percent of new illegal immigrants in 2012, a rapid increase over the course of a decade (Chart 1).

Chart 1

Overstays as a Percent of all Illegal Entries

 

Source: Warren and Kerwin.

 

At an immigration hearing last week, several witnesses emphasized that continued illegal immigrant entries along the Southwest border and a rising percentage of overstays mean that America’s immigration system is insecure.  In contrast, the higher overstay rate is evidence of fewer illegal immigrants crossing the border as EWIs. 

In calculating the percent of new illegal immigrants who are overstays, the denominator is the number of overstays plus the number of EWIs, while the numerator is the number of overstays alone.  The falling number of illegal immigrants crossing the Southwest border without inspection shrinks the denominator on its own, thus boosting the overstay rate.  The surge in the overstay rate does not reflect a lack of security at points of entry and exit but rather a yuuuuge fall in illegal immigrants crossing the border.

As evidence for that, I kept Warren and Kerwin’s estimates of the overstay population unchanged but held constant at 2000 levels the number of illegal immigrants entering without inspection.  In other words, I didn’t change the flows in the overstay population but just froze the number of illegal immigrants entering without inspection at the higher 2000 number.  Doing that lowers the 2012 overstay rate to 24 percent – less than half of the rate it actually was and lower than at any point during the entire 30-year period covered in their paper.
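That arithmetic can be sketched directly. The flow figures below are hypothetical stand-ins chosen only to reproduce rates in the neighborhood of those discussed, not Warren and Kerwin’s actual estimates:

```python
def overstay_rate(overstays, ewis):
    """Overstays as a share of all new illegal immigrants (overstays + EWIs)."""
    return overstays / (overstays + ewis)

# Hypothetical annual flows (thousands), chosen only for illustration.
overstays_2012 = 350
ewis_2012 = 253    # a low EWI flow pushes the overstay share up
ewis_2000 = 1100   # freezing EWIs at a higher, 2000-style level

print(f"{overstay_rate(overstays_2012, ewis_2012):.0%}")  # about 58%
print(f"{overstay_rate(overstays_2012, ewis_2000):.0%}")  # about 24%
```

Holding overstays fixed while the EWI flow collapses is enough, by itself, to drive the overstay share from roughly a quarter to well over half.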

Warren and Kerwin admit that their overstay rate results are sensitive to their estimates of EWIs and how many overstays actually stay long enough to become illegal immigrants.  Small changes in those numbers can shift their findings dramatically.  However, the relationship between the number of CBP apprehensions and the overstay rate supports my simple point (Chart 2).  As the number of apprehensions fell because fewer immigrants attempted to enter the United States, overstays provided a greater percentage of new illegal immigrants.

Chart 2

Overstay Rate and CBP Apprehensions

 

Sources: Warren and Kerwin and Customs and Border Protection. 

The increasing contribution of overstays to the illegal immigrant population is a result of a relatively secure border rather than a worsening vulnerability.

1. A dozen California metropolitan areas – including big cities like Fresno, Stockton, Bakersfield and Modesto – already have unemployment rates from 8.0% to 18.6%. Yet California’s statewide minimum wage is now scheduled to rise every year through 2022.

2.  News reports imagine that raising the minimum wage will push up other wages, so average wages would supposedly rise more quickly. On the contrary, three of the four most recent increases in the federal minimum wage were quickly followed by prolonged stagnation in average wages.  

3. In 2015, twice as many workers earned less than the $7.25 federal minimum wage (1,691,000) as were paid exactly that minimum wage (870,000).

4. Every time the federal minimum wage has been increased, the number earning less than that minimum has increased dramatically.  This was not just true of teenagers but (as the graph below shows) also of those over 25.  When the minimum wage is pushed up faster than the market would have moved it, the effect is to greatly increase the proportion of jobs paying less than the minimum (including working for cash in the informal economy).  Employers offering less than the minimum, legally or otherwise, then enjoy a flood of unskilled applicants unable to compete for scarcer opportunities among larger businesses subject to minimum wage laws.  Such intensified rivalry for sub-minimum-wage jobs then pushes the lowest wages even lower.

 5. Regardless of federal, state or city laws, the actual minimum wage is always zero.

Introducing their important work, Buhaug et al. (2015) note that earlier research suggests there is “a correlational pattern between climate anomalies and violent conflict” due to “drought-induced agricultural shocks and adverse economic spillover effects as a key causal mechanism linking the two phenomena.” But is this really so?

Seeking an answer to this question, the four Norwegian researchers compared half a century of statistics on climate variability, food production and political violence across Sub-Saharan Africa, which effort, in their words, “offers the most precise and theoretically consistent empirical assessment to date of the purported indirect relationship.” And what did they thereby find?

Buhaug et al. report that their analysis “reveals a robust link between weather patterns and food production where more rainfall generally is associated with higher yields.” However, they also report that “the second step in the causal model is not supported,” noting that “agricultural output and violent conflict are only weakly and inconsistently connected, even in the specific contexts where production shocks are believed to have particularly devastating social consequences,” which fact leads them to suggest that “the wider socioeconomic and political context is much more important than drought and crop failures in explaining violent conflict in contemporary Africa.”

“Instead,” as they continue, “social protest and rebellion during times of food price spikes may be better understood as reactions to poor and unjust government policies, corruption, repression and market failure,” citing the studies of Bush (2010), Buhaug and Urdal (2013), Sneyd et al. (2013) and Chenoweth and Ulfelder (2015). In fact, they state that even the IPCC’s Fifth Assessment Report concludes “it is likely that socioeconomic and technological trends, including changes in institutions and policies, will remain a relatively stronger driver of food security over the next few decades than climate change,” citing Porter et al. (2014).

And so we learn that alarmist claims of future climate-change-induced reductions in agricultural production that lead to social unrest and violent conflicts simply are not supported by real-world observations.

 

References

Buhaug, H., Benjaminsen, T.A., Sjaastad, E. and Theisen, O.M. 2015. Climate variability, food production shocks, and violent conflict in Sub-Saharan Africa. Environmental Research Letters 10: 10.1088/1748-9326/10/12/125015.

Buhaug, H. and Urdal, H. 2013. An urbanization bomb? Population growth and social disorder in cities. Global Environmental Change 23: 1-10.

Bush, R. 2010. Food riots: poverty, power and protest. Journal of Agrarian Change 10: 119-129.

Chenoweth, E. and Ulfelder, J. 2015. Can structural conditions explain the onset of nonviolent uprisings? Journal of Conflict Resolution 10.1177/0022002715576574.

Porter, J.R. et al. 2014. Food security and food production systems. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Ed. C.B. Field et al. (Cambridge: Cambridge University Press) pp. 485-533.

Sneyd, I.Q., Legwegoh, A. and Fraser, E.D.G. 2013. Food riots: media perspectives on the causes of food protest in Africa. Food Security 5: 485-497.

In previous installments of this primer I’ve tried to convince you, first, that monetary policy is ultimately about keeping the available quantity of money from differing substantially, if only temporarily, from the quantity demanded and, second, that doing this boils down in practice to having a money stock that adjusts so as to maintain a steadily-growing level of overall spending on goods and services.

If we’re to pick the right arrangements for achieving this goal, we’d better have a good understanding of the determinants of an economy’s money stock, and of how that stock can be made to expand or contract just enough to keep total spending stable.  Although I eventually plan to talk about monetary arrangements that might make maintaining a steady flow of spending a lot easier than our present system does, for now I’m going to stick to discussing how the same goal might be achieved, at least in principle, in our present monetary system or, more precisely, in the system we had until the subprime crisis of 2008.  (A later post will discuss how things have changed since the crisis.)  This means talking about the Fed’s “instruments of monetary control,” which include devices for regulating the total quantity of bank reserves and circulating Federal Reserve notes, and also for regulating the quantity of bank deposits and other forms of privately-created money that will be supported by any given quantity of bank reserves.

Money Proper and Money Substitutes

In trying to explain how these instruments of monetary control work, I’m tempted, if only for the time being, to revert to some old-fashioned terminology that, whatever its other shortcomings, seems more useful than modern terms are for shedding light upon the nature of money creation.  Nowadays economists use the term “money” to refer to anything that’s a generally-accepted medium of exchange.  Hence the manifold measures of the U.S. money stock — M1, M2, M3, MZM, and so forth — all of which include various sorts of bank deposits.  To refer specifically to the dollars that the Fed itself creates, including both bank reserves and Federal Reserve notes circulating outside of the banking system, they use the terms “high-powered money,” or “base money,” or “the monetary base.”

In the old days, in contrast, economists — or many of them, in any event [1] — liked to distinguish between what they considered money in the strict sense of the term, or “money proper,” and “money substitutes.”  Both money proper and money substitutes serve as generally accepted means of exchange.  The difference is that, while “money substitutes” consist of various kinds of instantly-redeemable IOUs or promises to pay, “money proper” refers to the stuff that the promises promise, that is, what a bank customer expects to get in exchange for the substitutes if he or she asks the bank to pay up.

A century ago, when the terms were still current, in most industrialized economies “money proper” consisted of gold coins, while paper banknotes and demand deposits that were redeemable in gold were mere money substitutes.  Today the same terminology might be used to distinguish the irredeemable currency supplied directly by the Fed from the redeemable exchange media created by commercial banks and other private financial firms.  According to it, and thanks to a few twists of fate, paper Federal Reserve notes are now “money proper,” while bank deposits, and checkable deposits especially, are “money substitutes.”  Note that “money proper” in this context isn’t quite the same thing as what modern economists call “high-powered” or “base” money, because the last includes bank reserves, which aren’t actually “money” at all: they are, true enough, means of payment so far as banks themselves are concerned, but so far as the general public is concerned, it’s bank deposits, rather than the bank reserves that stand behind those deposits, that serve as money.

Real Money as “Raw Material” for Banks

Why drag in the old-fashioned distinction between money proper and money substitutes?  Because it serves to remind us that even today the “money” that commercial banks and other private-market financial firms produce is in an important respect not the real McCoy at all, but ersatz (if often more convenient) stuff that serves in place of it, and does so only because the firms that supply it not only make it very convenient to use (e.g., by swiping a debit card) but at the same time offer its users something akin to money-back (which is to say, a “money proper”-back) guarantees.  It’s owing to such guarantees — that is, to the fact that bank deposits are, or are supposed to be, readily redeemable in central bank notes — that bank deposits usually command the same value as the “money proper” for which they’re a stand-in.  Today, of course, those guarantees are for most depositors further reinforced by the presence of deposit insurance, as well as by the knowledge that government authorities consider some banks “too big to fail.”  But such government guarantees don’t allow banks to manage without reserves: they only reduce the likelihood that a bank’s panicking customers will all rush at once to exchange its money substitutes for “real” money.

The understanding that bank deposits and such derive their value at least partly from the fact that banks are prepared to convert them into “money proper” in turn helps us to appreciate how private financial institutions’ ability to create money substitutes depends on their access to “real” (that is, central-bank-created) money.  It depends, in the first place, on the amount of such “real” money that these firms keep on hand, either in the shape of actual central bank currency (“vault cash”) or in that of deposit balances they maintain at the central bank that are themselves readily convertible into central bank notes, and, in the second place, on their ability to borrow “real” money either from other private firms or from the central bank itself should their own inventories of it run out.  One might even go so far as to think of “real” money (central bank notes and deposit balances) as a crucial “raw material” from which money substitutes (various sorts of bank deposits) are made.

The importance of these insights for a proper understanding of central banks’ devices for monetary control becomes instantly apparent once one realizes that, by regulating the actual quantity of its outstanding notes and deposit balances, together with the terms upon which it is willing to make more of the last available on credit to private sector financial firms, a central bank is able to control, not just the quantity of circulating paper money, but the quantity of money substitutes created by the private sector.  Indeed, since the quantity of circulating currency tends to grow along with the extent of commercial-bank deposit creation, that quantity itself ultimately depends on the quantity of reserves that central banks make available to private financial firms.

Open-Market Operations

It follows from this that the most obvious way in which a modern central bank can regulate an economy’s total money stock is by adjusting the available quantity of bank reserves and circulating currency.  Central banks can most readily do that by adjusting the total size of their balance sheets, which they do by either acquiring or selling assets.  For example, if the Fed wants to increase the stock of bank reserves by, say, $100 billion (admittedly a mere trifle, these days), it has only to purchase $100-billion worth of Treasury securities or other assets from dealers in the secondary or “open” market.[2]  To pay for the securities, the Fed wires funds into the sellers’ bank accounts, instantly increasing the total quantity of bank reserves by the same amount.  The banks that receive the new reserves will then have more “raw material” on hand to support their own and, eventually, other financial firms’ creation of various kinds of money substitutes.  Just how this happens — and especially how it is that each dollar in fresh reserves can ultimately inspire the creation of several dollars’ worth of substitutes — will be among the subjects of our next installment.
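A minimal sketch of these mechanics, using a stylized balance sheet with made-up figures rather than the Fed’s actual accounts:

```python
# Stylized central-bank balance sheet, $billions (made-up figures).
fed = {"securities": 4200, "bank_reserves": 2400, "notes_outstanding": 1400}

def open_market_purchase(fed, amount):
    """Buy securities in the open market; the central bank pays by crediting
    the sellers' banks' reserve accounts, creating base money one-for-one."""
    fed["securities"] += amount
    fed["bank_reserves"] += amount

def open_market_sale(fed, amount):
    """Sell securities (or let them roll off); reserve balances are debited."""
    fed["securities"] -= amount
    fed["bank_reserves"] -= amount

open_market_purchase(fed, 100)
print(fed["bank_reserves"])  # reserves rise by exactly the purchase amount
```

An outright sale, or letting securities mature without replacement, runs the same mechanics in reverse.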

To shrink the money supply, on the other hand, the Fed has only to sell off some of its securities, or to let them “roll” off its balance sheet as they mature instead of replacing them.  When the Fed sells $100 billion in securities, the buyers have their banks wire funds to the Fed, essentially instructing the Fed to deduct the amounts in question from their banks’ reserve balances with it.
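The mechanics of these two operations can be reduced to a very simple sketch. The starting reserve figure below is hypothetical, chosen only to make the arithmetic visible; the point is that an outright purchase or sale changes total bank reserves dollar-for-dollar.

```python
# Toy model of an open-market operation's effect on bank reserves.
# All figures are illustrative, not actual Fed data ($ billions).

def open_market_operation(reserves, purchase):
    """Fed buys (purchase > 0) or sells (purchase < 0) securities.

    Payment is wired into (or out of) dealers' bank accounts, so total
    bank reserves change dollar-for-dollar with the Fed's purchase.
    """
    return reserves + purchase

reserves = 2_500  # hypothetical starting stock of bank reserves

# A $100 billion purchase adds $100 billion of reserves...
reserves = open_market_operation(reserves, 100)
assert reserves == 2_600

# ...and a $100 billion sale (or run-off) removes the same amount.
reserves = open_market_operation(reserves, -100)
assert reserves == 2_500
```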

Although changes in the size of the Fed’s balance sheet — that is, in its total assets and liabilities — often involve like changes in the quantity of high-powered or base money (currency and bank reserves), and corresponding changes in the total money stock, this isn’t always so.  Although banks’ reserve balances and outstanding Federal Reserve notes make up the bulk of the Federal Reserve System’s total liabilities, those liabilities also include deposit balances of the U.S. Treasury, of foreign central banks, and of some GSEs.  Because these other Fed customers are, unlike banks, not in the business of creating money substitutes, their share of the Fed’s total liabilities doesn’t contribute, as the banks’ share does, to the creation of such substitutes.  It’s possible, therefore, for the quantity of base money, and of various monetary aggregates, to change independently of any overall change in the size of the Fed’s balance sheet.  An increase in the share of Federal Reserve deposit balances belonging to ordinary U.S. banks, rather than to the Treasury, foreign central banks, or GSEs, will, for example, lead to an increase in the total money stock, other things unchanged, while a decline in that share will reduce it.
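A toy example, with made-up numbers, illustrates the point: when a deposit balance migrates from the Treasury's account at the Fed to ordinary banks' reserve accounts, base money grows even though the balance sheet's total size does not.

```python
# Illustrative breakdown of central-bank liabilities ($ billions;
# hypothetical figures). Only notes and bank reserves count as "base
# money"; Treasury, foreign-central-bank, and GSE balances do not
# support private deposit creation.
liabilities = {
    "federal_reserve_notes": 1_500,
    "bank_reserves": 2_500,
    "treasury_account": 400,
    "foreign_and_gse_accounts": 100,
}

def balance_sheet_size(liab):
    return sum(liab.values())

def base_money(liab):
    return liab["federal_reserve_notes"] + liab["bank_reserves"]

before_size = balance_sheet_size(liabilities)
before_base = base_money(liabilities)

# The Treasury spends $200 billion: its Fed balance falls, and the
# recipients' banks see their reserve balances rise by the same amount.
liabilities["treasury_account"] -= 200
liabilities["bank_reserves"] += 200

assert balance_sheet_size(liabilities) == before_size   # size unchanged
assert base_money(liabilities) == before_base + 200     # base money grows
```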

Repurchase Agreements

Outright Fed security purchases or sales are only one of two sorts of “open-market operations” the Fed uses to change the total size of its balance sheet.  The other involves so-called “repurchase agreements” — “repos” for short.  A security repurchase agreement is literally a sale of a security coupled with an agreement to buy the security back at a specific price and at a specific time.  (A “reverse” repo is thus a purchase combined with an agreement to resell.)  In practice, however, repos (and reverse repos) are practically equivalent to collateralized loans, where the security that’s temporarily “sold” serves as collateral securing a loan from the purchaser to the seller in an amount equal to the purchase price, and the difference between that price and the later, “repurchase” price is the interest on the loan.  The self-reversing nature of the Fed’s repos and reverse repos, many of which are “overnight” rather than “term” agreements (that is, ones providing for repurchase a day after the original purchase), has caused the Fed to prefer them as a means for achieving temporary adjustments to the money stock, while treating outright security purchases as a way of providing for permanent monetary expansion, and especially for secular growth in the demand for Federal Reserve notes.  Since the crisis, however, the Fed has come to treat repos, and particularly overnight reverse repos (ON RRPs) with Money Market Mutual Funds and GSEs, as a means for securing long-term monetary control.[3]
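To make the loan interpretation concrete, here is a small sketch. The prices are hypothetical, and the actual/360 annualization is the usual money-market convention, assumed here for illustration rather than taken from any Fed document.

```python
# A repo is economically a collateralized loan: the gap between the
# repurchase price and the original sale price is the lender's interest.

def repo_rate(sale_price, repurchase_price, days):
    """Annualized repo rate implied by the two prices (actual/360)."""
    interest = repurchase_price - sale_price
    return (interest / sale_price) * (360 / days)

# Hypothetical overnight repo: securities "sold" for $10,000,000 and
# repurchased the next day for $10,001,389.
rate = repo_rate(10_000_000, 10_001_389, days=1)
print(f"implied annualized rate: {rate:.2%}")  # roughly 5 percent
```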

As I’ve said, by altering the size of its own balance sheet, and especially by altering the available quantity of bank reserves, the Fed is able, not only to influence its own direct contribution to the money stock, consisting of the quantity of Federal Reserve notes circulating within U.S. borders, but to influence the availability of “raw material” that banks must have in order to “manufacture” readily transferable deposits and other “substitutes” for cash.  But while comparing a bank to a factory is helpful up to a point, we mustn’t take the comparison too seriously.  For while it’s true that banks can only create and manage deposits provided they have access to reserves, including vault cash, the connection between reserve “input” and deposit “output” is rather different from what goes on in any factory.  Indeed, it’s different because it depends, not just on what any single bank can “make” out of a fresh increment of reserves, but on what the banking industry as a whole can make from it, which turns out to be something else again.

Explaining the relation between the Fed’s creation (or destruction) of bank reserves and banks’ creation (or destruction) of deposits takes a little effort, not least because doing so means confronting the different ways in which economists on the one hand and bankers and banking consultants on the other look at the process, and deciding whether the difference is due to substantive disagreement or mere semantics.  Since that’s going to take more than one or two paragraphs, and this post is already long, I’ll take it up next time.

Next: The Reserve-Deposit “Multiplier”

________________________________

[1] Ludwig von Mises and Irving Fisher are two of the more prominent economists who employed this terminology, which can be traced to the early-to-mid 19th century writings of members of the British Banking School.

[2] Unlike some other central banks, the Fed is prohibited from purchasing Treasury securities from the government.  Instead, it purchases securities already outstanding from a group of designated “primary dealers.”  The financial losses of two such dealers — Bear Stearns and Lehman Brothers — figured prominently in the recent financial crisis.

[3] In particular, the Fed has used ON RRPs to encourage MMMFs and GSEs to lend to (or park funds with) it, and thereby to reduce the quantity of Federal Reserve dollars available to banks.  The Fed is thus able to reward non-banks for holding (instead of lending or investing) cash, despite the fact that they can’t keep interest-bearing balances with it.  By simultaneously raising both the interest rate it pays on bank reserves and the ON RRP rate, as it did in mid-December 2015, the Fed is able to engage in monetary “tightening” without having to reduce the overall size of its balance sheet.  I will have more to say about this and other post-crisis changes in the way the Fed conducts monetary policy in a later post.

[Cross-posted from Alt-M.org]

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

—-

First up in this week’s edition of You Ought to Have a Look is an op-ed by Ross McKitrick (a one-time Cato adjunct who is now Chair of Energy, Ecology & Prosperity at the Frontier Centre for Public Policy and Economics Professor at the University of Guelph), who shreds the energy policy put forward by Kathleen Wynne, the Liberal Party Premier of Ontario. Wynne’s proposed plan, aimed at combating climate change, includes, among other things, a requirement that all homes eventually be heated by electricity (i.e., no natural gas, etc.).

McKitrick sums up the plan,

Around the time that today’s high-school students are readying to buy their first home, it will be illegal for builders to install heating systems that use fossil fuels, in particular natural gas. Having already tripled the price of power, Queen’s Park will make it all but mandatory to rely on electricity for heating.

There will be new mandates and subsidies for biofuels, electric buses for schools, extensive new bike lanes to accommodate all those bicycles Ontario commuters will be riding all winter, mandatory electric recharging stations on all new buildings, and many other Soviet-style command-and-control directives.

distills what’s wrong with it,

[E]ven if the…plan were to stop global warming in its tracks, the policies would do more economic harm than the averted climate change.

and, in inimitable Ross fashion, throws in this zinger,

The scheme is called the Climate Change Action Plan, or CCAP, but it would be more appropriately called the Climate Change Coercion Plan: the CCCP.

The entire op-ed appearing in the Financial Post is a must read.

Next up is a post at the blog IPKat (a U.K.-based Intellectual Property news blog) by Nicola Searle that provides an interesting review of a new book by Paul Cairney titled The Politics of Evidence-Based Policy Making.

Evidenced-based policy making (EBPM) is the idea that, well, policy should be based on some sort of evidence. But as Searle (and Cairney) point out, this is a lot more complicated than it seems. Searle eloquently describes the situation as: “Policymaking isn’t a Mondrian, it’s a Monet.”

Rather than the (utopian) linear view that “evidence” clearly informs the best “policy,” the situation is much more complex and involves uncertainties, interpretations, personal beliefs, outside pressures, policy goals, etc.

Searle provides this analogy:

As Cairney puts it, “in the real world, the evidence is contested, the policy process contains a large number of influential actors, and scientific evidence is one of many sources of information.” I’d describe policy making in general as akin to an extended family choosing which film to watch. Uncle Alex campaigns for Barbarella, cousin Vic, holding the remote, decides you’re all watching Hulk until your sister Pat throws a tantrum unless you watch Frozen. You might consult the Rotten Tomatoes rating, but you’re convinced that critic from the New York Post is on the payroll of a major studio and the popular rating seems to have been spammed by bots… In the end you watch a Jude Law rom-com. And that’s the simplified version.

For more insight, check out Searle’s full post, or perhaps even Cairney’s book. This is a topic that is quite relevant to the subject of climate change policy (as well as a litany of policy that is rooted in U.S. Environmental Protection Agency “evidence”).

And finally, we’d be remiss if we didn’t draw attention to a new study appearing in the AGU journal Earth and Space Science by University College Dublin’s J. Ray Bates that finds that the equilibrium climate sensitivity—that is, the earth’s total surface temperature rise that results from a doubling of the atmospheric effective concentration of carbon dioxide—is “~1°C.”

Bates’ work is an update and extension of the methods and findings of (Cato Center for the Study of Science’s Distinguished Senior Fellow) Richard Lindzen and Yong-Sang Choi and represents another estimate of the climate sensitivity that falls well below the average of the climate models (3.2°C) used in the most recent IPCC report.  The lower the climate sensitivity to greenhouse gas increases, the lower the overall impacts when measured over comparative time-scales.

We’ve added the new Bates results to our lower-than-model climate sensitivity compilation (Figure 1).


Figure 1. Equilibrium climate sensitivity (ECS) estimates from new research beginning in 2011 (colored), compared with the assessed range given in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the collection of climate models used in the IPCC AR5. The “likely” (greater than a 66% likelihood of occurrence) range in the IPCC Assessment is indicated by the gray bar. The arrows indicate the 5 to 95 percent confidence bounds for each estimate along with the best estimate (median of each probability density function; or the mean of multiple estimates; colored vertical line). The right-hand side of the IPCC AR5 range is actually the 90% upper bound (the IPCC does not actually state the value for the upper 95% confidence bound of their estimate). Ring et al. (2012) present four estimates of the climate sensitivity and the red box encompasses those estimates. Likewise, Bates (2016) presents eight estimates and the green box encompasses them. Spencer and Braswell (2013) produce a single ECS value best-matched to ocean heat content observations and internal radiative forcing.

As the Bates results have just been released, we will wait to see how they stand up to scrutiny (and the test of time).

The journal Earth and Space Science is open access, so everyone can go and have a look for themselves (although, fair warning, the article is very technical).

A small pro-national-ID group called “Keep Identities Safe” is dancing a little jig because New Hampshire Governor Maggie Hassan (D) has signed legislation to move her state closer to compliance with our national ID law, the REAL ID Act.

In a blog post—apparently their first ever (so that link will be broken if they blog again)—they try to declare the end of the REAL ID Rebellion. I often say the rebellion was founded when Rep. Neal Kurk (R-Weare) inveighed against the national ID law in a 2006 speech on the floor of the New Hampshire House of Representatives.

But the heart of the matter is the denial that REAL ID is a national ID law. The New Hampshire legislation says, “Any records received pursuant to this paragraph shall not be used, further transferred, or otherwise made available to any other person or entity for the purpose of creating or enhancing a federal identification database.”

How can New Hampshire offer a REAL ID option and not participate in a national identification database?

It’s a good question, so I’ll cite again directly to the terms of the REAL ID law, which requires states to:

(12) Provide electronic access to all other States to information contained in the motor vehicle database of the State.

(13) Maintain a State motor vehicle database that contains, at a minimum –

(A) all data fields printed on drivers’ licenses and identification cards issued by the State; and
(B) motor vehicle drivers’ histories, including motor vehicle violations, suspensions, and points on licenses.

The U.S. Department of Homeland Security has temporarily written that requirement out of the law in an effort to get states committed to REAL ID. When enough states are in the tank, the agency will move the compliance goalposts and require states to begin sharing their drivers’ data via the nationwide database system that the law specifically requires. The American Association of Motor Vehicle Administrators is currently touting on its website that Iowa has signed up to its “State-to-State Verification Service.”

The New Hampshire legislation also suffers from poor draftsmanship. New Hampshire’s motor vehicle bureaucrats can sign their state up to AAMVA’s information-sharing system simply by allowing themselves to believe that they are not doing so “for the purpose of creating or enhancing a federal identification database.” Governor Hassan has left New Hampshirites unprotected from seeing their data shared nationally.

Notably, the REAL ID law does not exempt “non-federal” licenses from the information-sharing requirement. New Hampshirites who choose that option will still have their information shared nationally.

Has the REAL ID Rebellion ended? Perhaps. DMV bureaucrats and their pro-national-ID allies at “Keep Identities Safe” have been working for years to wear down and evade state legislatures’ resistance to the national ID law. They have had some success in growing the power of the federal government in yet another direction.

Should this be a source of pride? A national ID is a poor security tool that wastes taxpayer dollars, upsets the constitutionally prescribed state-federal balance of power, and compromises law-abiding Americans’ privacy and liberties. The move toward national ID compliance in New Hampshire is probably not something to dance a jig about.

Recently, the New York Times ran an opinion piece by Gregg Easterbrook, which draws attention to the disconnect between the gloomy public on the one hand and the real state of America on the other hand. The prevailing mood in the United States is one of pessimism. For prominent politicians on both sides of the aisle, to use Easterbrook’s words, “the impending apocalypse has been issue number one.” Yet in almost every measurable way, this is the best time in history to be alive. The evidence goes on and on [links added]:   

Pollution, discrimination, crime and most diseases are in an extended decline; living standards, longevity, and education levels continue to rise … A century ago, most Americans worked in agriculture: Today hardly any do, and we’re all better off, including farmers. That manual labor, farm or factory, has given way to 60 percent of Americans employed in white-collar circumstances … In 1990, 37 percent of humanity lived in what the World Bank defines as extreme poverty; today it’s 10 percent.  

Where did all this progress come from? Easterbrook rightly credits, “interconnected global economics.” Through an intricate symphony of competition and exchange, humanity has driven technology forward and achieved heights of prosperity that would be unimaginable to our ancestors.   

Unfortunately, Easterbrook also gives credit to top-down government planning where none is due. He cites the Affordable Care Act as an example of a successful reform, but rising life expectancy and improved health outcomes are long-term trends that both predate Obamacare and extend far beyond U.S. borders. It is far too soon to attribute any part of those trends to that highly problematic policy.   

Easterbrook even claims that, “In almost every case, reform has made America a better place, with fewer unintended consequences and lower transaction costs than expected. This is the strongest argument for the next round of reforms.” That is a sweeping overgeneralization, as it obviously hinges on the specific nature of reforms. Plenty of reforms throughout American history are now universally recognized as horrible mistakes – just look at alcohol prohibition.   

Despite some confusion about the drivers of progress, Easterbrook’s opinion piece is a refreshing reminder of the incredible progress humanity has made and well worth a read. It ends with this heartening quote that the data backs up:   

Recently Warren Buffett said that because of the “negative drumbeat” of politics, “many Americans now believe their children will not live as well as they themselves do. That view is dead wrong: The babies being born in America today are the luckiest crop in history.” 

Libertarians and other advocates of a noninterventionist foreign policy—or its close cousin, a policy of realism and restraint—have grappled with how to respond to the candidacy of Donald Trump.  Some of Trump’s policy positions are refreshing and sensible.  His hostility to wars for regime change and nation building is a gratifying contrast to the enthusiasm for such ventures that both neoconservative Republicans and humanitarian interventionist Democrats have exhibited in recent decades.  Trump’s insistence that America’s longstanding allies in both Europe and East Asia do far more for their own defense also has at least the potential to significantly reduce the republic’s excessive and obsolete security burdens. Finally, his desire to avoid confrontational relationships with major powers such as Russia and China is a rare voice of prudence among America’s political elite, and it has understandable appeal to noninterventionists.

But there are other Trump positions that are deeply disturbing, if not outright offensive to the kind of noninterventionists (or “cosmopolitan realists”) who have filled the ranks of Cato’s foreign policy program.  Trump’s hostility to free trade is both disappointing and myopic.  But his stance on immigration is even worse.  His proposal to build a wall along the border with Mexico to keep out undocumented Hispanic migrants is not only impractical, it conveys a message of hostility to such populations. Trump’s stance on Muslim immigration, especially his call for a “temporary” ban, conveys such hostility with even greater clarity.

His support for trade protectionism, combined with adamant opposition to liberal (or even reasonably humane) immigration policies, and indeed his overall xenophobic rhetoric understandably alienate more cosmopolitan noninterventionists.  What they may find difficult to admit, though, is that Trump’s type of insular, intolerant nationalism has a long history within the noninterventionist camp.

Some of the same political figures who staunchly opposed U.S. involvement in foreign wars and participation in “entangling alliances” during the first half of the twentieth century also supported extremely restrictive immigration quotas in the 1920s and subsequent decades. Even Sen. Robert A. Taft (R-OH), the leader of the dwindling post-World War II noninterventionist contingent in Congress, had a mixed record.  He correctly warned that some of the new security partners Washington was acquiring would both besmirch American values and drag the republic into avoidable conflicts.  He even dared to oppose NATO membership for the United States, warning presciently that the European nations would come to depend far too much and far too long on America for their security.

But Taft was not especially good on trade and immigration issues.  And other prominent noninterventionists of that same period, such as Sen. William Jenner (R-IN), Sen. Kenneth Wherry (R-NE), and Sen. John Bricker (R-OH), were even more stridently insular.  Bricker is best known for sponsoring a constitutional amendment providing that no treaty could supersede or override any provision of the U.S. Constitution.  There was nothing wrong with that position per se, but Bricker and his allies used the campaign to whip up public hostility toward the United Nations and America’s other international obligations—even entirely nonmilitary obligations.

Moreover, one ought to keep in mind that all of the men mentioned above (even the usually sensible Taft) were strong supporters of Senator Joseph McCarthy’s unconstrained and often irresponsible hunt for domestic communists in the 1950s—a witch hunt that shattered careers and lives.  They were willing to sacrifice important civil liberties in the name of national security.  In other words, enthusiasm for authoritarian methods has been part of the makeup of some noninterventionists for a long time.  Donald Trump did not invent that behavior.

Such an admission is very difficult for noninterventionists who have worked hard to chart a different, far more open and generous course for that doctrine.  Cosmopolitan realism seeks to promote maximum, peaceful interaction among diverse populations around the world.  Therefore, it is hardly surprising that such noninterventionists favor free trade and liberal immigration policies, even as they vehemently oppose the use of force except in the direct defense of the independence, security, and liberty of the American people.  The question remains, though, how such committed cosmopolitan realists should respond to potential ideological allies who share some, but definitely not all, of those values.  Trump’s candidacy has now brought those concerns to the forefront.

Much of my work on fiscal policy is focused on educating audiences about the long-run benefits of small government and modest taxation.

But what about the short-run issue of how to deal with a fiscal crisis? I have periodically weighed in on this topic, citing research from places like the European Central Bank and International Monetary Fund to show that spending restraint is the right approach.

And I’ve also highlighted the success of the Baltic nations, all of which responded to the recent crisis with genuine spending cuts (and I very much enjoyed exposing Paul Krugman’s erroneous attack on Estonia).

Today, let’s look at Cyprus. That Mediterranean nation got in trouble because of an unsustainable long-run increase in the burden of government spending. Combined with the fallout caused by an insolvent banking system, Cyprus suffered a deep crisis earlier this decade.

Unlike many other European nations, however, Cyprus decided to deal with its over-spending problem by tightening belts in the public sector rather than the private sector.

This approach has been very successful according to a report from the Associated Press.

…emerging from a three-year, multi-billion euro rescue program, Cyprus boasts one of the highest economic growth rates among the 19 Eurozone countries — an annual rate of 2.7 percent in the first quarter. Finance Minister Harris Georgiades says Cyprus turned its economy around by aggressively slashing costs but also by avoiding piling on new taxes that would weigh ordinary folks down and put a serious damper on growth. “We didn’t raise taxes that would burden an already strained economy,” he told The Associated Press in an interview. “We found spending cuts that weren’t detrimental to economic activity.”

Cutting spending and avoiding tax hikes? This is catnip for Dan Mitchell!

But did Cyprus actually cut spending, and by how much?

That’s not an easy question to answer because the two main English-language data sources don’t match.

According to the IMF data, outlays were sliced to €8.1 billion in 2014, down from a peak of €8.5 billion in 2011, though the IMF indicates that those numbers are preliminary.

The European Commission database shows a bigger drop, with outlays of €7.0 billion in 2015 compared to €8.3 billion in 2011 (it also shows an outlay spike in 2014, presumably because of a bank bailout).
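Putting the two sources side by side makes the disagreement, and the direction of travel, plain. The figures below are simply the ones quoted above; the percentage comparison is my own arithmetic.

```python
# Change in Cypriot government outlays implied by the two (conflicting)
# data sources cited above; figures in € billions.
outlays = {
    "IMF": (8.5, 8.1),                  # 2011 peak vs. preliminary 2014
    "European Commission": (8.3, 7.0),  # 2011 vs. 2015
}

changes = {src: (later - peak) / peak for src, (peak, later) in outlays.items()}
for src, change in changes.items():
    print(f"{src}: {change:.1%} change in outlays since 2011")
```

Either way the sign is negative: roughly a 5 percent decline on the IMF numbers, and closer to 16 percent on the Commission's.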

The bottom line is that, while it’s unclear which numbers are most accurate, Cyprus has experienced a multi-year period of spending restraint.

And having the burden of government grow slower than the private sector always has been and always will be the best gauge of good fiscal policy.

By contrast, there’s no evidence that tax increases are a route to fiscal probity.

Indeed, the endless parade of tax hikes in Greece shows that such an approach greatly impedes economic recovery.

Though not everybody in Cyprus supports prudent policy.

Critics have accused the government of working its fiscal gymnastics on the backs of the poor — essentially chopping salaries for public sector workers. Pambis Kyritsis, head of the left-wing PEO trade union, said the government’s “neo-liberal” policies coupled with the creditors’ harsh terms have widened the chasm between the haves and have-nots to huge proportions. …Georgiades turned Kyritsis’ argument around to reinforce his point that there shouldn’t be any let-up in the government’s reform program and fiscal discipline.

In the European context, “liberal” or “neo-liberal” means pro-market and small government (akin to “classical liberal” or “libertarian” in the United States).

Semantics aside, it will be interesting to see whether Finance Minister Georgiades is correct about maintaining spending discipline as the economy rebounds.

As the above table indicates, there are several examples of nations getting good results by limiting the growth of government spending. But there are very few examples of long-run success since very few nations have politicians with the fortitude to control outlays if the economy is growing and generating an uptick in tax revenue (which is why states like California periodically get in trouble).

This is why the best long-run answer is some sort of constitutional spending cap, similar to what exists in Switzerland or Hong Kong.

The bottom line is that spending restraint is good short-run policy and good long-run policy. Though I doubt Hillary Clinton will learn the right lesson.

P.S. Cyprus also is a reasonably good role model for how to deal with a banking crisis.

Of the Equal Employment Opportunity Commission’s record in court, I wrote last summer that 

…it’s not easy to think of an agency to whose views federal courts nowadays give less deference than the EEOC. As I’ve noted in a series of posts, judges appointed by Presidents of both political parties have lately made a habit of smacking down the commission’s positions, often in cases where it has tried to get away with a stretchy interpretation of existing law. See, for example, the Fourth Circuit’s rebuke of “pervasive errors and utterly unreliable analysis“ in EEOC expert testimony, Justice Stephen Breyer’s scathing majority opinion in Young v. U.P.S. on the shortcomings of the EEOC’s legal stance (in a case the plaintiff won), or these stinging defeats dealt out to the commission in three other cases. 

Occasionally, as in the Abercrombie & Fitch case, the commission manages to prevail anyway. But in last week’s Supreme Court decision in CRST Van Expedited, Inc. v. EEOC, it was back to the dunking booth for the much-disrespected commission. The ruling, written by Justice Anthony Kennedy, was unanimous. It laid out in detail a long tale of shoddy EEOC litigation waged against the Iowa-based trucking company CRST, in which the commission took a female driver’s complaint of sexual harassment during training and attempted to expand it into a giant “pattern and practice” lawsuit that might have been settled for millions. Rather than settling, the trucking company decided to fight. The ensuing litigation did not, to understate things, show the EEOC at its best.

It eventually became clear that the federal anti-bias agency had failed to investigate or otherwise adequately advance more than 150 of the claims it had tried to add, which were accordingly dismissed, leaving only two intact. A federal judge granted CRST attorneys’ fees on the prevailing Supreme Court standard of Christiansburg Garment, which permits defendants to recover fees when an employment discrimination claim is “frivolous, unreasonable, or groundless.”  The EEOC, however, resisted the fee order on the grounds that, under a quirky Eighth Circuit interpretation, even a frivolous claim does not generate a fee entitlement unless decided “on the merits.” And the 150 claims it had bungled had not been dismissed “on the merits” – they hadn’t gotten even that far.

In a brief concurrence, Justice Clarence Thomas notes that while the Kennedy opinion is correct and welcome, the Court really ought to be reconsidering the Christiansburg standard itself, under which a prevailing plaintiff “ordinarily is to be awarded attorney’s fees in all but special circumstances,” while a prevailing defendant may get fees only “upon a finding that the plaintiff’s action was frivolous, unreasonable, or without foundation.” Not only does that create a baldly asymmetrical and inequitable fee regime, but it departs from a natural reading of the language of Title VII itself.

In the meantime, my colleague Ilya Shapiro has one more case to add to his long list of the Obama administration’s “unanimous losses, where President Obama doesn’t even get the votes of the two justices he appointed.” 

Rep. Darrell Issa proposes Cato-style aviation reforms in a CNN op-ed today. The congressman does an excellent job laying out problems with the Transportation Security Administration (TSA) and arguing that privatized screening would increase both efficiency and security.

Here are some excerpts:

These firestorms online and in the media [regarding security lines] have brought new attention to our broken airport security system, a problem that has been slowly growing for years. But if we really “hate the wait” and want to fix it, the solution couldn’t be any simpler: let’s get the TSA out of the airport screening business altogether.

The idea of privatizing airport security isn’t a new one. Look no further than Canada and almost every single European country, which all use private airport screeners.

Last year, an internal investigation revealed that undercover agents were able to sneak mock explosives or banned weapons through the agency’s security checkpoints a whopping 95% of the time.

A number of case studies show that private screeners are not only more efficient at their jobs, allowing them to screen more passengers in less time, but are also better at detecting threats.

Under the TSA’s “Screening Partnership Program,” 22 airports have been allowed to contract with private companies to administer airport screening operations. Numerous studies of those programs … offer ample evidence that private security screeners are much better able to detect dangerous objects, including explosives and weapons, than their government-employed counterparts.

Private screeners are also shown to process passengers more efficiently, too, meaning faster-moving lines and more taxpayer savings.

For more on privatizing the TSA, see here and here.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Making headlines today (like the one above) is a new paper by Zoë Doubleday and colleagues documenting an increase in the population of cephalopods (octopuses, cuttlefish, and squid) over the past 61 years. The authors, after assembling a data set of historical catch rates, note that this population increase, rather than being limited to a few localized areas, seems to be occurring globally.

End of analysis.

From then on, it's speculation.

And the authors speculate that human-caused climate change may be behind the robust cephalopod increase. After all, the authors reason, what else has had a consistent large-scale impact over the past six decades? No analysis relating temperature trends (spatially or temporally) to cephalopod trends, no examination of other patterns of climate change and cephalopod change, just speculation.  And a new global warming meme is born—“Swarms of octopus are taking over the oceans.”

There is an overwhelming tendency to relate global warming to all manner of bad things and a great hesitation to suggest a potential link when the outcome is seemingly beneficial. We refer to this as the global-warming-is-bad-for-good-and-good-for-bad phenomenon. It holds a great majority of the time.

In the case of octopuses, squids, and cuttlefish, the authors are a bit guarded in their speculation about the impact of the increase in cephalopod numbers—will they decimate their prey populations, or will they themselves provide more prey to their predators? Apparently we'll have to wait and see.

No doubt the outcome will be a complex one, as are the causes behind the observed population increases. Depletion of fish stocks, a release of competitive pressure, and good old-fashioned natural environmental variability are also suggested as potential factors in the long-term population expansion. But complex situations don't make for great scare stories. Global-warming-fueled bands of marauding octopuses and giant squid certainly do.

Reference:

Doubleday, Z. A., et al., 2016. Global proliferation of cephalopods. Current Biology, 26, R387–R407.

Polls recently have found that millennials have a more favorable view of socialism than older Americans do. Of course, Emily Ekins suggests that those attitudes are likely to fade as they start paying taxes. But I was interested to read this in the Washington Post today:

another Pew poll found that 95 percent of Vietnamese felt that people were better off in a free-market economy.

Wow, 95 percent. Rand Paul should run for president there. Today’s Vietnamese, of course, grew up in a Stalinist political and economic system. Since 1986 the Communist party government has pursued “market economy with socialist direction.” That’s not a Western-style free(ish) market, but it’s a lot better than Stalinist socialism, and the economy has prospered. Sounds like the Vietnamese people want more market, less socialist direction.

U.S. millennials grew up in a market economy, and after the fall of the Soviet Union they didn’t even hear much criticism of socialist economies, so they can support some imaginary vision of “socialism.” Even there, though, Ekins notes that 

millennials tend to reject the actual definition of socialism — government ownership of the means of production, or government running businesses. Only 32 percent of millennials favor “an economy managed by the government,” while, similar to older generations, 64 percent prefer a free-market economy. 

Yesterday, Sue Desmond-Hellmann, CEO of the Bill and Melinda Gates Foundation, made an important admission in an open letter about the Common Core:

Deep and deliberate engagement is essential to success. Rigorous standards and high expectations are meaningless if teachers aren’t equipped to help students meet them.

Unfortunately, our foundation underestimated the level of resources and support required for our public education systems to be well-equipped to implement the standards. We missed an early opportunity to sufficiently engage educators – particularly teachers – but also parents and communities so that the benefits of the standards could take flight from the beginning.

This has been a challenging lesson for us to absorb, but we take it to heart. The mission of improving education in America is both vast and complicated, and the Gates Foundation doesn’t have all the answers.

Think about this. One of numerous objections to the Core has been that the Obama administration, at the behest of Core advocates including Gates, attempted to impose the standards on the entire country without the Core ever having been tested. Avoiding the sort of implementation obstacles that Desmond-Hellmann laments is exactly why testing – in a federalist system, typically done by a state or two voluntarily trying something – is so important. It is how you learn what works and what doesn’t, how to improve it, and it is how you keep the whole country from suffering when something fails. But no, Gates and other Core supporters could not wait for that – they had to impose the Core on everyone because, well, they just knew what America needed.

Or maybe they didn’t.

No one – not the Gates Foundation, not the Obama administration, no one – is omniscient, which is one reason it is so dangerous to impose one “solution” on everyone. There is a very good chance that the solution, even if it seems foolproof, will have lots of major, unanticipated problems.

The question now is, will Gates and other Core advocates learn from the ill effects of their hubris, and cease their efforts to impose a single solution on all people?

We can only hope.

President Obama’s trip to Asia is off to a running start with the announcement that the United States will lift a decades-long American arms embargo on Vietnam. Initial commentary on the announcement has been generally positive, portraying the end of the embargo as the most recent in a string of events signaling improved relations with America’s former adversary in an increasingly dangerous region. So, what comes next in the U.S.-Vietnam defense relationship?

1. How will China react?

China's Ministry of Foreign Affairs has had a relatively quiet response to the announcement thus far. However, increased American military support for Vietnam fits into the narrative of a U.S.-led effort to contain China. It would not be surprising if more aggressive rhetoric comes to the fore in Chinese media over the coming days. China has also shown a willingness to respond to U.S. shows of force or resolve with military displays of its own. Vietnam's capacity to resist Chinese coercion should increase once arms sales begin, but if China responds to such sales with assertive counter-moves, then the security dilemma in the South China Sea (SCS) could become worse.

2. What equipment will Vietnam buy?

Given the challenges it faces in the SCS, Vietnam will likely place a premium on military hardware that improves maritime domain awareness and the ability to quickly respond to infringement on its claimed territories. For example, in 2015 the United States pledged $18 million to help Vietnam purchase U.S.-made Metal Shark patrol boats for its coast guard. Sales of more advanced or lethal systems may be more difficult given the challenges of integrating such systems into an arsenal already dominated by Russian weapons and the high price tag of U.S. hardware. Additionally, Vietnam has overlapping territorial claims with the Philippines, a U.S. treaty ally. Vietnam-Philippine squabbling is not the primary threat in the SCS right now, but Washington policymakers have an incentive not to approve sales of equipment that could give Vietnam a significant advantage over the Philippines.

3. How does lifting the arms embargo advance U.S. goals in the SCS?

In a press conference announcing the end of the embargo, President Obama stated “the decision to lift the ban was not based on China,” but was part of a broader process of normalization with Vietnam. This statement is only partly true. On the one hand, U.S.-Vietnam relations have greatly improved over the years and this is the next logical step in normalization. On the other hand, assertive Chinese activity in the SCS is the most pressing security concern in the region and lifting the arms embargo should improve Vietnam’s ability to deal with it. Improving the military capacity of U.S. allies and partners is a low-risk way to increase the costs of Chinese actions, which seems to be the current U.S. objective in the SCS. Unfortunately, “imposing costs” isn’t an end state.

Lifting the arms embargo on Vietnam is an important step toward the best course of action for the United States in the SCS: using weapons sales and economic support to bolster the self-defense capabilities of friendly states. It will be virtually impossible for America’s partners to achieve military parity with China on their own, but with the right mix of weapons systems and strategy they could present serious challenges to Chinese military action. More capable allies and partners should enable the United States to be a balancer of last resort in the SCS, instead of the first line of defense. 

For November, voters turned off by Trump and Clinton may be interested in the likely Libertarian Party ticket of Gary Johnson and William Weld. Johnson is a former governor of New Mexico (1995-2003), and Weld is a former governor of Massachusetts (1991-1997).

David Boaz gives an overview of their records, noting that both governors scored well on Cato’s fiscal report cards. Since 1992, the report cards have examined the tax and spending records of the nation’s governors every two years.  

Cato report cards are here. The best governors get an “A” and the worst get an “F.” The reports covering Johnson and Weld were written by Steve Moore and various coauthors.

Here are Johnson’s grades, with a few notes from the reports:

  • 1996, “B.” Johnson is “aggressively trying to make the state more taxpayer friendly. To control spending, Johnson has vetoed 200 bills passed by a liberal legislature.”
  • 1998, “B.” Johnson is “a true citizen-lawmaker who calls himself a libertarian … In a big-government state like New Mexico … Johnson’s staunch fiscal conservatism has been much needed, but also much resisted.” Johnson “reduced the number of state employees by nearly 10 percent, and he has set a state record for legislative vetoes.”
  • 2000, “B.” Johnson “has gained a well deserved reputation as a maverick governor. More so than just about any prominent politician in America today, Johnson has a libertarian attitude when it comes to government.” In “battling the legislators at every turn, Johnson has succeeded in cutting the state income tax, the gasoline tax, the state capital gains tax, and the unemployment tax. In 1999 he vetoed a 12-cent-a-pack cigarette tax hike—not because he likes smoking, he says, but because he opposes all tax hikes.”
  • 2002, “B.” Johnson “has done much to create private-sector jobs and to erode the culture of dependence on government in New Mexico.”

Why didn't Johnson get some “A” grades from Cato? In most of the reports, his spending scores were middling. Also, the 2002 report suggests that the legislature blocked many of his reforms.

Here are Weld’s grades:

  • 1992, “A.” Weld cut the budget and pushed to reduce income and capital gains taxes.
  • 1994, “B.” Weld cut spending, balanced the budget, improved the state’s bond rating, and cut numerous taxes. Even with a Democratic legislature, “Weld has a stunningly successful fiscal record.”
  • 1996, “B.” Weld “began to engage in a whirlwind of government downsizing. In his first two years in office, the state budget actually declined in nominal terms—an astonishing achievement given the pro-spending inclinations of the legislature. Weld privatized state services, slashed the public payroll, and cut general welfare assistance for employable adults. Weld has also been a supply-side tax cutter.”

Look for a new Cato fiscal report card in October.
