Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Those who fear anthropogenic climate change have long claimed that global warming will negatively impact Earth’s ecosystems, including old-growth forests, where it is hypothesized that these woodland titans of several hundred years of age will suffer decreased growth and increased mortality as a consequence of predicted increases in temperature and drought. However, others see the situation as the opposite – one in which trees are enhanced by the aerial fertilization effect of rising atmospheric CO2 concentrations, which is expected to increase growth and make trees less susceptible to the deleterious effects of drought.

So which vision of the future appears more likely to come about? According to the seven-member research team of Urrutia-Jalabert et al. (2015), the much more optimistic future is not only coming, it is already here.

Working in the Andean Cordilleras region of southern Chile, Urrutia-Jalabert et al. performed a series of analyses on tree ring cores they obtained from long-lived Fitzroya cupressoides stands, which they say “may be the slowest-growing and longest-lived high biomass forest stands in the world.”

Focusing on two of the more pertinent findings of their study, as shown in Figure 1 below, both the basal area increment (a surrogate for aboveground woody biomass accumulation) and intrinsic water use efficiency (a measure of drought resistance) of Fitzroya dramatically increased over the past century. Commenting on these trends, the authors write “the sustained positive trend in tree growth is striking in this old stand, suggesting that the giant trees in this forest have been accumulating biomass at a faster rate since the beginning of the [20th] century.” And coupling that finding with the 32 percent increase in water use efficiency over the same time period, Urrutia-Jalabert et al. conclude the trees “are actually responding to environmental change.” Indeed they are. Magnificently.

Figure 1. Basal area increment chronology (left panel, AD 1355-2010) and intrinsic water use efficiency (right panel, AD 1900-2010) of long-lived Fitzroya cupressoides trees in the Andean Cordilleras of southern Chile.

With respect to the cause of these favorable developments, the researchers state “we believe that this increasing growth trend…has likely been driven by some combination of CO2 and/or surface radiation increases,” adding that “pronounced changes in CO2 have occurred in parallel with changes in climate, making it difficult to distinguish between both effects.” Thus, it is clear that of the two views predicting the future of old-growth forests, the one most likely to occur (and which is actually occurring in the southern Chilean Andes) is the one in which the benefits of CO2 win out over model projections of climate-induced demise.

 

Reference

Urrutia-Jalabert, R., Malhi, Y., Barichivich, J., Lara, A., Delgado-Huertas, A., Rodríguez, C.G. and Cuq, E. 2015. Increased water use efficiency but contrasting tree growth patterns in Fitzroya cupressoides forests of southern Chile during recent decades. Journal of Geophysical Research, Biogeosciences 120: 2505-2524.

The long security lines at some of the nation’s major airports in recent weeks have been nuts. Over and over, we have seen that it was a big mistake for the Bush administration and Congress to nationalize airport screening back in 2001.

One of the issues playing out is the lack of workforce flexibility in the Transportation Security Administration (TSA), which is a centralized, bureaucratic monopoly. I have written that we should separate airport screening from the regulatory oversight of aviation security. We should move responsibility for passenger and baggage screening from TSA to the nation’s airports. The airports would then be free to contract screening to expert security companies.

Yesterday, the chairman of the House Homeland Security Committee, Michael McCaul, affirmed my observations about the problems of centralized control and the rigid TSA bureaucracy:

I think one of the biggest takeaways that I have is the lack of transparency and a lack of local input that each of the airports and airline authorities have with the local TSA field rep director… There appears to be a line of non-communication centralized here in Washington.

[The TSA has] centralized and all the decisions are being made out of Washington with no flexibility on staffing decisions, that if they have local input from the airlines and airport authorities it could result in a lot of these problems. If you don’t know the peak airline times of when the planes are coming in, how can you possibly staff and have a model that makes any sense?

The flexibility issue is a huge problem we heard [about] from the airlines and airport associations in terms of the local director doesn’t have discretion over where to staff the TSO or TSA officer.

McCaul is right. Unfortunately, the knee-jerk Washington response when problems arise in society is to centralize power and control, as we saw after 9/11. That is nearly always a mistake. Even if central planning made sense in theory, members of Congress simply don’t have the time to oversee the vast empire of programs that they have accumulated.

Remarkably, the federal budget is 100 times larger than the average state government budget. Federal policymakers have no idea what’s really going on in the hundreds of bureaus they have created. So, not surprisingly, the only time Congress tries to fix anything is when crises rise to the top of the news cycle, like now with airports.

McCaul is also right on one of the short-term fixes for the current airport mess: repurpose the 3,000 TSA “behavioral officers” that roam around airports, and add them to the TSA screening teams. Federal auditors have concluded essentially that those officers do little in terms of reducing risks, so let’s put them to work reducing congestion and serving the traveling public.

A recent Marginal Revolution post has alerted me to Eric Posner’s January 2016 working paper, “What Legal Authority Does the Fed Need During a Financial Crisis?”  Posner’s paper is remarkable, both for its assessment of the legality of the Fed’s emergency lending operations during the recent crisis, and for the policy recommendations Posner offers based on that assessment.[1]

Posner’s account of the Fed’s actions reads like a long bill of indictment.  The Fed’s Bear Stearns rescue, for starters, “was legally questionable.”  The Fed couldn’t legally purchase Bear’s toxic assets, and it knew it.  Instead it created a “Special Purpose Vehicle” (SPV), named it Maiden Lane, and lent Maiden Lane $28.82 billion so that it could buy Bear’s toxic assets.  Voila!  What would have been an illegal Fed purchase of toxic assets was  transformed into a Fed loan “secured” by the very same assets.

But clever as the Fed’s gambit was, it  wasn’t so clever as to render it entirely innocent of legal hanky-panky.  “The problem,” Posner observes,

is that the transaction provided that the value of the Fed’s interest would be tightly connected to the value of the underlying assets.  If the assets fell in value by as little as 4%, the Fed would lose money… By contrast, in a [properly] secured loan…the lender bears very little to no risk from the fluctuation of asset values.  Functionally, the Maiden Lane transaction was a sale of assets, not a secured loan.

In rescuing AIG, the Fed resorted to the same “legally dubious” bag of tricks it employed in saving Bear, creating two more Maiden Lane vehicles, and again assuming considerable downside risk.  The Fed also grabbed a 79.9 percent equity stake in AIG, which it placed in a trust established for the sole benefit of the U.S. Treasury.  That transaction was later held by the Court of Federal Claims to have been unauthorized by the Federal Reserve Act, and therefore illegal.

In fact, according to Posner, all of the Fed’s emergency lending programs, the TALF alone excepted, “raised legal problems.”  In each case the Fed violated the spirit of 13(3), which requires that its loans be “secured to the satisfaction of the Federal Reserve bank.”  Posner is especially good at explaining the speciousness of  Fed lawyers’ claims that the Fed’s loans were indeed secured:

Imagine, for example that the Fed would like to make an unsecured loan to Joe Shmo, who has no assets.  Following the legal division’s advice, the Fed could create an SPV called Shmo LC.  Shmo LC would then lend money to Joe, and in return receive an unsecured note from him, that is, an IOU.  Shmo LC would get its money from the Fed, which would make a section 13(3) loan to Shmo LC secured by Shmo’s note.

All of which would be just dandy, were it not for the inconvenient fact, pointed out by Posner, “that the Fed, Congress, and all other relevant actors have always understood section 13(3) to [require] ‘real’ security — in the sense of collateral that would render the loan riskless or close to that.”

Suppose, on the other hand, that the Fed’s lawyers were in fact correct.  Suppose that it was perfectly legal for the Fed to have “secured” many of its 13(3) loans using assets of doubtful value.  In that case, the Fed’s claim that it was unable legally to rescue Lehman Brothers was itself a sham, for a Fed that might have legally lent to Joe Shmo could certainly have lent legally to what was at the time the United States’ fourth-largest investment bank.  Fed officials can’t have it both ways: either they lied about Lehman, or they broke the law left-and-right with the 13(3) loans they did make.

Posner himself believes that the Fed let Lehman fail for political and “operational” reasons rather than legal ones, and that the questionable legality of yet another Joe Shmo-type operation — and an especially blatant one at that — merely provided it with “a convenient excuse” for avoiding political backlash.  I’m pretty sure he’s right.  But although he recognizes the capricious nature of the Fed’s decision to let Lehman fail, neither that awareness nor his understanding that the Fed rode roughshod over existing laws causes Posner to see any need to limit the Fed’s last-resort lending powers.

Quite the contrary: so far as Posner is concerned, the fact that the Fed has been inclined to bend or break the law, or to invoke it only when it found doing so convenient, is reason, not for strengthening the rules governing the Fed’s last-resort lending, but for getting rid of them altogether!  According to him, the problem isn’t that the Fed thumbed its nose at existing laws.  It’s that “gaps in the government’s powers” made all that nose-thumbing necessary.  What we need to do, Posner says, is fill those “gaps.”

For Posner, filling the power gaps means, first of all, consolidating within a single “Financial Crisis Response Authority” (FCRA), and preferably within the Fed itself, all of the crisis-response powers presently divided among it, the Treasury, and the FDIC.  It also means granting to that authority the power to:

  • buy assets, including equity.
  • make unsecured loans to non-bank financial institutions.
  • control non-bank financial institutions…in order to force them to pay off counterparties, lend money, and so on.
  • wind up insolvent non-bank financial institutions….
  • force non-bank financial institutions to raise capital.
  • dictate terms of transactions, control the behavior of firms (for example, forcing them to lend), or acquire them where necessary.

The recent crisis shows, Posner insists, that “all these powers are necessary”:

Because of the fear of stigma, even liquidity-constrained financial institutions will be inclined to delay before borrowing from emergency credit facilities.  The FCRA needs the authority to force those firms to borrow, and also to force healthy firms to borrow at the same time in order to prevent the market from picking off the weakest firm.  Moreover, the crisis showed that when financial institutions accept emergency loans, they have strong incentives to hoard cash when the system as a whole benefits only if they lend into the market a portion of the money they borrow.  For this reason, the FCRA needs the authority to order firms to enter financial transactions.  Finally, the crisis showed that financial institutions that should be given emergency money may not be able to offer collateral for a loan, and it may be very difficult to value the collateral in any event.  The FCRA needs the authority to make capital injections, unsecured loans, and partially secured loans; and to buy assets

Posner fails to recognize that the stigma problem to which he refers can be solved, without forcing anyone to borrow, by substituting an auction-style lending facility for the Fed’s discount window, as was in fact done during the crisis when the TAF was established; and his suggestion that financial firms’ hoarding of cash was something the Fed would have prevented had it had the power to do so, rather than something the Fed deliberately encouraged, doesn’t square with the facts.  But these are secondary points.  What’s most troublesome about Posner’s proposal is that it fills the “gaps” in the Fed’s power so generously as to do away with practically all limits upon the Fed’s ability to meddle with people’s property.  His suggestion that a FCRA should be perfectly free to make unsecured loans and commandeer equity, for example, would allow it to lend to Joe Shmo on whatever terms it likes, and to nationalize private firms at will, with utter impunity.

Posner insists nonetheless that his proposed FCRA would not command unlimited power.  Instead, its power would be checked by means of “a robust legal regime” that would “correct abuses after [a] crisis.”  Specifically, firm shareholders would “be entitled to sue” the FCRA “and receive a remedy if they can show that the FCRA’s actions were unreasonable”:

The usual post-crisis analyses by independent government agencies with the power to compel testimony and discover documents from the FCRA will [facilitate] the litigation by collecting facts and making them publicly available.

With all due respect to Professor Posner, he seems here to be putting a great deal of weight on an awfully thin reed, if not a broken one.  If it was far from easy for Starr International to convince a court that the New York Fed acted illegally, and if Starr received no “remedy” even despite doing so, how much harder would it be for a plaintiff to establish that the actions of a much more powerful Fed (or FCRA), though perfectly legal, were nevertheless both harmful and “unreasonable”?  And suppose such a plaintiff somehow managed to prevail.  Might Professor Posner, or some like-minded legal scholar, not be inclined in that case to regret the discovery of still another “gap” in the Fed’s power, and to recommend, on the basis of the very same arguments Posner offers for filling already apparent gaps, further reforms aimed at ruling out such lawsuits, to guard against the possibility, however remote, that the threat of them might discourage some desirable (if not obviously “reasonable”) anti-crisis measure?

Yet it would be unfair to accuse Posner of depending entirely on lawsuits to constrain his proposed FCRA.  For it’s clear that Posner’s case for an FCRA wielding vast powers mainly rests, not on the naive belief that such an authority could be adequately constrained by the threat of successful litigation alone, but on the assumption that it will never (or hardly ever) abuse its powers  in the first place.

What’s the basis for that bold assumption?  Posner certainly can’t claim that it’s difficult to conceive of ways in which his proposed FCRA might behave badly.  The rescuing of firms that might safely be allowed to fail, including ones that richly deserve to fail, is only one obvious example.[2]

Instead, Posner’s postulate of an infallible Fed appears to take shape as a mutation of his much less heroic (if still doubtful) claim that the Fed did nothing wrong during the recent crisis.  Early in his paper he explains that he plans “to assume that the mainstream view that the Fed acted properly during the financial crisis by lending widely is correct.”  As a means for determining what reforms would have allowed the Fed to avoid errors of omission (but not ones of commission) during the recent crisis, the assumption makes perfect sense, even if it happens to be false.  But Posner isn’t content to limit himself to such a counterfactual exercise: he wants to draw conclusions concerning “what reforms are necessary to supply [the Fed] with the proper legal authority” going forward.  To do so he segues, perhaps unconsciously, from assuming, arguendo, that the Fed acted correctly this last time around, to assuming, implicitly, not only that it will act correctly in any future crisis, but that it will do so even if it wields much vaster powers than before.

Is it being uncharitable to suggest that, once we grant the assumption that the only errors that a government agency is ever likely to commit are errors of omission, we do not really need a legal scholar, or any other sort of expert, to tell us how to make that agency function ideally?  The granting of unlimited power is in that case a no-brainer.  To anyone who isn’t prepared to altogether set aside the possibility of errors of commission, on the other hand, the sort of reforms Posner proposes must seem naive — and dangerous — in the extreme.

And that’s what troubles me most about Posner’s proposal.  It’s not that his logic is bad.  It’s seeing a legal scholar, and a very intelligent one, blithely cast aside the very idea of the rule of law, while championing its opposite: the arbitrary decisions of (practically) omnipotent bureaucrats.

Nor is it that Posner doesn’t realize what he’s doing.  On the contrary: he’s aware of the complaint of other scholars that the Fed already demonstrates the dangers of substituting the rule of men for the rule of law, even citing a recent Cato Journal article by CMFA Adjunct Scholar Lawrence White to that effect.  But having recognized White’s complaint, Posner dismisses it summarily, on the technical grounds that

the constitutional limitations on delegation of power to agencies — embodied in the nondelegation doctrine – are effectively nil.  The requirement that the LLR use its powers to unfreeze the financial system would supply the intelligible principle required by the nondelegation doctrine under recent precedents.

But White isn’t arguing a point of U.S. Constitutional Law.  He’s appealing to a fundamental legal principle that’s older than the U.S. Constitution, and one that transcends the laws of any particular nation.  Posner’s response therefore misses the point entirely.  The question isn’t whether or not Congress may grant Fed officials unlimited power.  It’s whether it ought to grant them so much power.  Posner thinks it should.  I don’t know about you, but I hope Congress puts more weight on the advice of John Locke, David Hume, and John Adams.

_______________________

[1] Although I concern myself here only with his account of the Fed’s emergency lending, Posner also assesses the legality of the crisis-related actions of the Treasury and the FDIC.

[2] I dare readers to peruse the aforementioned list of proposed FCRA powers, and to submit comments — under their real names — to the effect that they are unable to imagine others.

Posner himself (p. 30) recognizes that Congress had reasons for not awarding the Fed unlimited last-resort lending powers.  However, he notes that the reasons have been “mostly political, including distrust of the Fed, and popular resentment of the bailouts of Wall Street firms,” and appears to dismiss them simply for that reason.  As for the moral hazard problem — the sole non-“political” reason he recognizes for limits upon the Fed’s last-resort lending power —  Posner dismisses it as well, claiming that it can be adequately contained by means of “ex ante regulation such as capital requirements, which are independent of the LLR’s power.”  But as Richard Fisher has argued,

Requiring additional capital against risk sounds like a good idea but is difficult to implement.  What should count as capital? How does one measure risk before an accident occurs?  And how does one counteract the strong impulse of the regulated to minimize required capital in highly complex ways?  History has shown these issues to be quite difficult.  While we do not have many examples of effective regulation of large, complex banks operating in competitive markets, we have numerous examples of regulatory failure with large, complex banks.

Nor is it clear that even the strictest capital requirements can suffice to rule out excessive risk taking when it’s the case, as it would be under Posner’s proposed regime, that regulatory authorities need not hesitate to bail out, not only firms’ uninsured creditors, but their shareholders.

[Cross-posted at Alt-M.org]

We’ve been waiting for months for presumptive Republican presidential nominee Donald Trump to release his list of potential Supreme Court appointees. Today he actually came through on that promise. The would-be justices, in the (alphabetical) order in which they appear in the AP story that broke the news, are:

  • Judge Steve Colloton of the U.S. Court of Appeals for the Eighth Circuit (Iowa)
  • Justice Allison Eid of the Colorado Supreme Court
  • Judge Raymond Gruender of the U.S. Court of Appeals for the Eighth Circuit (Missouri)
  • Judge Thomas Hardiman of the U.S. Court of Appeals for the Third Circuit (Pennsylvania)
  • Judge Raymond Kethledge of the U.S. Court of Appeals for the Sixth Circuit (Michigan)
  • Justice Joan Larsen of the Michigan Supreme Court
  • Justice Thomas Lee of the Utah Supreme Court
  • Judge William Pryor of the U.S. Court of Appeals for the Eleventh Circuit (Alabama)
  • Justice David Stras of the Minnesota Supreme Court
  • Judge Diane Sykes of the U.S. Court of Appeals for the Seventh Circuit (Wisconsin)
  • Justice Don Willett of the Texas Supreme Court

This is an exceptional list. I’m not intimately familiar with all 11 judges and I don’t expect to agree with all of them on everything, but those whose jurisprudence I know well are excellent and the others have sterling reputations. These are not squishes or lightweights.

Also notable and commendable is that 5 of the 11 are state supreme court justices; not all judicial talent is already on the federal bench and the U.S. Supreme Court could use that sort of different perspective. I’ll forego quibbling over this or that pick – whom to drop for a top 10 or 5, whom to add to round out to 15, whether Senator Mike Lee would be better than his brother – but want to emphasize that these are among the very best judges who are young and smart enough to be on the Court.

I’m no fan of the Donald – and who knows whether he’d follow through if elected? – but he’s listening to the right advisers here. As I’ve previously written, Trump may not know originalism from origami, but there are better reasons to vote against him than judges.

All the presidential candidates are opposed to the Trans-Pacific Partnership, some because protectionism seems to sell well during elections and others because they generally oppose foreign trade.  But the most immediate obstacle to passage of the TPP may be a counterproductive demand from one of the agreement’s most ardent congressional supporters.

Orrin Hatch (R-Utah) is Chair of the Senate Finance Committee, which oversees trade policy, where he has been instrumental in securing trade promotion authority and building support among Republicans for the TPP.  Hatch is also a committed advocate for the inclusion of strong intellectual property protections in trade agreements and has been very critical of one provision in the TPP that he doesn’t think goes far enough to provide regulatory exclusivity for biologic drugs.  He’s made his support for the entire agreement contingent on whether the administration can fix that one provision. 

Exclusivity (often called data exclusivity) is the practice of denying regulatory approval of a competitor’s drug until the original drug has had a number of years of monopoly status in the market.  This is unrelated to whether the drug is covered by a patent.

The U.S. government provides 12 years of exclusivity, the highest of any country in the world, for a special class of drugs known as biologics.  At Hatch’s insistence, U.S. negotiators officially pushed for a minimum 12-year term in the TPP.

But the TPP was never going to have a 12-year term for biologic exclusivity, because the other members were dead set against it.  Japan and Canada currently provide 8 years, while the other TPP members provide either 5 years or zero.  There was no interest from the other members to increase their own level of protection, so the United States wasn’t going to be able to set an international standard of 12 years.  Indeed, U.S. negotiators knew that the best they could really hope for was 8 years.

Even the 8-year proposal met a lot of resistance.  Australia was especially adamant that they would not increase exclusivity beyond the 5 years they currently impose.  This made biologic exclusivity an especially intractable issue in the TPP negotiations, and the text we now have was agreed to under tremendous pressure to finish the deal and was reached in the closing hours of the negotiations. 

The final compromise is a peculiar dual provision that gives TPP members two options.  Under one option, TPP members are required to provide at least 8 years of exclusivity.  Under the second option they can provide only 5 years of exclusivity as long as they also use “other measures” to provide a “comparable market outcome” to the first option.  What makes the second option extra curious is that “market circumstances” may be relied upon to ensure that 5 years of exclusivity provides a comparable outcome to 8 years.  This arrangement enables the U.S. government to claim they negotiated an 8 year term, while Australia can still claim that they didn’t agree to anything more than 5 years. 

Senator Hatch has seen through this simple scheme and correctly describes the provision as requiring only 5 years.  Because the requirement to provide a comparable outcome through “other measures” allows reliance on “market circumstances,” there is no requirement for government intervention aside from the 5 year minimum of regulatory exclusivity.  The Obama administration is trying to say that 5 + 0 = 8.

The pharmaceutical industry has expressed disappointment at the outcome.  They think they should have gotten 12 years, and Senator Hatch seems to agree.  But the idea that the TPP could ever have had a 12-year term is impressively wishful thinking.

Hatch and the pharmaceutical industry really ought to be happy that they got the 5+ deal in the TPP.  That’s more than they had before and it shows that momentum is on their side in adding this new issue into the trade agenda.  Creating a special provision that treats biologics differently than other drugs and offers some sort of additional protection sets a bar that can be expanded on in future agreements.  The TPP’s dual provision provides a good starting point for U.S. business interests to further develop a more-stringent international norm.

It’s been reported that Hatch may be trying to secure a commitment from the administration that it will pressure TPP members who provide only 5 years of exclusivity to adopt particularly effective “other measures” to achieve an 8-year outcome when they implement the TPP.  Even after implementation, the provision will serve as a mechanism for the U.S. Trade Representative to exert continued pressure on TPP members.

Hatch is essentially complaining that the TPP doesn’t do enough of a good thing.  The idea that the agreement could be better is hardly a reason to oppose it altogether.  That’s especially true in this case, where the “better agreement” that Hatch and the pharmaceutical industry were asking for was unrealistic.  

The whole episode shows how the politics of trade agreements have gotten mixed up in regulatory issues that don’t have a direct connection to trade.  A lot of the regulatory provisions in trade agreements were originally included at the behest of protectionists who wanted to put conditions on trade.  That’s why trade agreements include labor and environment rules. 

It’s a shame that supporters of free trade are now also putting so much emphasis on regulatory issues, elevating arcane regulatory tweaks above actual free trade.  Senator Hatch’s inflexibility on biologics is only making it harder to achieve the real gains that come from trade liberalization.  If Senator Hatch values free trade, he ought not oppose the agreement simply because it doesn’t do as much as he would like to benefit one U.S. industry.

The familiar chart below illustrates the depth of the decline in real output during the 2007-09 Great Recession (the shaded period), and the failure of the recovery to return real output to its “potential” path (in other words, to eliminate the estimated “output gap”) during the subsequent years up to the present day. The second chart puts the same 2007-16 period in the context of the previous decades, showing how exceptionally prolonged the current below-potential period is by contrast to previous postwar recessions and recoveries.

It is instructive to decompose the path of real GDP into its components, nominal GDP and the price deflator. Here are the natural logs of nominal GDP (call it Y) and real GDP (call it y). From the definition y = Y/P, it follows that ln y = ln Y – ln P, so the growing vertical difference between the two series reflects the rising price level.

Here is the path of the natural log of the GDP deflator, which is the difference between the nominal and real GDP series. (Note: FRED uses 2009=100 as the index base; I have re-normalized to 2004=1 by dividing by 90).
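
For readers who want to reproduce this decomposition, here is a minimal sketch in Python. It is not the code behind the charts above; it simply applies the identity ln y = ln Y – ln P and the 2004 re-normalization just described, and it assumes the pandas_datareader package and the standard FRED series IDs GDP (nominal GDP), GDPC1 (real GDP), and GDPDEF (the deflator, 2009=100).

    import numpy as np
    from pandas_datareader import data as pdr

    # Quarterly FRED series: nominal GDP, real GDP, and the GDP deflator (2009 = 100).
    series = pdr.DataReader(["GDP", "GDPC1", "GDPDEF"], "fred", "1990-01-01", "2016-01-01")

    ln_Y = np.log(series["GDP"])     # log nominal GDP
    ln_y = np.log(series["GDPC1"])   # log real GDP
    ln_P = ln_Y - ln_y               # log of the implicit deflator (up to the 2009 base-year constant)

    # Re-normalize the published deflator from 2009 = 100 to 2004 = 1 by dividing by
    # its average 2004 value (roughly 90 on the 2009 base).
    deflator_2004_base = series["GDPDEF"] / series["GDPDEF"]["2004"].mean()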

Monetarist theory sharply distinguishes real from nominal variables. Nominal shocks (changes in the path of the money stock or its velocity) have only transitory effects on real variables like real GDP. Accordingly an account of the path of real GDP in the long run (and 6+ years of recovery should be enough time to reach the long run) must be explained by real and not merely by nominal factors. An account of the path of nominal GDP, by contrast, cannot avoid reference to nominal factors. So we need distinct (but compatible) accounts of the two paths.

To account for the path of nominal GDP we can use the simple accounting decompositions M = φY and ln M = ln φ + ln Y, where φ (“phi”), following the notation of Michael Darby’s canonical monetarist textbook, is a mnemonic for fluidity, defined as M/Y, and thus the inverse of velocity. Before the Great Recession, the velocity of M2 was on a gradual downward path, falling around 1% per year. During the Recession it fell much more steeply. Since the Recession it has been falling around 2% per year. Correspondingly, φ was on the rise, but its path shifted upward and became slightly steeper. Meanwhile, the path of M2 has hardly budged, with M2 rising at an annualized 5.9% rate between the midpoint of the previous recession and the midpoint of the Great Recession, and at a 6.7% rate since. Here are the observed paths of log M2 and log nominal GDP on the same scale, and in the next chart the path of log M2 fluidity:
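
As a rough way to see that decomposition in the data, the sketch below (again an illustration under assumed FRED series IDs, not the author's code) computes log fluidity from M2 and nominal GDP and annualizes money growth between two chosen dates; the recession-midpoint dates used here are approximate.

    import numpy as np
    import pandas as pd
    from pandas_datareader import data as pdr

    m2 = pdr.DataReader("M2SL", "fred", "1990-01-01", "2016-01-01")["M2SL"]
    gdp = pdr.DataReader("GDP", "fred", "1990-01-01", "2016-01-01")["GDP"]

    m2_q = m2.resample("QS").mean()   # monthly M2 averaged to quarters, to line up with GDP
    ln_M = np.log(m2_q)
    ln_Y = np.log(gdp)
    ln_phi = ln_M - ln_Y              # log fluidity, phi = M/Y, the inverse of velocity

    def annualized_growth(series, start, end):
        """Annualized growth rate of a series between two dates."""
        s, e = series.asof(pd.Timestamp(start)), series.asof(pd.Timestamp(end))
        years = (pd.Timestamp(end) - pd.Timestamp(start)).days / 365.25
        return (e / s) ** (1 / years) - 1

    # Approximate midpoints of the 2001 recession and the 2007-09 Great Recession.
    m2_growth = annualized_growth(m2_q, "2001-07-01", "2008-09-01")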

As a stylized approximation, let us treat the path of log M2 as a smooth line without variation. Then all the variation in nominal GDP is explained by the variation in fluidity. The downward displacement of the path of nominal GDP during the Great Recession, and its slower growth afterward, corresponds to the upward displacement of fluidity (drop in velocity) during the Recession and its more rapid growth since the recession. Further simplifying, let us treat the shift in fluidity as due to an upward shift in desired fluidity at a single date t*. We can explain the downward shift in the steady-state path of the log of nominal Y as the result of an exogenous upward shift in the steady-state path of the log of fluidity resulting from the public’s demand to hold larger M2 balances relative to income Y, in the face of an unvarying path of M2. The next diagram shows the shifts in steady-state paths, constrained by the accounting identity that ln M = ln φ + ln Y.

Assuming gradual adjustment of actual to desired fluidity, we can add hand-drawn transitory adjustment paths. This yields a stylized representation of the actual time series seen above.
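
One way to make that stylized picture concrete is the small simulation below. It is my own illustration with made-up parameter values, not the author's hand-drawn figure: log M2 grows along a smooth line, desired fluidity shifts up once at t*, actual fluidity closes a fixed fraction of the remaining gap each quarter, and log nominal GDP follows from the identity ln Y = ln M – ln φ.

    import numpy as np

    quarters = np.arange(80)                     # 20 years of quarterly observations
    t_star = 40                                  # date of the one-time shift in desired fluidity
    ln_M = 0.015 * quarters                      # smooth M2 path, about 6% annual growth
    ln_phi_desired = np.where(quarters < t_star, -0.5, -0.4)   # upward level shift at t*

    adjustment_speed = 0.2                       # fraction of the gap closed each quarter
    ln_phi = np.empty_like(ln_M)
    ln_phi[0] = ln_phi_desired[0]
    for t in range(1, len(quarters)):
        ln_phi[t] = ln_phi[t - 1] + adjustment_speed * (ln_phi_desired[t] - ln_phi[t - 1])

    ln_Y = ln_M - ln_phi                         # nominal GDP path implied by ln M = ln φ + ln Y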

A downward displacement of the steady-state path of nominal GDP, due to a contractionary money supply or velocity shock, can bring a transitory decline in real GDP. As is familiar, people try to remedy a felt deficiency in money balances by reducing their spending. In the face of sticky prices, reduced spending generates unsold inventories and hence cutbacks in production and layoffs until prices fully adjust. As Market Monetarists have long argued, the Federal Reserve could have avoided the downward displacement of the path of nominal GDP if policy-makers had immediately recognized and met the rise in fluidity (the drop in velocity) with appropriately sized expansionary monetary policy. (I leave aside the Traditional Monetarist critique of the track record of stabilization policy, which argues that central bankers cannot be expected to get the timing and magnitude right in a world where to do so they would have to forecast velocity shocks better than market price-setters.)

Although a one-time un-countered rise in desired fluidity can temporarily knock real GDP below its trend path, such a nominal shock should not persistently displace its growth path, as the first chart above indicates has happened. We can expect monetary equilibrium to be roughly restored by appropriate adjustment in the price level relative to the nominal money stock, bringing real balances up to the newly desired level, within three or four years at most. (In the data plot above, we see the GDP deflator shift to a lower path already in 2009.) Real GDP should around the same time return to its steady-state path as determined by non-monetary factors (labor force size and skills, capital stock, total factor productivity as governed by technological improvements, policy distortions, and so on). To explain the continued low level of real GDP relative to estimated potential since 2011 or so, we need a persistent shock to the path of real GDP.

I suggest that real GDP has shifted to a lower path because of a shrinkage in the economy’s productive capital stock — a problem that better monetary policy (not feeding the boom) could have helped to avoid, but cannot now fix. During the housing boom, investible resources that could have gone into augmenting human capital, building useful machines and sustainable enterprises, and conducting commercial research and development, were instead diverted to housing construction. In the crisis it became evident that the housing built was not worth the opportunity cost of the resources allocated to it. That major misallocation of resources has lowered the path of the capital stock below its previous trend. I do not know precisely how the contribution of capital input is measured when the CBO estimates potential output, but I hypothesize that potential output is currently overestimated because capital wastage has not been fully recognized. I welcome comments and evidence on this hypothesis.

[Cross-posted from Alt-M.org]

When the Berlin Wall fell, the Warsaw Pact dissolved, and the Soviet Union split apart, U.S. foreign policy became obsolete almost overnight. For a brief moment advocates of a quasi-imperial foreign policy seemed worried.

For instance, NATO advocates were reduced to talking about having the anti-Soviet military compact promote student exchanges and battle drug smuggling. But advocates of preserving every commitment, alliance, and deployment quickly recovered their confidence, insisting that the status quo now was more important than ever. Since then the political elite has remained remarkably united in backing America’s expanding international military role.

GOP presidential candidates competed over how much to intervene. The Democratic frontrunner pushed for U.S. military intervention in the Balkans as First Lady, voted for the Iraq war as Senator, and orchestrated the Libya campaign as Secretary of State.

Breaking with this pro-war consensus is Donald Trump. No one knows what he would do as president and his foreign policy pronouncements fall far short of a logical and consistent foreign policy program. Nevertheless, he was the most pacific GOP contender, perhaps save Sen. Rand Paul.

Unsurprisingly, Trump’s views have dismayed the guardians of conventional wisdom. However, for the first time in years, if not ever, many advocates of American dominance believe it necessary to defend their views, which they had previously considered to be self-evident. For instance, NATO supporters are trying to explain why the U.S. must defend European states which, collectively, are wealthier and more populous than America.

Nevertheless, the downsides of Trump as messenger are obvious. While his opinions on allied free-riding are well-established, on other issues he has shifted back and forth. Who knows if he means what he says about much of anything?

Moreover, even when he is right conceptually, he often misses the mark practically. For instance, the answer to allied free- (or cheap-) riding is not to charge other countries for America’s efforts. Rather, Washington should simply turn the defense of other nations over to them.

Trump also mixes sensible foreign policy opinions with misguided and overwrought attacks on trade, immigration, and Muslims. And his manner is more likely to repel than attract.

Yet almost in spite of himself he is likely to change U.S. foreign policy. Imagine Trump living down to expectations and losing badly. Then the usual advocates of war and intervention would insist that his foreign policy approach has been discredited and seek to squelch any further debate. Everything would simply go back to normal.

If Trump does respectably but loses narrowly, he will have demonstrated popular discontent with a policy in which average folks pay and die for utopian foreign policy fantasies advanced by Washington policy elites. That would encourage future political leaders to seek votes by challenging today’s interventionist consensus.

Finally, if Trump triumphs he will be in a position to transform U.S. foreign policy. What he would actually do is anyone’s guess. For the first time in decades there would be a serious debate over foreign policy and a meaningful opportunity to change current policies.

As I wrote for National Interest: “There are lots of reasons to criticize Donald Trump. However, he offers the most serious challenge in a generation to today’s interventionist zeitgeist. In spite of everything, he just might end up changing U.S. foreign policy.”

Donald Trump isn’t happy with the Washington Post, which has steadfastly opposed his presidential campaign on its editorial pages and now has assigned a reporter team to write a book about him. And he has repeatedly responded in Trump fashion: by threatening the business interests of the newspaper and its owner Jeff Bezos. Trump cited the Post by name in his February comments about how he wants to “open up” libel law so that “when The Washington Post… writes a hit piece, we can sue them and win money instead of having no chance of winning because they’re totally protected.” And he said then of Amazon, of which Bezos is CEO, “If I become president, oh do they have problems. They’re going to have such problems.” He has claimed for months that Bezos was using the newspaper as either itself a tax dodge or as a tool of influence to prevent Amazon from having to pay “fair taxes,” a theory hard to square with the institutional arrangements involved (Bezos owns the Post separately from his Amazon stake; the Post editors credibly deny that Bezos has interfered, and as it happens Amazon itself supports the idea of an internet sales tax.) More recently Trump has opened up a second front, arguing last week on the Sean Hannity show that Bezos employs the paper “as a political instrument to try and stop antitrust” and implying that he, Trump, would hit Amazon with antitrust charges. 

As you might expect, many critics are crying foul. “He’s basically giving us a preview of how he will abuse his power as president. … he is clearly trying to intimidate Bezos and in turn The Washington Post from running negative stories about him,” writes Boston Globe columnist Michael A. Cohen.  “Mr. Trump knows U.S. political culture well enough to know that gleefully, uninhibitedly threatening to use government’s law-enforcement powers to attack news reporters and political opponents just isn’t done. Maybe he thinks he can get away with it,” writes Wall Street Journal columnist Holman Jenkins.

But as I wrote in this space four years ago, if you think blatant use of the machinery of government to punish newspaper owners or interfere in papers’ management somehow happens only in other countries, think again:

[Since-convicted Illinois Gov. Rod] Blagojevich, Harris and others are also alleged [in the federal indictment] to have withheld state assistance to the Tribune Company in connection with the sale of Wrigley Field. The statement says this was done to induce the firing of Chicago Tribune editorial board members who were critical of Blagojevich.

And in 1987, at the secret behest of the late Sen. Edward Kennedy (D-MA), Sen. Ernest Hollings (D-SC) inserted a legislative rider aimed at preventing Rupert Murdoch from simultaneously owning broadcast and newspaper properties in Boston and New York. The idea was to force him to sell the Boston Herald, the most persistent editorial voice criticizing Kennedy in his home state.

More recently, when the Tribune Company encountered financial difficulties and explored the sale of several large papers, city councils in more than one city passed resolutions opposing the sale to politically “bad” prospective owners; powerful Congressional figure Henry Waxman (D-CA) could not resist the urge to meddle as well in management issues affecting his hometown paper, the L.A. Times. 

This all goes back much farther, of course. Historian David Beito, writing about the FDR-Truman era, cites a long series of federal investigations of and retaliations against the distributors of pro-liberty books and pamphlets, including the proposal of Indiana Democratic Senator and New Dealer Sherman Minton to make it “a crime to publish as a fact anything known to be false,” a downright Trumpian idea that Minton let drop after an outcry.

It is thoroughly appalling – but, alas, it is not new.


Following up a proposal announced last year, the Obama administration is set to raise the salary threshold at which employers must pay time-and-a-half for overtime hours (normally, those above 40 hours per week). Currently these rules apply to workers with annual salaries up to $23,660; the new rule raises this threshold to $47,476.  This will affect about 4.2 million workers, according to administration estimates.

What impact will this regulation have? 

In the short run, many employers will indeed pay higher total compensation to affected employees, given limited options for offsetting the mandated increase in wage costs. This is the outcome sought by regulation advocates.

In the medium term, however, employers will offset these costs by re-arranging work schedules so that fewer employees hit 40 hours, by laying off employees who work more than 40 hours, or by pushing such employees to work overtime hours off the books.

In the longer term, employers will reduce base-level wages so that, even with overtime, total compensation for employees working more than 40 hours is no different than before.  
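
The arithmetic behind that long-run offset is straightforward. The sketch below is purely illustrative (the 45-hour week is hypothetical, and the weekly salary is simply the new annual threshold divided by 52): the employer cuts the base hourly wage so that 40 straight-time hours plus 5 overtime hours at time-and-a-half reproduce exactly the old weekly pay.

    # Hypothetical worker: salaried at the new threshold, regularly working 45 hours per week.
    old_weekly_salary = 47476 / 52          # $913.00 per week
    hours_worked = 45
    overtime_hours = max(0, hours_worked - 40)

    # "Paid-hour equivalents": 40 straight-time hours plus overtime hours counted at 1.5x.
    paid_hour_equivalents = 40 + 1.5 * overtime_hours           # 47.5
    new_base_wage = old_weekly_salary / paid_hour_equivalents   # about $19.22 per hour

    new_weekly_pay = 40 * new_base_wage + 1.5 * new_base_wage * overtime_hours
    assert abs(new_weekly_pay - old_weekly_salary) < 1e-9       # total compensation unchanged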

Thus, expanded overtime will benefit some employees in the short term; cost others their jobs or lower their compensation in the medium term; and have no meaningful impact in the long term.

Is that good policy?

Some recent studies have presented evidence that inequality of mortality is increasing, and that life expectancy is actually falling for some groups. Those studies generally focus on life expectancy at a certain age threshold, usually 50. In a recent paper in the Journal of Economic Perspectives, authors Janet Currie and Hannes Schwandt instead focus primarily on gains in life expectancy at birth. They find that, along this metric, inequality in mortality has fallen significantly in recent years, and that the gains in life expectancy have been widely shared. Given the impact of childhood health on health later in life, these findings suggest “today’s children are likely to face considerably less inequality in mortality as they age than current adults.” Despite the headlines, there have been impressive gains in life expectancy and reductions in mortality in recent years, especially for children.

There are three different methods of analyzing inequality in mortality: across counties, by educational attainment, and by career earnings. The latter two approaches have been more commonplace in the recent literature. As the authors note, analyzing by education suffers from changes in the underlying composition of the categories being used: the group of people with only a high school education in 2010 was much different from the group with the same level of educational attainment twenty years earlier. Using relative income levels runs into data limitations and problems with reverse causality, in that in some cases poverty or limited earnings might be caused by health issues rather than the other way around. For these reasons, the authors instead analyze inequality in mortality by geographic region, using counties in this case.

One of their most important findings is that over the twenty year period 1990 to 2010, life expectancy at birth increased everywhere for both men and women, from the most poverty-stricken counties to the most prosperous locales in the country.  The rising tide of life expectancy at birth lifted all boats.

Life Expectancy at Birth across County Poverty Percentiles, 1990-2010

 

Source: Currie and Schwandt.

In the first decade they analyze (1990-2000), there were greater reductions in mortality in poorer counties for both young children (under 5) and older children (5 to 19) than in richer ones, leading to a reduction in mortality inequality at the same time overall income inequality was increasing. Even in the working age group (20 to 49) men in poorer counties experienced significant declines in mortality while those in richer ones did not. The exception to this trend was in the group over age 50. Here the authors corroborate some of the earlier studies, where gains in life expectancy were larger in richer counties, even as all counties saw improvements over the twenty year period.

The authors also extend their analysis to age- and race-specific mortality, and here there are more positive developments, with a “truly remarkable reduction in black mortality rates” for children under five from 1990 to 2000, with smaller but still notable gains in the second decade. For people over 50, mortality rates fell for every race and gender category they investigated. The exception to these broad-based gains is for white women between ages 20 and 59. Mortality rates for this group stagnated, and in some poorer counties even increased, over the twenty year period, echoing findings from some earlier studies like this one from Anne Case and Angus Deaton.

These different trends in different age cohorts and locations cut against attempts to explain them through any one mechanism, like changes in income inequality. This paper also offers a respite from the drumbeat of studies highlighting increases in mortality inequality at older ages, and some evidence that the better outcomes the authors observe for younger age cohorts will continue, and they “may well be healthier and suffer less mortality inequality in the future.” While analyses of mortality rates and life expectancy are always valuable due to their implications for a wide range of policy spheres, studies like this one can be even more important as a counterbalance to the news and a political landscape that can sometimes seem so unrelentingly bleak. 

Donald Trump is headed toward the Republican Party’s presidential nomination. He’s among the most pugnacious of candidates.

Many of his political battles could reduce his chance of getting elected president. But his fight with foreign policy professionals might help. Given the disastrous course of U.S. foreign policy in recent years, there’s little public support for more military adventurism in the Middle East.

Trump clearly is out of step with the neoconservatives and militaristic interventionists who have dominated the Republican Party of late. One of Trump’s most important pledges addressed personnel, not policy.

He declared: “My goal is to establish a foreign policy that will endure for several generations. That’s why I also look and have to look for talented experts with approaches and practical ideas, rather than surrounding myself with those who have perfect resumes but very little to brag about except responsibility for a long history of failed policies and continued losses at war. We have to look for new people.”

Trump may have been reacting against the open letter from 117 self-described members of “the Republican national security community,” including leading neoconservatives and right-leaning interventionists of other stripes. They denounced Trump as “fundamentally dishonest,” acting like “a racketeer,” being “hateful,” and having a vision that “is wildly inconsistent and unmoored in principle.” Their critique contained important truths, but was fueled by Trump’s lack of enthusiasm for new wars.

Trump’s promise to ignore the usual foreign policy suspects also may reflect media coverage of some members of the very same policy elite publicly stating their willingness to serve Trump—though only reluctantly, of course. An unnamed GOP official told the Washington Post: “Leaving any particular president completely alone and bereft from the best advice people could give him just doesn’t sound responsible.”

Of course, it’s all about advancing the national interest, and not gaining attractive, influential, prestigious, and career-enhancing jobs. No wonder Trump apparently sees no need for advice from such folks.

Author Evan Thomas defended the “global corps of diplomats, worldly financiers and academics.” Thomas seemed to miss Trump’s point. Trump endorsed diplomacy, which would require the assistance of a variety of seasoned professionals.

Not needed, however, are “advisers” with the reverse Midas Touch, whose counsel has proved to be uniformly disastrous. Indeed, every recent intervention, such as Iraq, has created new problems, creating calls from the usual suspects for more military action.

Trump may be feeling especially dismissive of those who never learn from their mistakes—like supporting the wars in Iraq and Libya, for instance. In August 2011, after the ouster of Moammar Khadafy, Anne-Marie Slaughter celebrated the success in an article entitled “Why Libya skeptics were proved badly wrong.” Once that country imploded and the Islamic State made an appearance, she dropped any discussion of who had been “proved badly wrong” by that conflict.

Samantha Power later criticized the public for losing its faith in her strategy of constant war: “I think there is too much of, ‘Oh, look, this is what intervention has wrought’ … one has to be careful about overdrawing lessons.”

Of course, what she really sought was to avoid responsibility for supporting multiple foreign policy blunders. As I wrote in Forbes online, “consider what the Iraq invasion has wrought: thousands of American dead, bloody sectarian war, promiscuous suicide attacks, hundreds of thousands of Iraqis killed, trillions of dollars squandered, rise of the Islamic State, destruction of the historic Christian community, dramatic increase in Iranian influence. Why should anyone listen to such people with such ideas?”

There are many reasons to fear Donald Trump’s candidacy. However, he is right to dismiss Washington’s interventionist foreign policy crowd. Their policies have cost America precious lives, abundant wealth, international credibility, and global influence. The next president should reject the same failed advisers with their same failed proposals.

The Supreme Court issued a short, unanimous opinion in the contraceptive-mandate cases known as Zubik v. Burwell. There’s plenty of punditry out there for you to read so I’ll just offer three thoughts:

1. This was the biggest punt in Supreme Court history.

The Court abdicated its duty to say what the law is in favor of wiping the slate clean and telling the lower courts to facilitate a settlement between the parties. This is a cop-out. If the justices were hopelessly deadlocked 4-4 regarding the legal issue presented in the case – whether there was a way for the government to achieve its free-contraceptives-for-all goal in a way less-burdensome on religious employers – then they should’ve held the case for reargument at such a time as there’s a ninth justice to provide the tie-breaking vote.

2. The challengers win.

Although the opinion goes out of its way to state that the Court takes no position on the questions of whether (a) the contraceptive mandate poses a “substantial burden” on religious free exercise, (b) the government’s interest was compelling, and (c) the current regulations are the least-restrictive means of achieving that interest, it’s clear that the justices think that the government and nonprofit challengers are close enough in their legal positions that they can just “work it out.” In other words, after an unusual round of supplemental briefing wherein the Court asked the parties to speculate on what kinds of regulations might avoid involving religious employers in what they consider to be sinful behavior, the justices think there may well be a workable solution: simply have the employers object and let the government handle the rest (dealing with insurers and otherwise). Even Justices Sotomayor and Ginsburg, who concurred specially to protest (too much?) that lower courts should be free to continue ruling for the government, essentially accepted the “I object” solution so long as the resulting insurance/contraceptive coverage is “seamless.” 

If that’s correct, then the Court has essentially conceded that the challengers should have won their claim under the Religious Freedom Restoration Act – wherein, as the Hobby Lobby case showed two years ago, the government loses if there’s a way it can achieve its goal in a way that imposes less of a burden on the religious rights of the objecting parties. So really, the end result, assuming the lower courts don’t engage in a bout of judicial disobedience (not a slam-dunk assumption given how some courts have treated the Second Amendment in the wake of Supreme Court rulings in that regard) should be the same as if the Court, with Justice Scalia, had ruled 5-4 for the Little Sisters et al.

3. The legal precedent doesn’t matter.

RFRA is an unusual statute in that it’s meant to be applied case-by-case, so a ruling in favor of peyote-smokers says nothing about how judges should rule regarding prison beards – or objectors to contraceptive mandates. Contra the progressive alarmists in the wake of Hobby Lobby, a successful RFRA claim regarding one aspect of Obamacare implementation doesn’t mean that left-handed lesbians now have to sit in the back of the bus (or however the tired parade of horribles went). So even though the Court declined to adopt my argument (or any other) in the case, it doesn’t mean that our liberties are diminished in future or that the government can exceed its lawful powers in some future case.

In sum, the wily chief justice persuaded his colleagues to go in for a non-judicial non-decision. I may like the practical result here, but this isn’t law.

People criticize business subsidies because they harm taxpayers. But there is another group harmed by business subsidies: the recipients. Government welfare for low-income families induces unproductive behaviors, but the same is true for companies taking corporate welfare.

When subsidized, businesses get lazy and less agile. Their cost structures get bloated, and they make decisions divorced from market realities. Lobbying replaces innovation.

Corporate leaders get paid the big bucks for their decisionmaking skills, yet many of them get duped by politicians promoting faddish subsidy schemes.

From the Wall Street Journal yesterday:

A Mississippi power plant intended as a showcase for clean-coal technology has turned into a costly mess for utility Southern Co., which is now facing an investigation by the Securities and Exchange Commission, a lawsuit from unhappy customers and a price tag that has more than doubled to $6.6 billion.

The power plant, financed in part with federal subsidies, aims to take locally mined coal, convert it into a flammable gas and use it to make electricity.

Conceived as a first-of-a-kind plant, it currently looks to be the last of its kind in the U.S., though China and other nations have expressed interest in the technology. Kemper costs have swelled to $6.6 billion, far above the $3 billion forecast in 2010.

A former Kemper project manager said he was let go after complaining to top company officials that public estimates for the project’s completion were unrealistic and misleading.

Brett Wingo, who was a project manager for the gasification portion of the plant, said he thinks the company put a positive spin on construction so it wouldn’t have to acknowledge to investors it was likely to lose federal subsidies due to delays.

To date, Southern has paid back $368 million in federal tax credits for missing deadlines, but believes it will be able to keep $407 million in grants from the Energy Department.

My advice to corporate leaders: don’t take corporate welfare. Grabbing hand-outs will undermine your cost control, induce you to make bad investments, and distract you from serving your customers. Subsidies will make you weaker.

For more on Southern’s subsidized coal blunder, see here.

Under our criminal justice system, ignorance of the law is no defense.  But what if the law is undefined?  Or what if it seems to change with every new case that’s brought?  What if unelected judges (with life tenure) started to invent crimes, piece by piece, case by case?  Holding people accountable for knowing the law is just only if the law is knowable, and only if those creating the law are accountable to the people. 

On Friday, Cato filed an amicus brief in Salman v. U.S. that is aimed at limiting the reach of just such an ill-defined, judicially created law. “Insider trading” is a crime that can put a person away for more than a decade, and yet this crime is judge-made and, as such, is ever-changing. Although individuals may know generally what is prohibited, the exact contours of the crime have remained shrouded, creating traps for the unwary.

The courts, in creating this crime, have relied on a section of the securities laws that prohibits the use of “any manipulative or deceptive device or contrivance” in connection with the purchase or sale of a security. The courts’ rationale has been that by trading on information belonging to the company, and in violation of a position of trust, the trader has committed a fraud.  The law, however, does not mention “insiders” or “insider trading.”  And yet, in 2015 alone, the Securities and Exchange Commission (SEC) charged 87 individuals with insider trading violations.  

Broadly speaking, insider trading occurs when someone uses a position of trust to gain information about a company and later trades on that information, without permission, to receive a personal benefit. But what constitutes a “benefit”? The law doesn’t say.

Left to their own devices, the SEC has pushed the boundaries of what constitutes a “benefit,” making it more and more difficult for people to know when they are breaking the law.  In the case currently before the Court, Bassam Salman was charged with trading on information he received from his future brother-in-law, Mounir Kara, who had, in turn, received the information from his own brother, Maher.  The government has never alleged that Maher Kara received anything at all from either his brother or Salman in exchange for the information.  The government has instead claimed that the simple familial affection the men feel for each other is the “benefit.”  Salman’s trade was illegal because he happens to love the brothers-in-law who gave him the inside information.

Under this rationale, a person who trades on information received while making idle talk in a grocery line would be safe from prosecution while the same person trading on the same information heard at a family meal would be guilty of a felony.  Or maybe not.  After all, if we construe “benefit” this broadly, why not say that whiling away time chit-chatting in line is a “benefit”?    

No one should stumble blindly into a felony.  We hope the Court will take this opportunity to clarify the law and return it to its legislative foundation.  Anything else courts tyranny. 

In January, the International Monetary Fund (IMF) told us that Venezuela’s annual inflation rate would hit 720 percent by the end of the year. The IMF’s World Economic Outlook, which was published in April, stuck with the 720 percent inflation forecast. What the IMF failed to do is tell us how they arrived at the forecast. Never mind. The press has repeated the 720 percent inflation forecast ad nauseam.

Since the IMF’s 720 percent forecast has been elevated to the status of a factoid, it is worth a bit of reflection and analysis. We can reverse engineer the IMF’s inflation forecast to determine the Bolivar to U.S. greenback exchange rate implied by the inflation forecast.

When we conduct that exercise, we calculate that the VEF/USD rate moves from today’s black market (read: free market) rate of 1,110 to 6,699 by year’s end. So, the IMF is forecasting that the bolivar will shed 83 percent of its current value against the greenback by New Year’s Day, 2017. The following chart shows the dramatic plunge anticipated by the IMF.
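
For readers who want to check the arithmetic, a minimal sketch of the reverse-engineering exercise follows, written out in Python. It assumes that purchasing power parity ties the implied exchange rate to relative price levels; the two exchange rates are the ones quoted above, and the helper function at the end is purely illustrative, since the IMF has not published its method.

    # Back-of-the-envelope check of the figures quoted above.
    spot_rate = 1110.0     # VEF per USD, today's black-market rate
    implied_rate = 6699.0  # VEF per USD implied by the 720 percent inflation forecast

    depreciation = 1 - spot_rate / implied_rate
    print(f"Implied loss of the bolivar's value: {depreciation:.0%}")  # roughly 83%

    # Under purchasing power parity, an implied end-period exchange rate can be
    # recovered from a starting rate and cumulative inflation at home and abroad.
    # This helper is illustrative only; the IMF has not published its method.
    def ppp_implied_rate(start_rate, home_inflation, foreign_inflation):
        return start_rate * (1 + home_inflation) / (1 + foreign_inflation)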

Changes in the general level of prices are capable, as we’ve seen, of eliminating shortages or surpluses of money, by adding to or subtracting from the purchasing power of existing money holdings.  But because such changes place an extra burden on the price system, increasing the likelihood that individual prices will fail to accurately reflect the true scarcity of different goods and services at any moment, the less they have to be relied upon, the better.  A better alternative, if only it can somehow be achieved, or at least approximated, is a monetary system that adjusts the stock of money in response to changes in the demand for money balances, thereby reducing the need for changes in the general level of prices.

Please note that saying this is not saying that we need to have a centrally-planned money supply, let alone one that’s managed by a committee that’s unconstrained by any explicit rules or commitments.  Whether such a committee would in fact come closer to the ideal I’m defending than some alternative arrangement is a crucial question we must come to later on.  For now I will merely observe that, although it’s true that unconstrained central monetary planners might manage the money stock according to some ideal, that’s only so because there’s nothing that such planners might not do.

The claim that an ideal monetary regime is one that reduces the extent to which changes in the general level of prices are required to keep the quantity of money supplied in agreement with the quantity demanded might be understood to imply that what’s needed to avoid monetary troubles is a monetary system that avoids all changes to the general level of prices, or one that allows that level to change only at a steady and predictable rate.  We might trust a committee of central bankers to adopt such a policy.  But then again, we could also insist on it, by eliminating their discretionary powers in favor of having them abide by a strict stable price level (or inflation rate) mandate.

Monetary and Non-Monetary Causes of Price-Level Movements

But things aren’t quite so simple.  For while changes in the general price level are often both unnecessary and undesirable, they aren’t always so.  Whether they’re desirable or not depends on the reason for the change.

This often-overlooked point is best brought home with the help of the famous “equation of exchange,” MV = Py. Here, M is the money stock, V is its “velocity” of circulation, P is the price level, and y is the economy’s real output of goods and services. Since output is a flow, the equation necessarily refers to an interval of time. Velocity can then be understood as representing how often a typical unit of money is traded for output during that interval. If the interval is a year, then both Py and MV stand for the money value of output produced during that year or, alternatively, for that year’s total spending.

From this equation, it’s apparent that changes in the general price level may be due to any one of three underlying causes: a change in the money stock, a change in money’s velocity, or a change in real output.

Once upon a time, economists (or some of them, at least) distinguished between changes in the price level made necessary by developments in the “goods” side of the economy, that is, by changes in real output that occur independently of changes in the flow of spending, and those made necessary by changes in that flow, that is, in either the stock of money or its velocity.   Deflation — a decline in the equilibrium price level — might, for example, be due to a decline in the stock of money, or in its velocity, either of which would mean less spending on output.  But it could also be due to a greater abundance of goods that, with spending unchanged, must command lower prices.  It turns out that, while the first sort of deflation is something to be regretted, and therefore something that an ideal monetary system should avoid, the second isn’t.  What’s more, attempts to avoid the second, supply-driven sort of deflation can actually end up doing harm.  The same goes for attempts to keep prices from rising when the underlying cause is, not increased spending, but reduced real output of goods and services.  In short, what a good monetary system ought to avoid is, not fluctuations in the general price level or inflation rate per se, but fluctuations in the level or growth rate of total spending.
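
A toy calculation may help fix the distinction just drawn. The Python snippet below uses made-up numbers; the only thing doing any work is the identity P = MV/y.

    # Arbitrary illustrative values; only the identity MV = Py is doing any work.
    M, V, y = 1000.0, 4.0, 2000.0    # money stock, velocity, real output
    P = M * V / y                    # equilibrium price level = 2.0

    # Demand-driven deflation: velocity falls, output unchanged.
    print(M * 3.6 / y)               # 1.8 -- the regrettable kind

    # Supply-driven deflation: output rises, total spending (MV) unchanged.
    print(M * V / 2200.0)            # roughly 1.82 -- the benign, productivity-driven kind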

Prices Adjust Readily to Changes in Costs

But what about those “sticky” prices?  Aren’t they a reason to avoid any need for changes in the price level, and not just those changes made necessary by underlying changes in spending?  It turns out that they aren’t, for a number of reasons.[1]

First of all, whether a price is “sticky” or not depends on why it has to adjust.  When, for example, there’s a general decline in spending, sellers have all sorts of reasons to resist lowering their prices.  If the decline might be temporary, sellers would be wise to wait and see before incurring price-adjustment costs.  Also, sellers will generally not profit by lowering their prices until their own costs have also been lowered, creating what Leland Yeager calls a “who goes first” problem.  Because the costs that must themselves adjust downwards in order for sellers to have a strong motive to follow suit include very sticky labor costs, the general price level may take a long time “groping” its way (another Yeager expression) to its new, equilibrium level.  In the meantime, goods and services, being overpriced, go unsold.

When downward pressure on prices comes from an increase in the supply of goods, and especially when the increase reflects productivity gains, the situation is utterly different. For gains in productivity are another name for falling unit costs of production; and for competing sellers to reduce their products’ prices in response to reduced costs is relatively easy. It is, indeed, something of a no-brainer, because it promises to bring a greater market share, with no loss in per-unit profits. Heck, companies devote all sorts of effort to being able to lower their costs precisely so that they can take advantage of such opportunities to profitably lower their prices. By the same token, there is little reason for sellers to resist raising prices in response to adverse supply shocks. The widespread practice of “mark-up pricing” supplies ample proof of these claims. Macroeconomic theories and models (and there are plenty of them, alas) that simply assign a certain “stickiness” parameter to prices, without allowing for the possibility that they respond more readily to some underlying changes than to others, lead policymakers astray by overlooking this important fact.

A Changing Price Level May be Less “Noisy” Than a Constant One

Because prices tend to respond relatively quickly to productivity gains and setbacks, there’s little to be gained by employing monetary policy to prevent their movements related to such gains or setbacks. On the contrary: there’s much to lose, because productivity gains and losses tend to be uneven across firms and industries, making any resulting change to the general price level a mere average of quite different changes to different equilibrium prices. Economists’ tendency — and it is hard to avoid — to conflate a “general” movement in prices, in the sense of a change in their average level, with a general movement in the across-the-board sense, is in this regard a source of great mischief. A policy aimed at avoiding what is merely a change in the average, stemming from productivity innovations, increases instead of reducing the overall burden of price adjustment, introducing that much more “noise” into the price system.

Nor is it the case that a general decline or increase in prices stemming from productivity gains or setbacks itself conveys a noisy signal. On the contrary: if things are generally getting cheaper to produce, a falling price level conveys that fact of reality in the most straightforward manner possible. Likewise, if productivity suffers — if there is a war or a harvest failure or an OPEC-inspired restriction in oil output or some other calamity — what better way to let people know, and to encourage them to act economically, than by letting prices generally go up? Would it really help matters if, instead of doing that, the monetary powers-that-be decided to shrink the money stock, and thereby MV, for the sake of keeping the price level constant? Yet that is what a policy of strict price-level stability would require.

Reflection on such scenarios ought to be enough to make even the most die-hard champion of price-level or inflation targeting reconsider.  But in case it isn’t, allow me to take still another tack, by observing that, when policymakers speak of stabilizing the price level or the rate of inflation, they mean stabilizing some measure of the level of output prices, such as the Consumer Price Index, or the GDP deflator, or the current Fed favorite, the PCE (“Personal Consumption Expenditure”) price-index.  So long as changes in total spending (“aggregate demand”) are the only source of changes in the overall level of prices,  those changes will tend to affect input as well as output prices, so policies that stabilize output prices will also tend to stabilize input prices.   General changes in productivity, in contrast, necessarily imply changes in the relation of input to output prices: general productivity gains (meaning gains in numerous industries that outweigh setbacks in others) mean that output prices must decline relative to input prices; while general productivity setbacks mean that output prices must increase relative to input prices.  In such cases, to stabilize output prices is to destabilize input prices, and vice versa.

So, which?  Appeal to menu costs supplies a ready answer: if a burden of price adjustment there must be, let the burden fall on the least sticky prices.  Since “input” prices include wages and salaries, that alone makes a policy that would impose the burden on them a poor choice, and a dangerous one at that.  As we’ve seen, it means adding insult to injury during productivity setbacks, when wage earners would have to take cuts (or settle for smaller or less frequent raises).  It also increases the risk of productivity gains being associated with asset-price bubbles, because those gains will inspire corresponding boosts to aggregate demand which, in the presence of sticky input prices, can cause profits to swell temporarily.  Unless the temporary nature of the extraordinary profits is recognized, asset prices will be bid up, but only for as long as it takes for costs to clamber their way upwards in response to the overall increase in spending.
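
To put rough numbers on the input-versus-output-price point, here is a deliberately simple sketch in Python. It assumes an economy in which the wage bill is the only input cost, and every figure is invented for illustration.

    # Invented figures; the point is where the burden of price adjustment falls.
    gain = 0.03                          # productivity: real output rises 3 percent
    output0, spending0, wages0 = 100.0, 200.0, 180.0   # units, total spending, nominal wage bill
    p0 = spending0 / output0             # initial output price level = 2.0

    # Case 1: hold total spending constant and let output prices fall.
    output1 = output0 * (1 + gain)
    p1 = spending0 / output1             # output prices fall roughly 3 percent
    real_wages1 = wages0 / p1            # real wages rise about 3 percent with no
                                         # change at all in sticky nominal wages

    # Case 2: stabilize the output price level instead.
    spending2 = p0 * output1             # spending must be expanded roughly 3 percent ...
    real_wages2 = wages0 * (1 + gain) / p0   # ... and nominal wages -- the stickiest
                                             # prices -- must do the adjusting

    print(round(real_wages1, 1), round(real_wages2, 1))   # both 92.7: same real outcome,
                                                          # but different prices bear the burden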

What About Debtor-Creditor Transfers?

But if the price level is allowed to vary, and to vary unexpectedly, doesn’t that mean that the terms of fixed-interest rate contracts will be distorted, with creditors gaining at debtors’ expense when prices decline, and the opposite happening when they rise?

Usually it does; but, when price-level movements reflect underlying changes in productivity, it doesn’t.  That’s because productivity changes tend to be associated with like changes in  “neutral” or “full information” interest rates.  Suppose that, with each of us anticipating a real return on capital of four percent, and zero inflation, I’d happily lend you, and you’d happily borrow, $1000 at four percent interest.  The anticipated real interest rate is of course also four percent.  Now suppose that productivity rises unexpectedly, raising the actual real return on capital by two percentage points, to six percent rather than four percent.  In that case, other things equal, were I able to go back and renegotiate the contract, I’d want to earn a real rate of six percent, to reflect the higher opportunity cost of lending.  You, on the other hand, can also employ your borrowings more productively, or are otherwise going to be able (as one of the beneficiaries of the all-around gain in productivity) to bear a greater real interest-rate burden, other things equal, and so should be willing to pay the higher rate.

Of course, we can’t go back in time and renegotiate the loan. So what’s the next best thing? It is to let the productivity gains be reflected in proportionately lower output prices — that is, in a two percent decline in the price level over the course of the loan period — and thus in an increase, of two percentage points, in the real interest rate corresponding to the four percent nominal rate we negotiated.
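
For those who like to see the arithmetic, the loan example can be worked out with the exact form of the Fisher relation; the 4 percent nominal rate and 2 percent price decline are the figures used above.

    # The loan example above, worked out with the exact Fisher relation.
    nominal_rate = 0.04      # agreed nominal interest rate
    price_change = -0.02     # productivity gains let prices fall 2 percent over the loan

    realized_real_rate = (1 + nominal_rate) / (1 + price_change) - 1
    print(f"{realized_real_rate:.2%}")   # about 6.1%, close to the new 6 percent return on capital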

The same reasoning applies, mutatis mutandis, to the case of unexpected, adverse changes in productivity.  Only the argument for letting the price level change in this case, so that an unexpected increase in prices itself compensates for the unexpected decline in productivity, is even more compelling.  Why is that?  Because, as we’ve seen, to keep the price level from rising when productivity declines, the authorities would have to shrink the flow of spending.  Ask yourself whether doing that will make life easier or harder for debtors with fixed nominal debt contracts, and you’ll see my point.

Next: The Supply of Money

_____________________________

[1] What follows is a brief summary of arguments I develop at greater length in my 1997 IEA pamphlet, Less Than Zero. In that pamphlet I specifically make the case for a rate of deflation equal to an economy’s (varying) rate of total factor productivity growth. But the arguments may just as well be read as supplying grounds for preferring a varying yet generally positive inflation rate to a constant rate.

[Cross-posted from Alt-M.org]

May 16, 1966, is regarded as the beginning of Mao Zedong’s Cultural Revolution in China. Post-Maoist China has never quite come to terms with Mao’s legacy, and especially with the disastrous Cultural Revolution.

Many countries have a founding myth that inspires and sustains a national culture. South Africa celebrates the accomplishments of Nelson Mandela, the founder of that nation’s modern, multi-racial democracy. In the United States, we look to the American Revolution and especially to the ideas in the Declaration of Independence of July 4, 1776. 

The Declaration of Independence, written by Thomas Jefferson, is the most eloquent libertarian essay in history, especially its philosophical core:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.–That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.

The ideas of the Declaration, given legal form in the Constitution, took the United States of America from a small frontier outpost on the edge of the developed world to the richest country in the world in scarcely a century. The country failed in many ways to live up to the vision of the Declaration, notably in the institution of chattel slavery. But over the next two centuries, that vision inspired Americans to extend the promises of the Declaration—life, liberty, and the pursuit of happiness—to more and more people.

China, of course, followed a different vision, the vision of Mao Zedong. Take Mao’s speech on July 1, 1949, as his Communist armies neared victory. The speech was titled, “On the People’s Democratic Dictatorship.” Instead of life, liberty, and the pursuit of happiness, it spoke of “the extinction of classes, state power and parties,” of “a socialist and communist society,” of the nationalization of private enterprise and the socialization of agriculture, of a “great and splendid socialist state” in Russia, and especially of “a powerful state apparatus” in the hands of a “people’s democratic dictatorship.”

Tragically, and unbelievably, this vision appealed not only to many Chinese but even to Americans and Europeans, some of them prominent. But from the beginning, it went terribly wrong, as should have been predicted. Communism created desperate poverty in China. The “Great Leap Forward” led to mass starvation. The Cultural Revolution unleashed “an extended paroxysm of revolutionary madness” in which “tens of millions of innocent victims were persecuted, professionally ruined, mentally deranged, physically maimed and even killed.” Estimates of the number of unnatural deaths during Mao’s tenure range from 15 million to 80 million. This is so monstrous that we can’t really comprehend it. What inspired many American and European leftists was that Mao really seemed to believe in the communist vision. But the attempt to actually implement communism leads to disaster and death.

When Mao died in 1976, China changed rapidly. His old comrade Deng Xiaoping, a victim of the Cultural Revolution, had learned something from the 30 years of calamity. He began to implement policies he called “socialism with Chinese characteristics,” which looked a lot like freer markets: decollectivization and the “responsibility system” in agriculture, privatization of enterprises, international trade, liberalization of residency requirements.

The changes in China over the past generation are the greatest story in the world—more than a billion people brought from totalitarianism to a largely capitalist economic system that is eroding the continuing authoritarianism of the political system. On its 90th birthday, the CCP still rules China with an iron fist. There is no open political opposition, and no independent judges or media. And yet the economic changes are undermining the party’s control, a challenge of which the party is well aware. In 2008, Howard W. French reported in the New York Times:

Political change, however gradual and inconsistent, has made China a significantly more open place for average people than it was a generation ago.

Much remains unfree here. The rights of public expression and assembly are sharply limited; minorities, especially in Tibet and Xinjiang Province, are repressed; and the party exercises a nearly complete monopoly on political decision making.

But Chinese people also increasingly live where they want to live. They travel abroad in ever larger numbers. Property rights have found broader support in the courts. Within well-defined limits, people also enjoy the fruits of the technological revolution, from cellphones to the Internet, and can communicate or find information with an ease that has few parallels in authoritarian countries of the past.

The Chinese Communist Party remains in control. And there’s a resurgence of Maoism under the increasingly authoritarian rule of Xi Jinping, as my former colleague Jude Blanchette is writing about. But at least one study finds ideological groupings in China divided between statists who are both socialist and culturally conservative, and liberals who tend toward “constitutional democracy and individual liberty, … market-oriented reform … modern science and values such as sexual freedom.” 

Xi’s government struggles to protect its people from acquiring information, routinely battling with Google, Star TV, and other media. Howard French noted that “the country now has 165,000 registered lawyers, a five-fold increase since 1990, and average people have hired them to press for enforcement of rights inscribed in the Chinese Constitution.” People get used to making their own decisions in many areas of life and wonder why they are restricted in other ways. I am hopeful that the 100th anniversary of the CCP in 2021 will be of interest mainly to historians of China’s past and that the Chinese people will by then enjoy life, liberty, and the pursuit of happiness under a government that derives its powers from the consent of the governed. 

Computer modeling plays an important role in all of the sciences, but there can be too much of a good thing. A simple semantic analysis indicates that climate science has become dominated by modeling. This is a bad thing.

What we did

We found two pairs of surprising statistics. To do this we first searched the entire literature of science for the last ten years, using Google Scholar, looking for modeling. There are roughly 900,000 peer-reviewed journal articles that use at least one of the words model, modeled or modeling. This shows that there is indeed widespread use of models in science. No surprise in this.

However, when we filter these results to only include items that also use the term climate change, something strange happens. The number of articles is only reduced to roughly 55% of the total.

In other words, it looks like climate change science accounts for fully 55% of the modeling done in all of science. This is a tremendous concentration, because climate change science is just a tiny fraction of the whole of science. In the U.S. federal research budget, climate science is just 4% of the whole, and not all climate science is about climate change.

In short it looks like less than 4% of the science, the climate change part, is doing about 55% of the modeling done in the whole of science. Again, this is a tremendous concentration, unlike anything else in science.

We next find that when we search just on the term climate change, we get only slightly more articles than we found before. In fact, the number of climate change articles that include one of the three modeling terms is 97% of those that simply include climate change. This is further evidence that modeling completely dominates climate change research.

To summarize, it looks like something like 55% of the modeling done in all of science is done in climate change science, even though it is a tiny fraction of the whole of science. Moreover, within climate change science almost all the research (97%) refers to modeling in some way.

This simple analysis could be greatly refined, but given the hugely lopsided magnitude of the results it is unlikely that they would change much.
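
The percentages above are simply ratios of raw hit counts, and anyone can reproduce the arithmetic. The counts in the Python sketch below are placeholders in the rough range described, not exact Google Scholar numbers, which vary from day to day.

    # Placeholder counts in the rough range described above, not exact Google Scholar hits.
    modeling_hits = 900_000         # articles using "model", "modeled", or "modeling"
    modeling_and_cc = 495_000       # of those, the subset also mentioning "climate change"
    climate_change_hits = 510_000   # articles mentioning "climate change" at all

    print(f"{modeling_and_cc / modeling_hits:.0%} of modeling articles mention climate change")
    print(f"{modeling_and_cc / climate_change_hits:.0%} of climate change articles use a modeling term")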

What it means

Climate science appears to be obsessively focused on modeling. Modeling can be a useful tool, a way of playing with hypotheses to explore their implications or test them against observations. That is how modeling is used in most sciences.

But in climate change science, modeling appears to have become an end in itself. In fact it seems to have become virtually the sole point of the research. The modelers’ oft-stated goal is to do climate forecasting, along the lines of weather forecasting, at local and regional scales.

Here the problem is that the scientific understanding of climate processes is far from adequate to support any kind of meaningful forecasting. Climate change research should be focused on improving our understanding, not modeling from ignorance. This is especially true when it comes to recent long term natural variability, the attribution problem, which the modelers generally ignore. It seems that the modeling cart has gotten far ahead of the scientific horse.

Climate modeling is not climate science. Moreover, the climate science research that is done appears to be largely focused on improving the models. In doing this it assumes that the models are basically correct, that the basic science is settled. This is far from true.

The models basically assume the hypothesis of human-caused climate change. Natural variability only comes in as a short-term influence that is negligible in the long run. But there is abundant evidence that long-term natural variability plays a major role in climate change. Recall that we emerged from the latest Pleistocene glaciation only around 11,000 years ago.

Billions of research dollars are being spent in this single-minded process. In the meantime the central scientific question – the proper attribution of climate change to natural versus human factors – is largely being ignored.

I spent the latter part of last week on a too-short trip to Alicante, Spain, to present some of my latest work on the reuse of former defense facilities in the United States. The occasion was a conference on “Defence Heritage” – the third since 2012 – hosted by the Wessex Institute (.pdf) in which scholars from more than a dozen countries shared their findings about how various defense installations around the world have been repurposed for everything from recreational parks to educational institutions to centers of business and enterprise.

This sort of research is sorely needed as Congress appears poised to deny the Pentagon’s request to close unneeded or excess bases. It is the fifth time that Congress has told the military that it must carry surplus infrastructure, and continue to misallocate resources where they aren’t needed, in order to protect narrow parochial interests in a handful of congressional districts that might house an endangered facility.

In a cover letter to a new Pentagon report that provides ample justification for the need to close bases, Deputy Secretary of Defense Robert Work explained:

Under current fiscal restraints, local communities will experience economic impacts regardless of a congressional decision regarding BRAC authorization. This has the harmful and unintended consequence of forcing the Military Departments to consider cuts at all installations, without regard to military value. A better alternative is to close or realign installations with the lowest military value. Without BRAC, local communities’ ability to plan and adapt to these changes is less robust and offers fewer protections than under BRAC law.

Work is almost certainly correct. But in my latest post at The National Interest’s The Skeptics, I urge him “and other advocates for another BRAC round” not to “limit themselves to green-eyeshade talk of cost savings and greater efficiency. They must also show how former defense sites don’t all become vast, barren wastelands devoid of jobs and people.”

It obviously isn’t enough to stress the potential savings, even though the savings are substantial. The DoD report estimates that the five BRAC rounds, plus the consolidation of bases in Europe, have generated annual recurring savings of $12.5 billion, and that a new BRAC round would save an additional $2 billion per year, after a six-year implementation period. A GAO study conducted in 2002 concluded that “BRAC savings are real and substantial and are related to cost reduction in key operational areas.”

Members of Congress who are uninterested in such facts, and who remain adamantly opposed to any base closures, anywhere, should consider what has actually happened to many of the bases dealt with during the five BRAC rounds, and the hundreds of other bases closed in the 1950s and 60s, before there was a BRAC. 

They don’t have to go far. They could start by speaking with the Association of Defense Communities and the Pentagon’s Office of Economic Adjustment, which keep track of these stories.

House Armed Services Committee Chairman Mac Thornberry (R-TX) could visit Austin-Bergstrom International Airport in Austin, Texas. He probably has, many times. The closure of Bergstrom Air Force Base was a thinly disguised blessing for a city that had struggled for years to find an alternative for its inadequate regional airport. Austin-Bergstrom today services millions of passengers, and has won awards for its design and customer service.

Sen. Kelly Ayotte (R-NH), Chair of the Senate Armed Services Readiness Subcommittee, might stop by the former Pease Air Force Base in Portsmouth, New Hampshire, during one of her trips home. One of the very first bases closed under the BRAC process, the sprawling site still hosts several massive runways, and the 157th Air Refueling Wing of the Air National Guard. But the base has chiefly been reborn as the Pease International Tradeport, which is now home to over 250 businesses that employ more than 10,000 people.

And both would benefit from a visit to the former Brunswick Naval Air Station in my home state of Maine. The base used to launch P-3 submarine-hunting airplanes; now it hosts dozens of businesses, including 28 start-ups in a new business incubator, TechPlace, that opened 14 months ago.

It’s particularly lovely in the summertime, if you don’t mind all the tourists. If they go, Thornberry and Ayotte should talk to some of the people who are responsible for its rapid turnaround, including Steve Levesque, the Executive Director of the Midcoast Regional Redevelopment Authority (MRRA), who contributed a chapter in this forthcoming volume on the renovation and reuse of former military sites, and Jeffrey Jordan, the MRRA’s Deputy Director, whom I interviewed in 2014. I’m sure they’d be happy to show HASC and SASC members around Brunswick Landing.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

Badges? Do we need these stinking badges?

Need, perhaps not, but apparently some of us actually want them and will go to great lengths to get them. We’re not talking about badges for, say, being a Federal Agent At-Large for the Bureau of Narcotics and Dangerous Drugs:

 

(source: Smithsonianmag.org)

But rather badges like these, being given out by the editors of Psychological Science for being a good data sharer and playing well with others:

A new paper, authored by Mallory Kidwell and colleagues, examined the impact of Psychological Science’s badge/award system and found it to be quite effective at getting authors to make their data and materials available to others via an open access repository. Compared with four “comparison journals,” the implementation of the badge system at Psychological Science led to a rapidly rising rate of participation and level of research transparency (Figure 1).

Figure 1. Percentage of articles reporting open data by half year by journal. Darker line indicates Psychological Science, and dotted red line indicates when badges were introduced in Psychological Science and none of the comparison journals. (Source: Kidwell et al., 2016).

Why is this important? The authors explain:

Transparency of methods and data is a core value of science and is presumed to help increase the reproducibility of scientific evidence. However, sharing of research materials, data, and supporting analytic code is the exception rather than the rule. In fact, even when data sharing is required by journal policy or society ethical standards, data access requests are frequently unfulfilled, or available data are incomplete or unusable. Moreover, data and materials become less accessible over time. These difficulties exist in contrast to the value of openness in general and to the move toward promoting or requiring openness by federal agencies, funders, and other stakeholders in the outcomes of scientific research.

For an example of data protectionism taken to the extreme, we remind you of the Climategate email tranche, where you’ll find gems like this:

“We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”
-Phil Jones email Feb. 21, 2005

This type of attitude, on display throughout the Climategate emails, makes the need for a push for more transparency plainly evident.

Kidwell et al. conclude that the badge system, as silly as it may seem, actually works quite well:

Badges may seem more appropriate for scouts than scientists, and some have suggested that badges are not needed. However, actual evidence suggests that this very simple intervention is sufficient to overcome some barriers to sharing data and materials. Badges signal a valued behavior, and the specifications for earning the badges offer simple guides for enacting that behavior. Moreover, the mere fact that the journal engages authors with the possibility of promoting transparency by earning a badge may spur authors to act on their scientific values. Whatever the mechanism, the present results suggest that offering badges can increase sharing by up to an order of magnitude or more. With high return coupled with comparatively little cost, risk, or bureaucratic requirements, what’s not to like?

The entire findings of Kidwell et al. are to be found here, in the open access journal PLOS Biology—and yes, they’ve made all their material readily available!

Another article that caught our eye this week provides further indication of why transparency in science is more necessary than ever. In a column in Nature magazine, Daniel Sarewitz suggests that the pressure to publish tends to produce lower-quality papers through what he describes as “a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.” He cites the example of a contaminated cancer cell line that gave rise to hundreds of (wrong) published studies, which still receive over 10,000 citations per year. Sarewitz points to the internet and search engines as enablers of this hyper-citation: in the old days a literature search required a trip to the library stacks and lots of flipping through journals; now, not so much.

Sarewitz doesn’t see this rapid expansion of the scientific literature and citation numbers as a good trend, and offers an interesting way out:

More than 50 years ago, [it was] predicted that the scientific enterprise would soon have to go through a transition from exponential growth to “something radically different”, unknown and potentially threatening. Today, the interrelated problems of scientific quantity and quality are a frightening manifestation of what [was foreseen]. It seems extraordinarily unlikely that these problems will be resolved through the home remedies of better statistics and lab practice, as important as they may be. Rather, they would seem …to announce that the enterprise of science is evolving towards something different and as yet only dimly seen.

Current trajectories threaten science with drowning in the noise of its own rising productivity… Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often…

With the publish-or-perish culture securely ingrained in our universities, coupled with evaluation systems based on how often your research is cited by others, it’s hard to see Sarewitz’s suggestion taking hold anytime soon, as good as it may be.

In the same vein is this article by Paula Stephan and colleagues titled “Bias against novelty in science: A cautionary tale for users of bibliometric indicators.” Here the authors make the point that:

There is growing concern that funding agencies that support scientific research are increasingly risk-averse and that their competitive selection procedures encourage relatively safe and exploitative projects at the expense of novel projects exploring untested approaches. At the same time, funding agencies increasingly rely on bibliometric indicators to aid in decision making and performance evaluation.

This situation, the authors argue, depresses novel research and instead encourages safe research that supports the status quo:

Research underpinning scientific breakthroughs is often driven by taking a novel approach, which has a higher potential for major impact but also a higher risk of failure.  It may also take longer for novel research to have a major impact, because of resistance from incumbent scientific paradigms or because of the longer time-frame required to incorporate the findings of novel research into follow-on research…

The finding of delayed recognition for novel research suggests that standard bibliometric indicators which use short citation time-windows (typically two or three years) are biased against novelty, since novel papers need a sufficiently long citation time window before reaching the status of being a big hit.

Stephan’s team ultimately concludes that this “bias against novelty imperils scientific progress.”

Does any of this sound familiar?
