Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Why are independent, strong-minded courts so important to a free society? One reason is that they – and often only they – are the ones who can stop government agencies from trampling on the rights of the citizens.

Consider, for example, the Obama Administration’s present aggressive campaign to push the bounds of federal employment and labor law far beyond anything Congress has been willing to pass. As I’ve noted before, judges have repeatedly found these administration power plays to overstep the law. See, for example, posts here (Equal Employment Opportunity Commission suffers epic Sixth Circuit loss in EEOC v. Kaplan), here (Breyer and liberal Supreme Court majority, even while siding with plaintiff in underlying case, smack around EEOC “guidance” ploy); see also here (many more examples, at Overlawyered).  

Now here are four more examples from recent months.

* The U.S. Department of Labor sued oil field contractor Gate Guard, demanding that it reclassify some independent contractors as employees. As our friends at the Washington Legal Foundation recount, Judge Edith Jones ruled on behalf of a Fifth Circuit appeals panel that Gate Guard was entitled to fees under an unusual “bad faith” provision (footnotes omitted here and below):

It is often better to acknowledge an obvious mistake than defend it. When the government acknowledges mistakes, it preserves public trust and confidence. It can start to repair the damage done by erroneously, indeed vindictively, attempting to sanction an innocent business. Rather than acknowledge its mistakes, however, the government here chose to defend the indefensible in an indefensible manner. As a result, we impose attorneys’ fees in favor of Gate Guard as a sanction for the government’s bad faith.

At nearly every turn, this Department of Labor investigation and prosecution violated the department’s internal procedures and ethical litigation practices. Even after the DOL discovered that its lead investigator conducted an investigation for which he was not trained, concluded Gate Guard was violating the Fair Labor Standards Act based on just three interviews, destroyed evidence, ambushed a low-level employee for an interview without counsel, and demanded a grossly inflated multi-million dollar penalty, the government pressed on. In litigation, the government opposed routine case administration motions, refused to produce relevant information, and stone-walled the deposition of its lead investigator.


* We commented one year ago on the amazing case of EEOC v. Freeman Cos., in which the Fourth Circuit found that the federal commission had relied on “pervasive errors and utterly unreliable analysis” in its attempt to go after a Maryland employer’s policies on criminal background checks of employees. The appeals court sent the case back for further proceedings to district judge Roger Titus, who had previously shredded the EEOC’s proffered expert evidence as “laughable” and “mind-boggling.” Then the EEOC – feeling that perhaps its luck was due to turn – resisted an award of attorneys’ fees to the defendant. As Alison Somin recounts for the Federalist Society, that bet was a sure loser. Somin quotes the resulting order, in which Judge Titus wrote:

World-renowned poker expert Kenny Rogers once sagely advised, “You’ve got to know when to hold ‘em. Know when to fold ‘em. Know when to walk away.” In the Title VII context, the plaintiff who wishes to avoid paying a defendant’s attorneys’ fees must fold ‘em once its case becomes so groundless that continuing to litigate is unreasonable, i.e. once it is clear it cannot have a winning hand. In this case, once Defendant Freeman revealed the inexplicably shoddy work of the EEOC’s expert witness in its motion to exclude that expert, it was obvious Freeman held a royal flush, while the EEOC held nothing. Yet, instead of folding, the EEOC went all in and defended its expert through extensive briefing in this Court and on appeal. Like the unwise gambler, it did so at its peril. Because the EEOC insisted on playing a hand it could not win, it is liable for Freeman’s reasonable attorneys’ fees.

* That wasn’t the only bad news for the EEOC’s legal team recently. A Wisconsin federal judge in EEOC v. Flambeau has rejected the commission’s notion that employers violate the Americans with Disabilities Act when they ask employees to take medical exams as part of so-called wellness programs in their health insurance coverage (discussion, Littler and Proskauer; background here and here). 

* And in another widely watched case, the Seventh Circuit in EEOC v. CVS Pharmacy (via Jon Hyman) has rejected the commission’s position that employers violate the law when they proffer widely used garden-variety exit agreements to departing workers (on the theory that the language is not sufficiently encouraging of later legal action, which supposedly constitutes “retaliation”).

Imagine what these agencies and others would be getting away with were our judiciary someday reduced to a spirit of subservience to the executive branch of government.


Calls for higher tax rates often suffer from a myopic focus on the one percent, but these proposals largely fail to acknowledge that tax rates, and the incentives they create, influence work decisions for everyone. Nowhere is this narrow focus more evident than in the tax proposals of the two rivals for the Democratic nomination. Bernie Sanders has proposed more than $19 trillion in new taxes over the next decade, and Hillary Clinton’s own plans only look modest by comparison. My colleague Alan Reynolds briefly alluded to a recent paper from Mario Alloza of University College London that examines the relationship between tax rates and income mobility. He finds that higher marginal tax rates reduced mobility over the period analyzed, particularly for people with low incomes or less education. These findings imply that proposals to significantly increase taxes could make it harder for people at the bottom of the income distribution to work their way up.

Alloza looks at panel data from 1967 to 1996 to examine whether tax rates affect the probability of staying in the same income decile over the following two years. He examines several income definitions: pre-tax, post-tax, and post-tax-and-transfer. Most of the paper focuses on federal taxes, but he also examines a case where state and payroll taxes are included as well. Increases in the marginal tax rate are associated with a reduction in short-run relative income mobility: households are roughly 6 percent more likely to stay in the same income quintile when the marginal tax rate is increased by one percentage point. This mechanism holds across all of the different tax and transfer scenarios. Even accounting for the impact of transfers and benefits, higher rates curbed the upward mobility of people at the lower end of the income distribution. This suggests that the impact of tax rates on income mobility is not confined to redistribution effects but also reflects changes in labor market incentives.
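For readers who want to see what a decile-persistence calculation looks like in practice, here is a minimal sketch in Python. It uses synthetic data and hypothetical column names, not Alloza's actual data or code, and it computes only the raw probability of staying in the same decile, without the tax-rate regression that does the real work in the paper.

```python
# A minimal sketch (not Alloza's code) of the transition calculation
# described above: given panel data on household income, how often does a
# household remain in the same income decile two years later? All column
# names and the synthetic data below are assumptions for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
panel = pd.DataFrame({"income_t": rng.lognormal(10, 0.8, n)})
# Persistent incomes plus noise stand in for real panel observations.
panel["income_t2"] = panel["income_t"] * rng.lognormal(0, 0.3, n)

# Assign each household to an income decile in each year.
panel["decile_t"] = pd.qcut(panel["income_t"], 10, labels=False)
panel["decile_t2"] = pd.qcut(panel["income_t2"], 10, labels=False)

# Share of households that stay in the same decile after two years.
stay_rate = (panel["decile_t"] == panel["decile_t2"]).mean()
print(f"Probability of staying in the same decile: {stay_rate:.2%}")
```

The paper's question is then whether that stay probability rises when marginal tax rates rise, which requires regressing transition outcomes on tax rates rather than the simple tabulation above.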

These effects are even more pronounced for people with low incomes or without a college degree. Tax changes aimed at compressing the income distribution by taking more from those at the top could thus also make it harder for people at the bottom to climb the economic ladder. When Alloza restricts his sample to non-college households, he finds that a one percentage point increase in the marginal tax rate increases the probability of moving down to a lower decile by roughly one percent, increases the likelihood of remaining in the same decile by roughly the same amount, and reduces the probability of moving up to a higher decile by almost one and a half percent. For households in the lowest income decile, an increase in the marginal tax rate reduces the probability of moving up to a higher decile by almost one and a half percent in the post-tax-and-transfer scenario. Higher marginal tax rates reduce mobility for these groups in particular.

These results provide more evidence that taxes matter for all people when they make decisions about work. Higher tax rates limit income mobility by changing work incentives, particularly for people near the bottom of the income distribution. Public policy should not further reduce the scope of opportunity for these people, and increasing tax rates would likely do just that.  

In a confounding ruling that breaks with a general consensus among federal courts, District Judge Mark Kearney of the Eastern District of Pennsylvania has ruled that recording police officers is not protected by the First Amendment unless the recorders are making an effort to “challenge or criticize” the police. On Judge Kearney’s logic, standing silently and recording the police is not sufficiently expressive to warrant First Amendment protection.

The reasoning behind this distinction is bizarre, and it is out of step with rulings in several federal circuits holding that recording police in public is constitutionally protected regardless of whether the recorder is attempting to make a statement or issue a challenge to law enforcement.

A couple of quick takes from civil liberties scholars disputing Judge Kearney’s attempt to distinguish the facts of this case:

 Radley Balko’s take at The Washington Post:

 Under Kearney’s standard, most of the citizen-shot videos of police abuse and shootings we’ve seen over the past several years would not have been protected by the First Amendment. In the overwhelming majority of these videos, there’s none of the “expressive conduct” Kearney apparently wants to see from the camera-wielder. In many of them, the police officers are never made aware that they’re being recorded. That’s how some of these videos were able to catch the officers lying about the incident in subsequent police reports.

I suppose you could argue that recording something as noteworthy as a police shooting or an incident of clear brutality would be self-evidently an act of either expression or news-gathering. But judging from his opinion, it’s far from clear that Kearney would make this distinction. It’s also hard to see how he could. It would mean that whether or not your decision to record the police is covered by the First Amendment would be dependent on whether the recording itself captures the police violating someone’s rights or doing something newsworthy. Even the courts often disagree over what is and isn’t a violation of someone’s constitutional rights (this ruling itself is as good an example as any). And “newsworthiness” is of course a highly subjective standard. You could make a strong argument that both of the events in these two cases — an anti-fracking protest and a 20+ officer police response to a house party — are plenty newsworthy.

 And over at Volokh Conspiracy, Eugene Volokh notes:

 [T]he court held, simply “photograph[ing] approximately twenty police officers standing outside a home hosting a party” and “carr[ying] a camera” to a public protest to videotape “interaction between police and civilians during civil disobedience or protests” wasn’t protected by the First Amendment.

I don’t think that’s right, though. Whether one is physically speaking (to challenge or criticize the police or to praise them or to say something else) is relevant to whether one is engaged in expression. But it’s not relevant to whether one is gathering information, and the First Amendment protects silent gathering of information (at least by recording in public) for possible future publication as much as it protects loud gathering of information.

Your being able to spend money to express your views is protected even when you don’t say anything while writing the check (since your plan is to use the funds to support speech that takes place later). Your being able to associate with others for expressive purposes, for instance by signing a membership form or paying your membership dues, is protected even when you aren’t actually challenging or criticizing anyone while associating (since your plan is for your association to facilitate speech that takes place later). The same should be true of your recording events in public places.

The ACLU has already announced an appeal, which will give the Third Circuit Court of Appeals an opportunity to knock down the strange distinction drawn by Judge Kearney.

The ability of individuals to record police in public without fear of reprisal is an essential mechanism for injecting transparency where it is sorely lacking, for holding the government accountable for misconduct, and in many cases for protecting good police officers from misattributed blame.

 For more of our work on recording police, check out this video:

Cops on Camera

These are challenging times for monetary economists like myself, what with central banks making one dramatic departure after another from conventional ways of conducting monetary policy.

Yet so far as I’m concerned, coming to grips with negative interest rates, overnight reverse repos, and other newfangled monetary control devices is a cinch compared to meeting a challenge that nowadays confronts, not just monetary economists, but economists of all sorts.  I mean the challenge of getting one’s ideas noticed by that great arbiter of all things economic, Tyler Cowen.

Last week, however, Tyler may have given me just the break I need, in the shape of a brief Marginal Revolution post entitled “Simple Points about Central Banking and Monetary Policy.”

Tyler’s “simple points” are these:

Central banks around the world could raise rates of price inflation, and boost aggregate demand, if they were allowed to buy corporate bonds and other higher-yielding assets.  Admittedly this could require changes in law and custom in many countries[.]

There is no economic theory which says central banks could not do this, as supposed liquidity traps would not apply.  These are not nearly equivalent assets with nearly equivalent yields.

Tyler isn’t one to traffic in banalities, so it’s no surprise that his claims are controversial.  Why so? Because the prevailing monetary policy orthodoxy, here in the U.S. at least, insists that, rare emergencies aside, the Fed should stick to a “Treasuries only” policy, meaning that it should limit its open-market purchases to various Treasury securities.  For the Fed to do otherwise, the argument goes, would be for it to involve itself in “fiscal” policy, because its security purchases would then influence, not just the overall availability of credit, but its allocation across different firms and industries.  So far as the proponents of “Treasuries only” are concerned, Tyler’s remedy for deflation would create a set of privileged or “pet” corporate securities, analogous to, and no less obnoxious than, the “pet banks” of the Jacksonian era.

All of which is good news for me, because I’m prepared, not only to side with Tyler in this debate, but to offer further arguments in support of his position.  For I took essentially the same position in a paper I prepared for Cato’s 2011 Monetary Conference.  In that paper, I first counter various arguments against having the Fed purchase private securities, and then proceed to recommend a set of Fed operating-system reforms involving broad-based security purchases.  I figure that, with a little luck, Tyler may find those arguments and suggestions worthy of other economists’ attention.

Here is a quick summary of my paper’s arguments and suggestions.

Concerning the “pet corporate securities” argument, to give it that name, I find both it and the anti-Jacksonians’ original complaint against pet banks equally unpersuasive.  If those state banks to which Jackson distributed the government’s funds yanked from the Second Bank of the United States were “pet banks,” just what, pray tell, was the B.U.S. itself while it held all of the government’s deposits, if not a single (and correspondingly more odious) government “pet”?

Likewise, if purchasing corporate bonds means favoring particular corporations, and venturing thereby into “fiscal” policy, isn’t “Treasuries only” itself a means of shunting scarce credit to one particular economic entity — in this case, the federal government — at the expense of all the others?  Is there not, indeed, something positively Orwellian about the suggestion that, by buying Treasury securities, the Fed steers clear of “fiscal” policy?

Though they never heard of Orwell, the Fed’s founders would certainly have considered such talk perverse.  Far from seeing “Treasuries only” as a means for keeping the Fed and the fisc at arm’s length, they took precisely the opposite view: so far as they were concerned, to allow a central bank to purchase government debt was to risk having it become a tool of inflationary finance. Consequently they favored a “commercial paper only” rule, or rather a “commercial paper and gold only” rule, with a loophole allowing purchases of government paper only for the sake of stabilizing the Fed’s earnings at times of low discount activity.  Like all loopholes in the Federal Reserve Act, this one was not left unexploited for long.  Yet it was not until 1984 that the opposite, “Treasuries only” alternative took hold.  Nor is “Treasuries only” the rule elsewhere.  The ECB, in particular, ordinarily accepts euro-denominated corporate and bank bonds with ratings of A- or better as collateral for its temporary open-market operations.

The operating system reform I recommended involved replacing both the discount window and the anachronistic and unnecessary primary dealer system with an arrangement resembling the Term Auction Facility (TAF) created in December 2007, at which the Fed auctioned off credit to depository institutions against the same relatively broad set of collateral instruments, including corporate bonds, accepted at its discount window.  To assure competitive allocation of credit among bidders offering different types of collateral, the facility could make use of a “product-mix” auction of the sort Paul Klemperer developed for the Bank of England.  To rule out subsidies and limit its exposure to loss, the Fed could also follow the Bank of England’s example by setting bid rates for the various types of eligible collateral, reflecting predetermined penalties or “haircuts.”  Finally, to allow emergency credit to be supplied as broadly as possible, and therefore in a manner fully consistent with Walter Bagehot’s last-resort lending principles, the Fed could open its auction facility to various non-depository counterparties, including money market mutual funds.
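To make the haircut-and-floor idea concrete, here is a bare-bones sketch in Python. It is our own illustration, far simpler than Klemperer's actual product-mix auction (which lets bidders express substitution across collateral classes), and every rate, amount, and collateral name in it is hypothetical.

```python
# A deliberately crude sketch (our invention for illustration, not the Fed's
# or the Bank of England's actual mechanism) of auctioning a fixed amount of
# credit against several collateral types, where each type carries a
# predetermined penalty spread that sets a floor under acceptable bid rates.
CREDIT_ON_OFFER = 100.0  # hypothetical total credit, in billions

# Minimum acceptable bid rate (percent) by collateral type: the riskier the
# collateral, the larger the predetermined penalty.
rate_floors = {"treasuries": 0.50, "agency_mbs": 0.75, "corporate_bonds": 1.25}

# Hypothetical bids: (collateral type, bid rate in percent, amount sought).
bids = [
    ("treasuries", 0.55, 40.0),
    ("corporate_bonds", 1.40, 50.0),
    ("agency_mbs", 0.70, 30.0),  # below its floor, so it will be rejected
    ("corporate_bonds", 1.30, 60.0),
]

# Reject bids below their collateral-specific floors, then fill the highest
# rates first until the credit on offer is exhausted.
valid = sorted((b for b in bids if b[1] >= rate_floors[b[0]]),
               key=lambda b: b[1], reverse=True)
remaining, awards = CREDIT_ON_OFFER, []
for collateral, rate, amount in valid:
    take = min(amount, remaining)
    if take > 0:
        awards.append((collateral, rate, take))
        remaining -= take

print(awards)  # [('corporate_bonds', 1.4, 50.0), ('corporate_bonds', 1.3, 50.0)]
```

The point of the floors is visible in the rejected agency bid: no collateral class can obtain credit below its predetermined penalty rate, which limits subsidies without excluding any class outright.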

Besides making liquidity traps relatively easy to avoid, as Tyler suggests, adopting such an alternative system would have many other advantages.  It would reduce the systemic importance of the present primary dealers.  It would guard against the risk of having the Fed gobble up collateral that’s essential to private-sector credit creation.  It would allow a single operating system to meet both ordinary and emergency demands for credit.  Like the TAF, it would avoid the “stigma” of discount-window lending.  In fact, it would dispense entirely with the need for direct lending to troubled financial (and perhaps some troubled non-financial) institutions.  Most importantly, it would render any sort of ad-hoc central bank lending during financial crises otiose, and by so doing would bring the Federal Reserve System one step closer to being based on the rule of law, instead of the arbitrary rule of bureaucrats.

[Cross-posted from Alt-M.org]

While we at the Center for the Study of Science recommend you listen to the Cato Daily Podcast, well, daily, today’s edition may be of particular interest. Host Caleb Brown spoke with Senator James Inhofe (R-OK) about the Clean Power Plan, regulatory overreach, and American competitiveness. While they didn’t delve much into climate science, they touched on the inadequacy of global climate agreements.

We don’t want to spoil all the fun, so take a look below–or, better yet, subscribe to the Cato Daily Podcast on your app of choice (iTunes / Google Play / CatoAudio).

The Clean Power Plan (Sen. James Inhofe)

New research on Louisiana’s voucher program revealed mixed results. Yesterday, the Education Research Alliance for New Orleans (Tulane University) and the School Choice Demonstration Project (the University of Arkansas) released four new reports examining the Louisiana Scholarship Program’s impact on participating students’ test performance and non-cognitive skills, level of racial segregation statewide, and the effect of competition on district-school students. Here are the key findings:

  • Students who use the voucher to enroll in private schools end up with much lower math achievement than they would have otherwise, losing as much as 13 percentile points on the state standardized test, after two years. Reading outcomes are also lower for voucher users, although these are not statistically different from the experimental control group in the second year.
  • There is no evidence that the Louisiana Scholarship Program has positive or negative effects on students’ non-cognitive skills, such as “grit” and political tolerance.
  • The program reduced the level of racial segregation in the state. The vast majority of the recipients are black students who left schools with student populations that were disproportionally black relative to the broader community and moved to private schools that had somewhat larger white populations.
  • The program may have modestly increased academic performance in public schools, consistent with the theory behind school vouchers that they create competition between public and private schools that “lifts all boats.” [Emphasis added.]

The positive impact on racial integration and evidence that competition improved district-school student performance are both positive signs, but the significant negative impact on the performance of participating students is troubling. (Ironically, the evidence suggests that the voucher program may have improved the performance of non-voucher students more than the voucher students.) That said, although the impact on student performance is negative, the second year results show improvement over the first year. 

What caused the negative effects is a topic of intense debate. Until NBER published a study on Louisiana’s voucher program last month, all the previous random-assignment studies had found neutral-to-positive effects on students’ test performance and on outcomes such as the likelihood of graduating high school and enrolling in college. Several education policy researchers (myself included) suspect that Louisiana’s heavy regulation drove away higher-performing private schools, leaving only the most desperate private schools, those willing to accept intrusive government regulations in order to slow or reverse declining enrollment. Louisiana’s voucher program forbids private schools from charging more than the value of the voucher, requires an admissions lottery rather than the school’s own admissions standards, and mandates that schools administer the state standardized test. The findings of the latest study are consistent with this view, although they are not conclusive:

Less than one-third of the private schools in Louisiana chose to participate in the LSP in its first year, possibly because of the extensive regulations placed on the program by government authorities (Kisida, Wolf, & Rhinesmith, 2015) combined with the relatively modest voucher value relative to private school tuition (Mills, Sude & Wolf, 2015). Although it is only speculation at this point, the Louisiana Scholarship Program regulatory requirements may have played a role in preventing the private school choice program from attracting the kinds of private schools that would deliver better outcomes to its participants. 

However, other researchers, including the authors of the latest study, offer other plausible explanations. For example, it’s possible that the performance of private-school students suffered on the mandatory state test because the schools’ curriculum was not yet aligned with the state curriculum. If so, mere adjustment to the new test (rather than actual gains in learning) would explain at least part of the improvement in performance in the second year. It’s also possible that reforms improving district and charter schools may have made the private schools look relatively worse. However, the authors note that there were negative effects outside of New Orleans (where the reforms of the last decade have been most intense), so this does “not completely explain [the negative] results.”

Next Friday (March 4th) at noon, the Cato Institute will be hosting a policy forum exploring whether Louisiana-style school choice regulations are helpful or harmful. Neal McCluskey, director of the Cato Institute’s Center for Educational Freedom, will moderate a discussion featuring two of the recent study’s authors, Dr. Patrick Wolf of the University of Arkansas and Dr. Douglas Harris of Tulane University, along with Michael Petrilli, President of the Thomas B. Fordham Institute, and yours truly. The forum will be followed by a sponsored lunch.

Readers interested in attending can RSVP at this link.

If you can’t make it to the event, you can watch it live online at www.cato.org/live and join the conversation on Twitter using #SchoolChoiceRegs.

An important and timely paper from Columbia University economist Karl Mertens finds that the amount of income reported on tax returns is highly sensitive to marginal tax rates, and that the effect comes mainly from changes in real activity, not tax avoidance. Mertens estimates “elasticities of taxable income of around 1.2 based on time series from 1946 to 2012. Elasticities are larger in the top 1% of the income distribution but are also positive and statistically significant for other income groups… . Marginal rate cuts lead to increases in real GDP and declines in unemployment.” Other recent research also shows that “higher marginal tax rates reduce income mobility” while eliminating higher tax brackets improves upward mobility. Both Democratic candidates for the presidency, Sanders and Clinton, want to greatly increase marginal tax rates on high incomes and on realized capital gains. By contrast, all Republican candidates propose to reduce marginal tax rates. Mertens’ research unambiguously predicts that economic growth would slow or stop under the Democrats’ proposed tax increases, but accelerate under the Republicans’ tax reforms.
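As a rough illustration of what an elasticity of taxable income around 1.2 means, consider the back-of-the-envelope sketch below (our example, not Mertens' method). It applies the standard definition, the percentage change in reported taxable income per percentage change in the net-of-tax rate (one minus the marginal rate), in a simple linear approximation.

```python
# A back-of-the-envelope illustration of an elasticity of taxable income
# (ETI) of 1.2, using a linear approximation. All figures are hypothetical.
def taxable_income_response(income, old_rate, new_rate, eti=1.2):
    """Predicted taxable income after a marginal-rate change."""
    pct_change_net_of_tax = ((1 - new_rate) - (1 - old_rate)) / (1 - old_rate)
    return income * (1 + eti * pct_change_net_of_tax)

# Raising a top marginal rate from 40% to 50% cuts the net-of-tax rate from
# 0.60 to 0.50, a 16.7% drop; with ETI = 1.2, reported taxable income falls
# by about 20%.
print(taxable_income_response(100_000, 0.40, 0.50))  # ~80,000
```

The larger the elasticity, the more a rate increase shrinks the tax base, which is why high estimated elasticities imply that rate hikes raise far less revenue than static projections suggest.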

Arguing that “It’s been clear that the detention center at Guantanamo Bay does not advance our national security,” and that “It undermines our standing in the world,” President Obama has at last presented a plan to close Gitmo. The plan Obama outlined today was already well-known in most of its particulars. After transferring the 35 detainees already eligible and quickly reviewing the threat posed by the rest, the United States would then seek to move the remaining detainees to American prisons and military bases. 

The arguments for closing Gitmo are powerful. As Obama himself has long argued, the facility has provided terrorists with a potent recruiting narrative. The tortured policy of labeling the prisoners unlawful combatants in order to circumvent Geneva Convention prohibitions on torture and the need for due process violated both the Constitution and American ideals of justice. As Obama noted today, “Keeping this facility open is contrary to our values. It undermines our standing in the world. It is viewed as a stain on our broader record of upholding the highest standards of rule of law.” Closing the facility will not only deprive terrorist organizations of recruiting material; it will also save the United States a good deal of money.

Unfortunately, the reality is that Obama’s plan is unlikely to go anywhere fast. In 2010 Congress passed a ban on bringing detainees to domestic prisons, and there is little support among congressional Republicans for lifting it. Speaker of the House Paul Ryan responded by arguing that “It is against the law – and it will stay against the law – to transfer terrorist detainees to American soil.” Obama might seek to close Guantanamo through an executive order, but the legality of that approach is highly dubious, and even the White House acknowledges that it is unclear whether that would be a politically viable route. Representative Lynn Jenkins (R-KS) summarized the sentiment among Republicans in Congress: “Submitting a plan to close the prison at Guantanamo Bay is yet another sign that President Obama is more focused on his legacy than the will of the American people. Republicans and Democrats are united on this issue: bringing the inmates housed at Guantanamo Bay to the United States is a nonstarter.”

The immediate beneficiaries of Obama’s plan won’t be the detainees; they will be the leading Republican candidates, all of whom oppose the plan. Last December Donald Trump criticized Obama’s plan to close Gitmo, saying “I would leave it just the way it is, and I would probably fill it up with more people that are looking to kill us.” At a recent town hall in South Carolina Ted Cruz argued that “The people in Guantanamo at this point, it’s down to the worst of the worst. A really alarming percentage of the people released from Guantanamo return immediately to waging Jihad, return immediately to going back trying to murder Americans.” And during the GOP debate in January that followed word that the administration was preparing a plan for closing Gitmo, Marco Rubio seized the moment to propose how he would deal with Islamic State supporters:  “The most powerful intelligence agency in the world is going to tell us where they are; the most powerful military in the world is going to destroy them; and if we capture any of them alive, they are getting a one-way ticket to Guantanamo Bay, Cuba, and we are going to find out everything they know.”

That’s bad enough for Obama, but it might wind up worse for Hillary Clinton. Clinton is on the record repeatedly calling on Obama to speed up the process of closing the base. In a secret memo to Obama in 2013 Clinton argued that “We must signal to our old and emerging allies alike that we remain serious about turning the page of GTMO and the practices of the prior decade.” Though this plays well with Clinton’s Democratic base during the primaries, it will prove a touchier subject during the general election. Even though a November Washington Post/ABC News poll showed that the public trusts Clinton more than any of the Republican candidates to handle the threat of terrorism, polling on Guantanamo specifically shows that a consistent and sizeable majority of the public supports keeping Gitmo open for business.

In the short run the most likely outcome of this latest clash is a few news cycles dominated by Republican criticism of Obama’s plan and little change in the status quo. Obama may desperately want to close Gitmo before he leaves office, but Republican control of Congress and the presidential election will combine to make that impossible. Obama’s biggest legacy will be to have reduced the number of detainees and to have avoided sending any new detainees there on his watch. In the long run, however, the decision about whether to close Gitmo for good lies with the next president.

Last week, a man in Kalamazoo, Michigan went on a shooting rampage, killing six people seemingly at random.

The suspected shooter is a man named Jason Dalton, who reportedly owns several firearms.  Up until this point, Dalton had no criminal record and apparently had never been adjudicated mentally ill.  In legal terms, this means that Dalton would have had no problem passing a background check to purchase his firearms.

Despite this fact, President Obama took time this week to suggest that his gun control measures make it more difficult for would-be spree shooters to acquire firearms.

Speaking to the National Governors Association, President Obama claimed:

As many of you read, six people were gunned down in a rampage in Kalamazoo, Michigan.  Before I joined all of you, I called the mayor, the sheriff, and the police chief there, and told them that they would have whatever federal support they needed in their investigation.  Their local officials and first responders, by the way, did an outstanding job in apprehending the individual very quickly.  But you got families who are shattered today.

Earlier this year, I took some steps that will make it harder for dangerous people, like this individual, to buy a gun.  

There is no support for that statement.

As I detailed last month, President Obama’s executive actions on guns amounted to a lot more pomp and rhetoric than substance.  Reforms included a “clarification” of who the federal government will consider “engaged in the business” of selling firearms, and thus who is required to perform background checks; some alterations to the way heavily regulated items like machine guns and suppressors can be obtained; and a loosening of privacy protections on the sharing of mental health information between the states and the federal background check system.

There was no reason at that time to believe that President Obama’s executive actions would have any substantial effect on mass shootings or the rate of gun crime generally, and that remains the case today.

It’s a truism that a person who can pass (or has already passed) a background check will not be prevented from acquiring a firearm by expanding the categories of consumers required to undergo checks.

 Spree shootings are a tragedy, and policymakers should be open-minded about how to solve such a horrifying problem. But any good policy must be rooted in logic and evidence rather than appeals to emotion and a general sense that “we have to do something.”

Far from proving President Obama correct about background checks, tragedies like this emphasize the same flaws in President Obama’s logic on gun crime and mass shootings that we’ve highlighted in the past:

For all the pomp and ceremony, nothing in the president’s proposals is going to put a dent in U.S. gun crime or even substantially change the federal legal landscape. 

[…]

The most disappointing aspect of the proposals is that there is so little in them to suggest that President Obama is willing to address any of the major drivers of gun crime in America.  The sad irony is that President Obama could do far more to protect American lives and clean up our streets by ending the drug war than by expanding background checks.  Criminals, from gang members to spree shooters, have no trouble passing checks, finding straw purchasers, or simply buying guns on the inherently unregulated black market.  As long as there are hundreds of billions of dollars changing hands in the illicit drug market every year, the black market for firearms and the violent competition for market shares will continue to claim thousands of lives annually and make a mockery of the idea of gun control.

Gun crime is a serious problem, and it deserves attention.  Unfortunately, these proposals do not offer effective solutions. 

Note:  David Wojick, who holds a doctorate in the history and philosophy of science, sent me this essay.  It is thought-provoking and deserves a read.

The US National Science Foundation seems to think that natural decades-to-centuries climate change does not exist unless provoked by humans. This ignores a lot of established science.

One of the great issues in climate science today is the nature of long-term, natural climate change. Long-term here means multiple decades to centuries, often called “dec-cen” climate change. The scientific question is this: how much of observed climate change over the last century or so is natural, and how much is due to human activities? This issue even has a well-known name – the Attribution Problem.

This problem has been known for a long time. See, for example, these National Research Council reports: “Natural Climate Variability on Decade-to-Century Timescales” (NAP, 1995) and “Decade-to-Century-Scale Climate Variability and Change” (NAP, 1998). The Preface of the 1998 report provides a clear statement of the attribution problem:

The climate change and variability that we experience will be a commingling of the ever changing natural climate state with any anthropogenic change. While we are ultimately interested in understanding and predicting how climate will change, regardless of the cause, an ability to differentiate anthropogenic change from natural variability is fundamental to help guide policy decisions, treaty negotiations, and adaptation versus mitigation strategies. Without a clear understanding of how climate has changed naturally in the past, and the mechanisms involved, our ability to interpret any future change will be significantly confounded and our ability to predict future change severely curtailed.

Thus we were shocked to learn that the US National Science Foundation denies that this great research question even exists. The agency has a series of Research Overviews for its various funded research areas, fifteen in all. Its climate change research area is funded to the tune of over $300 million a year, or $3 billion a decade.

The NSF Research Overview for climate change begins with this astounding claim:

Weather changes all the time. The average pattern of weather, called climate, usually stays the same for centuries if it is undisturbed.

This is simply not true. To begin with, there is the Little Ice Age to consider. This is a multi-century period of exceptional cold that is thought to have ended in the 19th century. Since then there have been two periods of warming, roughly from 1910 to 1940, and then from 1976 through 1998.  There’s real controversy about what happened since then.  Until our government joggled the measured ocean surface temperatures last summer, scientists could all see that warming had pretty much stopped—what happened has been attended to here, and to say the least, the new record is controversial. 

But the two agreed-upon warmings are indeed indistinguishable in magnitude—yet the first one could not have been caused by increasing atmospheric carbon dioxide, because we had emitted so little by then.  If it had been, i.e., if climate were that “sensitive,” it would be so hot now that there wouldn’t be a scientific debate on the Attribution Problem.

Prior to the Little Ice Age there is good evidence that we had what is called the Medieval Warm Period, which may even have been as warm as today.

Thus it is clearly not the case that climate “stays the same for centuries.” So far as we can tell it has never done this. Instead, dec-cen natural variability appears to be the rule.

Why has NSF chosen to deny dec-cen natural variability? The next few sentences in the Research Overview may provide an answer. NSF says this:

However, Earth is not being left alone. People are taking actions that can change Earth and its climate in significant ways. Carbon dioxide is the main culprit. Burning carbon-containing “fossil fuels” such as coal, oil and gas has a large impact on climate because it releases carbon dioxide gas into the atmosphere.

NSF has chosen to promote the alarmist view of human-induced climate change. This is the official view of the Obama Administration. In order to do this it must deny the possibility that long-term natural variability may play a significant role in observed climate change, despite the obvious evidence from the Medieval Warm Period, the Little Ice Age and the early 20th century warming.  As an editorial this might be tolerable, but this is a Research Overview of a multi-billion dollar Federal research program.

NSF is supposed to be doing the best possible science, which means pursuing the most important scientific questions. This is what Congress funds the agency to do. But if NSF is deliberately ignoring the attribution problem, in order to promote the alarmism of human-induced climate change, then it may be misusing its research budget. This would be very bad science indeed.

In technology policy there is a standard rule that says the Government should not pick winners and losers. It appears we need a similar rule in science policy. In the language of science what we seem to have here is the National Science Foundation espousing one research paradigm – human induced climate change and no other cause – at the expense of a competing paradigm – long-term natural variability.

Thomas Kuhn, who coined the term paradigm for the fundamental assumptions that guide research, pointed out that it is common for the proponents of one paradigm to shield it from a competitor. NSF’s actions look like a clear case of this kind of paradigm protection.

An Uber driver is accused of killing six people and wounding two others in a shooting rampage that took place in Kalamazoo, Michigan on Saturday. The victims seem to have been picked at random and were shot at three different locations. An unnamed source told CNN that the suspected killer, Jason Dalton, completed rides in between the shootings, which took place over a seven-hour period. It might be tempting to think in the wake of the Kalamazoo shooting that Uber should reform its background check system, but this would be an overreaction to a problem a different background check process wouldn’t have solved. 

Uber screens its drivers by checking county, state, and federal criminal records. As I explained in my Cato Institute paper on ridesharing safety, Uber is oftentimes stricter than taxi companies in major American cities when it comes to preventing felons and those with a recent history of dangerous driving from using its platform. And Dalton did pass Uber’s background check.

However, it’s important to keep in mind a disturbing detail: according to Kalamazoo Public Safety Chief Jeff Hadley, the suspected shooter did not have a criminal record and was not known to the authorities. In fact, Dalton, a married father of two, does not seem to have prompted many concerns from anyone. The Washington Post reports that Dalton’s neighbors noticed “nothing unusual” about him, although the son of one neighbor did say that he was sometimes a “hothead.”

That an apparently normal man with no criminal history can murder six people is troubling, but it’s hard to blame Uber for this. It’s not clear what changes Uber could make to its background check system in order to prevent incidents like the Kalamazoo shooting. What county court record, fingerprint scan, or criminal database would have been able to tell Uber that a man with no criminal record would one day go on a shooting rampage?

The Kalamazoo shooting is a tragedy, but it shouldn’t distract from the fact that Uber and other ridesharing companies like Lyft have features such as driver and passenger ratings as well as ETA (estimated time of arrival) sharing that make their rides safer than those offered by traditional competitors.

With the information we have it looks like Dalton could have passed a background check to have been a taxi driver or a teacher. While perhaps an unnerving fact, criminal background checks cannot predict the future, whether they are used to screen potential school bus drivers, police officers, or rideshare drivers. 

The European Union faces so many different crises that it has been–until now–impossible to predict the precise catalyst for its likely demise. The obvious candidates for destroying the EU include the looming refugee crisis, the tottering banking structure that is resistant to both bail-outs and bail-ins, the public distrust of the political establishment, and the nearly immobilized EU institutions.

But the most immediate crisis that could spell the EU’s doom is Prime Minister David Cameron’s failure to wrest from Brussels the concessions he needs in order to placate the increasingly euro-skeptic British public. Prime Minister Cameron has failed because the EU cannot grant the necessary concessions. There are three special reasons, as well as one underlying reality, that have made Cameron’s task impossible.

First, a profound reform of the EU-British relationship, which Cameron initially promised, was always impossible, because it would require a “treaty change” approved in each of the twenty-eight EU member-states, either by parliamentary vote or, more dauntingly, by a national referendum that Brussels dreads. There is simply no appetite in Europe to run such risks just to appease the UK.

Second, Cameron was willing to settle for a compromise but failed to obtain most of what he needed, because Brussels fears that other member states will follow the British example and demand similar accommodations. That would result in a “smorgasbord EU” in which each country could pick and choose what serves its own best interests. In other words, it would make a mockery of “an ever closer Europe.”

And so, Cameron ended up with what one Conservative Member of Parliament, Jacob Rees-Mogg, called a “thin gruel [of a reform that] has been further watered down.”

To make matters worse, Cameron’s “thin gruel” will require a vote by the EU Parliament after the British referendum takes place. As a sovereign body, the EU Parliament will be able to make changes to any deal approved by the British public–an uncomfortable and inescapable fact that the advocates of “Brexit” will surely utilize to their advantage during the referendum campaign.

At the root of the conundrum faced by the British and European negotiators is a struggle between a national political system anchored in parliamentary supremacy and a supra-national technocracy in Brussels that requires pooling of national sovereignty in order to achieve a European federal union. The public debate over the referendum is driving this fundamental incompatibility home. The choice between one and the other can no longer be deferred.

The most common image of a failing Europe is that of something falling apart, unraveling, crumbling away, or even evaporating into thin air. This imagery is misleading. Following the British referendum, and perhaps even before it, the EU will most likely implode.

Once the odds of Britain’s exit from the EU increase from merely likely to near-certain, the rush for the exits will begin in earnest. It is impossible to predict which of the remaining member-states will lead the charge, but the floodgates are sure to open. Those pro-EU governments that still remain in office are under siege in almost every member-state. As in Britain, the establishment parties will have to appease increasingly hostile electorates by getting a “better deal” from the EU. Or they could face electoral oblivion.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

Let’s begin this installment of You Ought to Have a Look with a peek at the heroic attempt by Rep. Michael Burgess (R-TX) to rein in the fanatical actions of the Department of Energy (DoE) to regulate the energy usage (operation) of virtually all the appliances in your home. The DoE effort is being undertaken as part of President Obama’s broader actions to mitigate climate change as directed under his Climate Action Plan. It is an extremely intrusive action and one that interferes with the operation of the free market.

We have been pushing back (through the submission of critiques during the public comment period of each new proposed regulation), but the sheer number and repetition of newly proposed regulations spilling forth from the DoE overwhelms our determination and wherewithal.

Rep. Burgess’s newly introduced legislation seeks to help lighten our suffering.

Bill H.R. 4504, the “Energy Efficiency Free Market Act of 2016” would “strike all government-mandated energy efficiency standards currently required on a variety of consumer products found in millions of American homes.”

Burgess reasons:

“The federal government must trust the American people to make the right decisions when it comes to the products they buy. When the government sets the efficiency standard for a product, that often becomes the ceiling. I have long been a firm believer in energy efficiency; however, when the market drives the standard, there’s no limit to how fast and how aggressive manufacturers will be when consumers demand more efficient and better made products.”

“Government standards have proven to be unworkable. The Commerce Clause of the U.S. Constitution was meant as a limitation on federal power. It was never intended to allow the federal government to micromanage everyday consumer products that do not pose a risk to human health or safety.”


Bravo!

The full text of H.R. 4504, the “Energy Efficiency Free Market Act of 2016,” can be found here.

The bigger point is that the free market will drive efficiency improvements at the rate that the market bears. It doesn’t need government “help.”

Take the shale gas revolution that came about via the technologies of fracking and horizontal drilling. Not only did this unlock loads of natural gas from geologic formations never thought to relinquish their holdings economically, but the fuel being recovered (natural gas) produces fewer climate-relevant carbon dioxide emissions when used to generate electricity than does coal, in effect, doing the DoE’s and President Obama’s work for them.

But is the President happy about this? Of course not. In fact, buried within the now-stayed Clean Power Plan are disincentives to the further build-out of natural gas-fueled power plants.

But the Administration’s efforts to rein in natural gas development aren’t expected to constrain the adoption of natural gas extraction technologies in the rest of the world.

In its recently released Energy Outlook, BP anticipates:

Technological innovation and productivity gains have unlocked vast resources of tight oil and shale gas, causing us to revise the outlook for US production successively higher…

Globally, shale gas is expected to grow by 5.6% p.a. between 2014 and 2035, well in excess of the growth of total gas production.  As a result, the share of shale gas in global gas production more than doubles from 11% in 2014 to 24% by 2035…

As with the past 10 years, the growth of shale gas supply is dominated by North American production, which accounts for around two-thirds of the increase in global shale gas supplies. But over the Outlook period, we expect shale gas to expand outside of North America, most notably in Asia Pacific and particularly in China, where shale gas production reaches 13 Bcf/d by 2035.

The rapidly expanding global production of natural gas means that, when it comes to business-as-usual projections of future greenhouse gas emissions, we ought to favor the ones with faster declines in the carbon intensity of the energy supply. Such BAU scenarios, coupled with an equilibrium climate sensitivity toward the low end of consensus estimates, mean that an overall global temperature rise (over pre-industrial conditions) somewhere in the neighborhood of 2.0°C by the end of this century is well within the realm of possibilities—even without an international climate agreement (or a carbon tax in the U.S.).
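For the curious, the arithmetic behind statements like this usually rests on the standard logarithmic relation between CO2 concentrations and equilibrium warming. The sketch below is our own illustration; the sensitivity and concentration numbers are assumptions chosen for a round-number example, not values taken from the BP Outlook or any particular emissions scenario.

```python
# A standard back-of-the-envelope relation linking CO2 concentrations to
# equilibrium warming: delta_T = S * log2(C / C0), where S is the
# equilibrium climate sensitivity per doubling of CO2. All inputs below
# are assumptions for illustration.
import math

def equilibrium_warming(c_final, c_initial=280.0, sensitivity=2.0):
    """Warming (degrees C) for a CO2 rise, given sensitivity per doubling."""
    return sensitivity * math.log2(c_final / c_initial)

# With a low-end sensitivity of 2.0 C per doubling, CO2 stabilizing near
# 560 ppm (a doubling of the ~280 ppm pre-industrial level) implies about
# 2.0 C of equilibrium warming.
print(equilibrium_warming(560.0))  # 2.0
```

Because the relation is logarithmic, each additional increment of CO2 produces less warming than the last, which is why a lower sensitivity combined with slower-growing emissions can keep the end-of-century figure near 2.0°C.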

You are not hearing of this possibility in many places besides here (for example, here and here). So stay tuned!

Finally, on the lighter side, in the all-bad-things-come-from-climate-change department, is a story about a town in Australia that is experiencing a “hairy panic.” No, it’s not a crisis brought about by a local shortage of razor blades, or the lingering effects of no-shave November, but rather an invasion of a type of Australian tumbleweed.

Check it out!

Of course, no opportunity to relate any sort of human misery to global warming passes by. In this case, to shield yourself from the exposure to unnecessary nonsense, you probably ought not have a look! 

Given the “facts” that have been bandied about in the media since Justice Antonin Scalia’s death concerning presidential election year nominations and confirmations to the Supreme Court, I asked Anthony Gruzdis, our crack research assistant for the Center for Constitutional Studies, to do an exhaustive study of the subject, and here, in summary, are the most relevant facts.

It turns out that most election year resignations and deaths were in the pre-modern (pre-1900) era—many in the era before today’s two major parties were established. And the pre-1900 picture is further complicated by several multiple nominations and confirmations of the same person, both before and after the election, so it’s not until the modern era that we get a picture that is more clearly relevant and instructive for the current situation.

Looking at the history of the matter since 1900, then, until last week only four vacancies had occurred during an election year: two in 1916, one in 1932, and one in 1956. (Three more occurred during the previous year, in 1911, 1939, and 1987; the nominees in each case were confirmed, respectively, in February, early January, and early February of the election year that followed.) The first three were filled when the president’s party also controlled the Senate, so that’s not the situation we have now. And when Justice Sherman Minton resigned for health reasons on October 15, 1956, President Eisenhower made a recess appointment that same day of William J. Brennan, Jr., nominating him for the seat on January 14, 1957; Brennan was confirmed by voice vote on March 19, 1957. In 1956 the Senate was closely divided, with 48 Democrats, 47 Republicans, and 1 Independent. In 1957 it was also closely divided, with 49 Democrats and 47 Republicans, although in both cases the Southern Democrats often voted with the Republicans.

The resignation of Chief Justice Earl Warren in June 1968 and President Johnson’s nomination of Justice Abe Fortas to succeed Warren have been cited as a parallel for today, but the complex details of that case hardly make it so. In a nutshell, after a heated debate concerning speaking fees Fortas had accepted, plus his political activities while on the bench, Fortas asked President Johnson on October 1, 1968 to withdraw his nomination to be chief justice. He resigned from the Court on May 14, 1969, shortly after which President Nixon nominated Warren Burger to be chief justice.

More often the nomination of Justice Anthony Kennedy is cited as a parallel for today, but here too there are important differences. In particular, the seat Kennedy holds became vacant not in an election year but in late June 1987 when Justice Lewis Powell, Jr. announced his retirement. The stormy hearings for Judge Robert Bork followed. After that nomination failed, President Reagan named Judge Douglas Ginsburg, who withdrew his name shortly thereafter. Finally, the president nominated then-Judge Kennedy on November 11, 1987, still not in an election year. Kennedy was confirmed on February 3, 1988. The one parallel to today is that President Reagan faced a Senate that was 55-45 Democratic. It is likely, however, that the president’s popularity, plus the wish to bring to an end the exhausting struggle of the previous seven months, explains the confirmation vote of 97-0.

In sum, in the modern era there is no close parallel to the situation today when the presidential primary elections are already underway, the White House and the Senate are held by different parties, the parties are deeply divided, and the most recent off-year elections reflected that divide fairly clearly. The Constitution gives the president the power to nominate a successor to Justice Scalia. But it also gives the Senate the power to confirm, or not. In the end, this is a political matter.

The U.S. is bankrupt. Of course, Uncle Sam has the power to tax. But at some point even Washington might not be able to squeeze enough cash out of the American people to pay its bills.

President Barack Obama would have everyone believe that he has placed federal finances on sound footing. The deficit did drop from over a trillion dollars during his first years in office to “only” $439 billion last year. But the early peak was a result of emergency spending in the aftermath of the financial crisis, and the new “normal” is just short of the pre-financial-crisis record set by President George W. Bush. The reduction is not much of an achievement.

Worse, the fiscal “good times” are over. The Congressional Budget Office expects the deficit to jump this year, to $544 billion.

The deficit is not caused by too little money collected by Uncle Sam. Revenues are rising four percent this year and will account for 18.3 percent of GDP, well above the 50-year average of 17.4 percent. But outlays are projected to rise six percent, leaving expenditures at 21.2 percent of GDP, greater than the 20.2 percent average of the last half century.
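
Those two shares, by themselves, are enough to recover the dollar figure. As a rough consistency check, assuming a 2016 GDP of about $18.8 trillion (a figure implied by the numbers above rather than stated in them):

    (21.2% − 18.3%) × $18.8 trillion ≈ 0.029 × $18.8 trillion ≈ $545 billion

That is within rounding of the $544 billion deficit CBO projects.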

Alas, this year’s big deficit jump is just the start. Revenues will rise from $3.4 trillion to $5 trillion between 2016 and 2026. As a share of GDP they will remain relatively constant, ending up at 18.2 percent. However, outlays will rise much faster, from about $4 trillion this year to $6.4 trillion in 2026. As a percent of GDP, spending will jump from 21.2 percent to 23.1 percent over the same period.

Thus, the amount of red ink steadily rises and is expected to be back over $1 trillion in 2026. The cumulative deficit from 2017 through 2026 will run $9.4 trillion. Total debt will rise by around 70 percent, to roughly $24 trillion in 2026.

Reality is likely to be worse. So-called discretionary spending, subject to annual appropriations, is to be held to record, and probably unrealistic, low levels. Entitlement spending, in contrast, will explode.

Last June, CBO published a report looking at the federal budget through 2040. Warned the agency: “the extended baseline projections show revenues that fall well short of spending over the long term, producing a substantial imbalance in the federal budget.” By 2040 the agency imagines revenues rising sharply, to 19.4 percent of GDP, but spending going up even further, to 25.3 percent of GDP.

Using its revised figures, CBO warned: “Three decades from now debt held by the public is projected to equal 155 percent of GDP, a higher percentage than any previously recorded in the United States.” That would top even the burden when exiting World War II: 106 percent in 1946, the previous record.

CBO noted the potentially destructive consequences of such indebtedness. Washington’s interest burden would rise sharply. Moreover, “because federal borrowing reduces total saving in the economy over time, the nation’s capital stock would ultimately be smaller than it would be if debt was smaller, and productivity and total wages would be lower.” Americans would be poorer and have less money to fund the steadily rising budgets.

Worse, investors could come to see federal debt as unpayable. Warned CBO: “There would be a greater risk that investors would become unwilling to finance the government’s borrowing needs unless they were compensated with very high interest rates; if that happened, interest rates on federal debt would rise suddenly and sharply.” This in turn “would reduce the market value of outstanding government bonds, causing losses for investors and perhaps precipitating a broader financial crisis by creating losses for mutual funds, pension funds, insurance companies, banks, and other holders of government debt—losses that might be large enough to cause some financial institutions to fail.”

As I wrote for American Spectator: “There’s no time to waste. Uncle Sam is headed toward bankruptcy. Without serious budget reform, we all will be paying the high price of fiscal failure.”

The Current Wisdom is a series of occasional articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature or of a more technical nature. These items may not have received the media attention that they deserved or have been misinterpreted in the popular press.

We hardly need a high-tech fly-swatter (although they are fun and effective) to kill this nuisance—it’s so languorous that one can simply put a thumb over it and squish.

Jeb Bush’s candidacy? No, rather the purported connection between human-caused global warming and the highly publicized spread of the Zika virus.

According to a recent headline in The Guardian (big surprise), “Climate change may have helped spread Zika virus, according to WHO scientists.”

Here are a few salient passages from The Guardian article:

“Zika is the kind of thing we’ve been ranting about for 20 years,” said Daniel Brooks, a biologist at University of Nebraska-Lincoln. “We should’ve anticipated it. Whenever the planet has faced a major climate change event, man-made or not, species have moved around and their pathogens have come into contact with species with no resistance.”

And,

“We know that warmer and wetter conditions facilitate the transmission of mosquito-borne diseases so it’s plausible that climate conditions have added to the spread of Zika,” said Dr. Diarmid Campbell-Lendrum, a lead scientist on climate change at WHO.

Is it really “plausible”?

Hardly.

The Zika virus is transmitted by two species of mosquitoes, Aedes aegypti and Aedes albopictus, that are now widespread in tropical and sub-tropical regions of the globe (including the Southeastern U.S.), although they haven’t always been.

These mosquito species, respectively, have their origins in the jungles of Africa and Asia—where they largely remained for countless thousands of years. It’s hypothesized that Aedes aegypti found its way out of Africa during the slave trade, which brought the mosquitoes into the New World, and from there they spread throughout the tropics and subtropics of North and South America. Aedes albopictus, also known as the Asian tiger mosquito, is a more recent global traveler, spreading out from the forests of Southeast Asia to the rest of the world during the 1980s thanks to the interconnectedness of the modern transportation network. Figure 1 below shows the modeled geographic distribution of each of these species.

Figure 1. (Top) Global map of the modelled distribution of Aedes aegypti. (Bottom) Global map of the modelled distribution of Aedes albopictus. The maps depict the probability of occurrence (from 0, blue, to 1, red) at a spatial resolution of 5 km × 5 km. (Figures from Kraemer et al., 2015.)

The distribution explosion from confined forests to the global tropics and subtropics had nothing whatsoever to do with climate change; rather, it was due to the introduction of the mosquito species into favorable existing climates.

Since the Aedes mosquitoes are now widely present in many areas also highly populated by the human species, the possibility exists for outbreaks and the rapid spread of any of the diseases that the mosquitoes may carry, including dengue fever, yellow fever, chikungunya, West Nile and Zika.

But what about climate change? It’s got to make things better for warmth-loving mosquitos and the diseases they spread, right? After all, we read that in the New York Times!

Hardly. Climate change acts at the periphery of the large extant climate-limited distributions. And its impact on those margins is anything but straightforward.

Many scientific studies have used various types of climate and ecological niche modelling to try to project how Aedes mosquito populations and geographic range may change under global warming, and they all project a mixed bag of outcomes. The models project range/population expansions in some areas and range/population contractions in others—with the specific details varying among the different studies. The one take-home message from all of these studies is that the interaction among mosquitos, climate, and disease is, in a word, complex.
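
For readers curious what such “niche modelling” involves, here is a minimal toy sketch of the climate-envelope idea in Python. Every threshold in it is invented for illustration; the published studies (including the CLIMEX-based work cited below) rely on far richer climate, ecological, and observational inputs.

    # Toy climate-envelope "niche model," for illustration only.
    # All thresholds are hypothetical, not drawn from the cited studies.

    def suitability(temp_c, rainfall_mm):
        """Crude 0-to-1 habitat suitability score for a warmth-loving mosquito."""
        # Assume a thermal optimum of 27 C inside a tolerated 15-35 C window.
        if not 15 <= temp_c <= 35:
            temp_score = 0.0
        else:
            temp_score = 1.0 - abs(temp_c - 27) / 12.0
        # Wet conditions aid breeding; the effect saturates above 1,500 mm.
        rain_score = min(rainfall_mm / 1500.0, 1.0)
        return temp_score * rain_score

    # A uniform 2 C warming helps cool marginal areas but hurts areas already
    # near the upper thermal limit -- the "mixed bag" described above.
    for temp_c in (16, 27, 34):
        print(temp_c, round(suitability(temp_c, 1200), 2),
              "->", round(suitability(temp_c + 2, 1200), 2))

Run on those three hypothetical locations, the score rises at the cool margin, falls near the optimum, and collapses at the hot edge: offsetting expansions and contractions of exactly the kind the published models report.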

For example, here are some typical results (from Khormi and Kumar, 2014) showing the current distribution of Aedes aegypti mosquitos along with the projected distribution in the latter part of this century under a pretty much business-as-usual climate change scenario.

Figure 2. (Top) Modeled suitability for Aedes aegypti based on the current climate conditions. (Bottom) Estimated suitability for Aedes aegypti in 2070 based on the SRES A1B (business-as-usual) emission scenario. White = areas unfavorable for the Aedes mosquito; blue = marginally suitable areas; blue/yellow = favorable areas; yellow/red = very favorable areas. (Adapted from Khormi and Kumar, 2014.)

Good luck seeing much of a difference. Which is our point: the bulk of the large distribution of Aedes aegypti remains the same now as 50 years from now, with small changes (of varying sign) taking place at the edges. Similar results have been reported for Aedes albopictus by Yiannis Proestos and colleagues. The authors of the Aedes aegypti study write:

Surprisingly, the overall result of the modelling reported here indicates an overall contraction in the climatically suitable area for Aedes in the future… The situation is not straightforward but rather complicated as some areas will see an upsurge in transmission, while others are likely to see a decline. [emphasis added]

The human response to the presence of the mosquitos and the disease is equally complicated, if not more so.

As Dr. Paul Reiter pointed out in his 2001 classic “Climate Change and Mosquito-Borne Disease”:

[T]he histories of three such diseases—malaria, yellow fever, and dengue—reveal that climate has rarely been the principal determinant of their prevalence or range; human activities and their impact on local ecology have generally been much more significant.

More recently, Christofer Aström and colleagues found this to be the case in their modelling efforts for dengue fever (a similar result would be expected for Zika, carried by the same mosquitos). They report:

Empirically, the geographic distribution of dengue is strongly dependent on both climatic and socioeconomic variables. Under a scenario of constant [socioeconomic development], global climate change results in a modest but important increase in the global population at risk of dengue. Under scenarios of high [socioeconomic development], this adverse effect of climate change is counteracted by the beneficial effect of socioeconomic development.

The bottom line is that even 50-100 years from now, under scenarios of continued global warming, the changes to the Aedes populations are extremely difficult to detect and make up an exceedingly small portion of the extant distributions and range. The impact of those changes is further complicated by the interactions of the mosquitos, humans, and disease—interactions which can be mitigated with a modicum of prevention.

Linking climate change to the current mosquito range and outbreak of Zika (or any other of your favorite Aedes-borne diseases)?  No can do.

The threat of the spread of vector-borne diseases, like most every other horror purported to be brought about by climate change, is greatly overblown. The scientific literature shows the impact, if any, to be ill-defined and minor. As such, it is many times more effectively addressed by directed, local measures than by attempting to alter the future course of the earth’s climate through restrictions on greenhouse gas emissions (produced from the burning of fossil fuels to generate power)—the success and outcome of which are themselves uncertain.


References:

Aström, C., et al., 2012. Potential Distribution of Dengue Fever Under Scenarios of Climate Change and Economic Development. EcoHealth, 9, 448-454.

Khormi, H. M., and L. Kumar, 2014. Climate change and the potential global distribution of Aedes aegypti: spatial modelling using geographical information system and CLIMEX. Geospatial Health, 8, 405-415.

Kraemer, M. U. G., et al., 2015. The global distribution of the arbovirus vectors Aedes aegypti and Ae. albopictus. eLife, 4, doi: 10.7554/eLife.08374.

Proestos, Y., et al., 2015. Present and future projections of habitat suitability of the Asian tiger mosquito, a vector of viral pathogens, from global climate simulation. Philosophical Transactions of the Royal Society B, 370, 20130554, http://dx.doi.org/10.1098/rstb.2013.0554.

Reiter, P., 2001. Climate change and mosquito-borne disease. Environmental Health Perspectives, 109, 141-161.

Last year, the comedy duo Key & Peele’s TeachingCenter sketch imagined what it would be like if teachers were treated like pro-athletes, earning millions, being drafted in widely televised events, and starring in car commercials. We’re not likely to see the latter two anytime soon, but some teachers are already earning seven figures.


The Key & Peele sketch inspired think pieces arguing that K-12 teachers should be paid more, but without making any fundamental changes to the existing system. Matt Barnum at The Seventy-Four brilliantly satirized this view in calling for pro-athletes to be treated more like teachers: stop judging teams based on wins or players based on points scored, eliminate performance pay in favor of seniority pay, and get rid of profits.

Barnum’s serious point, of course, is that these factors all contribute to athletes’ high salaries. There are at least two other major factors: the relative scarcity of highly talented athletes and their huge audience. The world’s best curlers don’t make seven figures because no one cares about curling (apologies to any Canadian readers), and while high-quality football referees are crucial to a sport with a huge audience, they’re a lot more replaceable than a good quarterback.

But what if we combined these ingredients? What if there were a for-profit system in which high-quality teachers had access to a huge audience and were paid based on their performance?

Actually, such a system already exists:

Kim Ki-hoon earns $4 million a year in South Korea, where he is known as a rock-star teacher—a combination of words not typically heard in the rest of the world. Mr. Kim has been teaching for over 20 years, all of them in the country’s private, after-school tutoring academies, known as hagwons. Unlike most teachers across the globe, he is paid according to the demand for his skills—and he is in high demand.

He may be a “rock star,” but how does Mr. Kim have an audience large enough that he earns more than the average Major League Baseball player? Answer: The Internet.

Mr. Kim works about 60 hours a week teaching English, although he spends only three of those hours giving lectures. His classes are recorded on video, and the Internet has turned them into commodities, available for purchase online at the rate of $4 an hour. He spends most of his week responding to students’ online requests for help, developing lesson plans and writing accompanying textbooks and workbooks (some 200 to date).

In the United States, several companies are taking a similar approach to higher education. Last week, Time Magazine profiled Udemy, one of the largest providers of digital higher education:

On Feb. 12, Udemy will announce that more than 10 million students have taken one of its courses. In the U.S., there were about 13 million students working toward a four-year degree during fall 2015 semester, according to the Department of Education. It is another example of the rising popularity of online education as college costs have boomed in the United States. Americans hold $1.2 trillion in student loan debt, second only to mortgages in terms of consumer obligations. Entering the workforce deep in the red could be a handicap that follows graduates the rest of their careers, economists say.

Digital instruction is still in the early stages of development, and research on its impact so far has been mixed. It’s not for everyone. However, it holds the promise of providing students much greater access to top instructors at a lower cost. At the same time, as Joanne Jacobs highlighted, it also gives great instructors access to a much larger audience, and that can translate into significant earnings. As Time reports:

Udemy courses can be rewarding for the platform’s instructors, too. Rob Percival, a former high school teacher in the United Kingdom, has made $6.8 million from a Udemy web development course that took him three months to build. “It got to the stage several months ago where I hit a million hours of viewing that particular month,” he says. “It’s a very different experience than the classroom. The amount of good you can do on this scale is staggering. It’s a fantastic feeling knowing that it’s out there, and while I sleep people can still learn from me.”

Digital instruction is not a panacea for all our education policy challenges (nothing is), and it’s unlikely that it will replace in-person learning, especially for younger students. But it is a good example of how harnessing the market can improve the lot of both students and teachers.

Yet again North Korea has angered “the world.” Pyongyang violated another United Nations ban, launching a satellite into orbit. Washington is leading the campaign to sanction the North.

Announced UN Ambassador Samantha Power: “The accelerated development of North Korea’s nuclear and ballistic missile program poses a serious threat to international peace and security—to the peace and security not just of North Korea’s neighbors, but the peace and security of the entire world.”

The Democratic People’s Republic of Korea is a bad actor. No one should welcome further enhancements to the DPRK’s weapons arsenal.

Yet inflating the North Korean threat also doesn’t serve America’s interests. The U.S. has the most powerful military on earth, including 7,100 nuclear warheads and almost 800 ICBMs/SLBMs/nuclear-capable bombers. Absent evidence of a suicidal impulse in Pyongyang, there’s little reason for Washington to fear a North Korean attack.

Moreover, the North is surrounded by nations with nuclear weapons (China, Russia) and missiles (those two plus Japan and South Korea). As a “shrimp among whales,” any Korean government could understandably desire to possess the ultimate weapon.

Under such circumstances, allied complaints about the North Korean test sound an awful lot like whining. For two decades U.S. presidents have said that Pyongyang cannot be allowed to develop nuclear weapons. It has done so. Assertions that the DPRK cannot be allowed to deploy ICBMs sound no more credible.

After all, the UN Security Council still is working on new sanctions after the nuclear test last month. China continues to oppose meaningful penalties. Despite U.S. criticism, the People’s Republic of China has reason to fear disintegration of the North Korean regime: loss of political influence and economic investments, possible mass refugee flows, violent factional combat, loose nukes, and the creation of a reunified Korea hosting American troops on China’s border.

Moreover, Beijing blames the U.S. for creating the hostile security environment which encourages the North to develop WMDs. Why should Beijing sacrifice its interests to solve a problem of its chief global adversary’s making?

Pyongyang appears to have taken the measure of its large neighbor. The Kim regime announced its satellite launch on the same day that it reported the visit of a Chinese envoy, suggesting another insulting rebuff for Beijing.

Even if China does more, the North might not yield.

Thus, the U.S. and its allies have no better alternatives in dealing with Pyongyang today than they did last month after the nuclear test. War would be foolhardy, sanctions are a dead-end, and China remains unpersuaded.

As I point out in National Interest: “The only alternative that remains is some form of engagement with the DPRK. Cho Han-bum of the Korea Institute for National Unification argued that the North was using the satellite launch to force talks with America. However, Washington showed no interest in negotiation, so the DPRK launched.”

Of course, no one should bet on negotiating away North Korea’s weapons. If nothing else, Pyongyang watched American and European governments oust Libya’s Moammar Khadafy after he, at least in its view, foolishly traded away his nuclear weapons and missiles.

Nevertheless, there are things which the North wants, such as direct talks with America, a peace treaty, and economic assistance. Moreover, the DPRK, rather like Burma’s reforming military regime, appears to desire to reduce its reliance on Beijing. This creates an opportunity for the U.S. and its allies.

Perhaps negotiation would temper the North’s worst excesses. Perhaps engagement would encourage domestic reforms. Perhaps a U.S. initiative would spur greater Chinese pressure on Pyongyang.

Perhaps not. But current policy has failed.

Yet again the North has misbehaved. Yet again the allies are talking tough. Samantha Power insisted that “we cannot and will not allow” the North to develop “nuclear-tipped intercontinental ballistic missiles.”

However, yet again Washington is only doing what it has done before. Unfortunately, the same policy will yield the same result as before. It is time to try something different.


President Obama has issued his final federal budget, which includes his proposed spending for 2017. With this data, we can compare spending growth over eight years under Obama to spending growth under past presidents.

Figures 1 and 2 show annual average real (inflation-adjusted) spending growth during presidential terms back to Eisenhower. The data comes from Table 6.1 here, but I made two adjustments, as discussed below.

Figure 1 shows total federal outlays. Ike is negative because defense spending fell at the end of the Korean War. LBJ is the big-spending champ. He increased spending enormously on both guns and butter, as did fellow Texan George W. Bush. Bush II was the biggest spender since LBJ. As for Obama, he comes out as the most frugal president since Ike, based on this metric.

Figure 2 shows total outlays other than defense. Recent presidents have presided over lower spending growth than past presidents. Nixon still stands as the biggest spender since FDR, and the mid-20th century was a horror show of big spenders in general. The Bush II and Obama years have been awful for limited government, but the LBJ-Nixon tag team was a nightmare—not just for rapid spending during their tenures, but also for the creation of many spending and regulatory programs that still haunt us today.

I made two adjustments to the official budget data, both for 2009. First, the official data includes an outlay of $151 billion for TARP in 2009 (page 4 here). But TARP ended up costing taxpayers virtually nothing, and official budget data reverses out the spending in later years. So I’ve subtracted $151 billion from the official 2009 amount. Second, 2009 is the last budget year for Bush II, but 2009 was extraordinary because Obama signed into law a giant spending (stimulus) bill, which included large outlays immediately in 2009. It is not fair to blame Bush II for that (misguided) spending, so I’ve subtracted $114 billion in stimulus spending for that year, per official estimates.
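
For anyone who wants to replicate the exercise, the calculation is straightforward. Here is a minimal sketch in Python; the two 2009 adjustments are the ones described above, but the outlay and deflator values are placeholders invented for illustration, the real series being the one in Table 6.1.

    # Minimal sketch of the per-term real spending growth calculation.
    # All dollar and deflator values are hypothetical placeholders.

    outlays = {2001: 1860.0, 2009: 3520.0, 2017: 4100.0}  # $ billions, nominal
    deflator = {2001: 1.00, 2009: 1.19, 2017: 1.40}       # price index

    # The two 2009 adjustments described above: back out TARP ($151B),
    # later reversed in official data, and stimulus outlays ($114B).
    outlays[2009] -= 151.0 + 114.0

    def real_annual_growth(start_year, end_year):
        """Average annual real growth rate of outlays over a term, in percent."""
        start = outlays[start_year] / deflator[start_year]
        end = outlays[end_year] / deflator[end_year]
        years = end_year - start_year
        return ((end / start) ** (1.0 / years) - 1.0) * 100.0

    print(round(real_annual_growth(2001, 2009), 2))  # Bush II: budget years 2002-2009
    print(round(real_annual_growth(2009, 2017), 2))  # Obama: budget years 2010-2017

Something like this function, fed the full series, presumably lies behind the bars in Figures 1 and 2.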

Readers will note that Congress is supposed to have the “power of the purse,” not presidents. But I think that the veto gives presidents and congresses roughly equal budget power today. So Figures 1 and 2 can be interpreted as spending growth during each president’s tenure, reflecting the fiscal disposition of both the administration and Congress at the time. Spending growth during the Clinton and Obama years, for example, was moderated by Republican congresses that leaned against the larger domestic spending favored by those two presidents.

One more caveat is that presidents have limited year-to-year control over spending that is on auto-pilot. Some presidents luck out with slower growth on such spending, as Obama has with relatively slow growth on Medicare in recent years.

Finally, it is good news that recent spending growth is down from past decades, but economic growth is slower these days, so the government’s share of the economy still expanded during the Bush II and Obama years. Besides, the FDR-to-Carter years bestowed on us a massive federal government that is actively damaging American society, so we should be working to shrink it, not just grow it more slowly.

A recent Cato policy forum on European over-regulation took an unexpected turn when my friend and colleague Richard Rahn suggested that falling prices and the increasing availability of writing paper may have been responsible for increasing the number and length of our laws and regulations. (Dodd-Frank is longer than the King James Bible, to give just one obvious example.)

Goodness knows what will happen when new legislation stops being printed on writing paper and starts appearing only on the internet. (I never read Apple’s “terms and conditions,” do you?)

Anyhow, Richard’s hypothesis will soon be put to the test in Great Britain, where the lawmakers have just decided to stop writing the Acts of Parliament on calfskin – a tradition dating back to the Magna Carta – and use paper instead. Will the length of British laws increase, as Rahn’s hypothesis predicts? We shall see.

In the meantime, Americans remain stuck with a Niagara Falls of laws and regulations that our lawmakers generate every year. Many distinguished scholars have wondered how to slow our Capitol Hill busybodies down a little. The great Jim Buchanan had some good ideas, but the most effective, if not the harshest, means of preventing over-regulation was surely developed by the Locrians in the 7th century BC.

As Edward Gibbon narrates in The History of the Decline and Fall of the Roman Empire, “A Locrian who proposed any new law, stood forth in the assembly of the people with a cord round his neck, and if the law was rejected, the innovator was instantly strangled.” (Vol. IV, chapter XLIV; pp. 783–4 in volume 2 of the Penguin edition.)

Ours is, of course, an enlightened Republic, not an iron-age Greek settlement on the southern tip of Italy. The life and limb of our elected officials must, therefore, remain safe. But what if, instead of physical destruction, a failed legislative proposal resulted in, so to speak, “political death”? What if the Congressman or Senator whose name appeared on a bill that failed to pass through Congress were prevented from running for reelection after their term in office came to an end?

Just a happy thought before the weekend.
