Policy Institutes

Today, legislative efforts began in eleven cities aimed at requiring police departments to be more transparent about the surveillance technology they use. The bills will also reportedly propose increased community control over the use of surveillance tools. These efforts, spearheaded by the ACLU and other civil liberties organizations, are important at a time when surveillance technology is improving and is sometimes used without the knowledge or approval of local officials or the public.

Many readers will be familiar with CCTV cameras and wiretap technology, which police use to investigate crimes and gather evidence. Yet there is a wide range of surveillance tools that are less well-known and will become more intrusive as technology advances.

Facial recognition software is already used by some police departments. As this technology improves it will be easier for police to identify citizens, especially if it is used in conjunction with body cameras. But our faces are not our only biometric identifiers. Technology in the near future will make it easier to identify us by analyzing our gait, voice, irises, and ears.

This is concerning given that (thanks to the state of legal doctrine) police are not carrying out a Fourth Amendment search when they analyze your features with biometric tools. As Duke law professor Nita Farahany explained to the Senate Judiciary Subcommittee on Privacy, Technology and the Law in 2012:

If the police use facial recognition technology to scan an individual’s face while in a public place, and that individual is not detained or touched as he is scanned, then no Fourth Amendment search has occurred. Neither his person nor his effects have been disturbed, and he lacks any legal source to support a reasonable expectation that his facial features will be hidden from government observation. He has chosen to present his face to the world, and he must expect that the world, including the police, may be watching.

Even features below your skin may soon be used by police to identify you with ease. British and American intelligence officials reportedly used vein recognition analysis while seeking to identify “Jihadi John,” a British terrorist responsible for a string of beheadings in Syria.

Potential terrorism and national security concerns are often cited to justify the secrecy surrounding the “Stingray.” Stingrays, which work by mimicking cell towers, collect identifying data from cellphones within range. This data allows investigators to track and identify targets. Although dozens of law enforcement agencies have used Stingrays, their use is shrouded in secrecy thanks in part to FBI non-disclosure agreements. The Federal Communications Commission (which oversees radio-emitting tools) has granted the FBI authority over the regulation of Stingrays. The FBI’s non-disclosure agreements are so restrictive that they can require prosecutors to drop charges rather than publicly reveal that a Stingray has been used.

As my colleague Adam Bates has explained, Stingrays are almost certainly overwhelmingly used for routine investigations that have nothing to do with terrorism:

Despite repeated references to “terrorists” and “national security” as a means for maintaining secrecy about Stingray use, the data that has been released detailing the purposes of actual Stingray investigations - such as this breakdown from the Tallahassee Police Department that contains not a single terrorism reference - suggests that Stingrays are used virtually entirely for routine law enforcement investigations.

Cell tower simulators can be mounted on airplanes, but advances in drone technology mean that flying surveillance tools above cities won’t always require manned aircraft. The prospect of cell tower simulators mounted on huge solar-powered drones capable of staying aloft for months is worrying enough, but as technology improves drones will be getting smaller as well as bigger. Drones the size of small birds have already been developed and used for military operations, and we should expect similar drones to be routinely used by domestic law enforcement in the not-too-distant future.

As I pointed out yesterday, persistent surveillance technology, which provides users with highly detailed views of areas the size of an entire town, can be mounted on drones and has already been used by the military. Less invasive but nonetheless worrying persistent aerial surveillance technology has been used in Baltimore. Many Baltimore officials were not told about the surveillance, which began in January and is funded by a billionaire couple.

This level of secrecy surrounding the use and funding of surveillance is one of the issues that activists are hoping to address in their campaign. According to Community Control Over Police Surveillance’s guiding principles, “Surveillance technologies should not be funded, acquired or used without the knowledge of the public and the approval of their elected representatives on the city council.”

Even staying indoors will not necessarily keep you safe from secretive police snooping. Last year it was revealed that the New York Police Department had been using x-ray vans capable of seeing through the walls of buildings and vehicles. City officials refused to answer basic questions about the vans related to funding and judicial authorization.

Law enforcement agencies across the country will continue to take advantage of surveillance technology as it improves. Lawmakers can ensure that police departments are transparent about the kind of surveillance tools they’re using and how these tools are paid for. This is especially important given the rate of technological change and the history of police secrecy.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our “Current Wisdom.”

In our continuing examination of U.S. flood events, largely prompted by the big flood in Louisiana last month and the inevitable (and unjustified) global-warming-did-it stories that followed, we highlight a just-published paper by a research team led by Dr. Stacey Archfield of the U.S. Geological Survey examining trends in flood characteristics across the U.S. over the past 70 years.

Previous studies we’ve highlighted have shown that a) there is no general increase in the magnitude of heavy rainfall events across the U.S., and thus, b) unsurprisingly, “no evidence was found for changes in extreme precipitation attributable to climate change in the available observed record.”  But since heavy rainfall is not always synonymous with floods, the new Archfield paper provides further perspective.

The authors investigated changes in flood frequency, duration, magnitude and volume at 345 stream gauges spread across the country. They also looked to see if there were any regional consistencies in the changes and whether or not any of the observed changes could be linked to large-scale climate indices, like El Niño.
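For readers curious how this kind of gauge-by-gauge trend testing is commonly done, here is a minimal sketch of a Mann-Kendall test, a standard nonparametric choice in hydrology. The annual peak-flow series below is synthetic and purely illustrative, and this is not necessarily the exact procedure the authors used.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Mann-Kendall S statistic, Z score, and two-sided p-value.

    Nonparametric test for a monotonic trend; ties are ignored for simplicity.
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs across all years.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Illustrative only: a synthetic 70-year annual peak-flow record (cubic feet per second).
rng = np.random.default_rng(0)
peaks = 10_000 + rng.normal(0, 2_000, size=70)

s, z, p = mann_kendall(peaks)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")  # p > 0.05 means no significant trend
```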

What they found could best be described largely as a “negative” result—basically, few departures from the a priori expectation (often called the null hypothesis) that there are no coherent changes in flood characteristics occurring across the U.S.  Here’s their summary of their research findings:

Trends in the peak magnitude, frequency, duration and volume of frequent floods (floods occurring at an average of two events per year relative to a base period) across the United States show large changes; however, few trends are found to be statistically significant. The multidimensional behavior of flood change across the United States can be described by four distinct groups, with streamgages either experiencing: 1) minimal change, 2) increasing frequency, 3) decreasing frequency, or 4) increases in all flood properties. Yet, group membership shows only weak geographic cohesion. Lack of geographic cohesion is further demonstrated by weak correlations between the temporal patterns of flood change and large-scale climate indices. These findings reveal a complex, fragmented pattern of flood change that, therefore, clouds the ability to make meaningful generalizations about flood change across the United States.

The authors added:

Observed changes in short term precipitation intensity from previous research and the anticipated changes in flood frequency and magnitude expected due to enhanced greenhouse forcing are not generally evident at this time over large portions of the United States for several different measures of flood flows.

“Negative” results of this kind are a refreshing change from the “positive” results—results which find something “interesting” to the researchers, the journal publishers, or the funding agencies—that have come to dominate the scientific literature, not just on climate change, but in general. The danger in “positive” results is that they can ingrain falsehoods both in the knowledge base of science itself, and also in the minds of the general public.  We’ve discussed how the appetite for producing “interesting” results—which in the case of climate change means results that indicate the human impact on weather events/climate is large, unequivocal, and negative—leads to climate alarm becoming “a self promulgating collective belief.”

What is needed to break this positive feedback loop, aka availability cascade, are researchers like Archfield et al. who aren’t afraid to follow through and write up experiments that don’t find something “interesting,” together with a media that’s not afraid to report them. In today’s world dominated by climate hype, it is the non-interesting results that are, in fact, the most interesting. But, bear in mind that the interesting aspect of them stems not so much from the results themselves being rare, but rather from the rarity of the reporting of and reporting on such results.

Reference:

Archfield, S. A., et al., 2016. Fragmented patterns of flood change across the United States. Geophysical Research Letters, doi: 10.1002/2016GL070590.

Yesterday, police in Oklahoma released aerial and dash camera footage of an unarmed man named Terence Crutcher being shot by an officer as he stood beside his SUV with his hands in the air. Tulsa Police Chief Chuck Jordan described the footage as “very difficult to watch,” and the officer who shot Crutcher is on administrative leave. The aerial footage of the shooting ought to remind us how important transparency policy is in the age of the “Pre-Search” and the role persistent aerial surveillance may come to play in police misconduct investigations.

Reporting from earlier this year revealed that police in Baltimore have been testing persistent aerial surveillance technology, described by its developer as “Google Earth with TiVo,” which allows users to keep around 30 square miles under surveillance. The technology, developed by Persistent Surveillance Systems (PSS), has helped Baltimore police investigate thefts and shootings. But the trial was conducted in secret, without the knowledge of key Baltimore city officials, and was financed by a billionaire couple.

Shortly after news of the persistent surveillance in Baltimore was reported, I and others noted that it should cause concern. Citizens engaged in lawful behavior deserve to know if their movements are being filmed and tracked by police for hours at a time. Yet, as disturbing as the secretive persistent surveillance in Baltimore is, technology already exists that is far more intrusive.

PSS’ surveillance allows analysts to track individuals, but not to see any of their identifying features. On the massive image put together by PSS, one individual takes up a single pixel. ARGUS-IS, a persistent surveillance system designed for the military, can be mounted on a drone or aircraft at 20,000 feet and allows users to see 6-inch details in a 1.8-billion-pixel image covering a 10-square-mile area. When ARGUS-IS technology is incorporated into Gorgon Stare, another surveillance system, the coverage area expands to about 39 square miles. A video highlighting ARGUS-IS’ capabilities is below:

It’s currently not feasible for most American law enforcement agencies to use Gorgon Stare and other military surveillance equipment given the cost. However, domestic law enforcement has not been shy about using military equipment, and technological advances suggest that we should be asking “when” questions, not “if” questions, when discussing police using persistent aerial surveillance tools that capture highly detailed images.

Baltimore police don’t seem concerned about the privacy worries associated with persistent surveillance. From Vice:

For its part, the police department denies that officers have done anything wrong, or that the planes even amount to a form of surveillance. TJ Smith, media relations chief for the Baltimore Police Department, told VICE the aerial program “doesn’t infringe on privacy rights” because it captures images available in public spaces.

Baltimore police aren’t alone. Dayton, Ohio Police Chief Richard Biehl said while discussing PSS surveillance, “I want them to be worried that we’re watching, […] I want them to be worried that they never know when we’re overhead.”

I’ve written before about how aerial surveillance Supreme Court cases from the 1980s grant police a great deal of freedom when it comes to snooping on citizens from the sky. Lawmakers can provide increased privacy and restrict the kind of persistent aerial surveillance that has been taking place in Baltimore. However, if lawmakers want to allow for constant “eyes in the sky” they should at least take steps to ensure that persistent surveillance equipment is used to increase police accountability and transparency, not just to aid criminal investigations. As body cameras have shown, police tools can help law enforcement and criminal justice reform advocates, but only if the right policies are in place.

News stories are now reporting that the Minnesota stabber Dahir Adan entered the United States as a Somali refugee when he was 2 years old.  Ahmad Khan Rahami, the suspected bomber in New York and New Jersey, entered as an Afghan asylum-seeker with his parents when he was 7 years old.  The asylum and refugee systems are the bedrocks of the humanitarian immigration system and they are under intense scrutiny already because of fears over Syrian refugees.    

The vetting procedure for refugees, especially Syrians, is necessarily intense because they are overseas while they are being processed.  The security protocols have been updated and expanded for them.  This security screening should be intense.  The process for vetting asylum-seekers, who show up at American ports of entry and ask for asylum based on numerous criteria, is different.  Regardless, no vetting system will prevent child asylum-seekers or child refugees from growing up to become terrorists, any more than a screening program for U.S.-born children could identify those among us who will grow up to become terrorists.

Adan and Rahami didn’t manage to murder anyone due to their incompetence, poor planning, potential mental health issues, luck, armed Americans, and the quick responses of law enforcement.  Regardless, some may want to stop all refugees and asylum seekers unless they are 100 percent guaranteed not to be terrorists or to ever become terrorists.  Others are more explicit in their calls for a moratorium on all immigration due to terrorism.  These folks should know that the precautionary principle is an inappropriate standard for virtually every area of public policy, even refugee screening.

Even so, these systems are surprisingly safe.  According to a new Cato paper, from 1975 to the end of 2015, America allowed in just over 700,000 asylum-seekers and 3.25 million refugees.  Four of those asylum-seekers became terrorists and killed four people in attacks on U.S. soil.  Twenty of the 3.25 million refugees became terrorists and they killed three Americans on U.S. soil.  Neither figure includes refugees or asylum-seekers who travelled overseas to become terrorists abroad as I was solely focused on terrorists targeting the homeland.

The chance of being murdered in a terrorist attack committed by an asylum-seeker was one in 2.7 billion a year.  The chance of being murdered in a terrorist attack committed by a refugee was one in 3.4 billion a year.  These recent attacks in New York, New Jersey, and Minnesota will make the refugee and asylum programs look more dangerous by increasing the number of terrorists who entered under them.  Fortunately, the attackers didn’t kill anybody, so the chance of dying in a terrorist attack committed by immigrants who entered in these categories won’t increase – although that is small comfort to the victims who were wounded.
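As a rough check on where such one-in-billions figures come from, here is a back-of-the-envelope sketch. The death counts and the 41-year window are from the Cato paper; the average U.S. population over the period is my own round assumption, so the outputs only approximate the published numbers.

```python
# Back-of-the-envelope annual risk of being killed in an attack by a terrorist
# who entered in a given category (sketch only, not the paper's exact method).

YEARS = 41                 # 1975 through 2015
AVG_US_POPULATION = 265e6  # rough assumed average population over the period

def one_in_n_per_year(deaths, years=YEARS, population=AVG_US_POPULATION):
    """Annual per-person risk of death, expressed as 'one in N'."""
    deaths_per_year = deaths / years
    return population / deaths_per_year

print(f"Asylum-seekers: 1 in {one_in_n_per_year(4):,.0f}")  # close to the cited 2.7 billion
print(f"Refugees:       1 in {one_in_n_per_year(3):,.0f}")  # the paper reports 3.4 billion;
                                                            # the gap reflects the rough
                                                            # population assumption
```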

The terrorism risk posed by refugees and asylum-seekers could skyrocket in the future and justify significant changes to these humanitarian immigration programs, including more intense screening or other actions.  The recent attacks in Minnesota, New York, and New Jersey, however, do not justify such drastic changes.

It won’t surprise anyone who follows data security to know that this past summer saw a hack of databases containing Louisiana driver information. A hacker going by the ironic handle “NSA” offered the data for sale on a “dark web” marketplace.

Over 290,000 residents of Louisiana were apparently affected by the data breach. The information stolen was typical of that which is held by motor vehicle bureaus: first name, middle name, last name, date of birth, driver’s license number, state in which the driver’s license was issued, address, phone number, email address, a record of driving offenses and infractions, and any and all fines paid to settle tickets and other fines.

This leak highlights the risks of state participation in the REAL ID Act. One of the problems with linking together the databases of every state to create a national ID is that the system will only be as secure as the state with the weakest security.

REAL ID mandates that states require drivers to present multiple documents for proof of identity, proof of legal presence in the United States, and proof of their Social Security number. The information from these documents and digital copies of the documents themselves are to be stored in state-run databases just like the one that was hacked in Louisiana.

For the tiniest increment in national security—inconveniencing any foreign terrorist who might use a driver’s license in the U.S.—REAL ID increases the risk of wholesale data breaches and wide-scale identity fraud. It’s not a good trade-off.

A National Bureau of Economic Research working paper by David Autor, David Dorn and Gordon Hanson, titled “The China Shock: Learning from Labor Market Adjustment to Large Changes in Trade,” has created Piketty-like buzz in U.S. trade policy circles this year.  Among the paper’s findings is that the growth of imports from China between 1999 and 2011 caused a U.S. employment decline of 2.4 million workers, and that wages and employment prospects for those who lost jobs remained depressed for many years after the initial effect. 

While commentators on the left have trumpeted these findings as some long-awaited refutation of Adam Smith and David Ricardo, the authors have distanced themselves from those conclusions, portraying their analysis as an indictment of a previously prevailing economic consensus that the costs of labor market adjustment to increased trade would be relatively subdued (although I’m skeptical that such a consensus ever existed). But in a year when trade has been scapegoated for nearly everything perceived to be wrong in society, the release of this paper no doubt reinforced fears – and fueled demagogic rants – about trade and globalization being scourges to contain, and even eradicate.

Last week, Alan Reynolds explained why we should take Autor et al.’s job-loss figures with a pinch of salt, but there is an even more fundamental point to make here. That is: Trade has one role to perform – to grow the economic pie. Trade fulfills that role by allowing us to specialize. By expanding the size of markets to enable more refined specialization and economies of scale, trade enables us to produce and, thus, consume more.  Nothing more is required of trade. Nothing!

Still, politicians, media, and other commentators blame trade for an allegedly unfair distribution of that pie and for the persistence of frictions in domestic labor markets. But reducing those frictions and managing distribution of the larger economic pie are not matters for trade policy.  They are matters for domestic policy. Trade does its job. Policymakers must do their jobs, too.

Trade is disruptive, no doubt. When consumers and businesses enjoy the freedom to purchase goods and industrial inputs from a greater number of suppliers, those suppliers are kept on their toes. They must be responsive to their customers’ needs and, if they fail, they inevitably contract or perish. Yes, trade renders domestic production of certain products (and the jobs that go with those activities) relatively inefficient and, ultimately, unviable. Unfortunately, people are just as quick to observe this trade-induced destruction as they are to overlook the creation of new domestic industries, firms, and products that emerge elsewhere in the economy as a result of this process. In other words, the losses attributable to trade’s destruction are seen, the gains from trade’s creation are invisible, and popular discord is the inevitable outcome.

The adoption of new technology disrupts the status quo, as well – and to a much greater extent than trade does. Technological progress accounts for far more job displacement.  Yet we don’t hear calls for taxing or otherwise impeding innovation. You know all those apps on your mobile phone – the flashlight, map, camera, clock, and just about every other icon on your screen? They’ve made hundreds of thousands of manufacturing jobs redundant. But as part of the same process, we got Uber, AirBnb, Amazon, the Apps-development industry itself, and all the value-added and jobs that come with those disruptive technologies.

Trade and technology (as well as changing consumer tastes, demand and supply shocks, etc.) are catalysts of both destruction and creation. In 2014, the U.S. economy shed 55.1 million jobs. That’s a lot of destruction. But in the same year, the economy added 57.9 million jobs – a net increase of 2.8 million jobs.

Overcoming scarcity is a fundamental objective of economics. Making more with less (fewer inputs) is something we celebrate – we call it increasing productivity. It is the wellspring of greater wealth and higher living standards. Imagine a widget factory where 10 workers make $1000 worth of widgets in a day. Then management purchases a new productivity enhancing machine that enables 5 workers to produce $1000 worth of widgets in a day. Output per worker has just doubled. But in order for the economy to benefit from that labor productivity increase, the skills of the 5 workers no longer needed on the widget production line need to be redeployed elsewhere in the economy.  New technology, like trade, frees up resources to be put to productive use in other firms, industries, or sectors.
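The widget arithmetic is simple enough to write down explicitly; a tiny sketch using only the illustrative numbers from the paragraph above:

```python
# Labor productivity in the widget example: output value per worker per day.

daily_output = 1_000          # dollars of widgets produced per day

before = daily_output / 10    # 10 workers: $100 of widgets per worker
after = daily_output / 5      # 5 workers:  $200 of widgets per worker

print(f"Before the machine: ${before:.0f} per worker per day")
print(f"After the machine:  ${after:.0f} per worker per day")
print(f"Change: {after / before:.1f}x")  # 2.0x, i.e., output per worker has doubled
```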

Whether and how we (as a society; as an economy) are mindful of this process of labor market adjustment are questions relevant to our own well-being and are important matters of public policy. What policies might reduce labor market frictions? What options are available to expedite adjustment for those who lose their jobs?  Have policymakers done enough to remove administrative and legal impediments to labor mobility? Have policymakers done enough to make their jurisdictions attractive places for investment? Are state-level policymakers aware that our federalist system of government provides an abundance of opportunity to identify and replicate best practices?

U.S. labor market frictions are to some extent a consequence of a mismatch between the supply and demand for certain labor skills.  Apprenticeship programs and other private-sector initiatives to hire and train people for the next generation of jobs can help here. But the brunt of the blame for sluggish labor market adjustment can be found in the collective residue of bad policies being piled atop bad policies. Reforming a corporate tax system that currently discourages repatriation of an estimated $2 trillion of profits parked in U.S. corporate coffers abroad would induce domestic investment and job creation. Curbing excessive and superfluous regulations that raise the costs of establishing and operating businesses without any marginal improvements in social, safety, environmental, or health outcomes would help. Permanently eliminating import duties on intermediate goods to reduce production costs and make U.S.-based businesses more globally competitive would attract investment and spur production and job creation. Eliminating occupational licensing practices would bring competition and innovation to inefficient industries. Adopting best practices by replicating the policies of states that have been more successful at attracting investment and creating jobs (and avoiding the policies of states that lag in these metrics) could also contribute to reducing labor market adjustment costs.

But we should keep in mind that there are no circumstances under which curtailing the growth of the pie — curbing trade — can be considered a legitimate aim of public policy. The problem to solve is not trade. The problem is domestic policy that impedes adjustment to the positive changes trade delivers.

CNN contributor Van Jones has an op-ed in the San Francisco Chronicle in which he worries that the Trans Pacific Partnership (TPP) will undermine a California program to promote solar energy:

Because TPP would threaten a successful California rebate program for green technologies that are made in-state, the deal could result in the elimination of good-paying green jobs in fields like solar and wind manufacturing and energy efficiency. Green jobs employ all kinds of people — truck drivers, welders, secretaries, scientists — all across the state. These jobs can pull people out of poverty while protecting the planet.

I have some good news for him: That California rebate program probably already violates WTO rules, and, in fact, is one of eight U.S. state renewable energy programs that were just challenged by India in a formal WTO complaint. As a result, the TPP is not particularly important on this issue.  The WTO already has it covered.

I also have some even better news for him: These kinds of programs do not create jobs, and are bad for the environment as well, so we should be happy to see them go (either eliminating them on our own, or in response to findings of violation under international trade agreements).

What exactly is wrong with these programs?  Think about the impact of having eight U.S. states (that’s how many were mentioned in India’s complaint), and countless localities around the world, encouraging local production of solar panels or other forms of energy. The end result of such policies is clear: Lots of small, inefficient producers, leading to more expensive energy. That doesn’t sound very green.

As for the jobs argument, advocates of policies that discriminate in favor of local companies should also factor into their calculations the jobs lost when the rest of the world adopts similar policies. These programs are not secret, and once someone starts doing it, the practice proliferates. Even if the California policies led to the purchase of locally made goods, when other governments do the same thing it reduces the sales of California made goods to other jurisdictions.  The end result, therefore, is not additional jobs in California, but rather products that are more expensive and less efficiently made.

As widely reported, the soft employment data for August and declines in August retail sales and industrial production (manufacturing IP also down) have reduced market odds on a Fed rate hike at its meeting 20-21 September. According to the CME FedWatch Tool, based on trading in federal funds, the probability of a rate hike tomorrow is only 0.12. The same CME tool gives a probability of 0.46 that the Fed will stand pat through December. Now what? I wish I knew. Here is how I think about the question.

First, it now appears that the Fed will go into its December meeting, as it did last year, with forward guidance on the table for a federal funds rate increase. The FOMC might, of course, alter its 2016 forward guidance at its September meeting. If the Committee reduces its guidance to indicate a fed funds range 25 basis points higher than now, but below prior guidance, will that create a strengthened implied “promise” to act in December? That would double down on its current problem with forward guidance. Will the FOMC hike even if employment data through November remain soft? Or, suppose employment growth resumes; will the market take seriously that the FOMC would consider a 50 bps hike in December as implied by current forward guidance?

Second, what are Janet Yellen’s incentives? A year from now, looking back, is the Fed likely to be in a better position and her reputation enhanced if the Fed has raised the federal funds target rate in 2016 and it turns out to be premature, or if the Fed has held steady when it would have been better to have tightened in 2016? Given the data in hand as I write, it seems to me that waiting makes more sense. Yes, unemployment is below 5 percent and recent employment growth solid, but softening. However, there is little sign of rising inflation. On conventional measures, there is still slack in the labor market; for example, the labor-force participation rate is still well below prior levels. And, don’t forget that in 1999 unemployment fell to almost 4 percent.

Third, if the Fed gets behind by not moving in 2016, how hard will it be to catch up? How much difference can it make if the Fed moves in early 2017 rather than in 2016? Only an old-fashioned fine-tuner can believe it makes much difference.

We can replay this same argument at every future FOMC meeting. What must happen to create a compelling case for the Fed to move? My interpretation of the rate increase last December is that it had less to do with compelling new information than with the fact that the Fed had long promised to move in 2015. That says much more about the wisdom of forward guidance than about sensible monetary policy.

Here is a suggestion for the FOMC, which seems so obvious that I assume the Committee must already be considering it. The FOMC should recast its forward guidance away from the calendar. At its September meeting, the guidance should apply to the end of the third quarter of 2017, 2018, and 2019 rather than to the end of those calendar years. At each meeting, the guidance would then apply to 4 quarters ahead, 8 quarters ahead, and 12 quarters ahead. With this approach, the Committee would never again face an apparent calendar deadline to act.
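To make the suggestion concrete, here is a minimal sketch (my own illustration, nothing the FOMC publishes) of how fixed calendar-year guidance dates would become rolling 4-, 8-, and 12-quarter horizons:

```python
# Sketch: translate a meeting date into rolling guidance horizons of 4, 8,
# and 12 quarters ahead, rather than fixed calendar year-ends. Illustrative only.

def guidance_horizons(year: int, quarter: int, offsets=(4, 8, 12)):
    """Return (year, quarter) pairs for each rolling guidance horizon."""
    horizons = []
    for quarters_ahead in offsets:
        total = (quarter - 1) + quarters_ahead
        horizons.append((year + total // 4, total % 4 + 1))
    return horizons

# A September meeting falls in the third quarter.
print(guidance_horizons(2016, 3))  # [(2017, 3), (2018, 3), (2019, 3)]
```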

Seems obvious to me, and very simple. Yes, it perhaps guts forward guidance, and that would be a good thing. The mantra should be “data dependence, not date dependence.”

The Spin Cycle is a recurring feature based upon just how much the latest weather or climate story, policy pronouncement, or simply poobah blather spins the truth. Statements are given a rating between 1 and 5 spin cycles, with fewer cycles meaning less spin. For a more in-depth description, visit the inaugural edition.

In mid-August a slow-moving, unnamed tropical system dumped copious amounts of precipitation in the Baton Rouge region of Louisiana. Some locations reportedly received over 30 inches of rain during the event. Louisiana’s governor John Bel Edwards called the resultant floods “historic” and “unprecedented.”

Some elements in the media were quick to link the event to human-caused climate change (just as they seemingly are with every extreme weather event). The New York Times, for example, ran a piece titled “Flooding in the South Looks a Lot Like Climate Change.”

We were equally quick to point out that there was no need to invoke global warming in that the central Gulf Coast is prime country for big rain events and that similar, and even larger, rainfall totals have been racked up there during times when there were far fewer greenhouse gases in the atmosphere—like in 1979 when 45 inches of precipitation fell over Alvin, TX from the slow passage of tropical storm Claudette, or in 1940 when 37.5 in. fell on Miller Island, LA from another stalled unnamed tropical system.

But we suspected that this wouldn’t be the end of it, and we were right.

All the while, an “international partnership” funded in part by the U.S. government (through grants to climate change cheerleader Climate Central), called World Weather Attribution (an “international effort designed to sharpen and accelerate the scientific community’s ability to analyze and communicate the possible influence of climate change on extreme-weather events such as storms, floods, heat waves and droughts”), was fervently working to formally (i.e., through a scientific journal publication) “attribute” the Louisiana rains to climate change.

The results of their efforts were made public a couple of weeks ago in parallel with the submission (we’ll note: not acceptance) of their article to the journal Hydrology and Earth System Science Discussions.

Their “attribution” can, well, be attributed to two factors. First, their finding that there has been a large increase in the observed probability of extreme rainfall along the central Gulf Coast—an increase that they claim can be directly related to the rise in the global (!) average temperature. And second, their finding that basically the single (!) climate model they examined also projects an increase in the probability of heavy rainfall in the region as a result of human-induced climate changes. Add the two together, throw in a splashy press release from a well-funded climate change propaganda machine, and headlines like the AP’s “Global warming increased odds for Louisiana downpour” are the result.

As you have probably guessed since you are reading this under our “Spin Cycle” tag, a closer look finds some major shortcomings to this conclusion.

For example, big rains are part of the region’s history—and most (but not all) result from meandering tropical weather systems whose progress has been slowed by mid-latitude circulation features. In most cases, the intensity of the tropical system itself (as measured by central pressure or maximum wind speed) is not all that great; rather, the abundant feed of moisture from the Gulf of Mexico and the slow progress of the storm combine to produce some eye-popping, or rather boot-soaking, precipitation totals.  Here is a table of the top 10 rainfall event totals from the passage of tropical systems through the contiguous U.S. since 1921 (note that all are in the Gulf Coast region). Bear in mind that the further you go back in time, the sparser the observed record becomes (which means an increased chance that the highest rainfall amounts are missed). The August 2016 Louisiana event cracks the top 10 as number 10. A truly impressive event—but hardly atypical during the past 100 years.

[Table: top 10 rainfall event totals from tropical systems crossing the contiguous U.S. since 1921]

As the table shows, big events occurred throughout the record. But due to the rare nature of the events as well as the spotty (and changing) observational coverage, doing a formal statistical analysis of frequency changes over time is very challenging. One way to approach it is to use only the stations with the longest period of record—this suffers from missing the biggest totals from the biggest events, but at least it provides some consistency in observational coverage.  Using the same set of long-term stations analyzed by the World Weather Attribution group, we plotted the annual maximum precipitation in the station group as a function of time (rather than global average temperature). Figure 1 is our result. We’ll point out that there is not a statistically significant change over time—in other words, the intensity of the most extreme precipitation event each year has not systematically changed in a robust way since 1930. It’s a hard sell to link this non-change to human-caused global warming.

Figure 1. Annual maximum 3-day rainfall total for stations with at least 80 years of record in the region 29-31N, 95-85W.
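For the statistically inclined, the check behind the “not statistically significant” statement can be sketched in a few lines. The series below is synthetic stand-in data with no trend imposed, not the Figure 1 station records themselves:

```python
import numpy as np
from scipy.stats import linregress

# Synthetic stand-in for an annual-maximum 3-day rainfall series (inches),
# 1930-2016; illustrative only, not the Figure 1 station data.
rng = np.random.default_rng(1)
years = np.arange(1930, 2017)
annual_max = 8 + rng.gumbel(0, 2, size=years.size)  # heavy-tailed, like rainfall extremes

fit = linregress(years, annual_max)
print(f"slope = {fit.slope:.3f} inches/year, p-value = {fit.pvalue:.2f}")
# A p-value well above 0.05 indicates no statistically significant trend.
```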

Admittedly, there is a positive correlation in these data with the global average surface temperature, but correlation does not imply causation. There is a world of distance between local weather phenomena and global average temperature. In the central Gulf Coast, influential denizens of the climate space, as we’ve discussed, are tropical cyclones—events whose details (frequency, intensity, speed, track, etc.) are highly variable from year to year (decade to decade, century to century) for reasons related to many facets of natural variability. How the complex interplay of these natural influencers may change in a climate warmed by human greenhouse gas emissions is far from certain and can barely even be speculated upon. For example, the El Niño/La Niña cycle in the central Pacific has been shown to influence Gulf Coast tropical cyclone events, yet the future characteristics of this important factor vary considerably from climate model to climate model and confidence in climate model expectations of future impacts is low according to the U.N. Intergovernmental Panel on Climate Change (IPCC).

Which means that using a single climate model family in an “attribution” study of extreme Gulf Coast rainfall events is a recipe for distortion—at best, a too-limited analysis; at worst, a misrepresentation of the bigger picture.

So, instead of the widely advertised combination in which climate models and observations are in strong agreement as to the role of global warming, what we really have is a situation in which the observational analysis and the model analysis are both extremely limited and possibly (probably) unrepresentative of the actual state of affairs.

Therefore, for their overly optimistic view of the validity, applicability, and robustness of their findings that global warming has increased the frequency of extreme precipitation events in central Louisiana, we rate Climate Central’s World Weather Attribution’s degree of spin as “Slightly Soiled” and award them two Spin Cycles.


Slightly Soiled.  Over-the-top rhetoric. An example is the common meme that some obnoxious weather element is new, thanks to anthropogenic global warming, when it’s in fact as old as the earth. An example would be the president’s science advisor John Holdren’s claim that the “polar vortex,” a circumpolar westerly wind that separates polar cold from tropical warmth, is a man-made phenomenon. It waves and wiggles all over the place, sometimes over your head, thanks to the fact that the atmosphere behaves like a fluid, complete with waves, eddies, and stalls. It’s been around since the earth first acquired an atmosphere and rotation, somewhere around the beginning of the Book of Genesis. Two spin cycles.

Washington Post columnist and former Bush 43 speechwriter Michael Gerson has not always been charitable toward libertarians. He has been pretty good on Donald Trump and ObamaCare, though, and today he ties the two together:

Only 18 percent of Americans believe the Affordable Care Act has helped their families…A higher proportion of Americans believe the federal government was behind the 9/11 attacks than believe it has helped them through Obamacare…

Trump calls attention to these failures, while offering (as usual) an apparently random collection of half-baked policies and baseless pledges (“everybody’s got to be covered”) as an alternative. There is no reason to trust Trump on the health issue; but there is plenty of reason to distrust Democratic leadership. No issue — none — has gone further to convey the impression of public incompetence that feeds Trumpism.

Read the whole thing.

In a new report, scholars from the Urban Institute claim ObamaCare premiums “are 10 percent below average employer premiums nationally.” There is variation among states. The authors report ObamaCare premiums are actually higher in 12 states, by as much as 68 percent. 

At Forbes.com, I explain the Urban scholars are not making the “apples to apples” comparison they claim to be:

The Urban Institute study instead engages in what my Cato Institute colleague Arnold Kling calls a game of “hide the premium.” As ACA architect Jonathan Gruber explained, “This bill was written in a tortured way” to create a “lack of transparency” because “if…you made explicit that healthy people pay in and sick people get money, it would not have passed.” When it did pass, it was due to what Gruber called the “huge political advantage” that comes from hiding how much voters are paying, as well as “the stupidity of the American voter.”

That lack of transparency has allowed supporters to claim the ACA is providing coverage to millions who are so sick that insurance companies previously wouldn’t cover them, while simultaneously claiming Exchange coverage is no more expensive than individual-market coverage prior to the ACA or than employer-sponsored coverage. When we incorporate the full premium for Exchange plans, the smoke clears and we see Exchange coverage is indeed more expensive than employer-sponsored coverage. There ain’t no such thing as a free lunch.
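To see how the accounting shifts once the hidden pieces are counted, here is a sketch with entirely hypothetical numbers; the split between the enrollee’s share and the payments flowing directly to insurers is invented for illustration and is not taken from the Urban Institute report:

```python
# Hypothetical 'hide the premium' illustration: the premium quoted to the
# enrollee understates what the plan actually costs once payments made
# directly to the insurer are added back in. All numbers are invented.

exchange_plan = {
    "premium_paid_by_enrollee": 3_600,    # annual premium the enrollee sees
    "premium_tax_credit": 2_400,          # paid to the insurer on the enrollee's behalf
    "other_subsidies_to_insurer": 1_000,  # other payments that offset the plan's costs
}
employer_plan_premium = 6_200             # hypothetical employer-sponsored premium

full_exchange_premium = sum(exchange_plan.values())
print(f"Quoted Exchange premium:    ${exchange_plan['premium_paid_by_enrollee']:,}")
print(f"Full Exchange premium:      ${full_exchange_premium:,}")   # $7,000
print(f"Employer-sponsored premium: ${employer_plan_premium:,}")
# The quoted figure looks cheaper; the full figure is the apples-to-apples one.
```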

If you think this is fun, just imagine the shell games we could play with a public option.

Read the whole thing.

New data on worker pay in the government and private sector has been released by the Bureau of Economic Analysis. There is good news: the pace of federal pay increases slowed during 2015, while the pace of private-sector pay increases picked up. 

After increasing 3.9 percent in 2014, average compensation for federal government civilian workers increased 1.9 percent in 2015. Meanwhile, average private-sector compensation increased 1.4 percent in 2014, but then sped up to a 3.8 percent increase in 2015.

Private workers thus closed the pay gap a little with federal workers in 2015, but there is a lot of catching up to do. In 2015 federal workers had average compensation (wages plus benefits) of $123,160, which was 76 percent higher than the private-sector average of $69,901. This essay examines the data in more detail.

Average federal compensation grew much faster than average private-sector compensation during the 1990s and 2000s. But the figure shows that federal increases have roughly equaled private-sector increases since 2010. President Obama modestly reined in federal pay increases after the spendthrift Bush years. Will the next president continue the restraint?

[Figure: average compensation growth, federal vs. private-sector workers]

For background on federal pay issues and the BEA data, see here.


Donald Trump Jr. tweeted out this meme yesterday:

Social media immediately took up arms to attack him.  I think the Skittles meme is actually a valuable and useful way to understand the foreign-born terrorist threat, but the size of the bowl is way too small.  This is the proper Skittles analogy:

Imagine a bowl full of 3.25 million Skittles that has been accumulated from 1975 to the end of 2015.  We know that 20 of the Skittles in that bowl intended to do harm, but only three of those 20 were actually deadly.  That means that one in 1.08 million of them is deadly.  Do you eat from the bowl without quaking in your boots?  I would.

Perhaps future Skittles added to the bowl will be deadlier than previous Skittles, but the difference would have to be great before the risks become worrisome, as I write about here.  The Trump Jr. terrorism-Skittles meme is useful for understanding terrorism risk – it just requires a picture of a bowl large enough to fit about 7,200 pounds of Skittles.
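The arithmetic behind the resized bowl is quick to check. The counts come from the Cato paper discussed above; the weight of a single Skittle (roughly a gram) is my own assumption:

```python
# Resizing the Skittles bowl: counts from the paper, per-Skittle weight assumed.

SKITTLES = 3_250_000     # refugees admitted, 1975 through 2015
DEADLY = 3               # refugee terrorists who killed someone on U.S. soil
GRAMS_PER_SKITTLE = 1.0  # rough assumption
GRAMS_PER_POUND = 453.6

print(f"One deadly Skittle per {SKITTLES / DEADLY:,.0f}")  # ~1 in 1.08 million
bowl_lbs = SKITTLES * GRAMS_PER_SKITTLE / GRAMS_PER_POUND
print(f"Bowl weight: roughly {bowl_lbs:,.0f} pounds")      # ~7,200 pounds
```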

It’s high time that I got ‘round to the subject of “monetary control,” meaning the various procedures and devices the Fed and other central banks employ in their attempts to regulate the overall availability of liquid assets, and through it the general course of spending, prices, and employment, in the economies they oversee.

In addressing this important subject, I’m especially anxious to disabuse my readers of the popular, but mistaken, belief — and it is popular, not only among non-experts, but also among economists — that monetary control is mainly, if not entirely, a matter of central banks’ “setting” one or more interest rates.  As I hope to show, although there is a grain of truth to this perspective, a grain is all  the truth there is to it. The deeper truth is that “monetary control” is fundamentally about controlling the quantity of (hang on to your hats) … money! In particular, it is about altering the supply of (and, in recent years, the demand for) “base” money, meaning (once again) the sum of outstanding Federal Reserve notes and depository institutions’ deposit balances at the Fed.

Although radical changes to the Fed’s monetary control procedures since the recent crisis don’t alter this fundamental truth about monetary control, they do make it impractical to address the Fed’s control procedures both before and since the crisis within the space of a single blog entry.  Instead, I plan to limit myself here to describing monetary control as the Fed exercised it in the years leading to the crisis.  I’ll then devote a separate post to describing how the Fed’s methods of monetary control have changed since then, and why the changes matter.

The Mechanics of “Old-Fashioned” Monetary Control

In those good-old pre-crisis times, the Fed’s chief monetary control challenge was one of adjusting the available quantity of base money, and of banks’ deposit balances at the Fed especially, sufficiently to sustain or sponsor general levels of lending and spending consistent with its ultimate employment and inflation objectives. If, for example, the FOMC determined that the Fed had to encourage lending and spending beyond already projected levels if it was to avoid a decline in inflation, a rise in unemployment, or a combination of both, it would proceed to increase depository institutions’ reserve balances, with the intent of encouraging those institutions to put their new reserves to work by lending (or otherwise investing) them.  Although the lending of unwanted reserves doesn’t reduce the total amount of reserves available to the banking system, it does lead to a buildup of bank deposits as those unwanted reserves get passed around from bank to bank, hot-potato fashion.  As deposits expand, so do banks’ reserve needs, owing partly (in the U.S.) to the presence of minimum legal reserve requirements. Excess reserves therefore decline. Once there are no longer any excess reserves, or rather once there is no excess beyond what banks choose to retain for their own prudential reasons, lending and deposit creation stop.

Such is the usual working-out of the much-disparaged, but nevertheless real, reserve-deposit multiplier. As I explained in the last post in this series, although the multiplier isn’t constant — and although it can under certain circumstances decline dramatically, even assuming values less than one — these possibilities don’t suffice to deprive the general notion of its usefulness, no matter how often some authorities claim otherwise.
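For readers who want the multiplier arithmetic spelled out, here is a minimal sketch of the textbook hot-potato process, with purely illustrative numbers:

```python
# Textbook reserve-deposit multiplier sketch (illustrative numbers only).
# Newly injected reserves are lent, redeposited, and re-lent until required
# reserves plus any desired excess reserves absorb the whole injection.

def deposit_expansion(new_reserves, required_ratio, desired_excess_ratio=0.0, rounds=1_000):
    """Iterate the lending/redeposit process and return total new deposits."""
    total_deposits = 0.0
    lendable = new_reserves
    for _ in range(rounds):
        total_deposits += lendable
        # Each round, banks hold back required plus desired excess reserves.
        lendable *= 1 - (required_ratio + desired_excess_ratio)
    return total_deposits

injection = 1_000_000  # $1 million of new reserves
print(deposit_expansion(injection, required_ratio=0.10))
# ~$10 million of deposits: the simple multiplier, 1 / 0.10.
print(deposit_expansion(injection, required_ratio=0.10, desired_excess_ratio=0.10))
# ~$5 million: a larger appetite for excess reserves shrinks the multiplier.
```

The second call illustrates the point just made: when banks choose to hold more excess reserves, the multiplier shrinks, without ceasing to exist.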

Regardless of other possibilities, for present purposes we can take the existence of a multiplier, in the strict sense of the term, and of some official estimate of the value of this multiplier, for granted. The question then is, how, given such an estimate, does (or did) the Fed determine just how much base money it needed to create, or perhaps destroy, to keep overall credit conditions roughly consistent with its ultimate macroeconomic goals? It’s here that the “setting” of interest rates, or rather, of one particular interest rate, comes into play.

The particular rate in question is the “federal funds rate,” so called because it is the rate depository institutions charge one another for overnight loans of “federal funds,” which is just another name for deposit balances kept at the Fed. Why overnight loans? In the course of a business day, a bank’s customers make and receive all sorts of payments, mainly to and from customers of other banks. These days, most of these payments are handled electronically, and so consist of electronic debits from and credits to banks’ reserve accounts at the Fed. Although the Fed allows banks to overdraw their accounts during the day, it requires them to end each day with balances sufficient to meet their reserve requirements, or else pay a penalty. So banks that end up short of reserves borrow “fed funds” overnight, at the “federal funds rate,” from others that have more than they require.

Notice that the “federal funds rate” I just described is a private-market rate, the level of which is determined by the forces of reserve (or “federal funds”) supply and demand. What the Fed “sets” is not the actual federal funds rate, but the “target” federal funds rate. That is, it determines and then announces a desired  federal funds rate, to which it aspires to make the actual fed funds rate conform using its various monetary control devices.

Importantly, the target federal funds rate (or target ffr, for short) is only a means toward an end, and not a monetary policy end in itself. The Fed sets a target ffr, and then tries to hit that target, not because it regards some particular fed funds rate as “better” in itself than other rates, but because it believes that, by supplying banks with reserve balances consistent with that rate, it will also provide them with a quantity of reserves consistent with achieving its ultimate macroeconomic objectives. The fed funds rate is, in monetary economists’ jargon, merely a policy “instrument,” and not a policy “objective.” Were a chosen rate target to prove incompatible with achieving the Fed’s declared inflation or employment objectives, the target, rather than those objectives, would have to be abandoned in favor of a more suitable one. (I am, of course, describing the theory, and not necessarily Fed practice.)

But why target the federal funds rate? The basic idea here is that changes in depository institutions’ demand for reserve balances are a rough indicator of changes in the overall demand for money balances and, perhaps, for liquid assets generally. So, other things equal, an increase in the ffr not itself inspired by any change in Federal Reserve policy can be taken to reflect an increased demand for liquidity which, unless the Fed does something, will lead to some decline in spending, inflation, and (eventually) employment. Rather than wait to see whether these things transpire, an ffr-targeting Fed would respond by increasing the available quantity of federal funds just enough to offset the increased demand for them. If the fed funds rate is indeed a good instrument, and the Fed has chosen the appropriate target rate, then the Fed’s actions will allow it to do a better job achieving its ultimate goals than it could if it merely kept an eye on those goal variables themselves.
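A stylized way to see the instrument-versus-objective point: given a (hypothetical) downward-sloping demand curve for reserve balances, the Fed chooses the quantity of reserves at which that demand clears at the target rate, and it must supply more when liquidity demand shifts out. None of the numbers below come from the Fed; they are for illustration only.

```python
# Stylized sketch: the Fed 'sets' the funds rate only by choosing the quantity
# of reserves at which banks' demand for reserves clears at the target rate.
# The linear demand curve and all numbers are hypothetical.

def reserve_demand(rate, shift=0.0):
    """Banks' demand for reserve balances ($ billions) at a given funds rate (%)."""
    return 40 - 4 * rate + shift  # downward-sloping; 'shift' is a liquidity-demand shock

def reserves_to_supply(target_rate, shift=0.0):
    """Reserve supply that makes the market clear at the target rate."""
    return reserve_demand(target_rate, shift)

print(reserves_to_supply(2.0))             # 32.0 ($bn) with no shock
print(reserves_to_supply(2.0, shift=3.0))  # 35.0: an increase in liquidity demand must be
                                           # met with open-market purchases to stay on target
```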

Open Market Operations

To increase or reduce the available supply of federal funds, the Fed increases or reduces its open-market purchases of Treasury securities. (Bear in mind, again, that I’m here describing standard Fed operating procedures before the recent crisis.) Such “open-market operations” are conducted by the New York Fed, and directed by the manager of that bank’s System Open Market Account (SOMA). The SOMA manager arranges frequent (usually daily) auctions by which it either adds to or reduces its purchases of Treasury securities, depending on the FOMC’s chosen federal funds rate target and its understanding of how the demand for reserve balances is likely to evolve in the near future. The other auction participants consist of a score or so of designated large banks and non-bank broker-dealers known as “primary dealers.” When the Fed purchases securities, it pays for them by crediting dealers’ Fed deposit balances (if the dealers are themselves banks) or by crediting the balances of the dealers’ banks, thereby increasing the aggregate supply of federal funds by the amount of the purchase. When it sells securities, it debits dealer and dealer-affiliated bank reserve balances by the amounts dealers have offered to pay for them, reducing total system reserves by that amount.

Because keeping the actual fed funds rate near its target often means adjusting the supply of federal funds to meet temporary rather than persistent changes in the demand for such, the Fed undertakes both “permanent” and “temporary” open-market operations, where the former involve “outright” security purchases or (more rarely) sales, and the latter involve purchases or sales accompanied by  “repurchase agreements” or “repos.” (For convenience’s sake, the term “repo” is in practice used to describe a complete sale and repurchase transaction.) For example, the Fed may purchase securities from dealers on the condition that they agree to repurchase those securities a day later, thereby increasing the supply of reserves for a single night only. (The opposite operation, where the Fed sells securities with the understanding that it will buy them back the next day, is called an “overnight reverse repo.”) Practically speaking, repos are collateralized loans, except in name, where the securities being purchased are the loan collateral, and the difference between their purchase and their repurchase price constitutes the interest on the loan which, expressed in annual percentage terms, is known as the “repo rate.”

The obvious advantage of repos, and of shorter-term repos especially, is that, because they are self-reversing, a central bank that relies extensively on them can for the most part avoid resorting to open-market sales when it wishes to reduce the supply of federal funds. Instead, it merely has to refrain from “rolling over” or otherwise replacing some of its maturing repos.

Sustainable and Unsustainable Interest Rate Targets

Having described the basic mechanics of open-market operations, and how such operations can allow the Fed to keep the federal funds rate on target, I don’t wish to give the impression that achieving that goal is easy. On the contrary: it’s often difficult, and sometimes downright impossible!

For one thing, just what scale, schedule, or types of open-market operations will serve best to help the Fed achieve its target is never certain. Instead, considerable guesswork is involved, including (as I’ve mentioned) guesswork concerning impending changes in depository institutions’ demand for liquidity. Ordinarily such changes are small and fairly predictable; but occasionally, and especially when a crisis strikes, they are both unexpected and large.

But that’s the least of it. The main challenge the Fed faces consists of picking the right funds rate target in the first place. For if it picks the wrong one, its attempts to make the actual funds rate stay on target are bound to fail. To put the matter more precisely, given some ultimate inflation target, there is at any time a unique federal funds rate consistent with that target. Call that the “equilibrium” federal funds rate.** If the Fed sets its ffr target below the equilibrium rate, it will find itself engaging in endless open-market purchases so long as it insists on cleaving to that target.

That’s the case because pushing the ffr below its equilibrium value means, not just accommodating changes in banks’ reserve needs, thereby preserving desirable levels of lending and spending, but supplying them with reserves beyond those needs. Consequently the banks, in ridding themselves of the unwanted reserves, will cause purchasing of all sorts, or “aggregate demand,” to increase beyond levels consistent with the Fed’s objectives. Since the demand for loans of all kinds, including that for overnight loans of federal funds, itself tends to increase along with the demand for other things, the funds rate will once again tend to rise above target, instigating another round of open-market purchases, and so on. Eventually one of two things must happen: either the Fed, realizing that sticking to its chosen rate target will cause it to overshoot its long-run inflation goal, will revise that target upwards, or the Fed, in insisting on trying to do the impossible, will end up causing run-away inflation.  Persistent attempts to push the fed funds rate below its equilibrium value will, in other words, backfire: eventually interest rates, instead of falling, will end up increasing in response to an increasing inflation rate.  A Fed attempting to target an above-equilibrium ffr will find itself facing the opposite predicament: unless it adjusts its target downwards, a deflationary crisis, involving falling rather than rising interest rates, will unfold.

The public would be wise to keep these possibilities in mind whenever it’s tempted to complain that the Fed is “setting” interest rates “too high” or “too low.”  The public may be right, in the sense that the FOMC needs to adjust the fed funds target if it’s to keep inflation under control.  On the other hand, if what the public is really asking is that the Fed try to force rates above or below their equilibrium values, it should consider itself lucky if the FOMC refuses to listen.

A Steamship Analogy

I hope I’ve already said enough to drive home the fact that there was, prior to the crisis at least, only a grain of truth to the common belief that the Fed “sets” interest rates. It is, of course, true that the Fed can influence the prevailing level of interest rates through its ability to alter both the actual and the expected rate of inflation. But apart from that, and allowing that the Fed has long-run inflation goals that it’s determined to achieve, it’s more correct to say that the Fed’s monetary policy actions are themselves “set” or dictated to it by market-determined equilibrium rates of interest, than that the Fed “sets” interest rates.

But in case the point still isn’t clear, an analogy may perhaps help. Suppose that the captain of a steamship wishes to maintain a sea speed of 19 knots. To do so, he (I said it was a steamship, didn’t I?) sends instructions, using the ship’s engine-order telegraph, to the engineer, signaling “full ahead,” “half ahead,” “slow ahead,” “dead slow ahead,” “stand by,” “dead slow astern,” “slow astern,” and so on, depending on how fast, and in what direction, he wants the propellers to turn. The engine room in turn acknowledges the order, and then conveys it to the boiler room, where the firemen are responsible for getting steam up, or letting it down, by stoking more or less coal into the boilers. But engine speed is only part of the equation: there’s also the current to consider, variations in which require compensating changes in engine speed if the desired sea speed is to be maintained.

Now for the analogy: the captain of our steamship is like the FOMC; its engineer is like the manager of the New York Fed’s System Open Market Account, and the ship’s telegraph is like…a telephone. The instructions “full ahead,” “half ahead,” “stand by,” “half astern,” and so forth, are the counterparts of such FOMC rate-target adjustments as “lower by 50 basis points,” “lower by 25 basis points,” “stand pat,” “raise by 25 basis points,” etc. Coal being stoked into the ship’s boilers is like fed funds being auctioned off. Changes in the current are the counterpart of changes in the demand for federal funds that occur independently of changes in Fed policy. Finally, the engine speed consistent at any moment with the desired ship’s speed of 19 knots is analogous to the “equilibrium” federal funds rate. Just as a responsible ship’s captain must ultimately let the sea itself determine the commands he sends below, so too must a responsible FOMC allow market forces to dictate how it “sets” its fed funds target.

***

Such was monetary control before the crisis. Since then, much has changed, at least superficially; so we must revisit the subject with those changes in mind. But first, I must explain how these changes came about, which means saying something about how the S.S. Fed managed to steam its way straight into a disaster.

________________________

**I resist calling it the “natural” rate, because that term is mostly associated with the work of the great Swedish economist Knut Wicksell, who meant by it the rate of interest consistent with a constant price level or zero inflation, rather than with any constant but positive inflation rate. The difference between my “equilibrium” rate and Wicksell’s “natural” rate is simply policymakers’ preferred long-run inflation rate target. Note also that I say nothing here about the Fed’s employment goals.  That’s because, to the extent that such goals are defined precisely rather than vaguely, they may also be incompatible, not only with the Fed’s long-run inflation objectives, but with the avoidance of accelerating inflation or deflation.
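Put loosely in symbols, the distinction drawn in this footnote amounts to the approximation

$$ i_{\text{equilibrium}} \approx r_{\text{natural}} + \pi^{*}, $$

where $r_{\text{natural}}$ is Wicksell’s natural (zero-inflation) rate and $\pi^{*}$ is the central bank’s preferred long-run inflation rate. This is a rough statement of the relationship, not a formal identity.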

[Cross-posted from Alt-M.org]

This week in New York, President Obama is hosting a summit on refugees with other governments, with the goal of doubling refugee resettlement internationally. The summit comes a week after the president announced that the United States plans to accept 110,000 refugees in fiscal year (FY) 2017, which begins in October. While this is 25,000 more than this year’s target, it still represents a much smaller share of the world’s displaced population than the country has historically taken.

If the next administration follows through on the president’s plans, 2017 would see the largest number of refugees accepted since 1994, when the U.S. refugee program welcomed nearly 112,000 refugees. Yet, due to the scale of the current refugee crisis, the number represents a much smaller portion of persons displaced from their homes by violence and persecution around the world than in 1994.

As Figure 1 shows, the U.S. program took almost half a percent of the displaced population, as estimated by the United Nations, in 1994. Next year, it will take just 0.17 percent. The average since the Refugee Act was passed in 1980 has been 0.48 percent. There was a huge drop-off after 9/11, as the Bush administration attempted to update its vetting procedures, and although the rate rebounded slightly, it has continued to decline since.

Figure 1: Percentage of the U.N. High Commissioner for Refugees Population of Concern Admitted Under the U.S. Refugee Program (FY 1985-2017)

Sources: United Nations High Commissioner for Refugees, U.S. Department of State. *The 2017 figure uses the 2016 UN estimate and assumes that the U.S. will reach its 2017 goal.
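The arithmetic behind the percentages in Figure 1 is straightforward. As a rough check (the population-of-concern figure below is an illustrative round number backed out from the post’s own 0.17 percent estimate, not an official statistic):

```python
def share_resettled(us_admissions, un_population_of_concern):
    """Share of the UN population of concern admitted under the U.S. program."""
    return us_admissions / un_population_of_concern

# 110,000 planned admissions for FY 2017; ~65 million is an illustrative
# population-of-concern figure implied by the 0.17 percent cited above.
print(f"{share_resettled(110_000, 65_000_000):.2%}")  # about 0.17%
```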

But as the Obama and Bush administrations slowly ramped up the refugee program after 2001, the increases could not keep up with the number of newly displaced persons. Today, the number of displaced persons has reached its highest absolute level since World War II. As Figure 2 shows, the U.S. refugee program has almost returned to its early 1990s peak in absolute terms, but all of the increases in the program since 2001 have been matched by even greater increases in the number of displaced persons.  

Figure 2: Admissions Under the U.S. Refugee Program and U.N. High Commissioner for Refugees Population of Concern (FY 1985-2017)

Sources: See Figure 1

The increase in the refugee target for 2017 is still an improvement, but if it is to be anything other than a completely arbitrary figure, it should be based primarily on the international need for resettlement. The Refugee Act of 1980 states that the purpose of the program is to implement “the historic policy of the United States to respond to the urgent needs of persons subject to persecution.” In other words, the target should be calculated based first on the need for resettlement.

If the U.S. program accepted refugees at the same rate as it has on average since the Refugee Act was passed in 1980, it would set a target of 300,000 refugees for 2017. If it accepted the average rate from 1990 to 2016, the target would be 200,000. This is the amount that Refugee Council USA, the advocacy coalition for the nonprofits that resettle all refugees in the United States, has urged the United States to accept.
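Running the same arithmetic in reverse shows where a figure like 300,000 comes from (again, the population number is the same illustrative assumption, not official data):

```python
def implied_target(average_share, population_of_concern):
    """Refugee admissions target implied by a historical acceptance share."""
    return average_share * population_of_concern

# 0.48 percent is the post-1980 average share cited above; ~65 million is the
# same illustrative population-of-concern figure used in the earlier sketch.
print(f"{implied_target(0.0048, 65_000_000):,.0f}")  # roughly 312,000, i.e., ~300,000
```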

The decline in the acceptance rate for the U.S. refugee program also highlights the inflexibility of a refugee target controlled entirely by the U.S. government. White House spokesman Josh Earnest said that the president would have set a higher target, but that because accepting refugees is “not cheap” and congressional appropriators were not on board with the plan, he could not.

As I’ve argued before, the United States could accept more refugees using private money and private sponsors without needing Congress’s sign-off. The United States used to have a private refugee program in the 1980s, and for decades before the Refugee Act of 1980, the United States resettled tens of thousands of refugees with private money.

Canada currently has a private sponsorship program that is carrying a disproportionate share of the weight of its large refugee resettlement program, with better results than the government-funded program. Other countries are following Canada’s example; it’s time the United States did so as well.

There should be no limits on American charity. If American citizens want to invite people fleeing violence and persecution abroad to come into their homes, the government should not stand in their way. In the face of this historic crisis, the United States should allow Americans to lead a historic response that is consistent with our past.

On September 13, the University of California at Berkeley announced that it may have to take down online lecture and course content that it has offered free to the public: 

Despite the absence of clear regulatory guidance, we have attempted to maximize the accessibility of free, online content that we have made available to the public. Nevertheless, the Department of Justice has recently asserted that the University is in violation of the Americans with Disabilities Act because, in its view, not all of the free course and lecture content UC Berkeley makes available on certain online platforms is fully accessible to individuals with hearing, visual, or manual disabilities.

That Berkeley is not just imagining these legal dangers is illustrated by this clip from Tamar Lewin of the New York Times from February of last year: “Harvard and M.I.T. Are Sued Over Lack of Closed Captions.” 

I’ve been warning about this, to no apparent avail, for a long time. In a July post in this space, I noted the tag-team alliance of the U.S. Department of Justice, disabled-rights groups, and fee-seeking private lawyers in gearing up web-accessibility doctrine: when extreme positions are too politically unpalatable for DoJ to endorse directly, it supports private rights groups in their demands, and when a demand is too unpopular or impractical even for the rights groups, there’s nothing to stop the freelance lawyers from taking it up. 

Neither political party when in office has been willing to challenge the ADA, so there is every reason to believe this will continue. If you appreciate high-level course content from leading universities, but are not a paying enrolled student, you’d better enjoy it now while you can.   

From my new policy analysis (joint with Angela Dills and Sietse Goffard) on state marijuana legalizations:

In November 2012 voters in the states of Colorado and Washington approved ballot initiatives that legalized marijuana for recreational use. Two years later, Alaska and Oregon followed suit. As many as 11 other states may consider similar measures in November 2016, through either ballot initiative or legislative action. Supporters and opponents of such initiatives make numerous claims about state-level marijuana legalization.

Advocates think legalization reduces crime, raises tax revenue, lowers criminal justice expenditures, improves public health, bolsters traffic safety, and stimulates the economy. Critics argue that legalization spurs marijuana and other drug or alcohol use, increases crime, diminishes traffic safety, harms public health, and lowers teen educational achievement. Systematic evaluation of these claims, however, has been largely absent.

This paper assesses recent marijuana legalizations and related policies in Colorado, Washington, Oregon, and Alaska.

Our conclusion is that state marijuana legalizations have had minimal effect on marijuana use and related outcomes. We cannot rule out small effects of legalization, and insufficient time has elapsed since the four initial legalizations to allow strong inference. On the basis of available data, however, we find little support for the stronger claims made by either opponents or advocates of legalization. The absence of significant adverse consequences is especially striking given the sometimes dire predictions made by legalization opponents.

 Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

For more than two weeks Hurricane Hermine (including its pre-hurricane and post-hurricane life) was prominent in the daily news cycle. It threatened, at one time or another, destruction along U.S. coastlines from the southern tip of Florida westward to New Orleans and northward to Cape Cod. Hurricane/global warming stories, relegated to the hell of the formerly attractive by the record-long absence of a major hurricane strike on U.S. shores, were being spiffed up and readied for publication just as soon as disaster would strike. But, alas, Hermine didn’t cooperate, arguably generating more bluster in the press than on the ground, although some very exposed stretches of North Florida did incur some damage.

Like Jessie, Woody and Stinky Pete in Toy Story 2, the hurricane/global warming stories have been put back in their boxes (if only they could be sent to a museum!).  

But, they didn’t have to be. There was much that could have been written speculating on the role of global warming in Hermine’s evolution; it’s just not politically correct.

With a bit of thought-provocation provided by newly-minted Cato Adjunct Scholar Dr. Ryan Maue—one of the best and brightest minds in the  world on issues of tropical cyclone/climate interactions (and other extreme weather types)—we’ll review Hermine’s life history and consider what factors “consistent with” human-caused climate change may have shaped its outcome.

We look forward to having Ryan’s more formal input into our future climate change discussions, but didn’t want to pass up an opportunity to work some of his thoughts into our Hermine recap—for who knows, we may have to wait another 10 years for a Florida hurricane landfall!

Hermine was probably the most hyped category 1 hurricane in history—thanks to a large media constituency hungry for weather disaster stories to link with climate change peril. 

Widespread access to weather forecast models and a thirst to be first with the story of impending global-warming-fueled doom led to wild speculation in the media (both new and traditional) about “worst case scenarios” of a “major hurricane” landfall nearly a full week prior to the nascent swirl in the tropical atmosphere becoming an officially designated tropical cyclone by the National Hurricane Center. As “Invest 99L” (its original designation) trudged westward through the tropical Atlantic environs during the week of August 22nd, it was struggling to maintain its life, much less fulfill its worst-case aspirations. Pretty much every day you could find reports of a potentially dire impact somewhere from the Florida East Coast all the way through waterlogged Louisiana.

To hear it told, the combination of unusually warm water, the lack of recent hurricane landfalls and rising sea levels set the stage for a disaster. 

Invest 99L ultimately did survive an unfavorable stretch of environmental conditions in the Florida Straits and eventually, in the Gulf of Mexico, grew into Hurricane Hermine, the first hurricane in over 10 years to make landfall in Florida when it came ashore near St. Marks in the early morning hours of Friday, September 2nd. It was a category 1 hurricane at landfall, causing some coastal flooding along Florida’s Gulf Coast and knocking out power to more than 100,000 homes and businesses in the Tallahassee area. In reality, no hurricane-force sustained winds were measured onshore.

As Hermine traversed the Southeastern U.S. from Florida’s Big Bend region to North Carolina’s Outer Banks, the Labor Day forecast for the Jersey Shore turned ominous: its post-tropical personality was expected to lash the Mid-Atlantic coast with near-hurricane-force winds for several days, flooding low-lying areas and severely eroding beaches with each tidal cycle. Many Labor Day holiday plans were cancelled.

As it turned out, Hermine’s post-tropical remnants travelled further offshore than originally anticipated and never quite attained their projected strength, a combination that resulted in much less onshore impact than had been advertised, and much grumbling from those who felt they had cancelled their vacation plans for nothing.

In the end, Hermine’s hype was much worse than its bite. But all the while, if folks really wanted to write stories about how storm behavior is “consistent with” factors related to global warming, they most certainly could. 

For example, regarding Hermine, Ryan sent out this string of post-event tweets:

 

As is well known, we have now gone longer than ever before without a Category 3 (major) or higher hurricane strike. The last was nearly 11 years ago, when Wilma struck south Florida on October 24, 2005. With regard to this and the reduced frequency of U.S. strikes in general, hurricane researcher Chunzai Wang of NOAA’s Atlantic Oceanographic and Meteorological Laboratory gave an informative presentation about a year ago, titled “Impacts of the Atlantic Warm Pool on Atlantic Hurricanes.” It included this bullet point:

● A large (small) [Atlantic Warm Pool—a condition enhanced by global warming] is unfavorable (favorable) for hurricanes to make landfall in the southeast United States.  This is consistent with that no hurricanes made landfall in the southeast U.S. during the past 10 years, or hurricanes moved northward such as Hurricane Sandy in 2012. [emphasis added]

In an article we wrote a few years back titled “Global Savings: Billion-Dollar Weather Events Averted by Global Warming,” we listed several other examples of elements “consistent with” climate change that may inhibit Atlantic tropical cyclone development and avert, or mitigate, disaster—these include increased vertical wind shear, changes in atmospheric steering currents, and Saharan dust injections.

But, you have probably never read any stories elsewhere that human-caused climate changes may be acting to lessen the menace of Atlantic hurricane strikes on the U.S. Instead you’ve no doubt read that “dumb luck” is the reason why hurricanes have been eluding our shores and why, according to the Washington Post, our decade-plus major hurricane drought is “terrifying” (in part, because of climate change).

Little wonder why.

You Ought to Have a Look is a regular feature from the Center for the Study of Science.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

There was an interesting stream of articles this week that, when strung together, provides a pretty good idea as to how the scientific literature on climate change can (and has) become biased in a hurry.

First up, consider this provocative article by Vladimir Jankovic and David Schultz of the University of Manchester titled “Atmosfear: Communicating the effects of climate change on extreme weather.” They formalize the idea that climate change communication has become dominated by attempts to scare folks into acceptance (and thus compliance with action). The abstract is compelling:

The potential and serious effects of anthropogenic climate change are often communicated through the soundbite that anthropogenic climate change will produce more extreme weather. This soundbite has become popular with scientists and the media to get the public and governments to act against further increases in global temperature and their associated effects through the communication of scary scenarios, what we term “atmosfear.” Underlying atmosfear’s appeal, however, are four premises. First, atmosfear reduces the complexity of climate change to an identifiable target in the form of anthropogenically forced weather extremes. Second, anthropogenically driven weather extremes mandate a responsibility to act to protect the planet and society from harmful and increased risk. Third, achieving these ethical goals is predicated on emissions policies. Fourth, the end-result of these policies—a non-anthropogenic climate—is assumed to be more benign than an anthropogenically influenced one. Atmosfear oversimplifies and misstates the true state of the science and policy concerns in three ways. First, weather extremes are only one of the predicted effects of climate change and are best addressed by measures other than emission policies. Second, a pre-industrial climate may remain a policy goal, but is unachievable in reality. Third, the damages caused by any anthropogenically driven extremes may be overshadowed by the damages caused by increased exposure and vulnerability to the future risk. In reality, recent increases in damages and losses due to extreme weather events are due to societal factors. Thus, invoking atmosfear through such approaches as attribution science is not an effective means of either stimulating or legitimizing climate policies.

With a dominant atmosphere of atmosfear running through climate science, it’s pretty easy to see how the scientific literature (which is contributed to and gatekept by the scientific establishment) rapidly becomes overrun with pro-establishment articles, a phenomenon that is the subject of a paper by a team led by Silas Nissen of the Danish Niels Bohr Institute. In their paper “Publication bias and the canonization of false facts,” they describe how the preferential publication of “positive” results (i.e., those which favor a particular outcome) leads to a biased literature and, as a result, a misled public. From their abstract:

In our model, publication bias—in which positive results are published preferentially over negative ones—influences the distribution of published results. We find that when readers do not know the degree of publication bias and thus cannot condition on it, false claims often can be canonized as facts. Unless a sufficient fraction of negative results are published, the scientific process will do a poor job at discriminating false from true claims. This problem is exacerbated when scientists engage in p-hacking, data dredging, and other behaviors that increase the rate at which false positives are published…To the degree that the model accurately represents current scholarly practice, there will be serious concern about the validity of purported facts in some areas of scientific research.

Nissen and colleagues go on to conclude:

In the model of scientific inquiry that we have developed here, publication bias creates serious problems. While true claims will seldom be rejected, publication bias has the potential to cause many false claims to be mistakenly canonized as facts. This can be avoided only if a substantial fraction of negative results are published. But at present, publication bias appears to be strong, given that only a small fraction of the published scientific literature presents negative results. Presumably many negative results are going unreported. While this problem has been noted before, we do not know of any previous formal analysis of its consequences regarding the establishment of scientific facts.
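To see how the mechanism they describe can play out, here is a small toy simulation written in the spirit of their argument. It is not the authors’ actual model, and the error rates, publication probabilities, and belief thresholds are invented for illustration:

```python
import random

# Toy sketch of publication-bias-driven "canonization": claims are true or false,
# labs run noisy experiments, positive results are published far more often than
# negative ones, and a reader who sees only the published record (and does not
# know about the bias) updates beliefs and treats a claim as established fact
# once sufficiently confident. All parameter values are invented.

random.seed(1)

ALPHA = 0.05       # false-positive rate of a single experiment
POWER = 0.80       # chance a true claim yields a positive result
PUB_POS = 1.00     # chance a positive result gets published
PUB_NEG = 0.05     # chance a negative result gets published (the bias)
PRIOR = 0.10       # reader's prior that a typical claim is true
CANON = 0.99       # belief threshold for "canonized as fact"
REJECT = 0.01      # belief threshold for abandoning a claim
N_CLAIMS = 10_000
MAX_STUDIES = 50

def canonized(is_true):
    """Does a biased published record lead the naive reader to canonize the claim?"""
    belief = PRIOR
    for _ in range(MAX_STUDIES):
        positive = random.random() < (POWER if is_true else ALPHA)
        if random.random() >= (PUB_POS if positive else PUB_NEG):
            continue  # unpublished results never reach the reader
        # Naive Bayesian update, as if published results were an unbiased sample.
        like_true = POWER if positive else 1 - POWER
        like_false = ALPHA if positive else 1 - ALPHA
        belief = belief * like_true / (belief * like_true + (1 - belief) * like_false)
        if belief > CANON:
            return True
        if belief < REJECT:
            return False
    return False

false_canonized = sum(canonized(False) for _ in range(N_CLAIMS))
print(f"False claims canonized as 'fact': {false_canonized / N_CLAIMS:.1%}")
```

Even though a false claim rarely produces a positive result, publishing nearly every positive while shelving most negatives lets a nontrivial share of false claims cross the canonization threshold, which is exactly the worry the paper formalizes.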

And once the “facts” are ingrained, they set up a positive feedback loop as they get repeatedly “reviewed” in an increasingly popular pastime (enjoyed by national and international institutions alike): producing assessment reports of the scientific literature, oftentimes as the foundation and justification for policymaking. A prime example is the demonstrably atrocious “National Assessments” of climate change in the U.S., published by (who else?) the federal government to support (what else?) its climate change policies. In his new paper, “The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses,” Stanford’s John Ioannidis describes the proliferation of systematic review papers and why this is a terrible development for science and science-based policy. In an interview with Retraction Watch, Ioannidis discusses his findings. Here’s a taste:

Retraction Watch: You say that the numbers of systematic reviews and meta-analyses have reached “epidemic proportions,” and that there is currently a “massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses.” Indeed, you note the number of each has risen more than 2500% since 1991, often with more than 20 meta-analyses on the same topic. Why the massive increase, and why is it a problem?

John Ioannidis: The increase is a consequence of the higher prestige that systematic reviews and meta-analyses have acquired over the years, since they are (justifiably) considered to represent the highest level of evidence. Many scientists now want to do them, leading journals want to publish them, and sponsors and other conflicted stakeholders want to exploit them to promote their products, beliefs, and agendas. Systematic reviews and meta-analyses that are carefully done and that are done by players who do not have conflicts and pre-determined agendas are not a problem, quite the opposite. The problem is that most of them are not carefully done and/or are done with pre-determined agendas on what to find and report.

Ioannidis concludes “Few systematic reviews and meta-analyses are both non-misleading and useful.”

Together, the chain of events described above leads to what’s been called an “availability cascade”—a self-promulgating process of collective belief. As we’ve highlighted on previous occasions, availability cascades lead to nowhere good in short order. The first step in combatting them is to recognize that we are caught up in one. The above papers help to illuminate this, and you really ought to have a look!

 

References:

Ioannidis, J., 2016. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Quarterly, 94, 485–514, DOI: 10.1111/1468-0009.12210.

Jankovic, V., and D. Schultz, 2016. Atmosfear: Communicating the effects of climate change on extreme weather. Weather, Climate, and Society, DOI: http://dx.doi.org/10.1175/WCAS-D-16-0030.1.

Nissen, S., et al., 2016. Publication bias and the canonization of false facts. Archived at arXiv.org, https://arxiv.org/abs/1609.00494.

On Monday, I argued that a new report by the Center for Immigration Studies (CIS) entitled “Immigrants Replace Low-Skill Natives in the Workforce” provided no evidence that immigrants are causing low-skilled natives to quit working. In fact, the trends point toward immigration pushing employed natives up the skills ladder. In his response yesterday, the author, Jason Richwine, either ignores my points or backtracks on the claims in his original report.

Here are six examples:

1. In his paper, Mr. Richwine writes that “an increasing number of the least-skilled Americans [are] leaving the workforce” (my emphasis). I pointed out that this statement is not true: the number of high school dropouts not working has actually declined since 1995. But in his response, he drops the “increasing,” altering his claim to say that “low-skilled Americans have been dropping out of the labor force even as low-skill immigrants have been finding plenty of work.” This altered claim is true. Some low-skilled Americans have dropped out of the labor force during this time, just not a growing number of them, which is what his original claim implied.

2. In his paper, Mr. Richwine concludes that the share of male high school dropouts who are not working has increased. I pointed out that the share has increased only because natives are getting educated, not because the number of dropouts not working has increased. In response, Mr. Richwine tries to redeem his analysis by claiming that the fact that more Americans are graduating high school is “meaningless.” But in his paper, Mr. Richwine thought the distinction between high school graduates and dropouts was very important. As he explained:

In other words, high school graduates are not the same as high school dropouts. 

He was right the first time. Even George Borjas, the restrictionist Harvard professor, separates high school dropouts from graduates when studying the effect of immigration because they perform different tasks in the economy. The result is that they have very different outcomes. Indeed, the labor force participation rate of native-born Americans in their prime is consistently 15 to 20 percentage points above that of high school dropouts. While Mr. Richwine is free to think that there is no distinction between high school dropouts and high school graduates, the labor market disagrees.

Moreover, the skills upgrading during this time went beyond simply more natives graduating high school. In fact, the only skill categories that have seen a growth in the number of natives from 1995 to 2014 were those with some college education or a college degree. The number of natives with a high school degree or less decreased by roughly 10 million, while the number of natives who have received some college education or a college degree grew by roughly 10 million. This means that natives who are graduating high school are more often going on to get further education. 

3. In his paper, Mr. Richwine was seeking to explain an interesting trend, namely that the overall labor force participation rate for prime-age native-born Americans has gone down, and he argued that more native-born high school dropouts not working explains much of this trend. I pointed out that this group doesn’t explain any of the trend, since there are fewer native-born dropouts not working today than in 1995. But now he argues that there is no distinction between high school dropouts and high school graduates. So, combining these two groups, how much of the decrease in the labor force participation rate for prime-age natives over the last two decades can be explained by more lower-skilled workers abandoning work? Once again, the answer is zero, or actually less than zero, since the number has declined.

Net Growth in the Number of Prime Age Natives Not in the Labor Force from 1995 to 2014

Source: Census Bureau, Current Population Survey, March Supplement

4. In his paper, Mr. Richwine claimed that low-skilled immigration was indirectly causing fewer low-skilled natives to work. I pointed out that, in fact, all of the increase in the number of natives not working came from higher-skilled categories. Now, in his response to me, Mr. Richwine claims that this fact is also irrelevant because “no matter how many stories we tell about movement between categories, the picture [overall] is getting worse rather than better.” Actually, it is relevant. His entire paper was built around the supposed relationship between low-skilled immigration and low-skilled natives not working. Now he says it doesn’t matter which natives stopped working.

5. After Mr. Richwine finished backtracking, he doubled down on the worst point in his original paper, asking, “Why should we assume that natives can increase their skills in response to immigration?” First, the simple fact is that natives have upgraded their education and skills in recent years as immigrants have entered lower-skilled fields. Lower-skilled Americans have repeatedly shown that they can increase their skills. Second, as I pointed out, there is reason to believe that this relationship is causal, because immigration raises the relative wage of higher-skilled workers. Incentives matter, and increasing the rewards for graduating from high school or college has incentivized less-educated Americans to climb the skills ladder.

Still, Mr. Richwine argues that because “not everyone can become a skilled worker,” we should not “bring in more immigrants.” In this view, it doesn’t matter how many Americans benefit from immigration. Indeed, even if immigration helped those at the bottom much more than it hurt them by pushing them up the skills ladder, it shouldn’t be allowed unless every single high school dropout who isn’t even looking for work not only avoids being hurt but actually benefits. What a strange argument.

6. Mr. Richwine explicitly concedes that his analysis provides no evidence whatsoever that immigrants are a threat to the employment prospects of natives. But he nonetheless offers the strange theory that if there were no immigrants, policymakers would act to help those low-skilled natives. While he offers no evidence for this theory either, it does appear to be the case that government becomes more active in labor market regulation and welfare state intrusions during periods of low immigration. Former Center for Immigration Studies board member Vernon M. Briggs Jr. even wrote that one cost of immigration liberalization would be preventing Congress from passing left-wing worker, family, and welfare legislation.

If immigration is what is standing in the way of such legislation, then that is decidedly a good thing as well. These interventions have almost always done more harm than good. Indeed, as I showed in a recent post, a major reason for the dramatic improvement in immigrant labor market outcomes was the 1996 welfare reform that restricted their access to benefits. When government ignored them, they thrived. If the presence of immigrants encourages policymakers to do the same for natives (as they did to some extent in 1996), that is just another benefit of immigration.

So in the end, Mr. Richwine is left with this argument: It doesn’t matter if immigrants don’t harm natives. It doesn’t matter if immigrants help natives overall. It doesn’t matter if immigrants help the worst-off natives without hurting any of them. Only if proponents of immigration can prove that immigrants not only don’t hurt but actively help all of the least-educated natives who aren’t even looking for work should the United States allow any immigrants to come in. This is an argument of someone who could never, even theoretically, change his mind.
