Policy Institutes

Academics and professional economists have published several important critiques of well-known academic papers on immigration in the last year. The first, by Alan de Brauw and Joseph R.D. Russell, replicates and extends a famous 2003 paper by Harvard University economist George Borjas entitled “The Labor Demand Curve is Downward Sloping: Reexamining the Impact of Immigration on the Labor Market.”

Borjas famously found that from 1960 to 2000 there was a wage elasticity of -0.38, meaning that a 10 percent increase in the size of the labor force due to immigration in a particular skill-cell lowered the average weekly wages in that cell by 3.8 percent relative to workers in other skill-cells. Borjas’ paper is an impressive piece of scholarship and has been the linchpin of arguments to close the border in order to protect wages. Many economists, however, disagree with Borjas.

De Brauw and Russell had three findings. The first was that the wage elasticity dropped to -0.22 when they extended Borjas’ study to 2010. That is an important finding by itself – if the Borjas model were correct, why would the impact of immigrants on wages decrease as more of them entered the labor force between 2000 and 2010?
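
A minimal sketch, in Python, of the arithmetic behind these elasticities (the function and the 10 percent shock are purely illustrative, not from either paper):

```python
# Illustrative only: a skill-cell wage elasticity maps a percent increase in
# the cell's labor force into a percent change in relative weekly wages.
def relative_wage_change(elasticity: float, supply_shock_pct: float) -> float:
    return elasticity * supply_shock_pct

print(relative_wage_change(-0.38, 10))  # Borjas, 1960-2000: about -3.8 percent
print(relative_wage_change(-0.22, 10))  # extended to 2010: about -2.2 percent
```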

Their second set of findings is that small changes in variable definitions rendered some of Borjas’ results statistically insignificant. While not definitive, that suggests the conclusions of his paper are not reliable.

That leads to De Brauw and Russell’s third set of findings. They looked at the relationship between annualized male and female wages in the skill-cells as women entered the workforce in significant numbers. The correlation turned out to be positive, which means men and women with the same skill level are complementary. Thus, they argued, Borjas’ model is misspecified: it assumes that immigrants and natives in the same skill-cells are more substitutable than they really are. If this finding holds, it undermines a key assumption Borjas built into his model, namely that immigrants and natives are substitutes rather than complements.

I’m still eagerly awaiting Borjas’ response to De Brauw and Russell’s paper. The critique of Borjas’ paper was serious because it replicated his work, extended it another decade, and found the results didn’t hold up. Many academics have already contested Borjas’ claims in numerous ways as I document here and here but this challenge cuts deep.

The second important working paper of the last year is by George Borjas, and it critiques a famous paper by David Card called “The Impact of the Mariel Boatlift on the Miami Labor Market.” Card’s paper studied the 1980 Mariel boatlift, which brought 125,000 Cubans to Miami in a very short period of time – the type of exogenous shock that economists love to study, as they have for Portugal, France, and Israel.

The Mariel flow increased Miami’s workforce by 7 percent in three months – a 28 percent annual rate had it continued at the same pace. The Mariel boatlift was an extreme event that can help economists understand the wage effects of immigration and the elasticities in the labor market. But it’s also important to point out that in recent years the annual flow of immigrants to the United States has been equal to about 0.3 percent of the U.S. population – roughly two orders of magnitude smaller than what happened to Miami.
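
As a back-of-the-envelope check on those magnitudes (a sketch using only the figures quoted above, and assuming simple linear annualization):

```python
# Sanity check of the magnitudes quoted above.
mariel_shock_3mo = 7.0             # percent growth of Miami's workforce in 3 months
annualized = mariel_shock_3mo * 4  # 28 percent had the pace continued for a year

us_annual_flow = 0.3               # recent U.S. immigrant inflow, percent of population
print(annualized)                  # 28.0
print(annualized / us_annual_flow) # ~93x, i.e., roughly two orders of magnitude
```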

Card compared wages in Miami prior to the boatlift and afterward with wages in a set of cities that did not experience such a shock, concluding that Mariel slightly lowered the relative wages of lesser-skilled Miami workers before they rebounded entirely around 1984. The finding that a massive influx of lower-skilled workers had little lasting effect on wages has been an understandably important contribution to the literature.

Borjas’ critique of the Card paper swaps out the comparison cities. Fair enough: maybe Card selected the wrong cities to begin with, and the four comparison cities Borjas selected better approximate Miami’s labor market absent the Mariel boatlift. By changing the placebo, Borjas found that wages for less-skilled Miamians dropped by 10 to 30 percent due to the Mariel boatlift but fully recovered by the late 1980s. If Borjas selected a better group of comparison cities than Card did, then this would be a very interesting finding indeed, but the relatively quick recovery in wages is still remarkable.

Interestingly, using Borjas’ way of measuring the economic costs and benefits of immigration, this larger negative wage effect could actually increase the economy-wide benefits from immigration. The size of the wage decrease for some workers increases the economic surplus for other workers and capital owners – meaning that Borjas’ earlier estimates of the economy-wide benefits could be several times too small.

Neither Borjas nor Card has responded to the critiques of their classic papers – yet. Regardless of how this debate turns out, this is a great time to watch excellent scholars debate this important topic. Stay tuned.

Princeton University economist Angus Deaton was awarded the Nobel prize in economics today. His work on carefully measuring consumption and other measures of well-being led him to understand development as a complex process not susceptible to improvement by technical or top-down interventions. For Deaton, knowledge is a key to development—even more so than income—and helps explain the tremendous progress humanity has experienced in the last 250 years when parts of the world we now call rich began their “great escape” from poverty and destitution.

In his book, The Great Escape: Health, Wealth, and the Origins of Inequality, Deaton documents how that progress is now spreading around the globe and is the reason we are living longer, wealthier, and healthier lives than at any time in history. (You can see him presenting the book at this Cato forum, and see a summary of that talk here.) Even countries with relatively low incomes have seen tremendous advances, largely as a result of the spread of scientific, medical, and other kinds of knowledge. Though he is not deterministic, Deaton paints a largely hopeful picture of humanity reminiscent of the views of Julian Simon, whom he cites. He is also concerned with inequality, but recognizes that “Inequality is often a consequence of progress,” and distinguishes between the kinds of inequality that help humanity and the kinds that harm it (e.g., inequality that can lead to political inequality).

As his career progressed, Deaton joined the growing number of development experts who have become skeptical of foreign aid and consider that numerous other factors play a critical role in development. Citing pioneer development economist Peter Bauer, Deaton notes a foreign aid dilemma: “When the ‘conditions for development’ are present, aid is not required. When local conditions are hostile to development, aid is not useful, and it will do harm if it perpetuates those conditions.” In The Great Escape, Deaton goes on to document the myriad practical problems with foreign aid including corruption, the failure of loans conditioned on policy changes, the institutional incentives to lend, the divergence of donor country interests from recipient country needs, etc. Even when aid projects do good, he concludes:

The negative forces are always present; even in good environments, aid compromises institutions, it contaminates local politics, and it undermines democracy. If poverty and underdevelopment are primarily consequences of poor institutions, then by weakening those institutions or stunting their development, large aid flows do exactly the opposite of what they are intended to do. It is hardly surprising then that, in spite of the direct effects of aid that are often positive, the record of aid shows no evidence of any overall beneficial effect.

When thinking about aid, the developed world would do well by heeding Deaton’s advice and by not asking what we should do. “Who put us in charge?” Deaton rightly asks. “We often have such a poor understanding of what they need or want, or of how their societies work, that our clumsy attempts to help on our terms do more harm than good…And when we fail, we continue on because our interests are now at stake…”

Deaton provides a far better way of thinking about development:

What surely ought to happen is what happened in the now-rich world, where countries developed in their own way, in their own time, under their own political and economic structures. No one gave them aid or tried to bribe them to adopt policies for their own good. What we need to do now is to make sure that we are not standing in the way of the now-poor countries doing what we have already done. We need to let poor people help themselves and get out of the way—or, more positively, stop doing things that are obstructing them.

The Wall Street Journal takes a look at hurricane threats to cities along the seacoasts. It’s an odd article because the author, Greg Ip, does not discuss the central role that governments play in encouraging people to live in hurricane-prone areas.

Ip does mention the “levee effect” of misguided development taking place in low-lying areas because people feel safer behind large sea walls. In the United States, federal spending by the Army Corps of Engineers has encouraged people to live in unsafe coastal areas, as I discuss in this essay. After Hurricane Betsy struck New Orleans in 1965, for example, the Corps extended levees to additional low-lying areas around the city, thus encouraging further development and exacerbating damage in subsequent storms.

Ip does not discuss federal and state flood and wind insurance subsidies, which also encourage people to live in harm’s way. I discuss federal flood insurance subsidies in this essay, and a new essay in Cato’s Regulation examines state wind insurance subsidies.

I note,

rather than reducing the nation’s flooding problems, the National Flood Insurance Program (NFIP) has likely made flood damage worse by encouraging more development in hazardous areas. Since 1970, the estimated number of Americans living in coastal areas designated as Special Flood Hazard Areas by FEMA has increased from 10 million to more than 16 million. Subsidized flood insurance has backfired by helping to draw more people and development into flood zones.

And the Regulation article notes, “Insurance, if priced accurately, provides an important service of signaling to people the risk cost of living near water. [But] subsidized insurance rates destroy the information value of full-risk premiums, thus suppressing the true cost of living in severe weather zones and creating an excessive incentive to populate attractive but dangerous locations.” The federal government subsidizes flood insurance, and the article notes that Florida subsidizes wind insurance. Partly as a result of these subsidies, the coastal population of Florida has soared in recent decades.

An interesting fact about flood and wind insurance subsidies is that they are welfare for the well-to-do. Politicians often talk about helping the poor, but many of their policies disproportionately benefit the well-off.

A 2010 study, for example, looked at flood insurance claims data over a 10-year period and concluded, “the benefits of the NFIP appear to accrue largely to wealthy households concentrated in a few highly-exposed states.”

Similarly, the Regulation article examines Florida wind insurance data and finds that the benefits “accrue disproportionately to affluent households and the magnitude of this regressive redistribution is substantial.”

Montana’s scholarship tax credit (STC) law was already crippled, and now bureaucrats are attempting to issue the coup de grâce.

Montana’s STC law offers individuals and corporations tax credits in return for donations to nonprofit scholarship organizations that help families send their children to the school of their choice. All Montana students are eligible to apply for a tax-credit scholarship and the value of the scholarships is capped at half the statewide average per-pupil expenditure at the district schools (just over $5,300).

The only catch is that donations are capped at $150 per donor, far lower than in any other state. That means it would take at least 34 donors to fund a single $5,000 scholarship–a monumental task for scholarship organizations seeking to fund thousands of students.
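
Here is the arithmetic behind that donor count, as a quick sketch (the variable names are mine):

```python
import math

donation_cap = 150  # Montana's per-donor cap, in dollars
scholarship = 5000  # target scholarship, in dollars

# Minimum number of capped donations needed to fund one scholarship.
print(math.ceil(scholarship / donation_cap))  # 34
```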

But even if the scholarship organizations manage to raise the requisite funds, families may not be allowed to use the scholarships at their preferred school due to the Montana Department of Revenue’s proposed rule barring the use of tax-credit scholarships at religious schools.

The proposed regulations would bar schools from participating in the program if they’re “owned or controlled in whole or in part by any church, religious sect, or denomination.”

The proposed regulations also note schools are barred if their accreditation comes from a faith-based organization. […]

Republican state Sen. Kristin Hansen, who supported the bill, said the department was out of bounds.

“It’s the opposite of the intent of the legislation,” she said. “When we drafted the bill, we intentionally drafted a substantial definition of who qualified, so there wouldn’t be any questions about who would be eligible. I think the department has exceeded its authority by adding its own interpretation … when the Legislature was very clear. Absolutely, I think this proposed rule exceeds the department’s authority on more than one level.”

The bureaucrats claim they’re just following the state constitution’s historically anti-Catholic Blaine Amendment, which prohibits the appropriation of “any public fund or monies” to churches, religious schools, and other religious institutions. However, as the U.S. Supreme Court and several state supreme courts have held, tax-credit scholarships constitute private funding, not public funding, because the funds never enter the state treasury. Constitutionally, tax credits are no different than tax deductions or tax exemptions. Has the Montana Department of Revenue prohibited donors to churches from receiving charitable tax deductions? Has it prohibited the churches themselves from taking property tax exemptions? If not, why is it treating the tax credit law differently?

The department will hold a hearing on its proposed rules on November 5th. Hopefully the bureaucrats will see the error of their ways and change course. If not, they are inviting a lawsuit–one they are likely to lose.  

A Politico article today declares that the Common Core has “quietly” won the school standards war. It is a headline that would have been accurate several years ago, but today’s headline should be somewhat different:  “Common Core in major – but quiet – retreat.”

The one thing the article gets right is that the Core did, indeed, achieve almost complete domination very quietly. But that was around six years ago, when the Obama administration, at the behest of Core strategizers, slipped the de facto requirement that states adopt the Core into the $4.35 billion Race to the Top program, a pot of “stimulus” money the large majority of states grabbed for while the country panicked about the Great Recession. It was also used to pay for national tests to go with the Core. It was, for all intents and purposes, a silent coup.

But then something happened. Around 2011 the public suddenly became cognizant that they’d lost a war they weren’t even aware they were in. After the states had done their part in conforming to the new standards, districts and schools were told, “implement this new set of standards you’ve never heard of.” That’s when the resistance began, and it quickly grew fierce. Indeed, the Core has been on the defensive ever since.

Polling, though subject to lots of variation thanks to wording and other issues, shows the losses the Core has suffered. As I noted a few months ago, more-neutral poll questions tend to show very low support for the Core, but it is a question biased in favor of the Core that best captures the direction in which the Core has been going: backwards. Defining the Core as standards states simply choose to adopt that “will be used to hold public schools accountable,” the annual Education Next poll found support dropping from 65 percent in 2013 to 49 percent in 2015. Among teachers, support for the Core went into free fall, dropping from 76 percent to 40 percent, with 50 percent now opposed.

Capturing how bad things are for the Core, a question in a brand new poll that blatantly spins for the Core, describing it as a “set of high-quality [italics added] academic standards,” elicited only 44 percent support, with only 9 percent saying the standards “are working in their current form and should not be changed.”

Sure doesn’t seem like the Core is triumphant, at least not on the battlefield of public opinion.

Where a better case can be made that the Core is winning is in the official presence of the Core as state standards. That is what the Politico article mainly argues is the evidence that the Core has won the war, but there, too, it is clear that the Core has been in steady retreat.

Of course, some states have officially dropped the Core: Indiana, Oklahoma, and South Carolina have joined the four states that never adopted it. And no, their new standards may not be all that different from the Common Core, but it was officially declaring that they would not be dictated to by Washington that was the big victory for anti-Core forces and, of course, federalism.

More substantive, but much less flashy, has been states leaving the tests that were to be the linchpins of nationalization. The U.S. Department of Education selected and paid for the work of two testing consortia – the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC) – and the tests were to be the keys to making sure the Common Core was truly used – what gets tested is what gets taught – and to setting nationally comparable performance levels. But the consortia have crumbled.

SBAC, when it was awarded Race to the Top dollars, had 31 states as members. Today, it has 18 member states. PARCC started out with 25 states and DC. It is now down to 11 and DC, with Arkansas, Mississippi, and Ohio officially set to leave even that small group, and PARCC possibly on the ropes in Massachusetts. Adding to the testing woes are massive opt-outs in New York and to lesser extents in other states, and states gaming “shared” performance targets.

Without a doubt, it has been hard to get states to officially dump the Core. They expended a lot of time and money implementing it before the public ever knew it existed, and coupled with de facto federal penalties for leaving – coercion has been more than just Race to the Top – it is understandable that even states with vociferous opposition would be loath to declare, “forget all we’ve spent, not to mention that we’d have to whipsaw kids around – we’re trashing the Core.” No, instead of outright rebellion they’ve often taken a page from Core proponents: wage a stealthy war.

States are taking apart the Core largely by taking apart the tests, and the Core is in retreat.

A few weeks ago, unsatisfied with a report on REAL ID in the Minneapolis Star Tribune, I submitted an op-ed that the paper was kind enough to print. Unfortunately, they followed it up with an editorial favoring state compliance with REAL ID. And last week, the Star Tribune published an op-ed from a pro-national-ID advocacy group arguing that Minnesota should join the national ID system. The paper’s recent coverage of a meeting between state officials and the DHS reported uncritically on federal bureaucrats’ misrepresentations to Minnesota’s lawmakers. The REAL ID record in Minnesota should be set straight.

According to the Star Tribune’s report, Ted Sobel, director of DHS’s Office of State-Issued Identification Support, told Minnesota officials: “We are not asking Minnesota to turn over the keys to your information to anybody else. REAL ID does not affect one way or another how Minnesota protects the information of its residents.”

That is not accurate. REAL ID compliance would require Minnesota to make its drivers’ information available to all other States. The law is unequivocal on that (you can get it right from DHS’s web site):

To meet the requirements of this section, a State shall adopt the following practices in the issuance of drivers’ licenses and identification cards: …
(12) Provide electronic access to all other States to information contained in the motor vehicle database of the State.
(13) Maintain a State motor vehicle database that contains, at a minimum–
(A) all data fields printed on drivers’ licenses and identification cards issued by the State; and
(B) motor vehicle drivers’ histories, including motor vehicle violations, suspensions, and points on licenses.

That seems like turning over the keys to me, and it absolutely affects the security of Minnesotans’ personal information.

On compliance issues, Mr. Sobel was again inaccurate with Minnesota officials. He told Minnesota legislators: “We do not have the discretion to say ‘Well, the law has 43 things, but we can just give a waiver to 10 of them.’”

In fact, DHS has been doing exactly that since January 2008, when it created a “material compliance checklist” and began treating states as compliant if they did some of the things the law requires and showed enough obeisance. Not one of the jurisdictions DHS lists on its web site as “Compliant/Extension States/Territories” is in full compliance with the law. They are all enjoying waivers issued by DHS. DHS’s hands are not tied, as Mr. Sobel claimed to Minnesota officials.

These are rather picayune details, but I think it’s important to make Minnesota officials aware, if I can, that they are not necessarily getting a straight story from the Department of Homeland Security. They are, as I wrote in the Star Tribune, being buffaloed by DHS and pro-national-ID advocates. Better informed, Minnesota’s duly elected officials should turn and face down the DHS and defend their authority over state policy while they protect Minnesotans’ privacy and data security.

The Star Tribune’s editorial board dismissed my arguments as part of an “ideologically driven crusade[] against the federal government.” I don’t think that is a fair characterization of my reason for preferring that Minnesota’s legislature set Minnesota’s policies. Minnesota’s legislature passed a budget before the beginning of the fiscal year, and Congress did not. Minnesota’s House has a Speaker, and Congress does not. Minnesota’s public servants are more accountable and almost certainly more accurate when they speak to the state’s legislators.

In the op-ed published last week, the policy director of an obscure Washington, D.C., group called “Keeping IDentities Safe” wrote that my intention in informing Minnesotans about REAL ID was “to provoke an Orwellian fear that driver’s licenses are being co-opted by the federal government to track law-abiding citizens.” I suppose they might draw such conclusions, because the likelihood of being tracked rises under a national ID system. As I noted in my book, Identity Crisis: How Identification is Overused and Misunderstood, such a system makes it much easier for governments and corporations to collect and store data that reveal our comings and goings, our preferences, our commercial transactions, and so on. That makes it easier to influence and control us. One need not fear Orwell to prefer state policies that protect drivers’ privacy.

There is an argument that REAL ID is not a national ID system, but I think that case is closed. A national ID system is 1) national in scope; it 2) is used to identify; and it 3) is practically or legally required. State driver licensing systems unified under federal mandates meet this definition. It does not matter that the government workers administering federal identification policy were hired by the states. And it does not matter that the cards would say “Minnesota”, “New York”, or “New Hampshire” on the front. When data on our identity cards are all in the same formats, and federal information-sharing mandates are being carried out behind the scenes, that’s a national ID.

Urbanization is on the rise around the world. By 2050, some 70 percent of humanity will live in cities, and that is good news for the environment.

Many of the environmental advantages of cities derive from condensed living space. For example, electricity use per person is lower in cities than in the suburbs and rural areas. Condensed living space not only reduces energy use, it also allows more of the natural environment to be preserved. In a suburban or rural environment, private properties are spread out because land values are relatively low, so more of the natural environment is destroyed. In cities, property values are higher and space is used more efficiently. That means more people live in the same square mile of land than in rural areas.

Another environmental advantage of cities compared to rural areas is a decrease in carbon emissions per person. In a rural or suburban area people normally use their own vehicles to drive to work or anywhere else. Due to congestion, the use of personal cars in the city is much less attractive. More people use public transportation instead and that means that less carbon dioxide gets released into the atmosphere.

Find out more by visiting HumanProgress.org.

Ben Bernanke’s memoir, The Courage to Act, is now out and is unapologetically pro-Fed.

The main point of Bernanke’s book is that absent the Fed’s interventions over the past seven years the U.S. economy would have undergone another Great Depression. Thanks to him and his colleagues at the Fed the world is a much better place.

There has already been some pushback against this Bernanke triumphalism. George Selgin, for example, notes that the recovery under Bernanke’s watch was anemic: inflation consistently undershot the Fed’s target and the real recovery was weak. We may not have experienced another Great Depression, but we sure did get a long slump. Ryan Avent makes a similar point by observing that Bernanke had a chance in late 2011 to do something bold by endorsing an NGDP target, an action that could have jolted the economy from its doldrums. But alas, Bernanke failed to muster the courage to have what Christina Romer called his “Volcker moment.”

Expect more pushback along these lines against a book with such a bold title. One strand of criticism that many observers miss, but that I hope will be considered in future reviews of Bernanke’s book, is the role the Fed played in allowing the crisis to emerge in the first place. Could the Fed have done more to prevent the recession from becoming as severe as it did? Maybe a recession was inevitable, but was a Great Recession inevitable? These are the questions first raised by Scott Sumner and echoed by others, including me. Our answer is no, the Great Recession was not inevitable. It was the result of the Fed failing to act aggressively enough in 2008.

This understanding draws upon the fact that the housing recession had been going on for about two years before a wider slowdown in economic activity occurred. As seen in the two figures below, sectors of the economy tied to housing began contracting in April 2006 while elsewhere employment growth and nominal income continued to grow. This all changed in the second half of 2008.

So what went wrong in the second half of 2008? Why did a seemingly ordinary recession turn into a Great Recession? We believe the Fed became so focused on shoring up the financial system and worrying about rising inflation that it lost sight of stabilizing aggregate demand. Based on these concerns, especially the latter, the FOMC decided to abstain from any policy rate changes at its August and September 2008 meetings. But by doing nothing at these meetings the FOMC was doing something: it was signaling that the Fed would not respond to the weakening economic outlook. The FOMC, in other words, signaled it would allow a passive tightening of monetary policy in the second half of 2008.

A passive tightening of monetary policy occurs whenever the Fed allows total current-dollar spending to fall, either through an endogenous fall in the money supply or through an unchecked decrease in money velocity. The declines in the money supply and velocity are the result of firms and households responding to a bleaker economic outlook. The Fed could have responded to and offset such expectation-driven developments by properly adjusting the expected path of monetary policy.

The figures below document this monumental failure by the FOMC. The first one shows the 5-year ‘breakeven’ or expected inflation rate. This is the difference between the 5-year nominal Treasury yield and the 5-year TIPS yield, and it is supposed to reflect the Treasury market’s forecast of the average annual inflation rate over the next five years. The figure shows that prior to the September 16 FOMC meeting this spread declined from a high of 2.72 percent in early July to 1.23 percent on September 15. That is a decline of 1.49 percentage points over the two and a half months leading up to the September FOMC meeting. This forward-looking measure was screaming trouble ahead, but the FOMC ignored it.
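
A minimal sketch of the breakeven calculation (the individual yields below are hypothetical; only the 2.72 and 1.23 percent spreads come from the text):

```python
# Breakeven (expected) inflation: the spread between a nominal Treasury yield
# and the TIPS yield of the same maturity.
def breakeven(nominal_yield_pct: float, tips_yield_pct: float) -> float:
    return nominal_yield_pct - tips_yield_pct

# Hypothetical yields, just to show the mechanics:
print(breakeven(3.00, 1.77))  # 1.23 percent expected average annual inflation

# The decline documented above:
print(2.72 - 1.23)            # about 1.49 percentage points, early July to Sept. 15
```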

One way to interpret this figure is that the Treasury market was expecting weaker aggregate demand growth in the future and consequently lower inflation. Even if part of this decline was driven by a heightened liquidity premium, the implication is the same: it indicates an increased demand for highly liquid and safe assets which, in turn, implies less aggregate nominal spending. Either way, the spread was blaring red alert, red alert!

The FOMC allowed these declining expectations to form by failing to signal an offsetting change in the expected path of monetary policy in its August and September FOMC meetings. The next figure shows where these two meetings fell chronologically during this sharp decline in expectations.

As noted above, this passive tightening in monetary policy implies there would be declines in the money supply and money velocity during this time. The Macroeconomic Advisers’ monthly nominal GDP data indicate this was the case.

The Fed could have cut its policy rate in both meetings and signaled it was committed to a cycle of easing. The key was to change the expected path of monetary policy. That means far more than just the change in the federal funds rate. It means committing to keeping the federal funds rate target low for a considerable time and signaling this change clearly and loudly. With this approach, the Fed would have provided a check against the market pessimism that developed at this time. Instead, the Fed did the opposite: it signaled it was worried about inflation and that the expected policy path could tighten.

Recall that Gary Gorton provides evidence that many of the CDOs and MBS were not subprime, but that when the market panicked a liquidity crisis became a solvency crisis. This was especially true in late 2008. Had the Fed responded to the falling market sentiment in the second half of 2008, the financial panic in late 2008 might have been far less severe and the resulting bankruptcies fewer. Again, the worst part of the financial crisis took place after the period of passive Fed tightening. This is very similar to the Great Depression, when the Fed allowed aggregate demand to collapse first and the banking system followed.

So had the Fed had the courage to act in 2008, the economy would be in a very different place today. Future reviewers of Bernanke’s book should keep that in mind.

P.S. For a more thorough development of this view see the book by Robert Hetzel of the Richmond Fed.

[Cross-posted from Alt-M.org]

This week the New York Times reports that the Supreme Court has refused to review the ruling of the Second Circuit Court of Appeals in United States v. Newman. The Second Circuit, in December, overturned the insider trading convictions of a pair of hedge fund managers because nothing of value was exchanged in return for the information, and thus the managers could not have known that the information they received was improperly disclosed by its source. The Supreme Court’s decision would seem to block insider trading prosecutions in the absence of clear financial gains to those who leak the information.

This, in turn, has energized some members of Congress to introduce legislation that would make it illegal to trade on insider information regardless of how one obtains it. This standard would define insider trading far more broadly than the standard laid out in Newman or, for that matter, even the pre-Newman standard based on the precedent in Dirks v. SEC.

In his article in the current issue of Regulation, Villanova University law professor Richard Booth explores the Newman ruling. He argues that ordinary diversified investors neither lose nor gain from insider trading because they own all stocks and don’t trade very often. The only investors who have an interest in the prosecution of insider trading are “activist investors – hedge funds and corporate raiders – who stand to benefit from slower reaction times as they buy up as many shares as possible before anyone notices.” “… [H]edge fund managers have a distinct interest in seeing other hedge fund managers prosecuted for insider trading.” They, rather than ordinary investors, are the beneficiaries of insider-trading prosecutions. Thus ordinary investors should applaud the Newman ruling and oppose the attempts by Congress to adopt a European-style law against all insider trading.

For more Cato work on insider trading, see these links.

Research assistant Nick Zaiac contributed to this post.


The U.S. Departments of Agriculture and Health and Human Services made headlines last winter when they released the draft form of their updated dietary guidelines and revealed that they were considering “sustainability” as a factor in their recommended diet—and by “sustainable” they meant foods that had “lower greenhouse gases” associated with their production. This favors plant-based foods over animal-based ones.

President Obama’s Climate Action Plan now even had its far-reaching fingers in our food. We found this somewhat rude.

Under the wildly crazy assumption that all Americans, now and forever, were to convert to vegetarianism, we calculated that the net impact on future global warming as a result of reduced greenhouse gas emissions would be two ten-thousandths of a degree Celsius (0.0002°C) per year. Not surprisingly, we concluded that if one were worried about future climate change, “ridding your table of steak shouldn’t be high on the list.”

We expanded upon our findings during the public review period for the newly proposed dietary guidelines and submitted a pointed Comment, stressing two issues:

Throughout the Scientific Report whenever greenhouse gases are mentioned, a negative connotation is attached and food choices are praised if they lead to reduced emissions.

This is misleading on two fronts.

First, the dominant greenhouse gas emitted by human activities is carbon dioxide, a plant fertilizer whose increasing atmospheric concentration has led to more productive plants, increasing total crop yields by some 10-15 percent to date. The USDA/HHS is at odds with itself in casting a positive light on actions geared toward lessening a beneficial outcome for plants while at the same time espousing a more plant-based diet.

And second, the impact that food choices have on greenhouse gas emissions is vanishingly small—especially when cast in terms of climate change. And yet it is in this context that the discussion of GHGs is included in the Scientific Report. The USDA/HHS elevates the import of GHG emissions as a consideration in dietary choice far above the level of their actual impact.

Ultimately, we advised that “climate change concerns don’t belong in dietary guidelines,” although fretting, “[w]e can only guess on what sort of impact our Comment will have, but we can at least say we tried.”

Turns out that we were wildly successful.

This week, prior to a Congressional hearing on the proposed guidelines, USDA Secretary Tom Vilsack and HHS Secretary Sylvia Burwell posted an article on the USDA blogsite where they addressed the issue of sustainability [emphasis added]:

There has been some discussion this year about whether we would include the goal of sustainability as a factor in developing dietary guidelines. (Sustainability in this context means evaluating the environmental impact of a food source. Some of the things we eat, for example, require more resources to raise than others.) Issues of the environment and sustainability are critically important and they are addressed in a number of initiatives within the Administration. USDA, for instance, invests billions of dollars each year across all 50 states in sustainable food production, sustainable and renewable energy, sustainable water systems, preserving and protecting our natural resources and lands, and research into sustainable practices. And we are committed to continuing this investment.

In terms of the 2015 Dietary Guidelines for Americans (DGAs), we will remain within the scope of our mandate in the 1990 National Nutrition Monitoring and Related Research Act (NNMRRA), which is to provide “nutritional and dietary information and guidelines”… “based on the preponderance of the scientific and medical knowledge.”  The final 2015 Guidelines are still being drafted, but because this is a matter of scope, we do not believe that the 2015 DGAs are the appropriate vehicle for this important policy conversation about sustainability.

Of course, we don’t know what comments changed their minds, but the notion that the entire nation going vegetarian would have no effect on climate seems powerful enough, as is the well-known direct fertilization effect of increasing carbon dioxide.

Sometimes when it comes to battling the federal government, it’s a major victory when you can just get them to behave within the rules. By getting the USDA and HHS to “remain within the scope of [their] mandate” and not consider climate change when establishing the U.S. Dietary Guidelines, it’s looking like this’ll be a win for the good guys fighting against the far-reaching and invasive climate actions being pursued by this Administration.

The Trans-Pacific Partnership will reportedly include an obligation for every country to provide at least 5 years of market exclusivity for new biologic drugs.  Technically, this counts as a loss for U.S. negotiators, who started with a demand for 12, lowered that to 8, reconfigured 8 into “5+3”, and at the VERY last minute—despite direct calls from President Obama to foreign leaders—were forced to acquiesce to 5 years.  The U.S. pharmaceutical industry says it’s very disappointed, but the outcome is good for the TPP and for consumers around the world.

It’s important to recognize that the exclusivity we’re talking about here has nothing to do with patent protection.  It is not a form of intellectual property.  Exclusivity is a regulatory policy that instructs the Food and Drug Administration not to approve generic, unpatented drugs they know are safe so that name-brand pharmaceutical companies can make more money. 

Those companies say that without a secured return on investment, they wouldn’t be able to invent new treatments. But that’s what patents are for. Regulatory exclusivity is a way to bypass the balances and limitations of patent law, which protects only new inventions, not all expensive investments.

They complain that it’s unfair for generic competitors to piggyback on all the expensive research and testing they did to secure FDA approval.  But that’s a problem with the expense of FDA approval.  Either lobby to make FDA approval cheaper or find a way to share costs.  Pharmaceutical companies are not entitled to the benefits they gain from regulatory inefficiency.

Biologics protection was a peculiar issue for U.S. negotiators to be spending so much effort on in the first place. They spent a lot of negotiating capital trying to secure foreign regulations favorable to one part of one U.S. industry. That doesn’t further the goal of free trade; in fact, it impedes that goal by diverting energy away from universally valuable efforts to open up Canada’s and Japan’s agricultural markets.

The U.S. government may have wasted effort on biologic exclusivity, but at least they failed to hobble foreign countries with excessive drug regulation.  As a bonus, Congress is now free (if they wish—and they should) to roll back the 12 years of protection under U.S. law to something more reasonable.

The airship is making a comeback. Take the British Airlander 10, which uses 20 percent of the fuel burned by conventional aircraft and can be fitted with solar panels. Airlander can stay airborne for five days while carrying a maximum payload of 20,000 pounds. It is much safer than its 1930s cousins and can operate in adverse weather. Combined with GPS navigation and tracking, an unmanned Airlander could stay airborne for up to two weeks, carrying cargo vast distances, including to hard-to-reach places. The British manufacturer is already working on an airship that could carry up to 100,000 pounds of cargo – roughly equivalent to the payload of two 20-foot containers. A vast fleet of Airlanders moving silently through the air 24/7 could dramatically decrease the cost of transport (they are faster than ships and much more cost effective than aircraft), while connecting places without ports or runways. Find out more about the declining cost of air travel at www.humanprogress.org.


A recent report from the Social Security Advisory Board’s Technical Panel found that the 75-year shortfall could be 28 percent (roughly $2.6 trillion) larger than the estimate in this year’s Trustees Report due to changes in some of the underlying technical assumptions. This disparity is less a product of error than of the difficulty of projecting the trajectory of a program as large and complicated as Social Security so far into the future; indeed, the chair of the Technical Panel took pains to reiterate that “the methods and assumptions used by the Social Security actuaries and Trustees are reasonable.” Even so, the report reveals the uncertainty surrounding the long-term projections for Social Security: relatively small changes to some of the underlying assumptions significantly change the program’s solvency outlook. Social Security is the largest government program in the world, and changes in its fiscal outlook can have a large impact on the government’s overall finances.

The changes in the Technical Panel report that would have the largest impact are concentrated in a few variables:

  • Higher fertility rate
  • Higher life expectancy
  • Higher interest rates

Other changes to inflation and real earnings growth rate assumptions have a small negative impact, while changes to immigration assumptions slightly improve the program’s financial picture.  Some of the changes reflect developments that are good overall but have a negative impact on Social Security’s finances, like higher life expectancy.


Some of the panel’s recommendations focus on making the methodology of the Trustees’ Report more transparent and the degree of uncertainty clearer. While it’s possible that unforeseen changes to underlying variables like the fertility rate could improve the program’s financial outlook, it is much more likely that, if anything, the trillions in unfunded obligations published in the annual Trustees’ Report understate the shortfall.

To some extent we don’t know what Social Security’s long-run shortfall is, but we do know that significant reforms will be needed to make the program solvent, and the longer those reforms are delayed, the bigger they will need to be – whether they take the form of payroll tax increases or benefit cuts.

Percent Change Needed for 75-Year Solvency

Source: Social Security Administration, The 2015 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds, July 2015, p. 25.

One of the goals of Social Security is to remove some degree of uncertainty from life in old age, but this new report confirms that a high degree of uncertainty remains, both for the program’s overall solvency and for individual workers. Younger workers already get a worse deal than previous generations due to demographic change and the program’s structure. Even more troubling, they can’t know how much worse their deal will become as benefits are cut or taxes are increased in the future to address the shortfall.

One option that could remedy some of these inherent problems would be to allow workers, especially young workers, to divert some of their payroll taxes to individual accounts. Cato has explored this issue in the past, and other countries, especially in South America, have successfully introduced reforms along these lines. Chile, for example, has an elderly extreme poverty rate of 1.6 percent, and its pension funds saw a real annual return of 8.6 percent from 1981 to 2013. The United States should heed some of these lessons.

In today’s Cato Online Forum essay, Iana Dreyer of the EU trade news service Borderlex marshals public opinion data to support a rather gloomy prediction about the chances for a robust and comprehensive TTIP outcome. Despite having “strong ‘Atlanticist’ instincts and the vision for Europe as a dynamic, globalized, economic powerhouse,” the EU’s business community and its cosmopolitan policymakers are likely to be thwarted by demographics – especially by the aging German voter.

Iana concludes that the likely outcome will be a TTIP agreement that reflects the sensibilities of older, risk-averse Europeans who are unwilling to gamble with their social safety nets – even though those safety nets are not really on the negotiating table. That means a rather shallow and limited agreement at best.

The essay is offered in conjunction with a Cato Institute TTIP conference being held on Monday.  Read it. Provide feedback.  And register to attend the conference here.

Who isn’t nuts about fresh tomatoes plucked from a garden at the peak of ripeness? And who doesn’t bask in the adulation of those to whom we give them?

According to work recently published by Maria Sánchez-González et al. (2015), the longer you garden, the tastier your tomatoes are likely to get as atmospheric carbon dioxide increases. And, if you add a pinch of salt to the soil, they’ll taste even better.

Here’s the story:

The authors note “the South-Eastern region of Spain is an important area for both production and exportation of very high quality tomatoes for fresh consumption.” This is primarily due to favorable growing conditions such as a mild climate, good soils and saline waters that promote “exceptional fruit quality of some varieties,” including the Raf tomato hybrid. However, Sánchez-González et al. additionally note that, “despite the high value of Raf tomatoes in the Spanish national market, their productivity is relatively low and the consumer does not always get an acceptable quality, often because the fruit growth conditions, mainly thermal and osmotic, were not adequate.” Against this backdrop, the team of six researchers set out to determine if they could improve the production value of this high value commercial crop by manipulating the environmental conditions in which the tomatoes are grown. To accomplish this objective, they grew hybrid Raf tomato plants (Lycopersicon esculentum Mill. cv. Delizia) in controlled environment greenhouses at two salinity levels (low and high) under ambient (350 ppm) and elevated (800 ppm) CO2 concentrations. Then over the course of the growing season, and at harvest, they measured several parameters related to the growth and quality of the hybrid tomatoes. And what did their analysis of those measurements reveal?

According to the researchers, the high salinity treatment “increased firmness, total soluble solids content, titratable acidity and the percentage of dry matter of the fruit,” leading them to conclude that the high salinity growth medium “is necessary to obtain high quality tomato fruits.” However, this benefit did not come without a price, as higher salinity decreased marketable yield (47% less), fruit numbers (9.5% less), and average weight of the fruits (19% less) when compared to tomatoes grown under low salinity conditions. With respect to CO2, elevated levels increased tomato yield, fruit numbers and average weight of the fruits in the low salinity treatment, and they reduced the deleterious effects of salinity on these measures in the high salinity treatment. Additionally, elevated CO2 shortened the time required for fruit development by two days and it had little to no effect on fruit quality. Consequently, the authors conclude by stating “the results of this work suggest that the utilization of a [high salinity] nutrient solution … is necessary to obtain high quality tomato fruits and CO2 application increases its production,” while adding “CO2 enrichment allows increase in the production of a high value commercial crop grown under saline conditions by reducing the time needed for complete fruit development without compromising organoleptic quality.”

In the future, therefore, the Spanish national market of hybrid Raf tomatoes will benefit thanks to the ever-increasing CO2 concentration of the atmosphere. And so will home gardeners.

In addition to what Sánchez-González et al. recently published, research done by Joseph Heckman at Rutgers (the State University of New Jersey and the name of a popular and very tasty heirloom tomato) shows that tomatoes grown in sea water (or the commercially available equivalent for aquarium use) taste better. Further research has shown that this effect isn’t limited to commercial varieties. It’s been tested on Burpee’s widely grown Early Girl, which contains much of the genetic material from their “boy” series (Big Boy, Better Boy, Lemon Boy, Brandy Boy, etc.), so it looks like the flavor enhancement will occur across a wide range of varieties.

And, as has been shown repeatedly in crop and plant science, the additional CO2 is a cost-free enhancer of crop yields, including tomatoes. In combination with saline water, you’ll get more and even better tomatoes, and even more adulation for your tasty fruits.


Sánchez-González, M.J., Sánchez-Guerrero, M.C., Medrano, E., Porras, M.E., Baeza, E.J. and Lorenzo, P. 2015. Influence of pre-harvest factors on quality of a winter cycle, high commercial value, tomato cultivar. Scientia Horticulturae 189: 104-111.

Also, see: http://www.growingformarket.com/articles/Improve-tomato-flavor

SHANGHAI, CHINA—Shanghai is China’s financial capital. A former Western concession, the city today shows little sign of the many bitter political battles fought over the last century. Tourists throng the Bund along the Huangpu River while global corporations fill the skyscrapers in Pudong, across the water.

But politics in China today is a blood sport. President Xi Jinping has been taking down powerful opponents, so-called “tigers.” However, he has not revived propaganda posters, once a pervasive political weapon.

Yang Pei Ming, a tour guide, started collecting posters in 1995. He eventually set up the Shanghai Propaganda Poster Art Center. Explained Yang: “With the shift toward a more modern and forward-thinking China, it would be a mistake to forget our history.”

Now licensed by the government, the exhibit’s official name is the Shanghai Yang Pei Ming Propaganda Poster Art Museum. Yang has accumulated 6,000 different propaganda posters and a plethora of other tchotchkes from Mao’s suffocating personality cult.

The earlier posters look more cartoonish or stylized. Soon the atrocious school of “socialist realism” took over, presenting the “reality” of the triumph of socialism—happy workers and farmers creating utopia on earth.

Whatever their form, the posters tell much about the politics of China. In one poster Mao towers over a crowd denouncing a profiteering capitalist. A 1951 poster shows a large Mao, arm outstretched, surrounded by scenes from the country, entitled “New China Under Leadership of Wise Chairman Mao.”

Not every poster had his visage. One shows members of the People’s Liberation Army being greeted by happy Chinese. Others show model families, community celebrations, happy workers building the new China, and people enjoying an abundance of food.

Some posters were more pointed politically. One entitled “Drive US Imperialism Invading Force out of China” shows a PLA soldier with a broom sweeping away the debris of a defeated foe. Many posters celebrated Beijing’s relationship with the Soviet Union.

Many posters were weapons in domestic political battles. Confronting “bandits and spies” was a common theme. Reactionaries, landlords, and other enemies also were targeted. So was the U.S.

Still, nothing beats the idyllic country scenes, of happy, well-dressed farmers planting fields of rice, leading healthy livestock, and picking fruit in bountiful orchards. Starting in 1957, however, noted Yang, “political movements started to mobilize public opinion.”

One poster shows a worker defending against a reactionary mob, declaring: “Smash the Attack from the Rightists to Defend Socialist Construction.” Many posters urged greater production and lauded “bumper harvests” in the midst of devastating famine.

The heyday of posters was the Cultural Revolution, which began in 1966. During this period Mao used posters to take the personality cult to new heights while denouncing his enemies.

Mao almost always was pictured beatifically, looking out, sometimes clapping or with arm outstretched, over the beautiful countryside or adoring masses. In one poster a man holds Mao’s little red book aloft in front of a crowd doing the same: “Proletarian revolutionary rebels unite.” The posters attacked “Russian revisionism” as well as “US imperialism.”

After Mao’s death much of the Communist Party turned against the “Gang of Four,” whose members had most enthusiastically carried out his dictates. Posters stoked the campaign: “Strike Gang of four” declares one, while another insists “Smash ‘Gang of four’.”

The pragmatic Deng Xiaoping came to power and abolished what he called the “big character poster.” He wanted no more political crusades or ideological campaigns, no more social chaos and economic disruption.

As I wrote in Forbes online: “What’s best for China is a loss for the rest of us, at least political junkies. But Shanghai’s poster museum thankfully preserves this unique art form.”

American politics has been ugly of late. But still, politics in the U.S. cannot compare with that in modern China. This tumultuous process is captured by changing Chinese poster art. The Shanghai Propaganda Poster Art Center is a “must see” for anyone visiting the city—or going online.

The Washington City Paper asked “thirteen riders, advocates, and experts” how to fix the Washington Metro Rail system. Former Metro general manager Dan Tangherlini and former DC DOT director Gabe Klein offered banalities about “putting the customer first.”

Smart-growth advocate Harriet Tregoning thinks Metro “needs a different kind of leader,” as if changing the person at the top is going to keep smoke out of the tunnels and rails from cracking. She admits that “I don’t think we’ve been straight with anybody, including ourselves or our riders, about what it really takes to [keep the rails in a] state of good repair.” But her only solution is to have “a dedicated source of revenue,” i.e., to increase local taxes for a system that already costs state and local taxpayers close to a billion dollars per year.

Coalition for Smarter Growth director Stewart Schwartz and former APTA chair Rod Diridon also want to throw money at it. Others dodge the money question and suggest that Metro do all sorts of things that it can’t afford and doesn’t have any incentive to do anyway.

Only one writer – yours truly – dared to suggest that “rail was probably the wrong choice for D.C.” for the very reason Tregoning suggests: Metro planners and managers have deceived themselves and the public about how much it truly costs to keep it in a state of good repair. Moreover, in the long run – 10 years – “shared, self-driving cars are going to replace most transit.”

In the short run, instead of building the Purple Line, completing the Silver Line, and rebuilding the other rail lines, Metro should “seriously consider replacing” some of its worn-out rail lines “with bus-rapid transit.” This way, it won’t be stuck paying for a bunch of white elephants when people discover that shared, self-driving cars are less expensive, more convenient, and more reliable than trains. Unfortunately, these suggestions are likely to fall on deaf ears even though they are the most affordable ones offered.

Cato Senior Fellow Dan Pearson is the author of today’s Cato Online Forum essay, which explains the value and limitations of the International Trade Commission’s economic assessments of trade agreements.  Too often, parties opposed to trade liberalization misappropriate the estimates in ways that raise doubts about the integrity of the models. Dan’s conclusion: 

Supporters of trade liberalization should be prepared to counter those who would misinterpret the economic analysis of trade agreements in order to advance anti-trade arguments.  Yes, trade liberalization will produce both winners and losers.  But credible analysis clearly indicates that making markets more open and competitive will lead to improved resource allocation, expanded international trade, greater economic growth, and higher consumer welfare.  Those objectives are genuinely worth pursuing. 

The essay is offered in conjunction with a TTIP conference being held at the Cato Institute on Monday, October 12. Read it. Provide comments. And please sign up to attend the conference.

There’s been a lot of speculation in the press recently about Russia’s motives for its military intervention in Syria, and many are quick to attribute the intervention to a desire to – metaphorically speaking – poke America in the eye. Surrounding this speculation are images of Vladimir Putin as a strategic genius, playing geopolitical chess at the grandmaster level.

Nothing could be further from the truth.  It’s certainly convenient for Putin to make the United States look bad in any way he can. But there are a variety of other reasons for Russia’s involvement in Syria. And though Putin may briefly look like he is in control of the situation in Syria, the intervention is likely to end badly for him.

It’s notable that while many reports are portraying the Russian intervention in terms of U.S.-Russian relations, and intimating that Russia is in some way ‘winning’, Russia specialists are more likely to point to other factors, and to view the intervention as ill-fated.

Politico recently published a compilation of interviews with 14 Russia specialists on Putin’s goals in Syria. All but one pointed to some combination of three key factors to explain Russian intervention: 1) Russian domestic concerns; 2) a desire for diplomatic gain; or 3) a desire to prevent other authoritarian regimes from falling. More tellingly, the vast majority also expressed the opinion that Russia’s actions are reckless and will end badly.

The first of these motivations – domestic political concerns – is likely the key reason for Russia’s intervention in Syria. It’s an excellent opportunity for Vladimir Putin to distract domestic attention from his ongoing failings in Ukraine, and to present an image of Russia as a great power.

The campaign is television gold for a regime that relies heavily on state media and propaganda to maintain popular support at home. Russia’s most recent escalation – the use of cruise missiles fired from ships in the Caspian Sea against targets in Syria – was announced by Defense Minister Sergei Shoygu in a live TV interview with Putin. A cynic would suspect that the strikes were strategically incidental and intended mostly for a domestic audience.

The United States plays some role in the second motivation. But rather than seeking to confront the United States directly in Syria, it’s likely that Russia is seeking a diplomatic bargaining chip. Though Russia has strongly supported the Assad regime for some time, it has been unable to move the diplomatic needle on Western and Gulf state demands that Assad must go.

Direct intervention in the conflict gives Russia a larger stake and greater bargaining power in negotiations. And the intervention not only distracts from Russia’s involvement in Ukraine; it also enables the country to reengage with the international diplomatic community after a period of relative isolation.

The third motivation is Russia’s hope of preventing the fall of a friendly authoritarian ally. Yet even this has roots in Russian domestic politics. President Putin has long feared so-called ‘color revolutions,’ the popular uprisings that swept a number of post-Soviet dictators from power, which Russian media often attribute to U.S. meddling.

By intervening in Syria, Putin not only hopes to save an allied regime, but to undermine the idea of a successful popular uprising against an authoritarian leader. It says far more about his paranoia and insecurity than about Russian strength.

Ultimately, U.S. policymakers would be wise to remember these factors. Putin isn’t a strategic genius, matching up against the United States in some geopolitical game. Instead, he’s making a gamble in Syria, hoping for diplomatic and domestic gain.  It’s likely he’ll regret his decision. 

One of the most controversial and radical moves implemented during the populist rule of Cristina Fernández de Kirchner in Argentina was the nationalization of private pension funds in 2008.

Not only did the government seize $29.3 billion in pension savings but, since the private pension funds owned stock in a multitude of companies, the government also seized that stock and used it to appoint cronies to their boards. This significantly increased the government’s control over the private sector.

Even though none of the opposition candidates has proposed rolling back the nationalization of the pension funds, the Kirchner administration is taking no chances. This week the government enacted a law that makes it extremely difficult for future administrations to sell the stock: from now on, any sale will require a two-thirds majority vote in both chambers of Congress. Since kirchnerismo will likely remain a significant political force in Congress for the foreseeable future, it will enjoy veto power over any future sale of the stock regardless of who wins the presidential election in late October.

Tellingly, the Argentine government has also drafted legislation that would limit the extraordinary executive powers that the presidency has accumulated since the Kirchner couple came to power in 2003 (Cristina was preceded by her husband Nestor). But don’t count on Cristina discovering her inner Montesquieu. The Kirchner administration has signaled that the bill would be approved only if an opposition candidate wins the election.

Thus, even though Cristina might have only a few more months in power, much of her economic model will live on.