Cato Op-Eds

Individual Liberty, Free Markets, and Peace

When the Federal Open Market Committee (FOMC) meets in Washington next week, its members are widely expected to vote to raise interest rates for the first time since June 2006.  By doing so, they will move towards monetary policy normalization after more than seven years of near-zero interest rates and a vast expansion of the central bank’s balance sheet.

But how did monetary policy become so abnormal in the first place?  Were the Fed’s unconventional monetary policies a success?  And how smoothly will implementation of the Fed’s so-called “exit strategy” go?  These are among the questions addressed by Dan Thornton, a former vice president of the Federal Reserve Bank of St. Louis, in “Requiem for QE,” the latest Policy Analysis from Cato’s Center for Monetary and Financial Alternatives.

Thornton begins with an account of the Fed’s deliberations and actions during the early stages of the financial crisis.  What’s particularly striking, looking back, is how the Fed resisted letting its balance sheet grow, or otherwise departing from its conventional, funds-rate targeting procedure, until the Great Recession was well underway.

In fact, from August 2007, when BNP Paribas suspended redemption of three of its investment funds, to September 2008, when Lehman Brothers filed for bankruptcy, the Fed loaned banks and others more than $300 billion.  But these loans were sterilized—meaning that the Fed sold an equal amount of government securities.  As a result, the Fed’s balance sheet didn’t grow, and neither did total bank reserves or the monetary base.  For over a year, in other words, the Fed reacted to a growing liquidity crisis not by taking steps to boost credit in general, but rather by reallocating the existing supply of credit towards particular, troubled firms — a move George Selgin examined (and criticized) in detail in a recent post on this blog.

The Fed’s approach flew in the face of widely acclaimed research presented by Milton Friedman and Anna Schwartz in their A Monetary History of the United States.  As Thornton points out, Friedman and Schwartz “connected substantial reductions in the nominal quantity of money and credit to correspondingly large declines in economic activity, and declared it the Fed’s duty to act quickly to prevent such reductions by expanding the monetary base.”  In 2007–08, however, no such action was forthcoming, and the crisis continued to intensify.

The sad part is that then-Fed chairman Ben Bernanke knew all this.  Back in 2002, he concluded a speech in honor of Milton Friedman’s 90th birthday by declaring: “I would like to say to Milton and Anna:  Regarding the Great Depression.  You’re right.  We did it.  We’re very sorry.  But thanks to you, we won’t do it again.”  And yet, as Thornton makes clear, “it” — failing to expand the balance sheet in order to prevent a collapse of credit — is “precisely what the Fed did in the months leading up to Lehman Brothers’ failure.”

Eventually, the Fed was forced to change tack.  Once Lehman Brothers collapsed, it found itself lending and buying assets on a scale that couldn’t possibly be sterilized via correspondingly large asset sales.  The Fed’s balance sheet grew, bank reserves began to pile up, and the federal funds rate dropped well below the FOMC’s target.  Unofficially, and almost by accident, quantitative easing (QE) began.

Quantitative easing was put on a formal footing in March 2009, when the FOMC announced the combined purchase of $750 billion of mortgage-backed securities and $300 billion of Treasuries.  From the very beginning, however, it was clear that the Fed was not belatedly adopting a Friedmanite approach.  The stated purpose of its asset purchases was not to boost the monetary base — that was just a side effect.  Instead, the Fed’s goal was to manipulate the yields on certain long-term assets, in the hope that this would spill over into broader markets, and the economy as a whole.

How was it meant to work?  Well, as Thornton points out, the rationale was “vague and highly uncertain,” and developed over time as QE was put into effect.  But the general idea was twofold.

First, by forcing down yields on the long-term assets it was purchasing, the Fed hoped other investors would be induced to substitute those assets for similar, but somewhat higher-yielding alternatives.  In turn, that shift in demand would push down yields on the alternative assets, encouraging investors who held them to rebalance their portfolios as well.  As this ripple effect spread through the financial markets, equilibrium interest rates would fall.

Second, the Fed hoped that QE announcements would signal to markets that monetary policy was going to be “persistently more accommodative” — Ben Bernanke’s words — than previously thought.  This would lower investor expectations for the path of short-term interest rates, and in so doing put additional downward pressure on long-term interest rates.

In short, then, the Fed expected QE to boost the economy through the interest rate channel of monetary policy: lower interest rates would boost equity prices and weaken the dollar; lower borrowing costs, higher wealth, and greater international competitiveness would drive spending, investment, and exports, and stimulate an ailing economy.

So did QE actually work in this way?  Thornton assesses these claims in detail (see, in particular, pp. 12–22 of his analysis), and finds little — either in economic theory or in empirical evidence — to support them.

For one thing, as large as the Fed’s asset purchases were relative to previous open market operations, they were modest compared to financial markets as a whole.  It is therefore “difficult to believe,” as Thornton puts it, “that the distributional effects of the FOMC’s quantitative easing policy could have had a significant effect on either relative yields or the level of the entire interest rate structure.”  What’s more, the event-study literature on QE announcements does not offer any convincing evidence of their effectiveness, once you account for the effect of other, simultaneous announcements (Fed statements typically contain a variety of news capable of influencing bond yields).

The claim that QE announcements changed expectations for the path of short-term interest rates is similarly hard to substantiate, not least because event studies only show the immediate impact of such announcements — and for QE to reduce long-term interest rates via this signaling channel, its effect on expectations must be persistent.  It is telling, moreover, that the Fed’s QE announcements did not even produce a consensus among FOMC participants about the expected path of the federal funds rate.  That being the case, it is hard to see how those same announcements could have had a significant impact on market expectations — especially when it is well-established that interest rates are, in Thornton’s words, “essentially unknowable beyond horizons of a few months.”

Of course, that QE did not work as the Fed hoped does not mean it had no effect at all.  As Thornton points out, “a monetary policy directed at keeping interest rates on low-risk securities near zero … is sure to cause people to seek higher yields by purchasing more risky assets.”  Thornton suggests that pension funds, in particular, have been forced to hold riskier portfolios to generate the returns necessary to meet their obligations.  As well as raising concerns about future financial crises, this distortion of investor behavior has had distributional effects: wealthy investors, who are better able to assume more risk, have benefited from booming equity prices, whereas less well-off pensioners — who have good reason to be risk averse — have had to settle for miserly returns on their fixed-income portfolios.

What QE hasn’t done, however, is enhance the aggregate supply of credit to the market.  This is hardly surprising, given that the Fed began paying interest on bank reserves in October 2008 — a move designed to encourage banks to build up excess reserves, instead of increasing lending.  Indeed, when it came up at FOMC meetings, Ben Bernanke, Janet Yellen, and others expressly rejected the idea that QE would stimulate the economy by boosting bank lending.  Still, when you couple this absence of increased credit with the lack of evidence that QE worked the way the Fed thought it would, you would be forgiven for wondering whether QE had any significant impact on output and employment at all.

What now?  The Fed’s large-scale asset purchases are over, and the FOMC’s interest rate target looks set to gradually rise.  But the Fed’s “exit strategy” remains a work in progress.  Interest on reserves (IOR), for example, may yet prove a troublesome way of raising the federal funds rate.  As Thornton points out, “An IOR of 3 percent … would see the Fed paying nearly $80 billion a year in interest to the commercial banking sector — something that is unlikely to prove popular in Congress or among the general public.”

Meanwhile, the Fed’s preferred alternative, overnight reverse repurchase agreements, would have to be rolled over continuously to have any effect on the size of the Fed’s balance sheet.  A better approach, according to Thornton, would be to “begin the process of policy normalization by selling long-term securities slowly … If things go well, sales could subsequently be accelerated.”  But the Fed seems reluctant to consider outright asset sales at all.  This suggests that the bloated central bank balance sheet that QE created could be with us for some time to come — whether or not the FOMC votes next week to begin the process of policy normalization.

Yet the real legacy of QE may come in how the FOMC reacts to the next crisis.  One concern is that the Fed seems to have lost “all confidence in the ability of markets to heal themselves,” and now assumes that “only monetary or fiscal policy actions” can “restore the economy’s health.”  Another worry is mission creep: “If the Fed can support the mortgage and commercial paper markets … why shouldn’t it support the market for student loans — or any other market for that matter?”  It’s a dispiriting prospect, but both of these observations suggest an increasingly hyperactive monetary authority going forward.

Ultimately, says Thornton, we should prepare ourselves for more of the same: given the opportunity, “the Fed will continue to distort markets until, by some as-yet-unknown magic, those markets return to normal.”  One suspects that such magic may be a long time coming.

[Cross-posted from Alt-M.org]

This fall, the Department of Homeland Security and its pro-national ID allies staged a push to move more states toward complying with REAL ID, the U.S. national ID law. The public agitation effort was so successful that passport offices in New Mexico were swamped with people fearing their drivers’ licenses would be invalid for federal purposes. A DHS official had to backtrack on a widely reported January 2016 deadline for state compliance.

DHS continues to imply that all but a few holdout states stand in the way of nationwide REAL ID compliance. The suggestion is that residents of recalcitrant jurisdictions will be hung out to dry soon, when the Transportation Security Administration starts turning away travelers who arrive at its airport checkpoints with IDs from non-compliant states.

At this writing, DHS’s “REAL ID Enforcement in Brief” page lists just two non-compliant jurisdictions: Minnesota and American Samoa. You’d take this as a signal of broad compliance. But look closely at the list of “Compliant/Extension” jurisdictions, and you see that more than half of them are non-compliant and enjoying extensions themselves.

But here’s the kicker: even the “compliant” states aren’t compliant. DHS treats states as “compliant” if they’re synched up with a pared-back “material compliance checklist.” This is DHS reducing the obligations of the law by fiat, and it’s no different from extending the deadline indefinitely on a subset of the law’s actual requirements.

When DHS bureaucrats tell state elected officials that the agency can no longer offer them extensions, that claim is false. DHS is currently giving every state in the union, including the ones it calls “compliant,” extensions with respect to provisions of the law and regulations that don’t appear on its checklist.

To help clarify what REAL ID compliance actually is, we’ve compiled a list of REAL ID’s actual requirements: what the law and regulations would mean for states. REAL ID has nearly 100 requirements, ranging from very easy, standard ID-card practices to things that are currently impossible and ill-advised. States have to do all of them to be compliant.

We’ve grouped the requirements together for easy reading. Many are back-end mandates, requiring states to meet standards for facilities, data storage, employee background checks, and so forth. Notably, REAL ID requires states to make driver data available to every other state, which means that every DMV must open its state’s residents’ data to the risk of exposure to any state that lacks sufficient information controls. That ill-advised policy is currently impossible, luckily, because the data sharing network has not yet been built. Congress puts money into the project annually in the DHS appropriations bill.

The full REAL ID requirements make it easy to see why no state is in compliance today, and why no state should expend its taxpayers’ money on the onerous, complex scheme to implement the U.S. national ID law.

Data Storage and Sharing Requirements

The regulations governing REAL ID require issuing authorities to meet certain standards for data storage, as well as to engage in data sharing with issuing authorities in other states and with DHS. This data sharing is the heart of the national ID, creating as it does a network of ostensibly state-level databases linked together according to federal standards.
•    States must maintain copies of source documents, applications and signed declarations
•    If the state allows for/recognizes name changes, it must maintain copies of the evidence of name change
•    If the state allows for/recognizes name changes, it must maintain a record of both the original and recorded names
•    If the state allows for alternate documents, it must maintain copies of the alternate documents
•    Paper copies of documents must be maintained for at least 7 years; digital copies and microfiche copies must be maintained for at least 10 years
•    States must store photo images (the aforementioned “mandatory facial image capture”) in JPEG 2000 format
•    States must maintain copies of the digital photograph of the applicant for a minimum of five years if no card is ultimately issued or for two years past the date of the card’s expiration if issued
•    States must store signature copies in TIF or TIF-compatible format
•    States must make photos retrievable for law enforcement, which may access them upon a proper request
•    States must maintain databases that contain all fields printed on licenses/IDs and SSNs of holders
•    Databases must contain full legal names of drivers
•    Databases must contain all other data presented by driver to the DMV but not printed on license
•    Databases must contain full driver histories
•    States must provide electronic access to the driver database(s) to other states

“Visible” Card Regulations

These are the most basic visible requirements of REAL ID: the physical format of the license/ID card and what information has to be displayed on it. Much of this information is common to both REAL ID and “non-federal” licenses. The required elements include:
•    Full legal name
•    Date of birth
•    Gender
•    Unique license/ID number
•    Digital photograph (black-and-white or color)
•    Holder’s primary address
•    Signature of holder (or an alternate mark/signifier for those unable to sign)
•    Machine readable bar code
•    Visible expiration date
•    Date of issuance
•    State/jurisdiction of issuance
•    Compliance mark (“Gold Star”)
•    Text exclusively in Latin script
Many of these requirements have been standard components of driver’s licenses for decades. The machine readable bar code requirement and the REAL ID compliance mark (the “Gold Star” on compliant licenses) are new with the act. In the event that the state/jurisdiction continues to make non-compliant licenses and IDs available to residents alongside compliant versions, two additional visible requirements exist:
•    Non-REAL ID licenses and IDs must prominently state on their face that they are not for federal purposes
•    Non-REAL ID licenses and IDs must use unique designs and/or color indicators to differentiate them from compliant versions

Non-Visible Card Regulations

These are regulations which also deal with physical requirements for compliance. However, they are requirements which are not apparent from a casual inspection of a license, or which would not be apparent without some knowledge of REAL ID’s regulatory requirements or without special equipment. (Encoded information is dealt with in the next section). They include:
•    A digital photo which meets ISO/IEC 19794-5:2005(e) requirements (an international standard for identity document photographs)
•    A digital photo which is either black-and-white or color
•    A digital photo which has been taken with facial image capture technology
•    Name field of at least 39 characters, with truncation of longer names in cases of extreme length
•    Multi-layered anti-fraud features (like watermarks, holograms and non-standard inks)
•    Durable physical materials used in production, which are unavailable to the general public and able to withstand inspection and use

Barcode Regulations

Machine readable barcodes are typically placed on the back of compliant licenses. These barcodes are encoded with the bearer’s personal information, largely duplicating the visible printed data, and including information on the card’s current design and how it was produced (an inventory control number). The barcode is designed to be scanned by law enforcement, government entities, and others with access to the appropriate reader.
•    Encoded legal name
•    Encoded date of birth
•    Encoded gender
•    Encoded address
•    Encoded state/jurisdiction of issuance
•    Encoded expiration date
•    Encoded date of issue
•    Encoded identification number
•    Encoded revision date of license/ID design
•    Encoded inventory control number

Application Requirements

These requirements deal with what has to be presented by the applicant during the application process for a REAL ID compliant license or ID card. They are the only set of requirements incumbent upon the applicant; all the other requirements are incumbent upon the issuing authority or the Department of Homeland Security. To receive a compliant license or ID, the applicant must present:
•    Recognized photo ID document
•    Proof of name change (if applicable)
•    Proof of date of birth
•    Proof of Social Security number OR proof of ineligibility for a number
•    Documentation of permanent address
•    Proof of lawful status
•    No foreign documents allowed, except for a foreign passport
•    Signature
•    Signature upon document attesting, under penalty of perjury, validity of everything presented

Issuing Requirements

These requirements are the counterpart to the application requirements; they are the requirements incumbent upon the issuing authority during the application and license/ID issuance process.
•    Verification of photo ID
•    Verification of date of birth
•    Verification of full Social Security Number
•    Verification of documentation of permanent address
•    Verification of proof of lawful status
•    Verification of presented ID documents with issuing authority
•    Photograph each applicant using “mandatory facial image capture”
•    Resolve any issues arising from SSN verification or document verification
•    Confirm with other appropriate states/jurisdictions that any prior out-of-state licenses have been terminated (or will be terminated upon receipt of the new license)
•    Make a reasonable effort to ensure that the applicant does not hold a license in another jurisdiction under a different name
•    Verify out-of-state REAL IDs (if presented as a form of ID) with the issuing authority
•    Limit validity of new license/ID to 8 years maximum
•    States are not required to comply with all of the requirements if the license/ID is to be issued in support of federal, state, or local criminal justice programs (e.g., witness protection programs, federal judges with protected addresses, federal law enforcement officials, etc.)

Renewal Requirements

These requirements follow from the issuing requirements, and deal with additional protocols during a renewal. One non-in-person renewal is allowed for a REAL ID compliant license/ID during each renewal cycle; the next renewal must be in person.
•    States may permit remote renewal, if state law allows for it
•    During a remote renewal, the state must re-verify the license holder’s SSN
•    However, states must require in-person renewal of the license/ID no less frequently than every 16 years
•    Re-verify the license holder’s SSN during an in-person renewal
•    Re-verify the license holder’s proof of legal presence if the holder is a non-citizen/permanent resident
•    Take an updated digital photograph of the holder
•    If there has been a material change in the applicant’s information (e.g., a name change or a sex change), an in-person renewal is mandatory
•    Renewed licenses issued prior to the material compliance date set by DHS will be accepted for official purposes until that date

Exemption/Alternate Requirements

•    States may establish a well-documented and DHS-approved exemptions process for applicants able only to present alternate/non-standard documents
•    States with exemption processes must make an effort to authenticate alternate documents and must note the use of alternate documents in their records
•    States must conduct periodic reviews of the exemptions process
•    States will only issue temporary REAL ID-compliant licenses/IDs to persons under certain categories of legal status (like refugees, certain visa holders)
•    Temporary licenses/IDs shall only be valid for the period of authorized stay or, if indefinite, one year (subject to renewal)
•    Temporary licenses must prominently display expiration date
•    Temporary licenses shall only be renewed upon presentation of valid documentary proof of continued/extended status
•    Presented documents for continued/extended status must be verified with the appropriate federal authorities

DMV Security/Physical Requirements

•    States must produce a security plan, which must be submitted to DHS as part of REAL ID certification
•    The plan must ensure the physical security of locations where licenses and ID cards are produced
•    The plan must ensure the physical security of document materials and papers from which licenses and IDs are produced
•    The plan must address the security of personal information held at responsible DMV facilities
•    The plan must address document and physical security features of the card, consistent with regulations
•    The plan must address access control for employees and systems
•    The plan must establish fraudulent document recognition training programs for DMV employees
•    The plan must address emergency/incident response at facilities
•    The plan must address internal audit controls
•    The plan must include an affirmation that the State possesses both the authority and the means to produce, revise, expunge, and protect the confidentiality of REAL ID driver’s licenses or identification cards issued in support of Federal, State, or local criminal justice agencies or similar programs that require special licensing or identification to safeguard persons or support their official duties
•    States must ensure the physical security of materials and locations used in the production of licenses/IDs, consistent with security measures detailed in the security plan
•    The state must subject all persons authorized to manufacture or produce licenses and IDs to appropriate security requirements
•    The state must perform background, criminal history, employment history, and reference checks on REAL ID-responsible DMV employees
•    Employees subject to checks prior to 2006 need not be rechecked

A group of prominent conservatives recently released an ObamaCare replacement plan that would replicate many of that law’s worst features. As I explain in a new post at Darwin’s Fool, conservatives need to examine this proposal closely against the alternative. An excerpt:

If you’re a conservative and you’re reading this, chances are good you have a gun to your head. Conservatives are so averse to health policy that, as National Review‘s Ramesh Ponnuru once quipped, “Republicans will do anything to repeal ObamaCare–except think about health care.” This is no small problem. Indeed, it is how we got ObamaCare in the first place: conservative neglect enabled a raft of very un-conservative health care ideas to germinate at the Heritage Foundation for a decade and a half. By the time Democrats picked up those ideas and ran with them in 2009, it was too late. Conservatives were powerless to stop them.

Conservatives may indeed be just one election away from repealing ObamaCare, which is all to the good. But some conservatives have proposed replacing ObamaCare with refundable tax credits for health insurance.  Tax credits are ObamaCare-lite. They would cement in place many of ObamaCare’s worst features, and replicate its awful results. If those features acquire a bipartisan imprimatur, we will never in our lifetimes be rid of them. Unless conservatives give tax credits the scrutiny they should have applied to the Heritage Foundation plan in the 1990s, they will make the same mistake all over again.

Conservatives don’t have to repeat history. A better set of reforms offers a clear path toward a market system, and away from ObamaCare, by building on the bedrock conservative idea of health savings accounts (HSAs). “Large” HSAs would deliver better, more affordable, and more secure health care, particularly for the most vulnerable. At the same time, Large HSAs would give workers a larger effective tax cut than all the Reagan and Bush tax cuts combined, and nine times larger than repealing ObamaCare.

Read the whole thing.

The other night, Fox News host Bill O’Reilly again insisted that ISIS poses a dire threat to the United States.  On this occasion, though, O’Reilly surpassed the shrill warnings of his ideological colleagues, insisting that the situation was the same as the nation faced in 1938 with the rise of Nazi Germany and its fascist allies.  

It is a preposterous comparison.  In 1938, three of the top seven world powers were governed by fascist regimes and were linked together in the Tripartite Alliance.  Those three countries, Germany, Italy, and Japan, were all modern, powerful nation states, with large, productive economies.  They also were able to field several million ground troops backed by extremely capable air and naval forces.  Together, those countries posed a credible threat not only to American security but to the entire global balance of power.

The resources ISIS can draw upon are puny by comparison.  The movement controls a very limited territory, the shaky “caliphate” in western Iraq and eastern Syria, and that redoubt is nearly surrounded by hostile regional forces—Iran and its Shiite allies in Iraq, the Kurds, and the Alawite-led government in Syria.  The correlation of forces was not favorable to ISIS even before Russia added its considerable military weight to the anti-ISIS coalition.

The closest historical model to the ISIS threat is not the menace that the fascist powers posed in the late 1930s, but the far more limited one that radical anarchists mounted in the last half of the nineteenth century.  Akil N. Awan, Associate Professor in Modern History, Political Violence and Terrorism at the University of London, ably shows the similarities in a recent article in National Interest Online.

Awan points out that anarchists were responsible for an alarming series of mass shootings, bombings, and high-profile assassinations. An 1893 attack on the opera house in Barcelona, Spain, which killed 22 people and wounded another 35, was eerily similar to the recent incident in Paris.  In a period of less than five decades, anarchists assassinated two U.S. presidents, a Russian czar, an Austro-Hungarian empress, an Italian president, a French president, an Italian king, and two Spanish prime ministers.  And as in our own era, there was a growing atmosphere of panic, with calls for drastic measures that would do lasting damage to fundamental civil liberties.

We should keep that historical precedent firmly in mind when we see similar efforts to hype the ISIS threat and use it as a justification for draconian security measures. The recent ISIS attacks are scary (indeed, that is the whole point of terrorist tactics), but my colleague Mike Tanner correctly placed the risk in context in an article on National Review Online.  Mike notes that the risk of an American dying in a terrorist assault is still an infinitesimal one in 20 million.  Indeed, the chance of dying in an automobile accident is nearly 200 times greater.  It is far more rational to worry about an incompetent, distracted, or drunk driver barreling down the road at you the next time you get into your automobile than it is to fret about being the victim of a terrorist attack.

ISIS is the twenty-first century equivalent of the nineteenth century anarchists—a movement of violent malcontents with very limited power.  It is a wild exaggeration to compare the threat they pose to the fascist powers of the 1930s (or the Soviet Union during the Cold War).  Invoking such a false historical analogy to panic the American people does a profound disservice both to historical truth and the fabric of liberty.

The two Koreas recently chatted at Panmunjom, the truce village within the Demilitarized Zone. They reached an agreement: to talk some more, starting on Friday.

That’s the way it usually is. When there’s a specific issue that must be resolved, real results sometimes are reached. But promises of future talks usually fall short.

Will this time be any different? The two sides scheduled talks with vice ministers for December.

Even if the discussions actually occur, the agenda remains unclear. The joint statement pointed to “issues that will improve relations between the South and the North.”

But those issues, of which there are many, rarely have been susceptible to settlement via negotiation. Most problems on the peninsula grow out of the North Korean regime’s determined misbehavior.

Topping the list for the South was family reunions for “divided families,” according to Ministry of Unification (MOU) spokesman Jung Joon-hee. Visits are a fine humanitarian gesture, but irrelevant to the larger geopolitical conflict.

Jung said that at the meeting the so-called Democratic People’s Republic of Korea focused on restarting tours of its Mount Kumgang resort. Those tours were suspended years ago after a North Korean guard shot and killed a tourist who wandered into a forbidden area.

Although it went unmentioned at the Thanksgiving meeting, North Korea also desires a resumption of aid, which was suspended (the “May 24 measures,” as they often are called) after the sinking of a South Korean warship and bombardment of a South Korean island in 2010. But the DPRK never accepted responsibility for the first and justified the second as defensive.

There also could be discussion of reunification, but no serious person believes it is possible with the current regime in Pyongyang. Conventional arms control would be a logical topic, but so far the North has seen little reason to drop its threatening military posture.

The most important issue is Pyongyang’s nuclear weapons program. However, to believe that Kim Jong-un is prepared to negotiate away the military’s most important and expensive weapon is to believe in the Tooth Fairy or Great Pumpkin.

Still, talks are better than no talks. As Winston Churchill observed, better to “jaw-jaw” than “war-war.”

Perhaps the best policy is to seek to expand North Korean contacts with the West, including the U.S., for several reasons.

First, there is no reason to think the Kim monarchy (with Communist characteristics) is likely to disappear. The regime has withstood famine, poverty, and the death of two dominant dictators.

Second, the regime has been less confrontational when engaged diplomatically. That suggests the North actually desires engagement, and perversely is willing to threaten to get it. There’s little reason not to respond positively, as long as expectations are kept low.

Third, change actually is occurring in North Korea. Private markets continue to spread. Moreover, Paul Tjia, a Dutch business consultant who works in North Korea, recently observed: “The country is really opening up. They want more investment and trade.”

Most important, the DPRK economy has been growing. Cho Dong-ho, a professor at Ewha Womans University, recently argued that annual growth was likely close to five percent. Felix Abt, who co-founded a business school in the North, reported: “Poverty has dropped and, equally visible, a middle class has emerged.”

Fourth, coercion has failed. As long as China refuses to cut off energy and food, the Kim regime is likely to survive. If the West is forced to live with Pyongyang’s current rulers, it’s worth considering another approach. Abt contended that in his experience “intense interaction can lead to many changes.” He acknowledged fears of propping up the regime, but believed involvement “also helps transform it.”

As I explain in National Interest online: “Rather than the gift that keeps giving, North Korea is the horror story that keeps playing. Attempting to ignore it isn’t working, however. The U.S. should follow the South in addressing Pyongyang.”

Expectations should be low, even nil. But society is changing within the DPRK. When no other policy seems to work, why not give engagement a try?

President Obama has just signed the Every Student Succeeds Act, ending the era of No Child Left Behind. If nothing else, that big majorities of both parties in Congress felt the need to greatly ease federal force in elementary and secondary education – at least overt federal force – is a powerful testament to the breadth of the public backlash against federally driven standardization, testing, and “accountability.” That backlash may well have hit a tipping point thanks to the Common Core, through which the federal government attempted to get states not just to have state curriculum standards and tests, but national standards and tests. In other words, Washington began to influence the specifics of what children across the country would learn.

Is the ESSA much better than NCLB? No, and it could potentially end up taking very little power away from Washington even though the language surrounding it has been all about returning authority to states and districts. But that the rhetoric about the federal role has had to change so greatly is a very encouraging thing.

Of course, the work of getting Washington to obey the Constitution by getting out of education – and of fundamentally changing the education system to one based in freedom – is nowhere near complete. But at least things may be heading in the right direction.

A bill before Congress would practically give the Forest Service a blank check for firefighting. HR 167, the Wildfire Disaster Funding Act, proposes to allow the Forest Service to tap into federal disaster relief funds whenever its annual firefighting appropriation runs out of money. It’s not quite a blank check, as the bill would limit the Forest Service to $2.9 billion in firefighting expenses per year, but that’s not much of a limit (yet): the most the agency has ever spent in a year was $1.501 billion, in 2006.

The Forest Service puts out fires by dumping money on them.

Having a blank check is nothing new for the Forest Service. In 1908, Congress literally gave the agency a blank check for fire suppression, promising to refund all fire suppression costs at the end of each year. As far as I know, this is the only time in history that a democratically elected legislature gave a bureaucracy a blank check to do anything: even in wartime, the Defense Department had to live within a budget.

Due to rising firefighting costs, Congress repealed the Forest Service’s blank check in about 1978, giving the agency a fixed amount each year and telling it to save money in the wet years to spend in the dry years. The agency actually reduced its costs for about a decade, but then two severe fire years in 1987 and 1988 led the Forest Service to borrow heavily from its reforestation fund. Congress eventually reimbursed this fund, and costs have been growing ever since.

In the 1970s, when firefighting costs were so out of control that Congress repealed the blank check, the agency spent about 10 to 20 percent of its national forest management funds on fire. Today, even though the agency’s budget has kept up with inflation, more than half goes for fire.

Yet there is some restraint on what the agency spends. In severe fire years, it has to borrow money from its other programs, putting a crimp in those activities. Congress eventually reimburses that money, but in the meantime fire managers are aware that their spending is having an impact on other agency projects.

The new law would eliminate even this restraint. Proponents argue that this will protect those other Forest Service programs from disruption, but since those programs are eventually funded anyway, all it will really do is unleash fire managers to spend without limit (at least up to $2.9 billion, after which Congress will no doubt raise the limit).

The Bureau of Land Management and other agencies in the Department of the Interior (the Forest Service, for those who don’t know, is in the Department of Agriculture) have never had a blank check and spend considerably less on wildfire than the Forest Service. From 2010 to 2014, the Forest Service spent an average of $914 per acre burned, while Interior agencies spent an average of just $171. That’s a difference of more than five times.

Firefighters on the ground are acutely aware of this difference. They sometimes say that the Forest Service fights fires by dumping money on them. Firefighters who work for the BLM, Park Service, or other Interior agencies look with awe on Forest Service firefighters and the resources they can bring to bear against fire.

Yet a lot of that spending is of little value. The rest of the saying about the Forest Service dumping money on fires goes, “… until it rains, and then the rain puts the fire out.”

Some in the Forest Service argue that firefighting costs have grown because decades of fire suppression have left the forests more prone to fire. This is contradicted by a 2002 Forest Service report, which found that, unlike southern forests, most western forests are not susceptible to becoming more fire prone in the absence of fire, and that, of those that are, only a portion have been significantly altered by years of fire suppression (which, frankly, wasn’t very successful anyway). The report concluded that only about 6 percent of federal lands in the West were more susceptible to fire due to fire suppression.

Historically, acres burned have always been a function of the percent of land that is severely to extremely dry in the summer months (July, August, and September). The correlation between this number and the number of acres burned is greater than 90 percent. If a few fires have been larger in recent years, it is mainly because, since the 1994 South Canyon Fire that killed 14 firefighters, the Forest Service has allowed more acres to burn in order to protect firefighter lives.

The best solution would be to stop subsidizing national forest management altogether and let forest managers figure out how much to spend on fire out of the user fees they collect. Short of that, the Forest Service could contract out its firefighting needs to state forest fire agencies, which the BLM and other Interior agencies sometimes do to save money.

Instead, HR 167 proposes to deal with fire by dumping money on it. “Let’s be clear: we are always going to pay whatever it costs to fight catastrophic wildfires,” say the bill’s proponents. “We do this today, and we will continue to do so.” They obviously don’t realize that this approach simply makes firefighting more expensive.

Today is Human Rights Day, a time we should celebrate great advances in human freedom through history—the rise of the rule of law, the abolition of slavery, the spread of religious liberty, the secular decline of violence, respect for free speech, etc.—as well as honor those groups and individuals working to promote or safeguard human rights in the many parts of the world where they are currently being violated or threatened.

At Cato, we have been honored to host and work with human rights champions from around the globe, all of whom have suffered persecution for speaking truth to power. The list includes renowned Soviet dissident Vladimir Bukovsky, independent Cuban blogger and journalist Yoani Sanchez, Malaysian politician and former deputy prime minister Anwar Ibrahim, Venezuelan opposition leader Maria Corina Machado, Russian liberty advocate Garry Kasparov, Chinese activist Chen Guangcheng (sometimes known as the blind “barefoot lawyer”), and many more.

Because we believe in the inherent dignity of individuals, human freedom is worth defending. For that reason, and because freedom plays a central role in human progress, it is also worth gaining a better measure and understanding of the spread of, and limitations on, freedom around the world. That’s why we created the Human Freedom Index in conjunction with the Fraser Institute and the Liberales Institut. The index is the most comprehensive global measure of civil, personal, and economic freedom so far devised. And although Human Rights Day technically commemorates the Universal Declaration of Human Rights, we think the Human Freedom Index and its definition of freedom—the absence of coercive constraint—can help us think more carefully about the state of freedom around the world.

You may view the index here, see how countries and regions of the world rank, examine how income and democracy relate to freedom, get a sense of how various freedoms relate to one another, and otherwise gauge how the world is doing on 76 distinct indicators.

Other Cato activities and publications that may be of interest on Human Rights Day include:

Recent events

“The Deteriorating State of Human Rights in China”

“Property Rights Are Human Rights: Why and How Land Titles Matter to Indigenous People”

“Islam, Identity, and the Future of Liberty in Muslim Countries”

“Magna Carta and the Rule of Law around the World”

“The Moral Arc: How Science and Reason Lead Humanity toward Truth, Justice and Freedom”

Publications

The Tyranny of Silence by Flemming Rose

The Power of Freedom: Uniting Human Rights and Development by Jean-Pierre Chauffour

Realizing Freedom by Tom Palmer

“Islam and the Spread of Individual Freedoms: The Case of Morocco” by Ahmed Benchemsi

“Capitalism’s Assault on the Indian Caste System,” by Swami Aiyar

“Magna Carta’s Importance for America,” by Roger Pilon

Before becoming wedded to statism in America, liberalism was a philosophy of liberation. But while leading liberals of the past advocated peace, many foreign (“classical”) liberals today favor war—at least, if conducted by America.

For instance, former chess champion Garry Kasparov has taken on the heroic but thankless task of battling for democracy in his Russian homeland. Alas, he also is surprisingly generous with other people’s lives. He recently declared: “Anything less than a major U.S. and NATO-led ground offensive against ISIS will be a guarantee of continued failure and more terror attacks in the West.”

Kasparov is confused over cause and effect, since terrorism most often follows intervention, as did the recent Islamic State strikes against France, Hezbollah and Russia. But there is a more basic point.

It’s easy for a celebrity Russian living in the West to argue that it is the job of Americans, with maybe a couple Europeans tossed in, to destroy ISIS, save Syria, and more. But there’s actually nothing liberal in pushing a broader, longer war on others.

Kasparov is not alone. A number of foreign liberals—Lithuanian, Russian, Slovakian, Swedish, for instance—have criticized American libertarians for advocating a non-interventionist foreign policy. They’ve instead argued that a “compelling” argument can be made for a “globalist” strategy.

Actually, that’s true only so long as one isn’t paying the cost of the foreign policy, as foreigners typically do not for American intervention unless it is directed at them.

Indeed, foreign liberals who call for intervention mostly talk about America. After all, the Russian government is interventionist, but not in the right way. Lithuania, Slovakia, and Sweden have minuscule militaries. No one cares whether the latter three countries even have a foreign policy.

About the only option for them is to ask someone else, namely America, to defend them. Thus, when they advance “collective security,” they really mean Americans should do the creating and investing—and, ultimately, fighting.

But U.S. foreign policy should, indeed, must, be guided by what is in the interest of those doing the paying and dying, namely the American people. The Pentagon exists to protect them, and the liberal republic which governs them, not conduct grand “liberal” crusades around the world.

First, as social critic Randolph Bourne warned, “War is the health of the state.” Military spending is the price of one’s foreign policy. Moreover, war kills, disables, and wounds. The national security state generates economic controls, restraints on civil liberties, and restrictions on political freedoms.

Second, U.S. alliances act as a form of international welfare: when Washington provides for allies’ defense, it ensures that they will not provide for their own. Yet today the European Union enjoys a greater GDP and population than America.

Third, an interventionist, warlike policy kills. Not just Americans, but foreigners. The foolish Iraq invasion unleashed sectarian war that killed perhaps 200,000 Iraqis before ebbing, only to flare again under the Islamic State, a malign force spawned by the conflict.

Fourth, Washington does badly at social engineering at home. It does far worse attempting to remake the world, especially the Middle East.

As I argue in the American Conservative, “Given these realities, the kind of aggressive U.S. policy toward Russia desired by many foreign liberals would be foolish and, yes, illiberal, for America. Russian activities harm the liberties of other peoples. Doing more to stop Moscow would do greater damage to the liberties of Americans.”

Moreover, where is Europe? The continent enjoys around eight times the GDP and three times the population of Russia.

Is the result a good outcome? No. But nothing in liberal philosophy requires residents of the globe’s most powerful “liberal” nation to bankrupt themselves, sacrifice their liberty, and court national destruction to try to remake the earth. Americans, especially traditional liberals, should choose domestic peace over international conflict. 

Fool me once, shame on you; fool me twice, shame on me. In this case, the Palmetto State, following the lead of other state and federal regulators, has added a new twist to that old saying: fool no one, pay $124 million to the treasury.

Ortho-McNeil-Janssen (“Janssen”) is a pharmaceutical company that distributes a popular antipsychotic drug known as Risperdal. In the 1990s and early 2000s, Risperdal was in fierce competition for market dominance, and Janssen made some questionable claims about the drug’s side effects. The FDA investigated and compelled the company to correct some defective warning labels.

South Carolina regulators, however, despite the FDA’s settlement of the matter, commenced state action against Janssen under the state’s Unfair Trade Practices Act. That action worked its way up to the state supreme court, which ultimately confirmed a $124 million penalty against the company. That massive fine was sustained on the theory that each labeling violation was its own violation of the statute, worth up to $5,000 each, rather than the overall labeling violation counting as one singular misdeed.

Such a large penalty, disproportionate to the actual harm caused (none), runs afoul of the Eighth Amendment requirement that “excessive fines [not be] imposed.” Cato has filed an amicus brief calling for the U.S. Supreme Court to reverse the decisions below and clarify the scope of the Excessive Fines Clause.

South Carolina’s statute, like many similar state laws, is poorly worded and fails to define whether each individual manifestation of a regulatory violation is cognizable as an offense. Taking advantage of that lack of specificity, South Carolina converted a potential $5,000 fine into a $124 million one. Because of the huge numbers that can be achieved by multiplying even modest per-violation fines, state and federal regulators are often able to secure grandiose settlements and thereby insulate their fines from judicial review.

Moreover, the state supreme court here accepted this theory in the face of no evidence of harm resulting from the allegedly improper statements. The U.S. Supreme Court has said that under the Excessive Fines Clause, the monetary penalty imposed shall not be “grossly disproportional to the gravity of the defendant’s offense.” United States v. Bajakajian (1998). A finding of no harmful effect attached to 9- or 10-figure penalties blows any notion of proportionality out of the water.

And South Carolina is not the only state where this is occurring. For example, an Arkansas court imposed a $1.2 billion penalty for purported misstatements about the same drug at issue here, on the theory that the Arkansas Medicaid Fraud False Claims Act was violated each time the drug was prescribed or re-filled. Other cases have revealed penalties as high as 20 or 46 times the harm suffered by consumers.

The Supreme Court should take this opportunity to reaffirm that the Eighth Amendment’s Excessive Fines Clause imposes a judicially enforceable limit on grossly disproportional fines. It will consider next month whether to take up Ortho-McNeil-Janssen Pharmaceuticals, Inc. v. South Carolina.

The federal government uses protectionist country-of-origin labeling (COOL) regulations to privilege a certain segment of the U.S. cattle industry at the expense of meat processors, retailers, and consumers.  Due to a successful challenge by Canada and Mexico at the World Trade Organization, and the resulting threat of trade retaliation, Congress may finally repeal the law.  This is good news.

I explained last month in The Hill (online) what’s wrong with the COOL law:

Under current U.S. regulations, meat produced in the United States and sold in American grocery stores must carry a label indicating in which country or countries the animal was born, raised, and slaughtered. In order to comply with this law, American meat processors have to keep track of where each animal was born or raised and segregate any border-crossing cattle to ensure accurate labels. The requirement imposes a significant cost on processors, which they can avoid if the only cattle they purchase are born and raised in the United States.

The WTO ruled against the labeling law because much of what the law requires burdens processors who buy Canadian cattle without conferring any benefit on consumers.

In a free market, consumers receive product information if they care enough about it to pay for that information.  Sometimes providing that information is cheap and sometimes it’s expensive. 

When the government comes in to mandate labels, it’s because someone wants consumers to have information that consumers don’t actually care enough about to pay for.  Mandated labels also reflect what the government (and lobbyists) want people to know, not what actually matters to consumers.  In the case of COOL, protectionists think Americans will buy beef from U.S.-origin cattle if they have that information thrust upon them even though what Americans really want is high-quality food at a low price. There’s a second layer of rent-seeking here, because compliance costs privilege domestic ranchers regardless of consumer response to the labels.

Supporters of the law rely largely on the claim that consumers have a “right to know” where their food comes from.  A quick look at the costs and benefits of providing this “right to know” through the existing mandatory country of origin labeling scheme reveals how simplistic formulations of positive rights do more harm than good.

If Americans have the right to know what country the animals they eat were born or raised in, do they also have a right to know what state a domestic animal came from?  What about the ranch it lived at or the direction it typically faced while grazing?  Do we have a right to know the animal’s name or favorite Taylor Swift song? 

The questions may sound silly, but can you answer them and explain why such labels should or should not be required?  The answer surely depends on what limiting principle, if any, defines the contours of the “right to know.”  Perhaps the right to know depends on the costs of acquiring or providing the information.  The rhetoric of rights, however, implies that the costs are irrelevant.  Weighing costs and benefits certainly won’t justify the COOL law, which was found to have a net negative impact on the U.S. economy by the Office of Management and Budget in 2004 and by the USDA itself in 2015.

Perhaps we only have a “right to know” things that matter—but does it matter that the animal whose meat you’re eating was born in Canada?  Who decides what matters?  Personally, I can’t think of a single reason to care what side of the 49th parallel my steak was on when it began its life.  Why is that more important than knowing which Dakota it came from or what pitch it mooed on?  Assertions of a vague right to know don’t answer any of these questions, which are at the heart of a policy debate over how strict and expensive COOL regulations should be.

What we do know is that the “right to know” is a catalyst for cronyism and inefficiency.  In a free market, the simple desire to know is enough to prompt the supply of information to consumers at a price they want to pay.

Over the last month, GOP presidential hopeful Donald Trump’s counterterrorism policy prescriptions have included creating a database of Arab and Muslim Americans, and more recently, a call for a ban on all Arab/Muslim immigration to the United States. While he has yet to call for the creation of WW II-style ethnic/religious concentration camps for our Arab/Muslim American neighbors, at this point nothing seems beyond the pale for Trump. Unfortunately, as I have noted before, when it comes to stigmatizing–if not de facto demonizing–Arab/Muslim Americans, he’s getting some help from DHS, DoJ, and the legislative branch.

Indeed, in the ongoing legislative battle to pass dubious cybersecurity legislation, House Homeland Security Chairman Mike McCaul (R-TX) is being wooed to support the revised cyber information sharing bill with a new carrot: the inclusion of his “countering violent extremism” (CVE) bill in the FY16 omnibus spending bill–a measure condemned earlier this year by civil society groups from across the political spectrum.

To date, McCaul has been opposed to the Senate’s approach to cybersecurity issues in the form of the Cybersecurity Information Sharing Act (CISA), and, keeping that in mind, House and Senate supporters have largely excluded him from their negotiations over a final cyber bill. Dangling the inclusion of his CVE legislation in the omnibus is a clear effort to get McCaul to drop his opposition to CISA by giving him one of his priorities. Passage of CVE legislation would create yet another bureaucracy in DHS to essentially monitor the Arab/Muslim American population for signs of extremism.

The fact that a similar CVE effort in the U.K. failed miserably has not deterred Congressional boosters like McCaul from pursuing that same discredited approach at the expense of the civil and constitutional liberties of a vulnerable minority population. The expense to American taxpayers, meanwhile, is likely to be at least an additional $10 million per year for the proposed DHS CVE office.

As former NBC Nightly News anchor Tom Brokaw reminded us this week, Arab and Muslim Americans have died for the United States in Iraq and Afghanistan. They have paid for our freedom with their blood and their lives. Proposals that would strip them of their rights and attempt to turn them into political and societal lepers should be repudiated–vocally and forcefully. Those who propose such un-American and unconstitutional discrimination are the ones who should be shunned and permanently confined to the unhinged fringes of American political and social life.

Despite some of the breathless headlines, Finland is not adopting a national universal basic income. That is, Finland is not scrapping the existing welfare system and distributing the same cash benefit to every adult citizen without additional strings or eligibility criteria. Finland is moving forward with one of the most extensive and rigorous basic income experiments in decades, which could help answer some of the lingering questions surrounding the basic income. The failures of the current system are well documented, but there are concerns about costs and potential work disincentives with a basic income. Finland’s experiment could prove invaluable in trying to answer some of these questions, and in determining whether some kind of basic income or negative income tax would be a preferable alternative to the tangled web of programs in place now.

The Finnish Social Insurance Institution (Kela) will lead a consortium of think tanks, universities, and businesses in surveying the existing literature, analyzing past experiments, and designing different models to test in Finland. They will present an interim report next March, where the government will decide which models to develop further. The consortium will present a final report in November, after which the government will choose which models to actually test. The experiment will begin in 2017 and last for two years, after which the consortium will begin to evaluate the results.

One of the most important issues with any basic income proposal is deciding whether it would replace the current system or be added on to the existing structure. (The latter, of course, does not have much appeal from a limited-government perspective.) The consortium is considering multiple models, as Kela’s presentation shows: 

In the full model, most safety net programs would be replaced with a fairly high basic income, while a partial model would purportedly keep some programs, such as housing assistance, intact. The consortium is also exploring a negative income tax, where benefits would be phased out with earned income. At this early stage, these models are in flux and not fully developed. It’s not yet clear which programs would be replaced in which models or what the benefit level would be. These developments should be closely monitored as the working group solidifies more details.

Finnish politicians may decide to test multiple models, so the experiments could give a better understanding of how the effects of a basic income differ from a negative income tax, for example. Kela has also expressed interest in conducting not only a national experiment, where randomly selected Finns around the country are given the basic income to examine its effects on work effort and well-being, but also county level and local experiments where larger proportions of the target population get the benefit. These local and county experiments would help the researchers analyze the effects of a basic income beyond the individual. At the community level, they may see how businesses, neighborhoods, and other government programs are affected.

Even with these studies, some uncertainty will remain. We won’t know how these results would translate to other countries that have different economies, fiscal situations, and welfare systems. The studies only last two years, so longer-term effects over the course of a person’s life or subsequent generations will not be understood. Even with these limitations, this would be the largest and most comprehensive basic income experiment to date.

Some aspects of a basic income are intriguing. The current system is deeply flawed, so doing away with the dozens of different government programs and bureaucracies has some appeal. But too many questions remain regarding cost and impact on work incentives. My colleague Michael Tanner explored these issues in depth in his paper earlier this year, and an issue of Cato Unbound allowed proponents and skeptics to suss out the topic.  The Finnish experiments, and similar developments in Switzerland and cities like Utrecht, could help answer some of the many questions raised by a basic income proposal. Stay tuned. 

Coming out of oral argument in Fisher v. UT-Austin, I have a frustrating sense of déjà vu all over again. Not simply because this is the second iteration of Abigail Fisher’s plea not to be judged by skin color, but because every time the Supreme Court takes up affirmative action both sides talk past each other and the issue is (not) resolved by a mushy baby-splitter like Justice Lewis Powell or Sandra Day O’Connor. Regardless of what the particular legal issues may be, one side pushes racial preferences forever (for whatever reason, currently “diversity”) and the other says never (because the way to stop racial discrimination is to stop discriminating on race). The ultimate ruling inevitably rejects the specific use of race at issue but keeps the door open for future uses – chasing some Goldilocks ideal of “race consciousness” but not too much.

Fisher II is no different. I’ll let others provide detailed exegeses of the justices’ repartee, but the bottom line is that there aren’t any surprises here. With Justice Elena Kagan recused, there’s a reduced three-justice liberal bloc staunchly in favor of UT-Austin’s holistic review (which Cato’s brief assails as being a black box that can’t pass the smell test, let alone strict scrutiny). Conversely, I heard nothing from Chief Justice Roberts or Justices Scalia/Thomas/Alito that would support the university. For that matter, Justice Kennedy – who dissented in the University of Michigan case of Grutter v. Bollinger (2003) that was Fisher’s precursor – didn’t say anything to indicate he would approve UT’s admissions program either, though at one point he suggested that a remand for fact-finding might be appropriate (later, he all but rejected that idea).

So we wait to see how broadly Kennedy wants to go. Will he merely vote to strike down the use of race in the admissions decisions complementing UT’s Top 10 program, or will he cast doubt on the use of race in educational administration altogether? Will he tighten the judicial standard of review that the Court set in Fisher I – making it essentially impossible to meet – or will he throw bones to both sides in a way that again avoids changing the status quo?

At some point, the Supreme Court has to realize that the hallowed “diversity” interest is both pretext and ephemera, and that an admissions program that uses race in a constitutional manner is a self-contradicting proposition. I don’t know if that day will come next June when Fisher is decided, but my fervent hope is that Justice Kennedy pushes his own jurisprudence further in that direction.

With Justice Antonin Scalia writing, the Supreme Court has unanimously ruled that a challenge may proceed in federal court arguing that Maryland’s ridiculous, convoluted congressional redistricting map violates the Constitution by harming some voters based on their political views. In general, the high court has declined to disturb partisan gerrymanders, no matter how flagrant, so long as they otherwise comply with equal-population requirements and the federal Voting Rights Act. The rationale for this position is that the Court has been unable to identify any principled standard to apply that would not draw it into a multitude of political disputes.

Yesterday’s unsurprising decision rests solely on a narrow procedural point – whether the claim, as not “obviously frivolous,” deserves to proceed to a three-judge panel rather than being thrown out by a single judge – and so augurs little about whether the Court has changed that view. The Court has signaled continued interest in redistricting, however, by accepting two other merits cases for argument this term, Wittman v. Personhuballah from Virginia and Harris v. Arizona Independent Redistricting Commission. These are in addition to the just-argued case of Evenwel v. Abbott, discussed this morning by my colleague Ilya Shapiro (no relation to Steve Shapiro, the plaintiff in the Maryland case).

Meanwhile, state interest in redistricting reform is gaining steam. Last month, Ohio voters overwhelmingly approved a plan to draw state legislative districts by nonpartisan commission, an idea that has advanced in a number of other states in recent years, especially out West.

I’ve had a chance to grapple with these issues myself this fall, because Maryland Gov. Larry Hogan was generous enough to appoint me as a private citizen as co-chair of his bipartisan Maryland Redistricting Reform Commission, created by executive order in August. After three months of hearings, research, and workshops, we filed our report last month, calling for Maryland to adopt an independent citizen commission format similar to California’s pioneering model, but simplified and adapted to fit our smaller state and its institutions. While libertarians have paid only sporadic attention to gerrymandering over the years, the practice exemplifies the manner in which self-serving use of government power can entrench and insulate a political class, enabling it to withstand discontent and correction from voters. 

The progressive group Envision Frederick County invited me to share my thoughts on the issue at greater length. Those thoughts, of course, represent my views alone, not those of anyone else at the Cato Institute. You can read them here, as well as the entire report of the governor’s commission. I hope they represent steps onward and upward to a fairer and more effective system of representation, both in Maryland and in other states afflicted by gerrymandering.

One of the most important aspects of the separation of powers is the commitment of the power of the purse to the legislative branch. It constrains the executive and the judiciary from engaging in unilateral action without congressional approval: if there’s no approval, there will be no money to pay for the executive action, as the rule would have it. Unsurprisingly, with the advent of the administrative state and an aggressive executive, this power has been significantly diminished in modern times.

Indeed, Article I, Section 8 of the Constitution provides expressly that “[t]he Congress shall have power to lay and collect taxes, duties, imposts and excises, to pay the debts,” to the exclusion of any other branch’s exercise of those powers. The upshot: the separation of powers, especially Congress’ power over appropriation priorities, is eroded as executive agencies and executive allies have access to funds not appropriated by Congress.

To keep its power of the purse intact, Congress enacted the Miscellaneous Receipts Statute in 1849. That law, now codified in Title 31 of the U.S. Code, requires all government officials in receipt of funds, such as settlements from civil or criminal enforcement, to deposit that money with the Treasury. As a structural matter, the law aims to stop executive agencies from self-funding through enforcement or other receipts of money, maintaining their dependence on Congress for their annual appropriations.

However, the Justice Department has found a way around this law to fund political allies on the left or executive priorities without congressional approval: settlement agreements. As Wall Street Journal columnist Kimberley Strassel recently reported, “[i]t works like this: The Justice Department prosecutes cases against supposed corporate bad actors. Those companies agree to settlements that include financial penalties. Then Justice mandates that at least some of that penalty money be paid in the form of “donations” to nonprofits that supposedly aid consumers and bolster neighborhoods.”

The trick here is that Justice never “receives” the funds within the meaning of the Miscellaneous Receipts Statute, and thus has no requirement to deposit the funds it exacts from defendants with the Treasury—the donations are made directly without money ever being received into Justice’s hands.

Despite the fact that Justice Department guidance discourages the practice because “it can create actual or perceived conflicts of interest and/or other ethical issues” – and, indeed, it was almost banned in 2008 due to perceptions of abuse – Justice continues to push this method of funding political allies and favored priorities of the executive. In fact, “[i]n 2011 Republicans eliminated the Housing Department’s $88 million for ‘housing counseling’ programs,” Strassel reports, “which spread around money to groups like La Raza. Congress subsequently restored only $45 million, and has maintained that level. . . . [B]ank settlements pour some $30 million into housing counseling groups, thereby essentially restoring all the funding.”

Not only are many of the charitable groups benefiting from this practice left-wing activist organizations like the National Council of La Raza or the National Urban League, but defendants are often given double credit for these donations, receiving $2 of credit toward the total penalty in the settlement for each $1 given to nonprofits. Hundreds of millions of dollars have been funneled to left-leaning groups by this method.

Congress’ power of the purse is effectively curtailed by this end-run around the Miscellaneous Receipts Statute. As my colleague Ilya Shapiro and I pointed out in a recent National Review piece, this comes on the heels of a several-year-long effort by Justice to reduce the level of culpability required to hold corporate managers responsible for the actions of the company. The easier it is to prosecute, the easier it is to force a big settlement.

But the problem is broader, for not only have Congress’s appropriations powers been diminished by aggressive agencies, but Congress itself has all but abdicated its power over the purse through modern budgetary practices.

Since 2001, Congress has funded the federal government not through the traditional 12 separate appropriations bills covering various sets of executive branch agencies, but through an unending series of Continuing Resolutions, or CRs—omnibus statutes that extend the previous year’s entire federal budget with broad percentage adjustments. As the Hudson Institute’s Christopher DeMuth recently noted:

The CR surrenders Congress’s power of the purse. When Congress is appropriating individual agencies, it can adjust program spending and policy elements on a case-by-case basis. It doesn’t always get its way in the face of a possible presidential veto, but at least Congress is in the game, with a multitude of tactics and potential compromises in play. In contrast, the threat of shutting down the entire government is disproportionate to discrete policy disagreements. The tactic would be plausible only in the rare case where congressional opinion amounted to veto-proof majorities in both chambers.

There are ways to fix this. Rep. Bob Goodlatte (R-Va.) introduced legislation that passed the House and is pending in the Senate to end the settlement slush fund Justice has created. More important still, Congress can return to its earlier practice of passing separate appropriations measures rather than Continuing Resolutions, thereby taking back its control of budget priorities.  Constitutional government requires nothing less than a restoration of the separation of powers.

 

As more North Carolina families are using school vouchers, enrolling their children in charter schools, or homeschooling, some traditional district schools are experiencing slower growth in enrollment than anticipated. The News & Observer reports:

Preliminary numbers for this school year show that charter, private and home schools added more students over the past two years than the Wake school system did. Though the school system has added 3,880 students over the past two years, the growth has been 1,000 students fewer than projected for each of those years.

This growth at alternatives to traditional public schools has accelerated in the past few years since the General Assembly lifted a cap on the number of charter schools and provided vouchers under the Opportunity Scholarship program for families to attend private schools.

Opponents of school choice policies often claim that they harm traditional district schools. Earlier this year, the News & Observer ran an op-ed comparing choice policies to a “Trojan horse” and quoting a union official claiming that “public schools will be less able to provide a quality education than they have in the past” because they’re “going to be losing funds” and “going to be losing a great many of the students who are upper middle-class… [who] receive the most home support.” 

Setting aside the benefits to the students who receive vouchers or scholarships (and the fact that North Carolina’s vouchers are limited to low-income students and students with special needs), proponents of school choice argue that the students who remain in their assigned district schools benefit from the increased competition. Monopolies don’t have to be responsive to a captive audience, but when parents have other alternatives, district schools must improve if they want to retain their students. But don’t take their word for it. Here’s what a North Carolina public school administrator had to say about the impact of increased competition:

New Wake County school board Chairman Tom Benton said the district needs to be innovative to remain competitive in recruiting and keeping families in North Carolina’s largest school system. At a time when people like choice, he said, Wake must provide options to families.

“In the past, public schools could assign students to wherever they wanted to because parents couldn’t make a choice to leave the public schools,” Benton said. “Now we’re trying to make every school a choice of high quality so that parents don’t want to leave.”

Wake County is not unique in this regard. As I’ve noted previously, there have been 23 empirical studies investigating the impact of school choice laws on the students who remain at district schools. Of those studies, 22 found that the performance of students at district schools improved after a school choice law was enacted; one found no statistically significant difference, and none found any harm.

Beating district schools over the head with more and more top-down regulations has done little to improve quality. A better approach is bottom-up: empower parents with alternatives and give district schools the freedom to figure out how to provide a quality education that will persuade parents to choose them.

[Hat tip to Dr. Terry Stoops of the John Locke Foundation for the story from Wake County.]

I was at the Supreme Court for oral argument in Evenwel v. Abbott, the case asking whether states have to draw legislative districts that equalize voters or total population. (For more background, see here and Cato’s brief, and the argument transcript.)

I don’t have much to add to the excellent analysis of our own Andrew Grossman, other than to highlight that it looks like the ruling will come down to the votes of Chief Justice John Roberts and Justice Anthony Kennedy. Justice Samuel Alito seems to be the only safe vote for the challengers, though one can infer from their pasts that Justices Clarence Thomas – who dissented 15 years ago from the Court’s decision not to take a previous case raising this issue but maintained his characteristic silence – and Antonin Scalia – who was (very) uncharacteristically silent – are also on that side. The four members of the so-called liberal bloc, meanwhile, were unflinching in their attack on the challengers’ position as threatening representational interests and also being impractical.

Justice Kennedy seemed to want to have it both ways, asking Texas Solicitor General Scott Keller (a friend of mine), “Why can’t you use both [population equality and voter equality]?” That approach may well appeal to the chief justice, who could, in the alternative, simply defer to the states (which is Texas’s position, while the United States insists that total population must be the measure used).

Indeed, it’s possible that we end up with a 3-2-4 split, in which case the Kennedy/Roberts position would set the controlling precedent and we would still see a change in how at least some states draw district lines without affecting the more significant nationwide standard that the challengers request.

Such a split-the-baby decision, while perhaps emblematic of the Roberts Court, would be constitutionally unsatisfying. As I write in my new USA Today op-ed:

The Supreme Court must thus intervene again, to maintain voter equality by specifying that “one person, one vote” demands an equalization of voters rather than population.

Otherwise, you end up with the scenario we see in Texas. Depending on where you live in the Lone Star State, you might be one of 383,000 people who choose a state senator, or one of 611,000. Indeed, the legislature could’ve drawn 31 districts of equal population where 30 have one voter each and the 31st all the other voters.

That can’t be right. If “one person, one vote” means anything, it’s that we can’t weigh some people’s votes more than others’.

Over at Cato’s Police Misconduct web site, we have identified the worst case for the month of November.  It involved several officers with the San Antonio Police Department (SAPD).

Here’s what reportedly happened.  SAPD officers were hunting for a suspect on drug and weapons charges.  In a case of mistaken identity, they swarmed Roger Carlos, who had done nothing wrong.  He was apparently just standing in the wrong place at the wrong time.  And even though Mr. Carlos complied with the officers’ commands to get on the ground and not resist arrest, they just kept hitting him, over and over again.

Mr. Carlos’s wife, Ronnie, still can’t believe what has happened to her husband.  The couple has three boys under the age of ten–but their father is now paralyzed from the chest down.  Doctors are also concerned that Mr. Carlos may have difficulty breathing down the road.  The medical bills for multiple surgeries are enormous.

After reviewing the case, a police discipline board recommended 15-day suspensions for three officers involved.  The Police Chief, William McManus, thought that recommendation was wrong.  He shortened each of the suspensions to five days.

The National Academy of Sciences recently published a comprehensive report on the pace of immigrant assimilation.  Short conclusion: it’s on par with previous waves of immigrants.  I want to highlight one section of the report that explains a reason assimilation is so rapid – a reason only occasionally mentioned by some and totally ignored by others: ethnic attrition.

Ethnic attrition occurs when the descendants of immigrants from a particular country, let’s say Mexico, cease to identify as Mexican, Hispanic, or Latino in surveys.  This occurs almost entirely through intermarriage with spouses of different ethnic groups.  It wouldn’t matter except that ethnic attrition is selective rather than random, and it is severe (see Table 1).  Subsequent generations descended from Spanish-speaking immigrants who identify as Hispanic, Mexican, or Latino systematically differ from those descended from the same Spanish-speaking immigrants who drop the self-identification.

The problem, therefore, is that you can’t use polls of self-identified Mexicans, Hispanics, or Latinos born here to form an accurate picture of multi-generational assimilation.  Any such poll will catch only those who self-identify as such, not those born here to Mexican, Hispanic, or Latino parents who do not.
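The selection bias at work here can be shown with a toy calculation. The numbers below are invented purely for illustration (only the 73 percent identification share loosely echoes the third-generation figure in Table 1); the point is the mechanism, not the magnitudes:

```python
# Hypothetical illustration of selection bias from ethnic attrition.
# All figures are made up for this sketch, not taken from Duncan and Trejo.

# Third-generation descendants of Mexican immigrants, split by whether
# they still self-identify as Mexican/Hispanic/Latino in surveys.
# Assumption: the attriters (mostly children of intermarriage) are
# more educated on average than the identifiers.
identifiers = {"share": 0.73, "avg_years_education": 12.0}
attriters   = {"share": 0.27, "avg_years_education": 13.4}

# A poll of self-identifiers observes only the first group.
poll_estimate = identifiers["avg_years_education"]

# The true average over ALL descendants weights both groups by their share.
true_average = (identifiers["share"] * identifiers["avg_years_education"]
                + attriters["share"] * attriters["avg_years_education"])

print(f"Poll of self-identifiers: {poll_estimate:.2f} years of education")
print(f"All descendants:          {true_average:.2f} years of education")
```

Because the more-educated descendants of intermarriage are precisely the ones who drop the label, the poll-based estimate understates the true average for all descendants, making assimilation look slower than it is.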

Table 1

Hispanic Self-Identification among Americans with Ancestors from a Spanish-Speaking Country

Most Recent Ancestor from a Spanish-Speaking Country     Percent
Respondent (1st generation)                                 98.7
Parent(s) (2nd generation)                                  83.3
Grandparent(s) (3rd generation)                             73.0
Great-grandparent(s) (4th generation)                       44.4
Further back (5th+ generation)                               5.6

Source: “Ethnic Identification, Intermarriage, and Unmeasured Progress by Mexican Americans,” by Brian Duncan and Stephen J. Trejo.

Note:  This information comes from a small sample of 369 respondents in the 1970 U.S. Census Content Reinterview Study.  It should be taken as suggestive rather than definitive.

Studies that rely on the subjective Mexican self-identification of the descendants of immigrants typically find low rates of economic and educational assimilation that stall between the second and third generation.  There is significant educational and economic progress after correcting for ethnic attrition and measuring all of the descendants of immigrants.  Below are the major papers in the ethnic attrition literature. 

Duncan and Trejo’s seminal 2007 paper showed that Hispanic self-identification fades by generation and that correcting for ethnic attrition reveals far more socioeconomic progress than other methods of measuring assimilation do.  Microdata covering everyone with at least one ancestor from a Spanish-speaking country show significant attrition (Table 1): virtually all (99 percent) of immigrants from Spanish-speaking countries self-identified as Hispanic, as did 83 percent of the second generation, 73 percent of the third, 44 percent of the fourth, and 6 percent of the fifth and higher generations.

Intermarriage plays a central role in explaining the rapid loss of Hispanic self-identification.  In the 1970 data, 97 percent of Americans with Hispanic ancestry on both sides of the family self-identified as Hispanic, while only 21.4 percent of those with Hispanic ancestry on one side did so.  Duncan and Trejo’s analysis of 62,734 marriages in the 2000 Census found a high rate of intermarriage (Table 2).

Table 2

U.S.-Born Mexican Americans and Their Spouses (percent)

Spouse                       Husbands     Wives
U.S.-born Mexicans             50.6        45.3
Foreign-born Mexicans          13.6        17.4
Other Hispanics                 4.2         4.1
Other races, ethnicities       31.6        33.2
All                           100.0       100.0
Non-Mexicans                   35.8        37.3

Source: “Ethnic Identification, Intermarriage, and Unmeasured Progress by Mexican Americans,” by Brian Duncan and Stephen J. Trejo.

Duncan and Trejo found positive educational (Table 3) and economic selectivity among Mexican Americans whose spouses were from other ethnic groups.  In other words, Mexican Americans who are more educated are also more likely to intermarry with other ethnic and racial groups.  Between 52 and 60 percent of the children from mixed marriages do not self-identify as Mexican, and for third-generation Mexicans who marry a non-Mexican, between 53 and 66 percent of their children do not self-identify as Mexican.

Mexican-American spouses in mixed marriages have at least a year more of education than Mexicans in non-mixed marriages, the children of those marriages gain even more education, and a majority of those children don’t self-identify as Mexican.  Adjusting for ethnic attrition thus significantly shifts how we view the educational and economic assimilation of the descendants of all Mexican immigrants.

Table 3

Average Education Outcomes by Type of Marriage, 2000 (years of education)

Husbands
Both spouses U.S.-born Mexicans    12.0
Husband foreign-born Mexican        9.6
Wife foreign-born Mexican          11.5
Husband non-Mexican                13.5
Wife non-Mexican                   13.1
All husbands                       12.3

Wives
Both spouses U.S.-born Mexicans    12.1
Husband foreign-born Mexican       11.4
Wife foreign-born Mexican          10.3
Husband non-Mexican                13.1
Wife non-Mexican                   13.3
All wives                          12.4

Source: “Ethnic Identification, Intermarriage, and Unmeasured Progress by Mexican Americans,” by Brian Duncan and Stephen J. Trejo.

Richard Alba and Tariqul Islam (2009), echoing Duncan and Trejo, argue that self-identification research fails to accurately measure assimilation because of intermarriage.  Alba and Islam find that Americans of mixed Mexican ancestry are less likely to identify themselves as “Mexican Americans” in the Census.  Another problem is that the Hispanic-origin question changed across the 1980, 1990, and 2000 censuses, encouraging Americans of Mexican descent to identify themselves pan-ethnically as “Hispanics or Latinos” rather than as “Mexican Americans.”

In a later paper, Alba and Islam (2011) find that those who self-identify as Mexican-American tend to do poorly in socioeconomic advancement, especially in education, compared to other immigrant groups.  However, the descendants of Mexicans who are of mixed ancestry are the most likely to not self-identify as Mexican and tend to be more educated than other Mexican-Americans. 

Duncan and Trejo built on their earlier work with 2009 and 2011 papers finding that Mexican Americans who do not self-identify as such differ systematically from those who do.  Mexicans who intermarry have higher levels of human capital, and their children often do not self-identify as Mexican in Census data.

For instance, second-generation Americans with only one parent born in Mexico are more educated than those with both parents born in Mexico.  They are also 10 percent less likely to be deficient in English, and their children are 9.5 percent less likely to self-identify as Mexican.  This conclusion was partly anticipated by a 2005 paper by Delia Furtado, which found that human capital and intermarriage increase immigrants’ adoption of native culture, boost assimilation rates for their children, and lead to higher socioeconomic attainment.

Duncan and Trejo’s 2009 and 2011 papers used a new measure to identify first- and second-generation Mexican Americans who do not self-identify as such.  The new method discards the self-identification data and instead identifies all respondents born in Mexico and all those with at least one parent born in Mexico.  It is better at identifying first- and second-generation Mexican Americans, but it cannot identify the third, fourth, or subsequent generations due to a lack of data from previous censuses.

In twin papers published in 2011 and 2012, Duncan and Trejo applied the new method to compare Mexican ethnic attrition to that of other Hispanic groups (Puerto Ricans, Cubans, Salvadorans, and Dominicans) and Asian immigrant groups (Chinese, Indians, Japanese, Koreans, and Filipinos).  They found that Mexicans and their American-born descendants show the lowest rates of ethnic attrition in the first and second generations.  However, there is a “catch-up effect” for the children of mixed-ancestry marriages in which one spouse is Mexican.

Duncan and Trejo in 2015 and Frank Bean et al. in 2011 argue that the large number of unauthorized immigrants from Mexico produced this lower rate of ethnic attrition.  Mexican unlawful immigrants have limited job opportunities, take lower-skill jobs, and have less education and fewer skills, meaning they are less likely to intermarry or pass opportunity on to their children.  This explains why it takes more time for Mexican Americans to achieve the same level of education and wages as other immigrant groups – an effect called “delayed incorporation.”  Legalizing them would speed assimilation.

Quick survey results from self-identified Americans of Mexican, Hispanic, or Latino origin are not a reliable gauge of assimilation because they miss many second, third, and later generation descendants of immigrants who don’t self-identify as such.  Properly adjusting for ethnic attrition reveals substantial assimilation of Mexican, Hispanic, and Latino immigrants and their descendants.
