Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Harriet Tubman’s forthcoming placement on the U.S. twenty-dollar bill is being hailed as a symbolic win for women. Tubman certainly deserves the honor, and Cato’s Doug Bandow called for putting Tubman on “the twenty” a year ago. In celebration of the soon-to-be-redesigned twenty-dollar bill, here are five graphs showcasing the incredible progress that women have made in work, education, health, and beyond.

1. The gender wage gap, which is largely the result of divergent career choices between men and women rather than overt sexism, is narrowing in the United States and in other developed countries. Part of this trend may be explained by more women entering highly paid fields previously dominated by men. For example, there are more women inventors and researchers in developed countries.

2. Around the world, girls in their teens have fewer children and are more likely to complete secondary education. As a smaller share of teenaged girls become mothers, many are better able to pursue education. The gender gaps in youth literacy, primary school completion, and secondary school completion are all shrinking, even in many poor areas. Today, there are actually more women than men pursuing tertiary education and earning college degrees.

3. In the United States, domestic violence against women has fallen considerably since the 1990s. And the very worst kind of domestic violence—homicide of an intimate partner—has also become rarer in the United States, both for male and female victims. Police also recorded fourteen thousand fewer cases of rape in the United States in 2013 than in 2003—in spite of a population increase. In fact, both rapes and sexual assaults against women have declined significantly in the United States since the 1990s. Evolving attitudes about the acceptability of violence against women may be partially to thank.

4. Dramatically fewer women die in childbirth, once a common cause of death for women. Around the world, more pregnant women receive prenatal care and more births are attended by skilled health staff. Like their mothers, newborn babies are also less likely to die, as are infants and children generally. Between 1990 and 2015, a girl’s likelihood of dying before her fifth birthday fell by about 54% globally. 

5. More and more women hold seats in the world’s parliaments or hold ministerial-level positions. There are also more women legislators, managers, and senior officials. There is some evidence that women are more likely to rise to high positions in the private sector in countries with freer markets.

Arizona state representative Sonny Borrelli (R) remarked that crime rates in his state have dropped 78 percent since the passage of the state’s infamous SB1070 in 2010. His remark was thoroughly debunked. Below are a few charts to put Arizona’s crime rates in context.

It is very difficult to show that a law caused a change in crime in later years. Crime rates have trended downward in the United States for over 20 years now, so it is difficult to credit any decline after 2010 to a specific Arizona immigration law. Nor can Arizona’s crime rate be considered in isolation: comparing it with neighboring states and with the country as a whole, none of which passed an SB1070-type bill, is necessary to get even a hint of how the law affected crime. Furthermore, there is a vast empirical literature on the effect of immigration on crime. At worst, immigration has almost no effect on crime; at best, it decreases crime rates.

All of the figures below are rates of crime per 100,000 residents. The violent crime rate in Arizona was declining before SB1070 and continued to decline afterward (Figure 1). From 2009 to 2014, Arizona’s violent crime rate declined by 6.3 percent, while it dropped 13 percent nationally. It declined by 16.3 percent in California, 9.9 percent in Nevada, and 5.5 percent in New Mexico.

Figure 1: Violent Crime Rate. Source: FBI.

Like the violent crime rate, the property crime rate in Arizona was declining before SB1070 and continued to decline afterward (Figure 2). From 2009 to 2014, Arizona’s property crime rate declined by 10.9 percent, while it dropped 14.6 percent nationally. It declined in every neighboring state as well: 10.6 percent in California, 14.3 percent in Nevada, and 4.6 percent in New Mexico.

Figure 2: Property Crime Rate. Source: FBI.
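
For anyone who wants to replicate these comparisons, the arithmetic is simple: convert each year’s offense count into a rate per 100,000 residents, then take the percent change in that rate. Here is a minimal sketch, using placeholder numbers rather than the actual FBI counts:

```python
def rate_per_100k(offenses, population):
    """Convert a raw offense count into a rate per 100,000 residents."""
    return offenses / population * 100_000

def percent_change(old, new):
    """Percent change from old to new; negative values are declines."""
    return (new - old) / old * 100

# Illustrative placeholder inputs -- substitute the FBI's offense counts
# and the matching population estimates to reproduce the figures above.
rate_2009 = rate_per_100k(27_000, 6_600_000)
rate_2014 = rate_per_100k(26_000, 6_730_000)
print(f"change in rate: {percent_change(rate_2009, rate_2014):.1f}%")
```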

Since much of SB1070 was struck down by the courts, it should be even more difficult to pinpoint its effect on crime.

In 1960, the Murr family purchased a 1.25-acre lot (Lot F) in a subdivision on the St. Croix River in Wisconsin. They built a recreation cabin on the lot. Three years later, the family decided to purchase an adjacent 1.25-acre lot (Lot E) as an investment. The family did not build on Lot E, and the parents later gave their children the property. When the children began to look into selling Lot E, the government said that they couldn’t. Why? Because regulations passed after both lots were purchased require a bigger “net project area” (the area that can be developed) than either lot had by itself. Because the lots were commonly owned, the government combined them into one unit and, consequently, prohibited the development or sale of what was once Lot E.

Combining the lots essentially eviscerated the independent value that Lot E once had. The Murrs filed suit against Wisconsin and St. Croix County, arguing that the governments’ action violated the Fifth Amendment Takings Clause by depriving the Murrs of the value of the property (Lot E) without just compensation. Regulatory takings cases like this one are analyzed under the Penn Central test, which applies its three factors to “the parcel as a whole,” thus making the definition of “the whole parcel” highly relevant and even determinative, as it was here. The governments’ defense in the Murr case is a tricky mathematical manipulation: By considering Lot E and Lot F together, the government argues that the taking is not unconstitutional because it affects only half of “the parcel.” But, the Murrs argue, if Lot E is analyzed individually, then the government took the whole thing.

Defining “the parcel as a whole” has been a long-disputed issue, so the Murrs, represented by the Pacific Legal Foundation, sought, and received, Supreme Court review after the Wisconsin Supreme Court declined to hear the appeal from the Wisconsin Court of Appeals, an unusual path for a case to reach the Court. This gives property rights advocates hope that the Supreme Court will bring some clarity to the muddied waters that are the Penn Central test’s three factors. The government should not be allowed to combine lots simply because they have a common owner, and it should especially not be allowed to do so in order to avoid paying the “just compensation” required by the Fifth Amendment. The Cato Institute has filed a brief in support of the Murrs urging the Court to clarify Penn Central. Although the Court has attempted in a few other cases to clarify its test, it remains unclear what the factors even mean, how they are to be measured, how they relate to one another, and how they are to be weighted. Despite, or perhaps because of, the muddled nature of the test, the government wins the vast majority of regulatory takings cases.

Adopting a bright-line rule here in the narrow context of determining what constitutes “the parcel as a whole” would bring some clarity to the Penn Central test and help protect property rights. Any rule permitting the combination of adjacent parcels would exacerbate Penn Central’s problems by leaving the lower courts to determine when combination is permissible and when it is not. Already the lower courts disagree on this issue, leading to greater uncertainty and less protection for property rights. This destabilizes property owners’ reliance interests and discourages property investment. State and local governments across the country have been using the vagueness of Penn Central to facilitate taking private property without just compensation. By clarifying the “parcel as a whole,” the Court can curtail one type of eminent domain abuse.

One of the most feared of all model-based projections of CO2-induced global warming is that temperatures will rise enough to cause a disastrous melting/destabilization of the Greenland Ice Sheet (GrIS), which would raise global sea level by several meters. But how likely is this scenario to occur? And is there any way to prove such melting is caused by human activities?

The answer to this two-part question involves some extremely complex and precise data collection and understanding of the processes involved with glacial growth and decay. Most assuredly, however, it also involves a scientifically accurate assessment of the past history of the GrIS, which is needed to provide a benchmark for evaluating its current and future state. To this end, a recent review paper by Vasskog et al. (2015) provides a fairly good summary of what is (and is not) presently known about the history of the GrIS over the previous glacial-interglacial cycle. And it yields some intriguing findings.

Probably the most relevant information is Vasskog et al.’s investigation of the GrIS during the last interglacial period (130-116 ka BP). During this period, global temperatures were 1.5-2.0°C warmer than the peak warmth of the present interglacial, or Holocene, in which we are now living. As a result of that warmth, significant portions of the GrIS melted away. Quantitatively, Vasskog et al. estimate that during this time (the prior interglacial) the GrIS was “probably between ~7 and 60% smaller than at present,” and that that melting contributed to a rise in global sea level of “between 0.5 and 4.2 m.” Thus, in comparing the present interglacial to the past interglacial, atmospheric CO2 concentrations are currently 30% higher, global temperatures are 1.5-2°C cooler, GrIS volume is from 7-67% larger, and global sea level is at least 0.5-4.2 m lower, none of which signal catastrophe for the present.

Clearly, therefore, there is nothing unusual, unnatural or unprecedented about the current interglacial, including the present state of the GrIS. Its estimated ice volume and contribution to mean global sea level reside well within their ranges of natural variability, and from the current looks of things, they are not likely to depart from those ranges any time soon.

References

Reyes, A.V., Carlson, A.E., Beard, B.L., Hatfield, R.G., Stoner, J.S., Winsor, K., Welke, B. and Ullman, D.J. 2014. South Greenland ice-sheet collapse during Marine Isotope Stage 11. Nature 510: 525–528.

Vasskog, K., Langebroek, P.M., Andrews, J.T., Nilsen, J.E.Ø. and Nesje, A. 2015. The Greenland Ice Sheet during the last glacial cycle: Current ice loss and contribution to sea-level rise from a palaeoclimatic perspective. Earth-Science Reviews 150: 45–67.

New Hampshire legislators are working to end a legal battle between a small town and state education bureaucrats over the town’s school choice program.

The town of Croydon (2010 population: 764) has fewer than 100 elementary-and-secondary-school-aged students. Unsurprisingly, the town found it was not cost effective to run its own K-12 school system. Instead, the town runs a very small K-4 district school and had a longstanding, exclusive agreement with a neighboring district to educate 5th through 12th graders. However, when their contract was nearing expiration, town leaders decided to allow students to take the funds assigned to them to a school of choice.

Sadly, the New Hampshire Department of Education wasn’t about to let a town empower parents to escape the district school system so easily. After a series of meetings and threats to withhold state funds, the department ordered Croydon to end their school choice program, which it claimed violated state law. However, former NH Supreme Court Justice Charles G. Douglas, III, the attorney for Croydon, countered that the department was misreading state law:

The letter from Douglas and [then-Croydon School Board Chairman Jody] Underwood argues against the state laws [NH Commissioner of Education Virginia] Barry used to support her order to stop school choice in Croydon:

“You cite RSA 193:1 and purport that it says that districts may only assign students to public schools. This is inaccurate. RSA 193:1 defines the duties of parents to ensure school attendance, and neither describes the duties districts have nor restricts the assignment ability of districts. In addition to your inaccurate interpretation, you cite to the portion of that statute that states: ‘A parent of any child at least 6 years of age … shall cause such a child to attend the public school to which the child is assigned.’ You fail to cite section (a) of the statute which clearly states that private school attendance is an exception to attending public school.”

The dispute is now being litigated.

Recently, some NH legislators sought to clarify any ambiguities in the law by explicitly authorizing local authorities to allow local education funding to follow children to private schools of choice. As the New Hampshire Union Leader editorialized, this is a step in the right direction. However, the legislation does contain one serious flaw: it limits parental choices to non-religious schools, thereby discriminating against schools based solely on their religious affiliation.

It’s understandable why the bill’s sponsors excluded religious schools. The state’s historically anti-Catholic “Blaine Amendment” states that “no money raised by taxation shall ever be granted or applied for the use of the schools or institutions of any religious sect or denomination.” Some have interpreted this to mean that no state dollars can flow to religious schools, but former Justice Douglas disagrees. In an analysis he coauthored with Richard Komer of the Institute for Justice for the Josiah Bartlett Center (my former employer), Douglas explained:

A school choice program that is purposely designed to be neutral with respect to religion, and which provides only incidental and indirect benefits to a religious sect or religion in general, benefits that are purely the result of the choices of individual citizens receiving state funds, does not violate the religion/state separation provisions of either the United States or New Hampshire Constitutions. [emphasis in the original]

If legislators make religious schools eligible, they will likely invite litigation from the same anti-school choice groups that sued the state of New Hampshire over its scholarship tax credit law in 2013, a challenge the state supreme court eventually rejected. However, excluding religious schools is also likely to invite litigation. Just this week, the Institute for Justice announced that it was filing suit against Douglas County, Colorado, for excluding religious schools from its voucher program after a plurality of the state supreme court interpreted Colorado’s Blaine Amendment to prohibit granting vouchers to students who wanted to attend religious schools. Attorneys with the Institute for Justice argue that such discrimination violates several provisions of the U.S. Constitution:

The exclusion of religious options from the program violates the Free Exercise, Establishment, Equal Protection, and Free Speech Clauses of the United States Constitution, as well as the Due Process Clause, which guarantees the fundamental right of parents to control and direct the education and upbringing of their children.

New Hampshire legislators are wise to expand parental choice in education. As the consensus of high-quality research shows, expanding educational choice benefits both participating students and those who remain in their assigned district schools. However, legislators should avoid writing discrimination into state law. The state constitution does not demand it and, even if it did, the U.S. Constitution requires the government to be neutral toward religious options in public programs, forbidding states from either favoring or discriminating against religious groups or institutions.

To learn more about school choice in New Hampshire, watch the Cato Institute’s short documentary on the Granite State’s scholarship tax credit law:

Live Free and Learn: Scholarship Tax Credits in New Hampshire

Buried in a recent Paul Krugman blog post is this statement:

… protectionism reduces world income.

This is correct, and is pretty much all you need to know about protectionism. Which approach to eliminating protectionism – unilateral, trade agreement, etc. – works best can be debated, but there should be no question that we should get rid of it, given its impact on incomes.
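
For readers who want to see why economists are so confident on this point, here is a minimal sketch of the textbook Ricardian arithmetic (the goods, countries, and labor costs are hypothetical, not estimates of any real economy): when countries specialize along comparative advantage and trade, world output rises, and protectionism forgoes those gains.

```python
# Two goods, two countries, labor-only production: textbook Ricardian
# arithmetic with invented numbers.
hours = {"Home": {"cloth": 1, "wine": 2},    # labor hours per unit of output
         "Abroad": {"cloth": 6, "wine": 3}}
labor = {"Home": 100, "Abroad": 120}         # total labor hours available

def world_output(cloth_share):
    """World output when each country devotes cloth_share[country]
    of its labor to cloth and the rest to wine."""
    cloth = sum(labor[c] * cloth_share[c] / hours[c]["cloth"] for c in labor)
    wine = sum(labor[c] * (1 - cloth_share[c]) / hours[c]["wine"] for c in labor)
    return {"cloth": cloth, "wine": wine}

# Each country self-sufficiently makes both goods...
print(world_output({"Home": 0.5, "Abroad": 0.5}))  # {'cloth': 60.0, 'wine': 45.0}
# ...versus specializing along comparative advantage and trading:
print(world_output({"Home": 0.8, "Abroad": 0.0}))  # {'cloth': 80.0, 'wine': 50.0}
```

The second allocation yields more of both goods, so world income is higher at any prices; trade barriers that push countries back toward self-sufficiency give up that surplus.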

Yet somehow for Krugman there is still a question. In the rest of the column, and in various other recent ones, he comes up with contrived justifications for why we shouldn’t worry much about protectionism. For example, he says:

if you want to make the case that trade liberalization has been the principal driver of growth, or anything along those lines, well, the models don’t say that.

Is trade liberalization the “principal driver of growth”?  I don’t know if it is the “principal driver of growth.” But I do know that protectionism “reduces world income.” Isn’t that enough?

And a while back, he said this:

In fact, the elite case for ever-freer trade, the one that the public hears, is largely a scam. That’s true even if you exclude the most egregious nonsense, like Mitt Romney’s claim that protectionism causes recessions.

Look, I don’t know if every tiny bit of protectionism anywhere will cause a recession (and Romney’s statements were much more nuanced than what Krugman implies). But regardless, if we agree that protectionism “reduces world income,” isn’t that enough?

As for jobs, he said:

… what the models of international trade used by real experts say is that, in general, agreements that lead to more trade neither create nor destroy jobs.

I think that’s pretty much right: Trade is not about the total number of jobs, but rather about incomes. As for incomes, recall that protectionism “reduces world income”!

Obviously, Krugman has policy priorities other than trade right now, and he’s trying to push people away from the trade issue. Which is a shame, because it could be helpful to have someone like him on board to counteract the uninformed rhetoric of many leading politicians, who seem to believe, based on emotions rather than evidence, that they can use protectionism to “make America great again” or something.

It occurs to me that, despite the unprecedented flood of writings of all sorts — books, blog-posts, newspaper op-eds, and academic journal articles —  addressing just about every monetary policy development during and since the 2008 financial crisis, relatively few attempts have been made to step back from the jumble of details for the sake of getting a better sense of the big picture.

What, exactly, is “monetary policy” about?  Why is there such a thing at all?  What should we want to accomplish by it — and what should we not try to accomplish?  By what means, exactly, are monetary authorities able to perform their duties, and to what extent must they exercise discretion in order to perform them?  Finally, what part might private-market institutions play in promoting monetary stability, and how might they be made to play it most effectively?

Although one might write a treatise on any one of these questions, I haven’t time to write a treatise, let alone a bunch of them; and if I did write one, I doubt that policymakers (or anyone else) would read it.  No sir: a bare-bones primer is what’s needed, and that’s what I hope to provide.

The specific topics I tentatively propose to cover are the following:

  1. Money.
  2. The Demand for Money.
  3. The Price Level.
  4. The Supply of Money.
  5. Monetary Control, Then and Now.
  6. Monetary Policy: Easy, Tight, and Just Right.
  7. Money and Interest Rates.
  8. The Abuse of Monetary Policy.
  9. Rules and Discretion.
  10. Private vs Official Money.

Because I eventually plan to combine the posts into a booklet, your comments and criticisms, which I’ll be sure to employ in revising these essays, will be even more appreciated than they usually are.

******

“The object of monetary policy is responsible management of an economy’s money supply.”

If you aren’t a monetary economist, you will think this a perfectly banal statement.  Yet it will raise the hackles of many an expert.  That’s because no-one can quite say just what a nation’s “money supply” consists of, let alone how large it is.  Experts do generally agree in treating “money” as a name for anything that serves as a generally-accepted means of payment.  The rub resides in deciding where to draw a line between what is and what isn’t “generally accepted.”  To make matters worse, financial innovation is constantly altering the degree to which various financial assets qualify as money, generally by allowing more and more types of assets to do so.  Hence the proliferation of different money supply measures or “monetary aggregates” (M1, M2, M3, MZM, etc.).  Hence the difficulty of saying just how much money a nation possesses at any time, let alone how its money stock is changing.  Hence the futility of trying to conduct monetary policy by simply tracking and regulating any particular money measure.

For all these reasons many economists and monetary policymakers have tended for some time now to think and speak of monetary policy as if it weren’t about “money” at all.  Instead they’ve gotten into the habit of treating monetary policy as a matter of regulating, not the supply of means of exchange, but interest rates.  We all know what interest rates are, after all; and we can all easily reach an agreement concerning whether this or that interest rate is rising, falling, or staying put.  Why base policy on a conundrum  when you can instead tie it to something concrete?

And yet…it seems to me that in insisting that monetary policy is about regulating, not money, but interest rates, economists and monetary authorities have managed to obscure its true nature, making it appear both more potent and more mysterious than it is in fact.  All the talk of central banks “setting” interest rates is, to put it bluntly, to modern central bankers what all the smoke, mirrors, and colored lights were to Hollywood’s Wizard of Oz: a great masquerade, serving to divert attention from the less hocus-pocus reality lurking behind the curtain.

But surely the Fed does influence interest rates.  Isn’t that, together with the fact that we can clearly observe what interest rates are doing, not reason enough to think of monetary policy as being “about” interest rates?  And doesn’t money’s mutable nature make it inherently mysterious — and therefore ill-suited to serve as an object of monetary policy, let alone as a concept capable of demystifying that policy?

No, and no again.  Although central banks certainly can influence interest rates, they typically do so, not directly (except in the case of the rates they themselves charge in making loans or apply to bank reserves), but indirectly.  The main thing that central banks directly control is the size and makeup of their own balance sheets, which they adjust by buying or selling assets.  When the FOMC elects to “ease” monetary policy, for example, it may speak of setting a lower interest rate “target.”  But what that means — or what it almost always meant until quite recently — was that the Fed planned to increase its holdings of U.S. government securities by buying more of them from private (“primary”) dealers.  To pay for the purchases, it would wire funds to the dealers’ bank accounts, thereby adding to the total quantity of bank reserves.[1]  The greater availability of bank reserves would in turn improve the terms upon which banks with end-of-the-day reserve shortages could borrow reserves from other banks.[2]  The “federal funds rate,” which is the average (“effective”) rate that financial institutions pay to borrow reserves from one another overnight, and the rate that the Fed has traditionally “targeted,” would therefore decline, other things being equal.
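
A toy bit of bookkeeping may help make these mechanics concrete.  This is only a sketch with invented figures and account names; the point is that an open-market purchase expands the Fed’s balance sheet and the banking system’s reserves at the same stroke:

```python
# Stylized balance sheets in billions of dollars; the figures and account
# names are invented for illustration, not actual Fed data.
fed = {"securities": 500, "reserves_owed_to_banks": 300, "currency": 200}
banks = {"reserves_at_fed": 300, "deposits_owed_to_dealers": 150}

def open_market_purchase(amount):
    """Fed buys `amount` of securities from primary dealers and pays by
    wiring funds to the dealers' bank accounts, which credits those
    banks' reserve balances at the Fed."""
    fed["securities"] += amount                  # Fed assets grow...
    fed["reserves_owed_to_banks"] += amount      # ...funded by new reserves
    banks["reserves_at_fed"] += amount           # banks hold more reserves
    banks["deposits_owed_to_dealers"] += amount  # and owe the dealers deposits

open_market_purchase(10)
print(fed["reserves_owed_to_banks"], banks["reserves_at_fed"])  # 310 310
# With more reserves in the system, banks short at the end of the day can
# borrow them on easier terms, nudging the federal funds rate down.
```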

Because central banks’ liabilities consist either of the reserve credits of banks and the central government, or of circulating currency, and because commercial banks’ holdings of currency and central-bank reserve credits make up the cash reserves upon which their own ability to service deposits of various kinds rests, when a central bank increases the size of its own balance sheet, it necessarily increases the total quantity of money, either indirectly, by increasing the amount of cash reserves available to other money-producing institutions, or directly, by placing more currency into circulation.

Just how much the money supply changes when a central bank’s balance sheet grows depends, first of all, on what measure of money one chooses to employ, and also on the extent to which banks and other money-creating financial institutions lend or invest rather than simply hold on to fresh reserves that come their way.  Before the recent crisis, for example, every dollar of “base” money (bank reserves plus currency) created by the Federal Reserve itself translated into just under 2 dollars of M1, and into about 8 dollars of M2.  (See Figure 1.)  Lately those same base-money “multipliers” are just 0.8 and 3.2, respectively.  Besides regulating the available supply of bank reserves, central banks can influence banks’ desired reserve ratios, and hence prevailing money multipliers, by setting minimum required reserve ratios, or by paying or charging interest on bank reserves to raise or lower banks’ willingness to hold them.[3]

Figure 1: U.S. M1 and M2 Multipliers
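
Since the multiplier is just the ratio of a money measure to the monetary base, the arithmetic is easy to sketch.  The levels below are round, invented numbers chosen only to reproduce the approximate multipliers cited above; they are not actual Fed series:

```python
def money_multiplier(aggregate, monetary_base):
    """Dollars of the chosen money measure per dollar of base money."""
    return aggregate / monetary_base

# Illustrative levels (billions of dollars), chosen to match the rough
# multipliers quoted in the text -- not actual Fed data.
pre_crisis = {"base": 850, "M1": 1_600, "M2": 7_000}
recent = {"base": 4_000, "M1": 3_200, "M2": 12_800}

for era, d in (("pre-crisis", pre_crisis), ("recent", recent)):
    m1 = money_multiplier(d["M1"], d["base"])
    m2 = money_multiplier(d["M2"], d["base"])
    print(f"{era}: M1 multiplier = {m1:.1f}, M2 multiplier = {m2:.1f}")
```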

If the money-supply effects of central bank actions aren’t always predictable, the interest rate effects are still less so.  Interest rates, excepting those directly administered by central banks themselves, are market rates, the levels of which depend on both the supply of and the demand for financial assets.  The federal funds rate, for example, depends on both the supply of “federal funds” (meaning banks’ reserve balances at the Fed) and the demand for overnight loans of the same. The Fed has considerable control over the supply of bank reserves; but while it can also influence banks’ willingness to hold reserves, that influence falls well short of anything like “control.”  It’s therefore able to hit its announced federal funds target only imperfectly, if at all. Finally, even though the Fed may, for example, lower the federal funds rate by adding to banks’ reserve balances, if the real demand for reserves hasn’t changed, it can do so only temporarily.  That’s so because the new reserves it creates will sponsor a corresponding increase in bank lending, which will in turn lead to an increase in both the quantity of bank deposits and the nominal demand for (borrowed as well as total) bank reserves.   As banks’ demand for reserves rises, the federal funds rate, which may initially have fallen, will return to its original level.  More often than not, when the Fed appears to succeed in steering market interest rates, it’s really just going along with underlying forces that are themselves tending to make rates change.

I’ll have more to say about monetary policy and interest rates later.  But for now I merely want to insist that, despite what some experts would have us think, monetary policy is, first and foremost, “about” money.  That is, it is about regulating an economy’s stock of monetary assets, especially by altering the quantity of monetary assets created by the monetary authorities themselves, but also by influencing the extent to which private financial institutions are able to employ central bank deposits and notes to create alternative exchange media, including various sorts of bank deposits.

Thinking of monetary policy in this (admittedly old-fashioned) way, rather than as a means for “setting” interest rates, has a great advantage I haven’t yet mentioned.  For it allows us to understand a central bank in relatively mundane (and therefore quite un-wizard-like) terms, as a sort of combination central planning agency and factory.  Central banks are, for better or worse, responsible for seeing to it that the economies in which they operate have enough money to operate efficiently, but no more.  Shortages of money waste resources by restricting the flow of payments, making it hard or impossible for people and firms to pay their bills, while both shortages and surpluses of money hamper the correct setting of individual prices, causing some goods and services to be overpriced, and others underpriced, relative to the rest.  Scarce resources, labor included, are squandered either way.

Though they are ultimately responsible for getting their economies’ overall money supply right,  central banks’ immediate concern is, as we’ve seen, that of controlling the supply of “base” money, that is, of paper currency and bank reserve credits — the stuff banks themselves employ as means of payment.  By limiting the supply of base money, central banks indirectly limit private firms’ ability to create money of other sorts, because private firms are only able to create close substitutes for base money by first getting their hands on some of the real McCoy.

But how much money is enough?  That is the million (or trillion) dollar question.  The platitudinous answer is that the quantity of money supplied should never fall short of, or exceed, the quantity demanded.  The fundamental challenge of monetary policy consists, first of all, of figuring out what the platitude means in practice and, second, of figuring out how to make the money stock adjust in a manner that’s at least roughly consistent with that practical answer.

Next: The Demand for Money.

__________________________________________

1. Although people tend to think of a bank’s reserves as consisting of the currency and coin it actually has on hand, in its cash machines, cashiers’ tills, and vaults, banks also keep reserves in the shape of deposit credits with their district Federal Reserve banks. When the Fed wires funds to a bank customer’s account, the customer’s account balance increases, but so does the bank’s own reserve balance at the Fed. The result is much as if the customer made a deposit of the same amount, using a check drawn on some other bank, except that the reserves that the bank receives, instead of being transferred to it from some other bank, are fresh ones that the Fed has just created.

2. Although amounts that banks owe to one another are kept track of throughout the business day, it is only afterwards that banks that are net debtors must come up with the reserves they need both to settle up and to meet their overnight reserve requirements.

3. The Fed first began paying interest on bank reserves in October 2008.  Although some foreign central banks are now charging interest on reserves, the Fed has yet to take that step; nor is it clear whether it has the statutory right to do so.

[Cross-posted from Alt-M.org]

One common criticism of immigrants is that they could undermine American institutions, weakening them so much that economic growth slows and the long-run costs of liberalization exceed the benefits. As I’ve written before, this criticism doesn’t hold up to empirical scrutiny, but a new line of attack is that immigrants’ lower trust levels could weaken productivity.  I decided to look at trust variables in the General Social Survey (GSS), a huge biennial survey of households in the United States, to see if immigrants and their children are less trusting than other Americans. The results were unexpected.

The first variable examined was “trust.” The question asked was: “Generally speaking, would you say that most people can be trusted or that you can’t be too careful in life?” I confined the results to the years 2004-2014 to capture more recent immigrants. The first generation is the immigrant generation, the second generation comprises the children of immigrants, the third generation the grandchildren of immigrants, and the fourth-plus generation their great-grandchildren and every earlier generation.

The first and second generations are less trusting than the third generation, confirming the findings of the literature. However, the fourth-plus generation is about as distrustful as the first and second generations. Immigrants and their children are not the trust anomaly; the third generation is. It is more trusting than every other generation of Americans (Figure 1).

Figure 1: Trust. Source: General Social Survey.
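
For the curious, the tabulation behind figures like these is straightforward. Here is a minimal sketch assuming a GSS extract saved as a CSV, with hypothetical column names and codings, and with a generation variable already derived from the survey’s questions about where respondents, their parents, and their grandparents were born:

```python
import pandas as pd

# Hypothetical extract: `trust` coded 1 = "can trust", 2 = "can't be too
# careful", 3 = "depends"; `generation` pre-derived (1, 2, 3, or "4+")
# from the GSS nativity questions. Column names are stand-ins, not the
# official GSS variable names.
gss = pd.read_csv("gss_extract.csv")           # hypothetical file
recent = gss[gss["year"].between(2004, 2014)]  # restrict to 2004-2014

share_trusting = (
    recent.assign(trusts=recent["trust"].eq(1))
          .groupby("generation")["trusts"]
          .mean()
          .mul(100)
)
print(share_trusting.round(1))  # percent answering "most people can be trusted"
```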

Related to “trust,” the “fair” variable asks: “Do you think most people would try to take advantage of you if they got a chance, or would they try to be fair?” It reveals the same pattern as “trust”: the first and second generations are much more likely than the third to say people try to take advantage of others (Figure 2). However, the fourth-plus generation is nearly indistinguishable from the immigrant generation and their children. Only the third generation sees other people as particularly fair.

Figure 2: Fair. Source: General Social Survey.

The third variable examined was “helpful,” which asks: “Would you say that most of the time people try to be helpful, or that they are mostly just looking out for themselves?” The same pattern emerged: the first and second generations are more similar to the fourth-plus generation, and the second generation is nearly identical to it. The third generation was the most likely to say that most people are helpful and the least likely to say that people look out for themselves. Again, the third generation is the trust anomaly, not the immigrants.

Figure 3: Helpful. Source: General Social Survey.

The differences between immigrants, their children, and fourth-plus generation Americans are small when it comes to levels of trust, opinions of fairness, and whether people are helpful. It’s hard to see how these small differences could supply the microfoundations of an institutions-based argument against liberalized immigration. The question is not why native-born Americans trust while immigrants don’t; it’s why third-generation Americans trust so much while all other Americans, immigrants included, do not.

Notes:

For “trust,” there were 1039 first generation respondents, 364 from the second generation, 327 for the third generation, and 4988 for the fourth-plus generation. For “fair,” there were 972 first generation respondents, 345 from the second generation, 304 for the third generation, and 4699 for the fourth-plus generation. For “helpful,” there were 986 first generation respondents, 347 from the second generation, 303 from the third generation, and 4700 from the fourth-plus generation. 

Today, our friends at Families Against Mandatory Minimums released a video documenting the case of Mandy Martinson. Mandy is one of the many non-violent drug offenders serving excessive sentences in federal prison. For her first, non-violent drug offense, Mandy was sentenced to 15 years.

To paraphrase FAMM’s Florida director and former Catoite Greg Newburn, mandatory minimums operate on the assumption that prosecutors are the best people to determine a defendant’s sentence…right up until the prosecutor becomes a judge.

All mandatory minimums should be repealed. Let the judges decide. 

There’s a lot to say about the substance of the misguided anti-encryption legislation sponsored by Sens. Dianne Feinstein and Richard Burr, which was recently released as a “discussion draft” after a nearly-identical version leaked earlier this month.  I hope to do just that in subsequent posts.  But it’s also worth spending a little time on the proposal’s lengthy preamble, which echoes the rhetorical tropes frequently deployed by advocates for mandating government access to secure communications and stored data.

The bill is somewhat misleadingly titled the “Compliance With Court Orders Act of 2016”—which you’d think would be a matter for the Judiciary Committee, not the Senate Select Committee on Intelligence—and begins with the high-minded declaration that “no person or entity is above the law.”  Communication services and software developers, we are told, must “respect the rule of law and comply with all legal requirements and court orders.”  In order to “uphold the rule of law,” then, those persons and entities must be able to provide law enforcement with the plaintext—the original, un-garbled contents—of any encrypted message or file when so ordered by a court.

The politest way I can think of to characterize this way of framing the issue is: Nonsense.  Whatever your view on mandates of the sort proposed here, they have little to do with the principle of “the rule of law”: The idea that all citizens, including those who wield political power, must be governed by neutral, publicly known, and uniformly applicable rules—as opposed to, say, the whims and dictates of particular officials.  This formal principle says nothing about the content of the legal obligations and restrictions to which citizens are subject—only that those restrictions and obligations, whatever they are, should be known and consistently applied.  In effect, Feinstein and Burr are pretending that a sweeping and burdensome new regulatory requirement is nothing more than the application of a widely-revered formal principle central to free societies.  We can debate the merits of their proposed regulation, but this talking point really ought to be laughed out of the room.

There are two wholly different kinds of scenarios in which technology companies have recently been charged with placing themselves “above the law” by declining to assist law enforcement.  Both charges are specious, but it’s worth distinguishing them and analyzing them separately.

First, you have the kind of situation at issue in the recent conflict between Apple and the FBI, which has received so much media coverage. In this instance, it is clear that Apple was indeed capable of doing what the FBI wanted it to do: Write a custom piece of software that would disable certain security features on the work iPhone used by a deceased terrorist, enabling the FBI to crack the phone’s passcode and unlock the data within.  Sen. Feinstein condemned the company for fighting that order in court, declaring: “Apple is not above the laws of the United States, nor should anyone or any company be above the laws. To have a court warrant granted, and Apple say they are still not going to cooperate is really wrong.”  A similar view of the conflict was implicit in a slew of lazy news headlines that characterized Apple as “defying” a court’s order.

All of this, however, reflects a profound and rather disturbing misunderstanding of how our legal system operates.  Subpoenas and court orders routinely issue initially in response to a request from the government, with no opposing arguments heard.  But the recipients of those orders, as a matter of course, have an essential legal right to contest those orders in an adversarial hearing.  Here, Apple raised a variety of different objections—among them, that the statute invoked by the government, the All-Writs Act, did not actually authorize orders of the sort that the FBI had sought; and that even if the statute could be generally interpreted to permit such orders, this one imposed an excessive and unreasonable burden on Apple.

Now, you can agree or disagree with the various legal arguments advanced by Apple, as well as the many legal and technical experts who lined up to back the company.  But this is not Tim Cook standing atop a barricade howling “Anarchy!”—and it is a borderline-Orwellian abuse of language to say that a company puts itself “above the law” by using the legal process to contest the government’s interpretation of the law.  If the case had gone to the Supreme Court, Apple had lost, and still insisted it wouldn’t comply, then yes, they’d be placing themselves “above the law.”  Until then, they’re just working appropriately within the legal system, and it’s frankly chilling to hear elected officials implying there’s anything improper about that.  “The rule of law” does not require that everyone engaged in litigation with the government should surrender and defer to the interpretation of the government’s lawyers—especially given that one judge has already held that Apple has the better argument.

The second scenario the bill aims to address is the one where a company simply can’t do anything to help, because they don’t have access to the cryptographic keys needed to decipher a given message or file.  Now, you can argue the merits of passing a new mandate requiring companies to have this capability. What you cannot reasonably argue is that “the rule of law” is undermined when, in the absence of a mandate, companies cannot comply with orders to decrypt files with user-generated keys.   The rule of law does not mean that every imaginable outcome a judge directs—from decrypting data to flying like a bird—must be achievable by anyone served with a court order.

Consider: It is possible to build cars (and, presumably, laptops or firearms or any number of other consumer goods) with a GPS location beacon and a remote shutoff switch that will disable them until police can arrive in the event of theft.  Some cars are indeed built with such features, which would no doubt be of great help to police in a variety of cases. But most cars are not built with these features, and nobody thinks General Motors—served with an order to locate a stolen car and shut down the engine—would be “defying the rule of law” if they had to reply:  “We have no way to do that; we didn’t build that car with those capabilities.”  Moreover, if a legislator proposed a massive and costly new regulation requiring that all new cars be built with such features, we would rightly gawp incredulously at the suggestion that this was merely an effort to “ensure compliance with court orders,” as though the failure to build a feature useful to police were tantamount to obstruction of a lawful search warrant.  We would, indeed, probably regard such an argument as a rather brazen attempt to downplay the costly new regulatory burden such a mandate would impose on auto makers. 

All sorts of technologies—from document shredders to toilets—may help criminals keep incriminating material out of the hands of police.  As a result, some searches conducted pursuant to lawful warrants will not succeed in turning up the evidence sought.  We can regard that as unfortunate, and we can debate what measures may be appropriate to help police meet with greater success.  But anyone who tried to ratchet up the rhetoric by claiming that toilets therefore undermine the Rule of Law would be laughed out of the room—which is the appropriate reaction here, as well.

So much for rhetoric.  In a subsequent post, I’ll get into why the substantive idea of a “decryptability” mandate is so insanely misguided.

On Sunday night, Brazil’s Chamber of Deputies voted overwhelmingly (367-137) to open impeachment proceedings against President Dilma Rousseff. The Senate will now vote on whether to take the case and try her, which is all but guaranteed. As a matter of fact, barring some unforeseen event, Dilma’s days as president are numbered.

These are Brazil’s most turbulent months since the return to democracy in 1985. Not only is the president about to be removed from office, but the country is also mired in its worst economic recession since the 1930s. It is no coincidence that Dilma’s popularity (10%) stands at a level similar to Brazil’s fiscal deficit (10.75%), the unemployment rate (9.5%), and the inflation rate (9.4%). The economic and political crises are feeding off of one another.

Here are some facts and myths regarding this impeachment process:

“It’s a coup!”

For some in the Latin American left, anything that cuts short a president’s tenure in office —even if it’s an impeachment process stipulated in the Constitution— is a coup. The same narrative was applied when left-wing President Fernando Lugo was impeached by Paraguay’s Congress in 2012.

The impeachment process and the crimes for which a president can be impeached in Brazil are clearly outlined in articles 85 and 86 of the Constitution. Moreover, the entire process has been overseen by the Supreme Court, which has thus far found no fault in how things have been conducted. It’s important to add that 8 of the 11 justices in the Supreme Court were appointed by Dilma and her Workers’ Party predecessor, Lula da Silva.

Tellingly, when the Guatemalan Congress voted last year to strip right-wing President Otto Pérez Molina of his immunity, so he could be prosecuted for corruption charges, no one claimed it was a coup.

“Dilma hasn’t been accused of any wrongdoing”

It is true that Dilma hasn’t been accused of personally being involved in the Petrobras bribery scheme that inflicted losses of $17 billion on the state-owned oil company. Even though she was the chairwoman of the oil giant when most of the corrupt deals took place, her defense is that she was unaware that this was going on; at any rate, not a very good show of competence.

However, President Rousseff is not being impeached over the Petrobras corruption scandal, but over her government’s illegal handling of budgetary accounts. In this regard, it was an independent court —the Federal Accounts Court— that ruled that the Rousseff administration had broken the law. According to article 85 of Brazil’s Constitution, this is a crime for which a president can be impeached.

“Most of the members of Congress are implicated in corruption scandals”

This is actually true. According to an NGO called Transparência Brasil, 60% of members of Congress have been convicted or are under investigation for various crimes, including corruption and electoral fraud. The speaker of the Chamber of Deputies, Eduardo Cunha, has been charged with taking millions of dollars in bribes under the Petrobras scheme.

It is true that Brazilians aren’t simply dealing with a corrupt ruling party, but a crooked political class. Impeachment won’t fix this, but it will certainly set a powerful precedent. However, if the ultimate aim of Brazilians is to clean up the political system, they must be more rigorous in how they elect their political leaders in the future.

Other reforms are badly needed, such as overhauling the rules that grant immunity to members of Congress when they face criminal charges. Brazilians who have taken to the streets demanding the ouster of Dilma should now set their sights on political reform and those who oppose it.

“There is a political vendetta against the Workers’ Party from the Judiciary and the right-wing media”

It is true that Brazil’s judicial institutions, including the federal police, the attorney general’s office, and leading judges have been very active uncovering, prosecuting and convicting politicians involved in corruption scandals. But those implicated thus far have belonged to different political parties, including those of the opposition. As mentioned above, the speaker of the Chamber of Deputies leading the impeachment process against Dilma has been charged with corruption.

The media has also played a critical role in exposing the Petrobras scheme. This is good. Unlike other South American countries where the press has been stifled by their governments, Brazil has a vibrant and free press that holds politicians accountable, and not only those who belong to the incumbent party. As a matter of fact, big news outlets considered “anti-Workers’ Party” have exposed the shenanigans of Speaker Eduardo Cunha and pointed out that numerous Congressmen impeaching Dilma are also facing their own corruption charges. This doesn’t look like a cover-up.

The impeachment process is without a doubt a distressing affair for Brazil’s young democracy. But the country will emerge stronger if the right lessons are learned. 

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

With Earth Day and the grand signing ceremony for the Paris Climate Agreement just around the corner, we thought it apt to highlight some relevant stories from around the web, particularly those critical of the central climate control enterprise.

Recall that we have pointed out that the Paris Climate Agreement represents little more than a business-as-usual approach that has been spun to suggest it represents a collective, international effort in response to a climate change “concern.” Increasing opportunities for riding your bike (etc.) have now been rebranded as efforts to save the world. Right.

We’ve shown that the U.S. pledge under the Paris “Don’t Call It a Treaty” Agreement, while a bit more aggressive than many, turns out to be basically impossible. Putting our name on such a pledge seems a bit disingenuous, to put it mildly.

On top of all this comes a new economic analysis from the Heritage Foundation showing that the U.S. intention under the Agreement would be mucho bad news; the report’s title sums up its key points: “Consequences of Paris Protocol: Devastating Economic Costs, Essentially Zero Environmental Benefits.”

The justifications for these findings are described in detail in the full report.

Clearly, considering all the negatives stacked up against the U.S.’s commitment under the Paris Agreement, it’s hard to find any justification for signing it that is built upon anything but false premises.

Next up is a notable article (h/t Judy Curry) called “Twilight of the Climate Change Movement” authored by Mario Loyola, Senior Fellow at the Wisconsin Institute for Law and Liberty, that appears in The American Interest. Loyola points out that despite the “fanfare” surrounding the Paris Agreement, “the climate change movement faces big trouble ahead.”  Loyola explains:

Its principal propositions contain two major fallacies that can only become more glaring with time. First, in stark contrast to popular belief and to the public statements of government officials and many scientists, the science on which the dire predictions of manmade climate change is based is nowhere near the level of understanding or certainty that popular discourse commonly ascribes to it. Second, and relatedly, the movement’s embrace of an absolute form of the precautionary principle distorts rational cost-benefit analysis, or throws it out the window altogether.

Lots of good information in this article, including a review of the uncertainties in the science of climate change and how those uncertainties are downplayed, or swept away, in the pursuit of an anti-industrialist agenda—be sure to check out the whole thing.

Extending a look at the dangers of an anti-industrialist agenda, Cato Adjunct Scholar Alex Epstein gave a dazzling performance in presenting testimony before the April 13th hearing of the Senate Environment and Public Works Committee “Examining the Role of Environmental Policies on Access to Energy and Economic Opportunity.” Alex laid out why restricting energy choice—which is the main premise of centralized efforts to mitigate climate change—is a really bad idea:

The energy industry is the industry that powers every other industry. To the extent energy is affordable, plentiful, and reliable, human beings thrive. To the extent energy is unaffordable, scarce, or unreliable, human beings suffer. 

His written testimony is available here.

But we’d be remiss if we left you only with that.

The real fireworks were in his oral testimony and included a tussle with Sen. Barbara Boxer (D-CA), a call for an apology or resignation from Sen. Sheldon Whitehouse (D-RI), and telling the committee that most of them would probably not be alive today without cheap, plentiful, reliable energy. The highlights are available here. It is most enjoyable!

And finally, last, but certainly not least, is the Manhattan Institute’s Oren Cass’s excellent piece in the current issue of National Affairs, titled “The New Central Planners.” The piece critically examines the administrative state’s seemingly unquenchable desire to fix market “failures”—a conceit that is perhaps nowhere more on display than in the climate change issue. Here’s a taste:

By asserting that their frameworks, tools, and data prove government action will enhance market efficiency, economists are engaging in a new form of central planning. It differs in degree from traditional command-and-control socialism, but not in kind. It is less absolute — the market economy provides a baseline until an intervention occurs. It is less totalitarian — plans are executed through rules and incentives that alter the behavior of market actors instead of through the direct assignment of resources. But it is rooted in the same conceit that technical expertise can outperform market forces and deserves deference where applied. It suffers from the same challenges of incomplete information, heterogeneous preferences, and subjective values. It relies on the same refusal to acknowledge the inherent tradeoffs that underlie the allocation of scarce resources. And, as a result, it also reduces democratic accountability, economic efficiency, and total welfare.

The alternative to technocratic planning is not post-modern, nihilistic resignation to the impossibility of evaluating policy. It is an administrative state designed around a recognition that market signals and political preferences provide a better guide than can bureaucratic analysis, that those signals and preferences vary locally, and that optimization requires constant recalibration. Many current efforts at regulatory reform focus on increasing the influence of cost-benefit analysis, but in fact we need to reduce it. Management within the executive, delegation from the legislature, and oversight by the judiciary should all assume that technocratic expertise lies only in designing the specific rules to implement when there is a political demand for intervention, not in determining when such interventions are appropriate.

Oren’s article is so chock-full of good stuff that it’s really hard to decide what to excerpt, so be sure to take the time to read his entire essay. We’re sure it’ll be time well spent. The same is true for all of the above articles. You really ought to have a look!

One of the most important elements of contemporary financial regulation is bank capital adequacy regulation — the regulation of banks’ minimum capital requirements.  Capital adequacy regulation has been around since at least the 19th century, but whereas its previous incarnations were relatively simple, and usually not very burdensome, modern capital adequacy regulation is both vastly more complicated and vastly more heavy-handed.

After I first began research on this subject years ago, I watched the Basel Committee take part in a remarkable instance of mission creep: starting from its original remit to coordinate national banking policies, it expanded into an enormous and still growing international regulatory empire.  Yet I also noticed that no-one in the field seemed to ask why we needed any of this Basel regulation in the first place.  What, exactly, were the market failure arguments justifying Basel’s interventions generally, and its capital adequacy regulation in particular?

On those occasions when regulatory authorities make any attempt to justify capital regulations, they typically settle for mere assertion.  The following little gem from a recent Bank of England Discussion Paper on the implementation of Basel in the UK is typical:

Capital regulation is necessary because of various market failures which can lead firms on their own to choose amounts of capital which are too low from society’s point of view.[1]

The authors don’t bother to say what the market failures consist of, let alone prove that they actually are present.  Nor do they even hint at the possibility that banks may choose unduly low levels of capital, not because of market failure, but because they are encouraged to do so by government deposit insurance or central banks’ offers of last-resort support.  Instead, there is a mere appeal to that ethereal entity, “society,” the incontrovertible opinion of which is that financial institutions ought to hold more capital than they would be inclined to hold if left to their own devices.  Policymakers are, furthermore, privy to this opinion, though why they should be so is also left unexplained.

On those rare occasions when genuine market-failure arguments for capital adequacy regulation are put forward, they are less than compelling.  An example was recently provided by my friend, the former Bank of England economist David Miles.  In an appendix to his thoughtful valedictory speech as a member of the Monetary Policy Committee last summer, David sketches out a simple model to “illustrate the tendency for unregulated outcomes to create too much risky bank lending.”

In his model, banks operate under limited liability, which allows them to pass on high losses to their creditors.  They also take risky lending decisions, but depositors do not see the riskiness of the loans that their bank makes.  Miles then obtains an equilibrium in which banks with lower capital are more prone to excessively risky lending, and he suggests that the solution to this problem (of excessively risky bank lending) is to increase capital requirements.
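To see the risk-shifting logic at work, here is a minimal numeric sketch. It illustrates the general limited-liability mechanism rather than Miles’s actual model, and every payoff and probability in it is an assumption chosen for illustration:

```python
# Risk-shifting under limited liability: a bank funds one unit of assets
# with capital k and deposits (1 - k), owing depositors (1 - k) at the end
# of the period. Shareholders keep max(asset payoff - deposits, 0); losses
# beyond their stake fall on depositors. All numbers are illustrative.

def equity_value(k, payoff_dist):
    """Expected shareholder payoff under limited liability."""
    deposits = 1.0 - k
    return sum(p * max(payoff - deposits, 0.0) for payoff, p in payoff_dist)

safe = [(1.05, 1.0)]                # certain 5% gross return
risky = [(1.30, 0.5), (0.75, 0.5)]  # expected return only 2.5%, but volatile

for k in (0.02, 0.10, 0.30):
    s, r = equity_value(k, safe), equity_value(k, risky)
    print(f"capital {k:.0%}: safe {s:.3f} vs risky {r:.3f} -> "
          f"prefers {'risky' if r > s else 'safe'}")
```

In this toy example the risky loan actually has the lower expected return, yet the thinly capitalized bank prefers it, because losses beyond its small equity cushion are passed on to depositors; only at a higher capital ratio do shareholders internalize enough of the downside to choose the safe loan.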

Let’s grant the point that banks with low capital levels would be prone to excessively risky lending. Let’s also agree that the solution is higher capital.  Miles would have this solution implemented by regulators increasing minimum capital requirements.

However, the same solution could also be implemented by depositors themselves.  They could choose not to make deposits in banks with low capital levels.  In a repeated-game version of the model, they could also run on their bank if their bank’s capital levels fell below a certain threshold.  Weakly capitalized banks would then disappear and so, too, would the excessively risky bank lending.

The mistake here — and it is a common one among advocates of government intervention — is to come up with a solution to some problem, but then assume that only the government or one of its agencies can implement that solution.  To make a convincing case for state intervention, they have to explain why only the government or its agencies can implement that solution: they have to demonstrate a market failure.[2]

A more substantial argument for capital adequacy regulation, also by David Miles, was published in the European Economic Review in 1995.  (See also here.)  The essence of this argument is that if depositors can assess a bank’s capital strength, a bank will maintain a relatively strong capital position because greater capital induces depositors to accept lower interest rates on their deposits.  However, if depositors cannot assess a bank’s capital strength, then a bank can no longer induce depositors to accept lower interest rates in return for higher capital, and the bank’s privately optimal capital ratio is lower than the socially optimal capital ratio.[3]  Information asymmetry therefore leads to a bank capital adequacy problem.  Miles’s solution is for a regulator to assess the level of capital the bank would have maintained in the absence of the information asymmetry, and then require it to maintain this level of capital.

There is, however, a problem at the heart of this analysis.  Consider first that the technology to assess and convey the quality of bank assets either exists or it does not.  If the technology does exist, then the private sector can use it, and there is no particular reason to prefer that the government use it instead.  There is then no market failure.  On the other hand, if the technology does not exist, then no-one can use it, not even the government.  Either way, there is no market failure that the government can feasibly correct.  To assume that the technology exists, but that only the government can use it, is not to demonstrate the presence of a market failure, but to assume it.

Of course, we all know that the technology in question does exist, albeit in imperfect form.  The traditional solution to this asymmetric information problem is for the shareholders (or, more accurately, bank managers acting on behalf of shareholders) to provide externally audited reports.  These reports are made credible by the managers and the auditors being liable to civil penalties in the event that either party signs off on statements that are materially misleading.  If they issue misleading statements, aggrieved creditors could then pursue them through the courts.

A potential objection is that this solution requires a high level of financial competence among depositors that cannot be expected of them.  In fact, it does not.  Instead, all it requires is that there are analysts who can interpret audited reports, and that they, in turn, convey their opinions to the public in a form that the public can understand.  The average depositor does not have to have a qualification in chartered accountancy; instead, they only need to be able to read the occasional newspaper or internet piece about the financial health of their bank and then make up their minds about whether their bank looks safe or not.  If their bank looks safe, they should keep their money there; if their bank does not, they’d better run.

One should also compare the claims underlying any model and the predictions generated by it against the available empirical evidence.  In this case, the claim that depositors cannot assess individual banks’ balance sheets is empirically falsified, at least under historical circumstances where the absence of deposit insurance or other forms of bailout gave depositors an incentive to be careful where they put their deposits.  To quote George Kaufman on this subject:

There is … evidence that depositors and noteholders in the United States cared about the financial condition of their banks and carefully scrutinized bank balance sheets [in the period before federal deposit insurance was introduced].  Arthur Rolnick and his colleagues at the Federal Reserve Bank of Minneapolis have shown that this clearly happened before the Civil War.  Thomas Huertas and his colleagues at Citicorp have demonstrated the importance of [individual] bank capital to depositors by noting that Citibank in its earlier days prospered in periods of general financial distress by maintaining higher than average capital ratios and providing depositors with a relatively safe haven.[4]

The Miles position is also refuted by the empirical evidence on the bank-run contagion issue.  If Miles is right and depositors cannot distinguish between strong and weak banks, then a run on one bank should lead to runs on the others as well.  Yet the evidence overwhelmingly indicates that bank runs do not spread in the way that the Miles hypothesis predicts.  Instead, there occurs a “flight to quality,” with depositors withdrawing funds from weak institutions for redeposit in stronger ones.  The “flight to quality” phenomenon demonstrates the very point that Miles denies, i.e., that depositors have been able to tell the difference between strong and weak banks.

So, once again, there is no market failure.

I should add, in concluding, that I’ve addressed these two arguments by David Miles because they are the best market-failure arguments for capital adequacy regulation that I’m aware of.  I invite readers to point me to stronger arguments if they can find any.

_____________________

[1] Bank of England, “The Financial Policy Committee’s review of the leverage ratio,” October 2014, p. 12.

[2] There are also two other “first-best” solutions in Miles’s model that do not involve government or central bank capital regulation.  The first is where a bank has a 100 percent capital ratio, in which case the risk of loss to depositors would be zero, but only because there would no longer be any depositors.  The “bank” would no longer be a bank either, because it would no longer issue any money: instead, it would become an investment fund.  The second is to eliminate limited liability, preventing bank shareholders from walking away from their losses.  In this context, one should recall that limited liability is not a creature of the market, but a product of legislative interventions in the 19th century.

[3] The fact that there is a suboptimal equilibrium in which banks maintain lower-than-optimal capital levels is also a little odd, and would appear to reflect the informational assumptions that Miles made in his model.  If depositors are not sure of the quality of their bank’s assets, we might have expected them to insist that their banks maintain higher rather than lower capital levels, or else keep their money under the mattress instead.  I thank George Selgin for this point.

[4] Kaufman, G. G. (1987) “The Truth about Bank Runs.” Federal Reserve Bank of Chicago. Staff Memorandum 87–3, pp. 15-16.

[Cross-posted from Alt-M.org]

A stubborn myth of the pro-tax left (exemplified by Bernie Sanders) is that the Reagan tax cuts merely benefitted the rich (aka Top 1%), so it would be both harmless and fair to roll back the top tax rates to 70% or 91%.

Nothing could be further from the truth. Between the cyclical peaks of 1979 and 2007, average individual income tax rates fell most dramatically for the bottom 80% of taxpayers, with the bottom 40% receiving more in refundable tax credits than they paid in taxes. By 2008 (with the 2003 tax cuts in place), the OECD found the U.S. had the most progressive tax system among OECD countries, while taxes in Sweden and France were among the least progressive.

What is commonly forgotten is that before two across-the-board tax rate reductions of 30% in 1964 and 23% in 1983, families with very modest incomes faced astonishingly high marginal tax rates on every increase in income from extra work or saving (there were no tax-favored saving plans for retirement or college).

From 1954 to 1963 there were 24 tax brackets, and 19 of those brackets were higher than 35%. The lowest rate was 20%, double what it is now. The highest was 91%.

High and steeply progressive marginal tax rates were terrible for the economy but terrific for tax avoidance. Revenues from the individual income tax averaged only 7.5% of GDP from 1954 to 1963, when the highest tax rate was 91%; that compares poorly with revenues of 7.9% of GDP from 1988 to 1990, when the highest tax rate was 28%.

The graph, from the Brookings-Urban Tax Policy Center, shows how inflation pushed more and more families into higher and higher tax brackets from 1973 to 1981, when inflation averaged 7.6% as measured by the PCE deflator. Thanks to that runaway inflation and the bracket creep it produced, marginal tax rates were rising over those years for everyone at or above the middle of the income distribution (the 50th percentile, shown in light blue).
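To see the mechanics of bracket creep, consider a simple sketch. The bracket thresholds and starting income below are hypothetical; the 7.6% inflation rate is the figure cited above:

```python
# Bracket creep: with fixed nominal bracket thresholds, raises that merely
# keep pace with inflation still push income into higher brackets.
# Hypothetical brackets as (threshold, marginal rate); not the actual 1970s schedule.
brackets = [(0, 0.20), (20_000, 0.28), (30_000, 0.37), (40_000, 0.49)]

def marginal_rate(income):
    return max(rate for threshold, rate in brackets if income >= threshold)

income, inflation = 25_000, 0.076
for year in range(1973, 1982):
    print(f"{year}: nominal income ${income:,.0f}, marginal rate {marginal_rate(income):.0%}")
    income *= 1 + inflation  # a cost-of-living raise only; real income is flat
```

A worker whose raises merely keep pace with inflation sees his real income stand still while his marginal tax rate climbs from 28% to 49%.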

Unfortunately, the 1981 tax law waited until 1983 to phase in a diluted 23% rate reduction (the 1964 Kennedy rate cuts had been 30%). Yet in 1983, as Bloomberg’s Megan McArdle points out, “the individual income tax was still taking in 8.2% of GDP, which was above the average of the 1970s.”

In 1983-84, only 18% of tax returns paid no income tax. Today, mainly because of tax laws enacted by Republicans in 1986 and 2001-2003, 45.3% of tax returns pay no tax, with many filers receiving refundable tax credits that exceed their Social Security taxes.

Despite dramatically lower inflation after 1981, bracket creep continued to impose small but sneaky tax hikes on middle-income taxpayers until 1985, when tax bracket income thresholds were finally indexed. If tax rates are flat, or nearly so, the whole game of pushing people into higher tax brackets is over. That happened to some extent from 1988 to 1990, when the 1986 Tax Reform briefly cut the top tax rate to 28%. The worst that a raise, promotion, or second earner could then do was shove you into a 28% tax bracket, which we now consider “middle class.” In those years the nearly-flat individual income tax brought in 7.9% of GDP (despite overtaxing capital gains). Ironically, revenues under that 28% top rate were greater than the 7.6% average of 1991-95, despite or because of the Bush and Clinton “tax increases.”

Since 1988, despite the vigorous efforts of the pro-tax establishment (e.g., Treasury, CBO, and their graduates at the Tax Policy Center), marginal tax rates at all income levels have remained much lower than they were from 1932 to 1981. Thank Presidents Kennedy and Reagan for that, but also Senator Bill Bradley (D-NJ) and especially Congressman Jack Kemp (R-NY). Thanks are also due to Congressmen Bill Steiger (R-WI) and Bill Thomas (R-CA) for lower tax rates on capital gains and dividends. Despite all this widespread relief from onerous taxation, Bernie Sanders seems oddly nostalgic about President Eisenhower’s 19 tax brackets above 35%, while Secretary Clinton dreams of resuscitating a disastrous capital gains tax scheme FDR was forced to abandon in 1938.

Neither the Kennedy tax rate reductions of 1964-65 nor the Reagan tax rate reductions of 1983-88 were enacted “to benefit the rich.”  That is just a worn-out myth. 

In the run-up to President Obama’s visit to Saudi Arabia later this week, two domestic issues that concern the U.S.-Saudi relationship are also gaining attention. Yet these developments – a congressional bill that would allow Americans to sue foreign governments for supporting terrorist groups, and growing calls to declassify the remaining 28 pages of the 9/11 Commission’s report – are unlikely to substantially impact the U.S.-Saudi relationship, which is already on a downward trend due to other, more substantive factors.

Certainly, the bill would have major legal implications for relatives of victims of the 9/11 attacks, who have previously tried to sue the Saudi government for its possible involvement. However, their hope that the declassified report would provide a better understanding of the scope of that involvement is likely to be disappointed: it is unlikely to contain any smoking-gun revelations.

Some of the purported revelations are, in fact, already known. It has long been known that Saudi Arabia has had a hand in the spread, through schools and philanthropic endeavors, of a certain kind of extremist Islamic philosophy often described as Wahhabism. That this philosophy is shared by various radical groups, including ISIS and Al Qaeda, is likewise well known, but there is no evidence that the Saudi government ever provided material support to either group.

Though less well known, it is also the case that many private Saudi citizens have provided funding to extremist groups over the years. And while that funding did not come from the government, the Saudi government, as Ben Rhodes, the president’s deputy national security advisor, noted this week, often paid “insufficient attention” to it, particularly prior to 2001. The 9/11 Commission report, though likely less detailed than many of the studies of this phenomenon conducted over the last decade, may well include data on the extent to which the Saudi government turned a blind eye to terrorist funding.

Comments from those who have read the report, along with previously declassified information, also suggest that junior Saudi officials may even have played some role in the 9/11 attacks themselves. Indeed, perhaps the best-known line in the 9/11 report itself is the assertion that the Commission found “[n]o evidence that the Saudi government as an institution or senior Saudi officials individually funded the organization,” an obvious loophole that leaves little to the imagination. Yet it is worth noting that any such revelations contained in the report would be at best preliminary, based on unvetted and unverified intelligence.

In fact, while the Saudi government has not objected to declassification of the report, it clearly perceives the congressional bill as the larger concern and has threatened economic reprisals over it. The threat to sell off American assets if the bill passes is likely an empty one, but it certainly underscores the concern Saudi leaders feel about the potential for such lawsuits.

Perhaps the most interesting aspect of this whole episode is that it is happening at all, a development at least partially driven by the deteriorating U.S.-Saudi relationship. President Obama’s trip to Riyadh will not be an entirely pleasant one given all the tensions in the U.S.-Saudi relationship. Indeed, only a few weeks ago, President Obama himself publicly questioned the Saudi alliance in an interview with the Atlantic’s Jeffrey Goldberg.

Ultimately, the Saudi alliance is changing. The common interests that once made it seem unshakeable, such as energy security and anti-communism, have diverged or disappeared entirely. Meanwhile, disagreements over regional stability, Saudi involvement in conflicts like Syria and Yemen, and Saudi support for various extremist groups have helped to sour the relationship. Whether or not the 9/11 Commission report is declassified, it is these larger tensions that present the major obstacle to smooth U.S.-Saudi relations in the future.

A Time article by James Grant warning about rising federal debt has prompted pushback by columnists questioning whether debt is really so bad. At the Washington Post, Wonkblog columnist Matt O’Brien says “there’s no reason to cut the debt today.” Fellow Wonkblog columnist Max Ehrenfreund suggests that Grant’s figure of $42,998 government debt per person overstates the problem.

O’Brien suggests that the only reason to fear debt would be if it were leading to a financial crisis, which it isn’t, he argues, because interest rates are low. But O’Brien neglects to mention that interest rates may rise substantially in coming years. CBO projects that as rates rise, federal interest costs will more than triple, from $253 billion this year to $839 billion by 2026.

As for Ehrenfreund, he is right that $42,998 overstates the debt problem because it does not take into account future population growth. At the same time, however, $42,998 understates the problem because the government adds more debt every year. Over the next 10 years, the U.S. population will grow 8 percent, but the CBO says federal debt will rise 69 percent. So Grant’s simple debt metric will increase over time.
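A back-of-the-envelope calculation makes the point concrete. The sketch below is my own arithmetic, using the per-person figure and the CBO growth projections just cited:

```python
# Project Grant's debt-per-person figure forward ten years using the
# growth rates cited above (CBO projections: debt +69%, population +8%).
debt_per_person_now = 42_998
debt_growth, pop_growth = 0.69, 0.08

debt_per_person_later = debt_per_person_now * (1 + debt_growth) / (1 + pop_growth)
print(f"${debt_per_person_later:,.0f}")  # roughly $67,000 per person
```

On those projections, debt per person rises by more than half even after crediting population growth.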

Other than possibly causing a financial crisis, rising federal debt creates other harms:

  • Raises Future Taxes. Taxes damage the economy by reducing incentives for productive activities, a harm called deadweight loss. With borrowing, those deadweight losses are shifted to the future, when taxes must be raised to pay the interest and principal on the debt. So the damage from borrowing is imposed on people down the road, because that is when the government will use its coercive power to extract the extra money.
  • Reduces National Saving. Rising debt may crowd out private investment, reduce the U.S. capital stock, and thus reduce future incomes. Economist James Buchanan said, “By financing current public outlay by debt, we are, in effect, chopping up the apple trees for firewood, thereby reducing the yield of the orchard forever.” Such a decline in investment may be averted if private saving rises to offset government deficits. But the CBO says, “the rise in private saving is generally a good deal smaller than the increase in federal borrowing, so greater federal borrowing leads to less national saving.”
  • Saps Business Confidence. Rising government debt may also deter private investment through the mechanism of business expectations. Businesses may be reluctant to make long-term investments, such as building new factories, if high and rising government debt creates a fear of tax increases down the road.
  • Siphoned to Pay Foreigners. Some pundits, such as Paul Krugman, tell us not to worry about government debt because we “owe it to ourselves.” But today about half of federal debt is owed to foreigners. So growing debt means that a rising share of the future earnings of U.S. workers will be siphoned off by the government to repay foreign creditors.
  • Distorts Government Decisionmaking. The availability of debt finance may induce policymakers to increase spending excessively. Since borrowing makes programs appear to be “free” to citizens and policymakers, the government has less incentive to be frugal, and is more likely to spend on low-value programs.

Wonkblog’s O’Brien says he “feels fine” about today’s $13.9 trillion of debt. But how about tomorrow’s $24 trillion, as shown in the chart? And what if America has further recessions, wars, and other negative shocks, and it becomes $30 trillion? Surely, in today’s uncertain world, we want our policymakers to err on the side of prudence, and so debt growth measured in trillions makes me feel far from fine.

 

For a brief history of federal debt and why it is a major problem, see this 2015 report.

The conventional wisdom that United States v. Texas would be one of the handful of 4-4 ties in the post-Scalia era looks pretty wise indeed. After an hour and a half of argument and huge masses of demonstrators outside the courthouse – more people than I’ve ever seen – that result would be anticlimactic: DAPA remains enjoined, without a Supreme Court opinion.

That’s a good thing for two reasons: (1) in my view, President Obama’s executive action goes beyond executive power under the immigration (and administrative) laws, and (2) the next president will almost certainly rescind (if a Republican) or expand (if a Democrat) the program, mooting or transforming the case. While the government’s supporters had been hoping that the 26-state lawsuit would be dismissed for lack of standing – perhaps Chief Justice John Roberts could be swayed to that technical solution – there do not seem to be five votes for that outcome either.

But even though we aren’t likely to get a real decision, this morning’s argument highlighted the importance of the case beyond the immigration context, raising key separation-of-powers issues. As the Obama administration has taken executive power to heights it has never reached before, the U.S. solicitor general at one point mentioned “the change in federal law” that DAPA represents – and, of course, it takes a new law passed by Congress to change an old law. Justice Kennedy thus asked about a limiting principle – echoing past arguments over Obamacare and other battles over federal power – and how to define “the limits of discretion.”

With respect to immigration, Texas’s solicitor general concisely boiled down the case to a matter of transforming deferred action (a non-binding decision not to seek removal) into a grant of legal status. That’s the nub: much as we would want an immigration system that makes sense, that allows peaceful people to be productive members of society, that’s not what we have, and the president can’t just use his pen and phone to fix it.

As Justice Robert Jackson put it in his canonical statement about constitutional structure in the 1952 Steel Seizure Case, courts must be last, not first, in giving up on the separation of powers. Just because we might like a policy or think that its costs outweigh its benefits, doesn’t mean that it’s constitutional.

A new study, published in the journal Circulation, adds to growing doubts about the benefits of skim or low-fat milk, NPR reports this morning: 

“People who had the most dairy fat in their diet had about a 50 percent lower risk of diabetes” compared with people who consumed the least dairy fat, says Dariush Mozaffarian, dean of the Friedman School of Nutrition Science and Policy at Tufts University, who is also an author of the study.

NPR reporter Allison Aubrey notes other recent studies on the possible benefits of dairy fat and then reports:

With all the new evidence that challenges the low-fat-is-best orthodoxy, Mozaffarian says it may be time to reconsider the National School Lunch Program rules, which allow only skim and low-fat milk.

“Our research indicates that the national policy should be neutral about dairy fat, until we learn more,” says Mozaffarian.

And there’s the problem for public policy. Why do we need a national policy on dairy fat? Why do we need national rules on what local schools can serve for lunch? And most specifically, since our understanding of nutrition science is always changing, why should we codify today’s understandings in law and regulation?

As I wrote a few months ago in response to a Washington Post story on the possibility that decades of government warnings about whole milk may have been in error,

It’s understandable that some scientific studies turn out to be wrong. Science is a process of trial and error, hypothesis and testing. Some studies are bad, some turn out to have missed complicating factors, some just point in the wrong direction. I have no criticism of scientists’ efforts to find evidence about good nutrition and to report what they (think they) have learned. My concern is that we not use government coercion to tip the scales either in research or in actual bans and mandates and Official Science. Let scientists conduct research, let other scientists examine it, let journalists report it, let doctors give us advice. But let’s keep nutrition – and much else – in the realm of persuasion, not force. First, because it’s wrong to use force against peaceful people, and second, because we might be wrong….

Today’s scientific hypotheses may be wrong. Better, then, not to make them law.

The New York Metropolitan Transportation Authority (MTA) has formally quit its membership in the American Public Transportation Association (APTA), the nation’s principal transit lobby. In a harshly worded seven-page letter, MTA accused APTA of poor governance, an undue focus on small transit agencies, and an embarrassingly large compensation package for APTA’s president.

The MTA and its affiliates, Metro North, the Long Island Railroad, and New York City Transit, together carry 35 percent of all transit riders in America. Since MTA’s ridership has been growing while transit elsewhere has declined, this percentage is increasing.

Yet APTA’s focus has been on lobbying for increased funding for smaller agencies, including building new rail transit lines in cities that haven’t had rail transit and extending transit service in smaller cities and rural areas that have had little transit at all. As a result, says the letter, MTA has been short-changed by roughly a billion dollars a year in federal funding that it would have received if funds were distributed according to the number of transit riders carried.

This accords with a Cato policy analysis that found New York has been shorted half a billion dollars a year in discretionary transit funds. Since discretionary funds make up less than half of all federal transit funds, it is easy to imagine that the nation’s largest urban area is losing a billion dollars a year to smaller cities that are not making effective use of those funds.

The letter observes that APTA’s executive committee, which makes most month-to-month decisions for the group, has almost no representatives of “legacy systems,” meaning transit systems that had rail transit before 1980. The committee is thus biased towards smaller systems, where transit spending is less needed and/or less effective than in big urban areas such as New York, Chicago, and Philadelphia.

The legacy systems, the letter notes, all have “State-of-Good-Repair needs that are an order of magnitude greater than the non-Legacy rail systems.” Yet APTA’s focus has been on building more rail lines rather than funding the maintenance needs of the legacy systems.

MTA’s APTA membership fee of more than $400,000 a year is only about 2 percent of APTA’s annual budget. The transit agency that carries more than a third of the nation’s transit riders could get away with contributing only 2 percent of the transit lobby’s budget because APTA has lots of “associate members” that aren’t transit agencies. Yet even this is a sore point with the MTA, as those associate members are mainly contractors, many of whom make their money from designing and building new rail transit lines, so their influence further dilutes the interests of the MTA and other legacy systems.

The letter concludes with what it calls the “elephant in the room”: the compensation of APTA’s president and CEO, which (it says) caused “acrimonious discussions at the board level.” APTA’s 2014 IRS filing reveals that it paid its president a whopping $892,471 in 2013, not counting another $57,248 in benefits. To many agency officials, this extremely high salary seems incongruous at a time when most transit agencies are having to cut their spending in response to the reduced tax revenues associated with the recent recession. For MTA, the compensation seems especially galling because it hasn’t resulted in greater federal funding at a time when MTA’s ridership is growing relative to the rest of the country.

This letter reflects an age-old battle within the transit industry: should the industry concentrate on providing transit in areas where transit usage is highest, or should it focus instead on trying to generate new transit riders in areas where usage is minimal? On one hand, per capita transit ridership is falling almost everywhere, even New York, so if the industry is to grow some efforts must be made in attracting new customers. On the other hand, the industry is clearly subject to diminishing returns: that is, the cost of getting each new customer is increasing.

One reason for that increase is the industry’s questionable strategy of spending huge amounts of money on high-cost infrastructure, including light rail, streetcars, and exclusive bus lanes. Far more riders could be gained by spending the same money on improvements to basic bus service. But here is where APTA’s associate members come in: they have a clear interest in promoting new infrastructure construction rather than expanded operations on existing infrastructure.

To be fair, APTA has to deal with the political environment in Washington, D.C., an environment that favors new construction over maintenance and at the same time favors distributing dollars to as many states and congressional districts as possible. In this environment, it could be argued, the natural outcome is to favor smaller urban areas over big ones such as New York.

But if this outcome is preordained, MTA might ask, then what good is APTA in the first place? The answer appears to be that APTA spends $20 million a year churning out press releases taking credit for decisions and results over which it, in fact, has little or no control. At least some transit supporters think that APTA could have done more to help its members find ways to spend money in ways that would more effectively attract new transit riders, although doing so might have lost APTA some of its associate members.

Since MTA’s annual fee represents such a small part of APTA’s budget, its departure will have little impact on APTA’s funding unless it is followed by similar resignations by other legacy systems. But the dent in APTA’s reputation may be more severe and may force APTA to reconsider its policy of promoting new construction over operation and maintenance of transit systems in the cities that most heavily use transit.

What do you get when you combine family trips to a gardening store and loose-leaf tea in your trash? To Kansas law enforcement, it’s probable cause to get a search warrant and perform a SWAT-style raid on a private home.

In 2011, Robert Harte and his 13-year-old son went to a store for hydroponic equipment to grow tomatoes for a school project. A state trooper had been assigned to watch that store and write down the license plates of any customers (apparently, shopping at a gardening store translates to marijuana production). To follow up that stellar bit of police work, the Johnson County Sheriff’s Office twice examined the Hartes’ trash. They found, both times, an ounce or so of “saturated plant material.”

The Keystone Kops couldn’t tell the difference between tea and tokes using their senses, so they field-tested the substance and the test came back positive for marijuana. (“A partial list of substances that the tests have mistaken for illegal drugs would include sage, chocolate chip cookies, motor oil, spearmint, soap, tortilla dough, deodorant, billiard’s chalk, patchouli, flour, eucalyptus, breath mints, Jolly Ranchers and vitamins,” notes Radley Balko.)

Still, after falsely reading the tea leaves, the deputy sheriffs performed a military-style raid on the family home. At 7:30am, the Hartes were woken up by pounding on their doors; as soon as Mr. Harte answered, an armed team flooded into the room, ordered him to the ground, and rifled through the home for three hours. The officers—once they realized that there was no large-scale growing operation—began searching for “any kind of criminal activity,” a far greater sweep than a warrant to search for “marijuana” and “drug paraphernalia” permits. Moreover, the deputies left the canine units in the house longer than was necessary, to give them “training or just experience”—so the terrifying armed raid was a mistaken fishing expedition that then turned into a training exercise.

Cato has filed an amicus brief in the federal appellate court where the Hartes’ lawsuit against the police is currently pending (after the district court dismissed it). We argue that the police failed to knock and announce their presence in anything but a literal sense—an important Fourth Amendment rule—and also exceeded the scope of their warrant to look for “any criminal activity.” The case thus raises pressing issues of police militarization in society and warrantless police authority. In briefing for an earlier case, Cato noted that “SWAT team deployments have increased more than 1,400% since the 1980s… . SWAT teams and tactical units were originally created to address high-risk situations, such as terrorist attacks and hostage crises. Today, however, these extreme situations account for only a small fraction of SWAT deployments; they’re used primarily to serve low-level drug-search warrants.”

Moreover, the knock-and-announce rule is an ancient one rooted in the English common law dating back to the early 17th century. The rule serves to protect the life, limb, and property of both home occupants and police serving a search or arrest warrant. When officers use the force associated with a SWAT raid, even without literally breaking the door, their pro forma compliance with the knock-and-announce rule converts the Fourth Amendment into a “parchment barrier.”

Indeed, the systemic use of SWAT-style force to execute low-risk drug warrants turns the presumption that people normally peaceably comply with police—central to the knock-and-announce rule—on its head. The police could easily have investigated their suspicions here without going commando.

Accordingly, we call upon the U.S. Court of Appeals for the Tenth Circuit to send this case back for trial, consistent with the common law underlying the Fourth Amendment. 
