Cato Op-Eds

Individual Liberty, Free Markets, and Peace

A new GAO study examines the Low-Income Housing Tax Credit, which is a complex government program aimed at increasing the supply of affordable housing.

How complex is it? Vanessa Brown Calder and I noted that one LIHTC guidebook is 1,400 pages long.  

The LIHTC is a classic government solution to a problem. It is complicated, raises costs, and is not very effective. Nonetheless, some people favor such approaches. Adam Smith called them “men of system.” 

An easier way to solve problems is to let markets work. This approach leans toward simplicity and low cost. Some efforts may not be effective at first, but through innovation and feedback entrepreneurs eventually nail it. Adam Smith called it the “obvious and simple system of natural liberty.”

Below, a diagram from the GAO study shows part of the LIHTC process. Little tax credit boxes float around and dollar signs flow to LIHTC investors, which are usually major banks. This is the hard way to increase affordable housing supply.

Below that, I’ve diagrammed the easy way, which is to deregulate, remove the subsidies, and let banks and developers compete in the marketplace.

Affordable Housing: The Hard Way


Affordable Housing: The Easy Way

Beijing continues to intensify its diplomatic campaign to isolate Taiwan internationally, and as I describe in a recent article in China-U.S. Focus, that bullying strategy threatens to trigger dangerous tensions between China and the United States.  Chinese leaders were shocked and angered when Taiwanese voters endorsed Tsai Ing-wen and her pro-independence Democratic Progressive Party (DPP) in the 2016 elections.  The communist regime soon moved to adopt an aggressive strategy of diplomatic strangulation.  During her presidency, Beijing has induced five of the 22 countries (mostly small, poor nations in Africa and Latin America) that had still recognized Taipei when she took office to switch ties to Beijing.  The latest defector is El Salvador.

Although the Chinese strategy appears to be paying off in the narrow sense of achieving its primary objective, it may ultimately come at an unacceptably high price.  The campaign is producing the opposite reaction in Taiwan of what Beijing seeks.  Tsai and her government have adopted a stance of outright defiance, making it clear that Taipei will not be bullied into taking steps toward reunification with the People’s Republic of China (PRC).

More ominously, American supporters of Taiwan are pushing back firmly, and they are moving to increase Washington’s support of the island’s de facto independence.  The State Department immediately issued a statement that Washington was “deeply disappointed” by El Salvador’s decision—even though the United States itself does not maintain formal diplomatic ties with Taiwan.  

Taipei’s friends in Congress ratcheted up their support for the beleaguered democratic island.  Senator Cory Gardner (R-CO), chairman of the Senate Foreign Relations Committee’s Asia subcommittee, indicated his intention to propose a measure pressuring countries to stick with Taipei.  Among other things, his legislation would authorize the State Department to downgrade relations or alter foreign assistance programs to discourage countries from making any decisions deemed adverse to Taiwan.  “The Taipei Act of 2018 would give greater tools and directions to the State Department in making sure we are as strong a voice as possible for Taiwan,” Gardner told Reuters.  A little more than a week later, he and a group of bipartisan co-sponsors, including Marco Rubio (R-FL) and Ed Markey (D-MA), carried through on that pledge and introduced the legislation.

Their initiative is just the latest indication that American backers of Taiwan are becoming more vocal and proactive in pushing U.S. measures to counter the PRC’s hardline policies.  A major step occurred in March 2018 when President Trump signed into law the Taiwan Travel Act, which encouraged “officials at all levels of the United States Government” to visit and meet with their Taiwan counterparts and to “allow high-level officials of Taiwan” to enter the United States and to meet with their U.S. counterparts.  That legislation, which passed both houses of Congress overwhelmingly, ended Washington’s practice, adopted when the United States recognized the PRC in 1979, of authorizing meetings only with relatively low-level Taiwanese officials.  It was especially noticeable that the new law specifically promoted interaction by “cabinet-level national security officials.”

In early July, the Pentagon sent two U.S. warships through the Taiwan Strait, the first such passage in more than a year, in a display of support for Taipei.  That move occurred on the heels of a State Department request that the Defense Department send a small contingent of Marines to guard the American Institute in Taiwan (Washington’s de facto embassy in Taipei).  The United States also invited two senior Taiwanese military officials to participate in a May ceremony at the U.S. Pacific Command. 

Any one of these episodes might not be all that significant, but taken together they confirm that Washington’s backing for Taiwan is escalating.  Beijing can blame itself for much of that development.  The PRC’s strategy of diplomatic strangulation is backfiring, and the surge of Chinese military exercises in the Taiwan Strait is making matters even worse.

Beijing would be wise to dial back its confrontational policies toward Taiwan.  However, Taiwan’s supporters in Congress, the media, and the Trump administration need to appreciate just how sensitive the Taiwan issue is to PRC leaders and the Chinese people.  Excessive, ostentatious U.S. diplomatic support for Taiwan could bring the PRC and the United States closer to a dangerous confrontation.  Both sides need to exercise much greater caution and restraint than they are showing now.


In today’s New York Times, Brooklyn public defender Scott Hechinger makes a very strong case that criminal defendants in American courts face a two-tiered system of justice, and most defendants get the worst of it.

Mr. Trump assailed the practice of pretrial detention as “tough” when Paul Manafort had his bail revoked before his trial began. He then bemoaned the “very unfair” power that prosecutors wield to force people in the system “to break” in the wake of the Michael Cohen plea and the Manafort jury conviction and subsequent guilty plea. He lamented the devastating collateral consequences that arise from “a mere allegation” when Rob Porter was forced to resign after being accused of domestic violence; raged about the late-night, “no knock” raids of Mr. Cohen’s properties; and expressed outrage that the government, in its investigation of Carter Page, was able to overcome the protections of the Fourth Amendment to obtain a FISA warrant with “no hearings,” while also endorsing the idea, raised by the writer Andrew McCarthy, that they should be “looking at the judges who signed off on this stuff.”

[…]

I understand President Trump’s outrage. It is remarkable that people, presumed innocent, are locked up before being convicted of any crime. It is deeply unfair that mere accusations can lead to devastating, lifelong consequences. It is alarming that, in a system theoretically built around transparency and truth seeking, police and prosecutors have such outsize power to surveil, search, detain, bully, coerce and nearly destroy a person without producing evidence sufficient to secure a conviction. (Emphasis in original.)

But it’s important to note how these defendants were actually treated as they worked their way through the system, and how their treatment differs from that of most everyone else.

Take Mr. Manafort’s experience with pretrial detention. Despite the seriousness of the allegations and his clear ability to flee, he was not in jail for a majority of his case pretrial. He and his attorneys were able to arrange an intricate bail package that was tailored to his financial circumstances, including $10 million bond and the surrender of his passport. This is how bail is supposed to work — not as punishment to lock someone up before a conviction, but as a way to guarantee that the accused will return to court while at liberty. Mr. Manafort was detained pretrial only after the presiding judge found evidence of witness tampering, following nearly two weeks of motion practice and then oral argument while Mr. Manafort continued to sleep in his own bed.

This kind of accommodation is unheard-of for the roughly quarter million people, my clients included, in jail for no reason other than their inability to pay bail. In the real world, despite the constitutional prohibition on excessive bail, decisions to detain people happen in a matter of seconds, with little to no consideration of an individual’s ability to pay. In just the past month alone, prosecutors requested and judges set bail totaling over $200,000 on clients of mine who, collectively, could not have afforded one one-thousandth of that.

Hechinger is also correct when he writes that the treatment of these high-profile defendants should not be resented: rather, it’s the double-standard for the privileged that should be eliminated. A meaningful presumption of innocence and the other rights afforded to Manafort et al. should be replicated and applied to the accused throughout the state and federal justice systems because they reflect constitutional protections intended to curb the coercive power of government.

The piece is worth reading in full here.

The federal government imposes a gasoline tax of 18.4 cents per gallon. Lobby groups are pressing for an increase and President Trump has suggested that he may support one. But a federal gas tax increase makes no sense.

State governments own America’s highways, and they are free to raise their own gas taxes whenever they want. Indeed, 19 states have raised their gas taxes just since 2015, showing that the states are entirely capable of raising funds for their own transportation needs. State gas taxes average 34 cents per gallon.

Also consider that gas taxes used to be a more pure user charge for highways, but these days gas tax money is diverted to inefficient nonhighway uses such as transit. Politicians say, “We need a gas tax increase to fix our crumbling highways,” and then they spend the money on other things. It is a bait-and-switch.

Federal fuel taxes and vehicle fees raise about $41 billion per year. About 20 percent of those funds (about $8 billion) are diverted to transit and other nonhighway uses.

With state fuel taxes the diversion is even larger, as shown in this Federal Highway Administration table. In 2016, state governments raised $44 billion from fuel taxes, and they diverted 24 percent—14 percent to transit and 10 percent to other activities. Texas, for example, diverts 25 percent of its fuel taxes to education spending.

The states also raised $38 billion from vehicle fees. They diverted 34 percent of those funds—13 percent to transit and 21 percent to other activities.

In total, states raised $82 billion from fuel taxes and vehicle fees. They spent $59 billion (72 percent) on highways and $23 billion (28 percent) on other activities. If the highways in your state have congestion and potholes, it may be because your government is taking money raised from highway users and diverting it to other activities.
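The shares above follow directly from the cited totals; a few lines of arithmetic, using the rounded figures from the text (in billions of dollars), confirm them:

```python
# Verify the state highway-revenue diversion shares cited above.
# Figures are the article's rounded 2016 totals, in billions of dollars.
fuel_tax_revenue = 44      # state fuel taxes
vehicle_fee_revenue = 38   # state vehicle fees
total = fuel_tax_revenue + vehicle_fee_revenue

highway_spending = 59      # amount spent on highways
diverted = total - highway_spending

print(total)                                  # 82
print(round(highway_spending / total * 100))  # 72 (percent to highways)
print(round(diverted / total * 100))          # 28 (percent diverted)
```

Because the inputs are rounded to the nearest billion, the computed percentages match the text only to rounding.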

The chart below shows the shares of state fuel taxes and vehicle fees diverted to nonhighway uses. South Carolina, for example, diverts 31 percent.

Last year, South Carolina’s governor Henry McMaster vetoed a gas tax increase. He objected to his state’s diversion: “Over one-fourth of your gas-tax dollars are not used for road repairs … They’re siphoned off for government agency overhead and programs that have nothing to do with roads.”

As a rough user charge, gas taxes are a good way to fund highways, and our highways do need more investment. But motorists should be skeptical of gas tax increases until policymakers stop diverting funds to inefficient transit systems with declining ridership.

Many transportation experts say that the rise of electric vehicles will be the end of the road for gas taxes, and they are eager to impose new vehicle miles traveled (VMT) charges to fund highways. However, governments are diverting more than $30 billion in fuel tax revenues and vehicle charges a year to nonhighway uses. If that diversion were ended, these revenues could continue to be America’s highway funding source for years to come.


More on highways and the gas tax:

https://www.downsizinggovernment.org/transportation/federal-highway-policies

https://www.downsizinggovernment.org/infrastructure-investment

https://www.downsizinggovernment.org/chamber-commerce-misguided-gas-tax

https://www.cato.org/blog/federal-gas-tax-increase-misguided

https://www.cato.org/blog/federal-gas-tax-lahood-makes-no-sense

Transit ridership has been declining now for four years, and the latest census data, released last week, reveal that the biggest declines are among the groups that you might least expect: young people and low-income people. These results come from the American Community Survey, a survey of more than 3 million households a year conducted by the Census Bureau. Here are some of the key findings revealed by the data.

1. Young People Are Deserting Transit

Those who subscribe to the popular belief that Millennials and other young people prefer transit to owning and driving a car were shocked last week when the Washington Post published an article indicating that “a Millennial exodus” was “behind [Washington] Metro’s diving ridership.” This was based on a study that found that, from 2016 to 2018, young people had reduced their use of transit for commuting by 20 percent, while older people had reduced it by smaller amounts or not at all. The study used cell phone records from one of the nation’s largest wireless carriers, probably Verizon or AT&T.

Young people seem to be deserting transit more than older commuters.

Although the census data only go as far as 2017, they seem to confirm this finding. As shown in the above chart, the largest declines in transit commuting, both nationally and in the Washington DC urban area, are among younger people. Commuting forms only a part of transit ridership, but to the extent that declining ridership is due to ride-hailing services such as Uber and Lyft, those services are disproportionately used by people under the age of 35. For more information about transit declines by age class, including links to data files for 2017 going back to 2005, see my longer post on the subject. In addition to national data, the files show how people in various age classes commuted to work in each state and each major county, city, and urban area.

2. Low-Income People Are Deserting Transit

Although transit subsidies are often justified by the need to provide mobility to low-income people, the reality is that transit commuting by people in the lowest income classes is shrinking while transit commuting is growing fastest among people in the highest income classes.

Transit commuting in the lowest income classes is shrinking faster than the total size of those classes, while in the highest classes it is growing faster than the total size of those classes.

Transit commuting is increasingly skewed to people who earn more than $75,000 a year. Even though only 19 percent of American workers were in this income class in 2017, they made up 26 percent of transit commuters, an increase from just 14 percent in 2005. Both the average and the median income of transit commuters are higher than those of all workers. For more information on transit commuting and income, including links to data files from 2006 through 2017, see my more detailed post on the subject.

3. Vehicle Ownership Continues to Rise

While ride hailing is probably responsible for much of the decline in transit ridership among young people, increasing auto ownership is responsible for much of the decline among low-income people. Between 2014 and 2017, the share of households that lacked access to a motor vehicle declined from 9.1 to 8.6 percent. Moreover, the share of workers who live in households with no vehicles declined from 4.6 to 4.2 percent.

In 1960, more than 20 percent of American households had no motor vehicles while only a small percentage owned three or more, figures that have practically reversed themselves today.

While a few tenths of a percent may not sound like much, remember that in all but a handful of urban areas more than 90 percent of commuters get to work by car while less than 2 percent take transit. Thus, a small increase in auto ownership can lead to a large percentage decrease in transit usage.

Curiously, most American workers who live in households without cars don’t take transit to work. In fact, in most states and urban areas, more workers who live in households without cars nevertheless drive alone to work than take transit to work. How do they drive alone if they don’t have a car? Probably in employer-supplied vehicles. In any case, this is just one more indicator of transit’s declining relevance. For more information on increasing auto ownership, including data files, see my detailed post on the subject.

4. Transit Is Increasingly Irrelevant

Transit agencies and their supporters act as though transit is somehow vital to the national and local economies. That may still be true in New York City, but it is only marginally true in Boston, Chicago, Philadelphia, San Francisco, and Washington, and not at all true elsewhere. The decline in transit ridership among young people, who were supposed to love transit the most, and among low-income people, who were supposed to need transit the most, just reinforces this declining relevance and argues against any further subsidies to this obsolete industry.

At his Washington Post blog, Cato alumnus Radley Balko has cultivated a running list of data-driven reports that show persistent, measurable, widespread, and common racial disparities in criminal justice enforcement. In police stops, sentencing, pretrial detention, the death penalty, and a host of other areas, enforcement disproportionately affects African Americans and Latinos. For those who study or work in criminal justice for a living, the racial disparities are glaring and the quantitative research supports our policy prescriptions. But most people aren’t criminal justice wonks, and what Radley has created is a great public education resource about what our system is doing all around the nation. 

The abundance of evidence Radley collected shows that our criminal justice system harasses and punishes racial minorities more harshly than whites. These findings are important because so many critics of justice reform and of activist groups like Black Lives Matter deny that many of these disparities exist. The denial of these problems—which have been well-known or, at least, strongly suspected in many American minority communities for all of living memory—precludes the identification of any potential remedies. This clearinghouse of peer-reviewed academic papers, government reports, and books that measure racial disparities marks a new starting point for individuals who want to understand our criminal justice system.

Read the whole thing here.

Thank you, Radley.

Child custody is among the most fraught topics the law confronts. It is the area in which personal relationships and raw emotions must be reconciled with legal rules and court judgments. Such is the case of “Ann,” an eight-year-old girl at the center of a case now before the Wisconsin Supreme Court. Ann has periodically spent time with her paternal grandmother, but due to family squabbles, Ann’s mother stopped bringing Ann to visit. The grandmother filed a lawsuit saying she was entitled to visitation rights, which a Wisconsin statute allows grandparents to ask for in circumstances where they have a preexisting relationship with the child such that the severing of that relationship would not be in the child’s best interest.

Complicating matters, the U.S. Supreme Court has held that these familial relationships have a constitutional dimension. In the 2000 case of Troxel v. Granville, the Court struck down a Washington State law that granted grandparents visitation rights when doing so would be in “the best interests of the child.” This standard was constitutionally infirm, the Court held, because parents have important rights that cannot be overcome by a bare showing that the child would be better off being raised by someone else.

As the late Justice Antonin Scalia pointed out, a great number of children should be taken from their homes if the question is whether someone else might do a better job raising them. Wisconsin’s statute is somewhat different than the Washington law, in that it requires a greater showing before invading the parent’s decision-making. The question for the Wisconsin Supreme Court is whether that’s different enough to shift the constitutional calculus.

Cato has filed an amicus brief, without taking a position as to which member of the family should prevail in this very personal dispute. Instead, we concern ourselves with the standards the court should apply. There’s a longstanding dispute about the source and extent of constitutional rights not explicitly set out in the Constitution’s text (or even whether they exist at all). The parental rights the U.S. Supreme Court previously recognized are of this type: no clause specifically provides that parents are entitled to direct the upbringing of their children, but the Court has (correctly) recognized such a right as an inherent feature of liberty.

Cato’s brief argues that this understanding should be expanded to recognize that it is not only the parent’s liberty that matters here, but also that of the child. We draw on the original understanding of the Fourteenth Amendment, in particular the Privileges or Immunities Clause, which, though it has fallen into disuse, was intended to be the guarantor of such rights. We urge the Wisconsin Supreme Court to address the full scope of citizens’ constitutional liberties in considering Ann’s fate.

The Wisconsin Supreme Court will be hearing Michels v. Lyons this fall.

The number of above-average “excess deaths” in Puerto Rico attributable to Hurricane Maria (September 20, 2017) is difficult to estimate objectively.  Puerto Rico’s official figure of 64 deaths by December 9, 2017 (which the President remembered) counted only those deaths directly attributed to the storm and confirmed by medical examiners.  Most of the direct deaths from Katrina were from drowning, which is much easier to attribute to the storm than many other causes of death. Studies of Puerto Rican deaths from Maria aspire to account for a wide range of indirect effects that are presumed (not proven) to be consequences of the storm, such as suicides and heart attacks, infectious diseases, and damage to electric power and therefore to dialysis and respirator equipment.

Among at least eight major studies of direct and indirect effects on mortality attributed to Maria, two outliers stand out as being 3-5 times larger than the others, which all cluster around 1,000. The first big number was from Harvard. On September 13, Time said, “Harvard’s report, which was based on systematic household surveys throughout Puerto Rico, reached an estimate of 4,645 storm-related deaths between September and December 2017, many as a result of ‘delayed or interrupted health care.’”  Nonsense. The Harvard study extrapolated from only 15 deaths reported in a survey of 3,299 households to estimate that “between 793 and 8498 people died … up to the end of 2017.” By adding 793 and 8498 and dividing the result by 2, Time and others came up with a totally meaningless “average,” which was widely reported with predictable sensationalism: “The hurricane that struck Puerto Rico in September was responsible for more deaths than the Sept. 11 attacks and Hurricane Katrina combined,” exclaimed The Daily Beast. In reality, these estimates of deaths reported by survey respondents are little better than an opinion poll, and finding 15 deaths in a sample of 3,299 households cannot plausibly be multiplied into 4,645 for the whole island.
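The arithmetic behind the headline number is easy to reproduce; the sketch below simply takes the midpoint of the Harvard study’s reported interval, which is how Time and others arrived at 4,645:

```python
# The widely reported "4,645" is just the midpoint of the Harvard study's
# confidence interval of 793 to 8,498 deaths.
low, high = 793, 8498
midpoint = (low + high) / 2
print(midpoint)    # 4645.5, reported as 4,645

# The interval spans more than a tenfold range, so the midpoint by itself
# conveys almost none of the underlying uncertainty.
print(high - low)  # 7705
```

A midpoint is a legitimate point estimate only when the interval is reasonably tight; here the width dwarfs the estimate itself.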

The latest sensational estimate of 2,975 excess deaths over six months is from an August 28 report from the Milken Institute School of Public Health at George Washington University (GWU), commissioned by the Government of Puerto Rico. The study mentions two “scenarios” (census and displacement) yet publicized only the one with the bigger number: “Total excess mortality post-hurricane using the migration displacement scenario is estimated to be 2,975 (2,658-3,290) for the total study period of September 2017 through February 2018.”

The 2,975 estimate applies only to the “displacement scenario.”  That is, the study “estimates cumulative excess net migration from Puerto Rico in the months from September 2017 through February 2018 and subtracts this from the census population estimates in these months.”  The population fell by about 8%, mainly due to migration rather than death, so the fact that there were more deaths than average after the hurricane means the death rate (deaths per thousand) rose more than the unadjusted statistics would suggest, because the population is smaller.  But the issue here is the number of deaths, not the death rate, and displacement (migration) did not make that number any higher than the roughly 1,000 that half a dozen other studies found, much less three times higher.

Trying to explain the high “displacement scenario” estimate, Eliza Barclay at VOX writes, “The ideal way to calculate the death toll from a hurricane, disaster researchers say, generally, is to count all the deaths in the time since the event, and then compare that number to the average number of deaths in the same time period from previous years. Subtract the average number from the current number and that’s the death toll.”  Unfortunately, the GWU “displacement scenario” estimate does not do that.  What it does instead is to compare what actually happened with hypothetical simulations of what might have happened without the storm.  Those projections come from “a series of generalized linear models (GLMs)… accounting for trends in population … in terms of age, sex, seasonality and residence by municipal level of socioeconomic development.”  And the estimates “also considered Puerto Rico’s consistently high emigration during the prior decade and dramatic population displacement after the hurricane.” Such complexity adds uncertainty.

The August 28 GWU report claimed to be “the first to use actual death certificates and other mortality data in order to estimate a more precise mortality count due to Hurricane Maria.” On the contrary, an earlier August 2 study in the Journal of the American Medical Association, by professors from Penn State and the University of Texas, had already used death certificate data to (as Ms. Barclay recommended) “count all the deaths in the time since the event and then compare that number to the average number of deaths in the same period from previous years.” Yet that ideal method found the number of excess deaths was 1,139 from September through December of last year.  As the table from that paper shows, “excess deaths” means the number above the 2010-2016 average.  Since 90% of these atypical deaths happened in September and October, it appears quite plausible to attribute most of them to Hurricane Maria.  That is consistent with five previous credible estimates of Puerto Rican deaths due to Maria, which, as a Washington Post fact checker noted in June, were “all … roughly around 1,000 deaths.”
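The “count and compare” method Ms. Barclay describes can be expressed in a few lines; the monthly figures below are hypothetical placeholders for illustration only, not the actual Puerto Rico death counts:

```python
# Excess deaths = observed deaths in the period minus the baseline average
# for the same period in prior years. All figures are hypothetical.
baseline_2010_2016 = {  # average monthly deaths, Sep-Dec (hypothetical)
    "Sep": 2350, "Oct": 2400, "Nov": 2450, "Dec": 2500,
}
observed_2017 = {       # monthly deaths after the storm (hypothetical)
    "Sep": 2900, "Oct": 2850, "Nov": 2500, "Dec": 2550,
}

excess = {m: observed_2017[m] - baseline_2010_2016[m] for m in observed_2017}
total_excess = sum(excess.values())

print(excess)        # {'Sep': 550, 'Oct': 450, 'Nov': 50, 'Dec': 50}
print(total_excess)  # 1100
```

The method requires no statistical modeling at all, which is why the article treats it as more transparent than the GWU simulation approach.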

The rationale for the study’s novel choice of a six-month time frame was to find out if things are getting better.  But the more months pass after the disaster, the more arbitrary it appears to attribute deaths to the disaster, since an estimated 77% of those who died were seniors.

In marked contrast to the JAMA paper (where 90% of the deaths happened near the time of the hurricane) only 42.7% of the GWU study’s simulated 2,975 deaths occurred in September and October of 2017.  Another 27.8% occurred in November and December, and 29.5% occurred in January and February of 2018.  That timing seems counterintuitive and implausible, suggesting the September storm has lately been becoming more fatal rather than less.  

To attribute deaths over a six-month period to the hurricane per se is inherently difficult and subjective. What the Milken report calls “a failing health system” and “multiple cascading failures in critical infrastructure” (telecom and power) may largely reflect negligence by Commonwealth or city governments, notably the island’s mismanaged government-owned electric utility, Prepa.

To attribute the estimated six-month deaths to FEMA, as some have, is even less believable. By August 8, FEMA reported it had awarded “more than $3 billion in Public Assistance funds … to the government of Puerto Rico and municipalities” for Hurricane María-related costs. “This is a massive job and it has taken a massive effort by everybody: the Government of Puerto Rico and the municipalities, federal agencies, voluntary and faith-based organizations and the private sector,” said Federal Coordinating Officer Michael Byrne.

The questionable 2,975 GWU estimate of hurricane-related deaths, like the unbelievable 4,645 Harvard estimate before it, is being widely misused as a criticism of emergency relief efforts by FEMA and numerous private charities, rather than being attributed to the sheer magnitude of destruction on an isolated island or to any shortcomings of local Puerto Rican efforts.

In short, an actual 4-month count closer to 1,100 for above-average Puerto Rican deaths in the wake of Maria appears much more transparent and statistically relevant than the 6-month statistical simulation of 2,975 now being used.

In Washington earlier this month, one person’s words in the New York Times were deemed a threat to national security by those at whom they were aimed.

An anonymous Trump administration official was labeled “a seditious traitor who must be identified and prosecuted for illegal conduct” for exercising his or her First Amendment rights by publishing an op-ed in the September 5 edition of the New York Times. Vice President Pence stated that the op-ed writer’s actions inside the Administration—trying to limit what the writer believes is the damage President Trump is doing daily to the United States—are “an assault on our democracy”—a notion unhinged from any semblance of reality.

Like everyone else working in the Trump administration, the author of the op-ed took the same oath I did when I served in the federal government, an oath whose text is set out in federal law at 5 U.S.C. § 3331:

I, AB, do solemnly swear (or affirm) that I will support and defend the Constitution of the United States against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same; that I take this obligation freely, without any mental reservation or purpose of evasion; and that I will well and faithfully discharge the duties of the office on which I am about to enter. So help me God.

The oath makes no reference to pledging fealty to whoever happens to be President. It is a pledge of loyalty to our form of government, not an individual. The notion that the Justice Department even has a basis to prosecute the writer does not pass the laugh test, much less constitutional muster.

The anonymous Trump administration official—and if he or she is to be believed, many more working for Trump—views him as a domestic threat to the American people and the Constitution itself. Democrats and others on the political left have viewed Trump that way since he won the Electoral College vote in November 2016. Clearly others in the Administration now view Trump the same way.

The anonymous op-ed writer is hardly the first person working for a federal chief executive to believe that an increasingly mentally unhinged boss needed to be contained, or even removed. Nixon White House counsel-turned-Watergate whistleblower John Dean is perhaps the most prominent, but he was not the only Nixon administration official prepared to ignore or even countermand a presidential order deemed a threat to the Republic.

As Daily Beast reporter Gil Troy reminded us less than a month into the Trump presidency, then-Secretary of Defense James Schlesinger made certain in the summer of 1974 that any Nixon order to the military would not be carried out unless Schlesinger approved it. Would Nixon really have tried to order the Old Guard to do something crazy, like march on Capitol Hill and round up those who voted for the articles of impeachment against him? Probably not, but Schlesinger made sure there was no way it could happen.

I think it’s fair to argue that the author of the op-ed in question should’ve resigned, then published. By remaining anonymous and in the Administration, the author has forced his or her colleagues to engage in the very public and humiliating spectacle of going out of their way to say, “It wasn’t me.” The chaos of Trump’s governing “style” has been deepened by the op-ed writer’s action, something that carries its own risks, however ill-defined they may be. 

But the reality is that the day-to-day business of keeping America’s government running is handled by hundreds of thousands of effectively anonymous civil servants, all of whom have taken the oath outlined above, the overwhelming majority of whom execute that oath faithfully every day. It is they who will help ensure that America and its government survive the Trump era, even if enduring it sometimes feels like the political equivalent of passing a kidney stone.

The existence of government infrastructure deters or “crowds out” private investment. Many airports, bridges, and urban transit systems in the United States used to be private, but during the mid-20th century entrepreneurs were squeezed out by governments.

The provision of federal aid or subsidies to government-owned airports, bridges, and transit facilities was a key factor in pushing out private enterprise. That is one reason why I favor repealing federal aid for transportation.

AIRPORTS

In the early years of commercial aviation, private airports served many American cities. For example, the main airports in Los Angeles, Miami, Philadelphia, and Washington D.C. were for-profit business ventures in the 1930s.

The airports were generally successful and innovative, but they lost ground over time due to unfair government competition:

  • City governments were often eager to set up their own airports, even if private airports already served an area.
  • Cities issued tax-exempt bonds to finance their airports, giving them a financial edge over private airports.
  • Private airports pay taxes. Government airports do not, giving them another financial edge.
  • The U.S. military and the Post Office promoted government airports over private ones.
  • Federal New Deal programs provided aid to government airports, not private ones.
  • Congress provided aid to government airports for national defense purposes during World War II.
  • The federal Surplus Property Act after the war transferred excess military bases to the states for government airport use.
  • The federal Airport Act of 1946 began regular federal aid to government airports, not private ones.
  • The new Federal Aviation Administration in 1958 “prohibited private airports from offering commercial service.”

So governments banished entrepreneurs from a major part of America’s aviation industry. In the early 1930s, about half of the nation’s more than 1,100 airports were private, but by the 1960s, private commercial airports had mainly disappeared. Very sad, as I discuss here.

However, there is good news about airports. A privatized commercial airport industry is booming abroad, particularly in Europe. U.S. policymakers should let entrepreneurs take another crack at our airport industry.

BRIDGES

Bob Poole discusses government crowd out of private bridges in his new book Rethinking America’s Highways. In the 1920s, four main bridges built in the San Francisco area were private toll facilities. In the 1930s, the Golden Gate Bridge and Oakland Bay Bridge were built as government toll facilities.

Poole picks up the story:

All six of these bridges suffered declines in traffic and revenue due to the Depression, but the Bay Bridge and the Golden Gate opened closer to its end and were therefore less affected. Their financing costs were also lower, with the Bay Bridge getting low-cost financing from the New Deal’s Reconstruction Finance Corporation, and the Golden Gate being able to issue tax-exempt toll revenue bonds, rather than the taxable bonds issued by the toll bridge companies.

In addition, the California legislature voted in 1933 to relieve the Bay Bridge of having to cover operating and maintenance costs out of toll revenues, allocating state highway fund (gas tax) monies to cover those costs. The four private toll bridges all went into receivership by 1940. Unlike the Ambassador Bridge (in Michigan), they were unable to work out refinancing plans and were eventually acquired by the state, with the Dumbarton and San Mateo transfers not taking place until the early 1950s; their shares traded on the Pacific Coast Exchange until then.

A similar fate befell many of the other 200-odd private toll bridges during the Depression. The Reconstruction Finance Corporation provided low-cost loans to public-sector toll bridges, but not to investor-owned ones. Relatively new government toll agencies offered buyouts to struggling bridge owners during those years. The New York State Bridge Commission bought four private toll bridges over the Hudson River; the Delaware River Joint Toll Bridge Commission acquired at least six private toll bridges; and the city of Dallas bought the toll bridge on the Trinity River in order to eliminate tolls.

By 1940, the Public Roads Administration (the former Bureau of Public Roads, now part of the Federal Works Agency) reported that the number of US toll bridges had declined to 241, of which 142 were still investor-owned. But nearly all the bridges had been bought out by toll agencies or state and local governments by the mid-1950s.
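The tax-exempt financing edge Poole describes is easy to quantify. Here is a minimal sketch; the coupon rates and tax bracket below are illustrative assumptions, not figures from the book:

```python
# Illustrative comparison of tax-exempt vs. taxable bond financing.
# Investors compare after-tax yields, so a government issuer of
# tax-exempt bonds can offer a lower coupon and still attract buyers,
# borrowing more cheaply than a private toll-bridge company.

def after_tax_yield(coupon: float, tax_rate: float) -> float:
    """After-tax yield an investor nets on a taxable bond."""
    return coupon * (1 - tax_rate)

tax_rate = 0.30          # assumed marginal tax bracket (illustrative)
taxable_coupon = 0.050   # 5.0% coupon on a private, taxable bond
muni_coupon = 0.036      # 3.6% coupon on a tax-exempt government bond

# The investor nets only 3.5% after tax on the taxable 5.0% bond,
# so the tax-exempt 3.6% bond wins -- even though the government
# issuer pays 1.4 percentage points less in interest.
print(f"After-tax taxable yield: {after_tax_yield(taxable_coupon, tax_rate):.2%}")
print(f"Tax-exempt yield:        {muni_coupon:.2%}")
```

Under these assumed numbers, the government issuer’s borrowing cost is nearly a third lower, which is the kind of edge that sank the private toll-bridge companies during the Depression.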

URBAN TRANSIT

The early history of urban transit in America is one of private-sector funding and innovation, as Randal O’Toole discusses in this study. Hundreds of cities had private streetcar and bus companies moving people in downtowns and the growing suburbs in the early 20th century.

As the century progressed, however, the rise of automobiles undermined the demand for transit. At the same time, transit firms had difficulty cutting costs because their workforces were dominated by labor unions and governments resisted allowing them to cut services on unprofitable routes.

The nail in the coffin for private transit was the Urban Mass Transportation Act of 1964, which provided federal aid to government-owned bus and rail systems. The act encouraged state and local governments to take over private systems, and a century of private transit investment came to a close.

This Transportation Research Board study discusses the decline of private transit:

As the declining fortunes of America’s cities gained national recognition during the 1960s, Congress passed legislation that for the first time gave the federal government a prominent role in the provision of urban transit. The Urban Mass Transportation Act of 1964 (later redesignated the Federal Transit Act) provided loans and grants for transit capital acquisition, construction, and planning activities.

… Notably, only public entities could apply for the federal grants. Given the availability of federal aid, many cities, states, and counties purchased or otherwise took over their local rail and bus systems. Thus by the 1970s, a largely new model of transit provision—public ownership—had become increasingly prevalent in the United States. Many jurisdictions consolidated the operations of smaller private and public systems under the auspices of regional transit authorities. A few states, such as Connecticut, Rhode Island, and New Jersey, formed statewide transit agencies.

… In 1940, only 20 transit systems in the country were publicly owned, and they accounted for just 2 percent of ridership. By 1960, although the vast majority of all systems were still in private ownership, properties in public ownership accounted for nearly half of all transit ridership, mainly because the country’s very largest systems were publicly owned. By 1980, more than 500 systems were publicly owned, accounting for 95 percent of ridership nationally.

In sum, the bad news is that when the government advances, the private sector retreats. But the good news we have seen around the world in recent decades is that when the government gets out of the way, the private sector steps in to provide better services at lower costs.

Further reading:

https://www.downsizinggovernment.org/transportation

https://www.downsizinggovernment.org/infrastructure-investment

https://www.downsizinggovernment.org/privatization

In principle, the federal housing-voucher program known as Section 8 ought to win points as a market-oriented alternative to the old command-and-control approach of planning and constructing public housing projects. While allowing recipients wider choice about where to live, it has also enabled private landlords to decide whether to participate and, if so, what mix of voucher-holding and conventionally paying tenants makes the most sense for a location. 

But there is another possibility, which is that Section 8 will in time bring with it onerous new restrictions on the private landlord-tenant relationship. For landlords, participation in the program has long carried with it some significant burdens of inspection, certification, and reporting paperwork. So long as participation was voluntary, these conditions were presumably worth it in exchange for the chance to reach voucher-holders as a class of potential tenants. When accepting Section 8 tenants stops being a voluntary choice, however, the balance is likely to shift. And one of the big policy pushes of the past decade – zealously promoted by the Obama administration – was the local enactment of laws and ordinances prohibiting so-called source-of-income discrimination, which in practice can mean making it a legal offense for a landlord to maintain a policy of declining Section 8 vouchers. Once that sort of control is in place, and landlords cannot opt out of the program, there will no longer be any natural check on Washington’s imposition of ever more burdensome conditions via Section 8 program rules on private landlords, including conditions that affect their relations with conventional non-voucher tenants. 

Now, in an en banc ruling, the Third Circuit has made clear another source of legal exposure for landlords participating in the program. A specialized portion of the program provides so-called enhanced housing vouchers to enable tenants to go on living in properties that once received “project-based” Section 8 support (akin to traditional low-income housing) but have been converted by their owners to conventional market-rate housing. Philip Harvey owned one such property, a unit of which had long been rented to Florence Hayes. When Ms. Hayes died in 2015, Harvey sought to renovate the apartment for use by his daughter, while Ms. Hayes’s son wanted to take over as primary tenant. Litigation ensued, and a three-judge panel of the Third Circuit ruled, over a dissent, that once Ms. Hayes’s lease expired the law placed Harvey under no obligation to sign a new lease with her successor. 

On Aug. 31, however, the full Third Circuit by a lopsided margin overturned the panel opinion and ruled Ms. Hayes’s son had the right to take over as tenant and obtain lease renewals from Harvey under good behavior, and so did anyone else who had been on the lease (even as a child) at the time of such a property’s conversion. It construed language about how a tenant “may elect to remain” in a converted project as binding not just HUD in its obligation to provide assistance, but also as binding the landlord. Only Judges D. Michael Fisher and Thomas Hardiman, who had prevailed on the original panel, dissented. Various tenants’-rights amicus filers, as well as the City of Philadelphia, took the son’s side. 

Judge Fisher, in dissent, says the majority “overlooks the basic design of the enhanced voucher program as an incentive-based program, not a compulsory one.” But “overlooks” may not be the right verb. Maybe a better one is “takes another step to subvert.”

In my last post I wrote about the lawsuit TNB USA Inc. has filed against the New York Fed, which has refused to grant the would-be bank a Master Account. I argued that, despite its name (TNB stands for “The Narrow Bank”), and despite what some commentators (now including, alas, The Wall Street Journal’s editorial staff) seem to think, TNB isn’t meant to supply ordinary persons with a safer alternative to deposits at ordinary banks. Instead, TNB’s purpose is to receive deposits from non-bank financial institutions only, to allow them to take advantage, indirectly, of the Fed’s policy of paying interest on bank reserves — thereby potentially earning more than they might either by investing directly in securities or by taking advantage of the Fed’s reverse repo program, which is open to them but which presently offers a rate 20 basis points lower than the Fed’s IOER rate.

A Hollow Victory?

Yet for all the controversy TNB’s lawsuit has generated, its outcome may no longer matter as much as it might once have. For one thing, TNB’s success can no longer undermine the Fed’s ON-RRP program, which is designed to implement the Fed’s target interest rate lower bound, for the simple reason that that program is already moribund. Commenting on my post, J.P. Koning observed that, while the Fed’s ON-RRP facility, first established in December 2013, once supplied non-bank financial institutions with an attractive investment alternative, it ceased being so this year. As the chart below, reproduced from J.P.’s comment, shows, the facility — which once accommodated hundreds of billions of dollars in bids — is now completely inactive:

The decline in ON-RRP activity since the beginning of this year is a byproduct of the general increase in market rates of interest, both absolutely and relative to the Fed’s ON-RRP offer rate, that has made the program both less attractive to potential participants and unnecessary as a means for establishing a lower bound for the effective fed funds rate. But that decline is but one symptom of a more general development, to wit: the tendency of the Fed’s policy rate settings to lag further and further behind increases in market-determined interest rates, thanks in no small part to the Trump administration’s fiscal profligacy. Here, for example, is a FRED chart comparing the Fed’s policy rate settings to the yield on 1-month Treasury bills:

In the figure the “Lower Limit” of the Fed’s federal funds target range is also the Fed’s ON-RRP facility offer rate, while the “Upper Limit” is the same as the Fed’s IOER rate until mid-June 2018, and 5 basis points above the IOER rate afterwards.

Although an overnight repurchase agreement is a more liquid investment than a one-month Treasury bill, it’s easy to appreciate how that difference ceased, in the last year or so, to compensate for the gap between the ON-RRP rate and other money market rates. But those rates have also increased relative to the IOER rate, and the Fed’s June decision to reduce the IOER – ON-RRP rate spread from 25 to 20 basis points reduced the attractiveness of IOER relative to money market rates by another 5 bps. Consequently, bank reserves are also much less attractive relative to money market instruments, and especially to shorter-term Treasury bills, than they were a year ago.

All of which means that TNB’s efforts could end up being in vain even if the Fed ends up granting it an account. As J.P. Koning points out in his own post concerning the TNB case, “even if TNB succeeds in its lawsuit, there is a larger threat. The gap the bank is trying to exploit is shrinking.” In contrast, when the TNB plan was originally developed in 2016, that gap was about 25 basis points.
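The arithmetic of the shrinking gap is simple enough to sketch. The basis-point settings below are illustrative stand-ins for the pre- and post-June-2018 rate configurations described above, not official figures:

```python
# Sketch of the gross arbitrage margin TNB hoped to capture: the gap
# between what the Fed pays on reserves (IOER) and what TNB's would-be
# non-bank customers could earn at the Fed's ON-RRP facility instead.
# Rates are in basis points; the settings are illustrative.

def tnb_gross_margin_bps(ioer_bps: int, on_rrp_bps: int) -> int:
    """Gross spread available before TNB's own operating costs."""
    return ioer_bps - on_rrp_bps

# When the plan was hatched in 2016, the IOER/ON-RRP spread was 25 bps.
margin_2016 = tnb_gross_margin_bps(ioer_bps=50, on_rrp_bps=25)

# After the Fed's June 2018 tweak, the spread narrowed to 20 bps.
margin_2018 = tnb_gross_margin_bps(ioer_bps=195, on_rrp_bps=175)

print(margin_2016, margin_2018)  # 25 20
```

Five basis points may sound trivial, but for a narrow bank whose entire business is that spread, a fifth of the gross margin is real money.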

It’s possible, of course, that future changes will see the IOER rate ruling the interest-rate roost once again, instead of becoming a bit player. But until that happens, TNB USA Inc. may find landing customers just as difficult as landing a Master Account.

Whither the Floor System?

Although rising market rates may cause TNB’s efforts to come to naught, that possibility should not offer Fed officials much comfort, for the tendency for those rates to outpace its own policy rate settings poses no less a threat to its own operating framework than it does to TNB’s business plan.

That operating framework, called a “floor” system, depends on banks’ willingness to hoard reserves, so that changes in the amount of reserves in the banking system, instead of causing banks to increase their lending — thereby putting downward pressure on market interest rates — lead to like changes in banks’ excess reserve holdings. The Fed is then able, in principle, to expand or shrink its balance sheet without altering the stance of monetary policy. Instead of depending on the quantity of reserves the Fed creates, that stance will depend mainly on the interest rate the Fed pays on excess reserves, or the IOER rate, for short.

If, on the other hand, excess reserves cease to be attractive relative to other assets banks might acquire, those banks will no longer be inclined to hold substantial quantities of excess reserves. Instead, they’ll exchange them for other assets, and Treasury securities especially, since such securities are just as useful as reserves when it comes to meeting Basel III’s Liquidity Coverage Ratio rules. The Federal Home Loan Banks, on the other hand, are increasingly inclined to offer their surplus Fed balances on the private repo market instead of lending them to banks in return for a piece of the IOER pie. Eventually, either Treasury yields and private-market repo rates must decline enough, relative to the IOER rate, to make reserve hoarding attractive once again, or the passing of the reserve balance “hot potato” must eventually raise the quantity of bank deposits enough to convert unwanted excess reserves into required reserves.

Prior to October 2008, when the Fed first put its floor system in place, the “hot potato” effect had been the norm: bank reserves paid no interest at all, while even one-month Treasury bills yielded over 2 percent. Consequently, banks held only trivial amounts of excess reserves, disposing of the rest first in the fed funds market but ultimately by acquiring other assets until deposit expansion eliminated any surplus reserves. Monetary policy in turn meant adjusting the quantity of reserves to keep the Fed’s policy rate on target.

A look at the next chart suggests why the same money market developments that might render TNB’s efforts nugatory also threaten to cause the Fed’s floor system to unravel. The chart’s red line shows the spread between the yield on 1-month Treasuries and the IOER rate (left scale), while its blue line shows the banking system’s ratio of excess reserves to total deposits (right scale).

Until October 2008, with IOER = 0, a very high Treasury-IOER spread kept excess reserves at a minimum. Afterwards, in contrast, a negative spread encouraged banks to accumulate trillions in excess reserves instead of using those reserves to support a proportional increase in deposits. But lately the Treasury-IOER rate has been back in positive territory. (Indeed, the last observations for the red line should be 5 bps higher than what’s shown, because in June the Fed established a 5 bps difference between its target rate “upper limit,” used in the chart as a proxy for the IOER rate, and the IOER rate itself.) As the chart also shows, banks have responded accordingly, by reducing their excess reserve holdings relative to their total deposits.

In conclusion, while the Fed may succeed in fending off TNB’s attempt to give non-bank financial institutions access to IOER, it may find preserving its IOER-based operating system much harder. What’s more, if you ask me, the Fed’s attempts to preserve that operating system, by abandoning its plan to shrink its balance sheet or by resorting to more aggressive IOER rate increases, could ultimately do all of us a lot more harm than its treatment of TNB.

[Cross-posted from Alt-M.org]

The total number of American workers who usually commute by transit declined from 7.65 million in 2016 to 7.64 million in 2017. This continues a downward trend from 2015, when there were 7.76 million transit commuters. Meanwhile, the number of people who drove alone to work grew by nearly 2 million, from 114.77 million in 2016 to 116.74 million in 2017.

These figures are from table B08301 of the 2017 American Community Survey, which the Census Bureau posted online on September 13. According to the table, the total number of workers in America grew from 150.4 million in 2016 to 152.8 million in 2017. Virtually all new workers drove to work, took a taxi or ride-hailing service, or worked at home, as most other forms of commuting, including walking and bicycling as well as transit, declined.

Transit commuting has fallen so low that more people now work at home than take transit to work. The number of people working at home in 2017 totaled nearly 8.0 million, up from just under 7.6 million in 2016. 

Two other tables, B08119 and B08121, reveal incomes and median incomes of American workers by how they get to work. A decade ago, the average income of transit riders was almost exactly the same as the average for all workers. Today it is 5 percent higher, as the number of low-income transit riders has declined while the number of high-income riders (those earning $60,000 or more) has grown rapidly. Median incomes are usually a little lower than average incomes, as very high-income people pull up the average. In 2017, the median income of transit riders exceeded the median income of all workers for the first time.
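The point about very high incomes pulling the average above the median is easy to see with a toy distribution (the figures below are invented for illustration, not ACS data):

```python
from statistics import mean, median

# Toy illustration of why average income runs above median income:
# a single very high earner drags the mean up but leaves the
# middle-ranked worker -- the median -- untouched.
incomes = [20_000, 30_000, 40_000, 50_000, 60_000, 70_000, 300_000]

print(median(incomes))  # 50000  -- the middle worker
print(mean(incomes))    # ~81429 -- pulled up by the $300k earner
```

Drop the $300,000 earner from the list and the mean falls to $45,000, below the new median; add more of them and the gap widens. That is why a rising transit-rider median relative to the overall median is a telling sign of who is still riding.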

For those interested in commuting numbers in their states, cities, or regions, I’ve posted a file showing commute data for every state, about 390 counties, 259 major cities, and 220 urbanized areas. The Census Bureau didn’t report data from smaller counties, cities, and urbanized areas because it deemed the results for those areas to be less statistically reliable. 

The file includes the raw numbers plus calculations showing the percentage of commuters (leaving out people who work at home) who drove alone, carpooled, took transit (with rail and bus transit broken out separately), bicycled, or walked to work. A separate column shows the percentage of the total who worked at home. The last column estimates the number of cars used for commuting, including drive-alones and carpoolers.
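The calculations in the file follow directly from the raw counts. A minimal sketch of that arithmetic, using invented counts and an assumed average carpool occupancy (the file’s actual occupancy assumption may differ):

```python
# Compute commute mode shares the way the file does: work-at-home is
# excluded from the commuter base and reported as a share of the total.
# All counts below are invented for illustration.
counts = {
    "drove_alone": 100_000,
    "carpool": 10_000,
    "transit": 6_000,
    "bicycle": 1_000,
    "walk": 3_000,
    "work_at_home": 5_000,
}

total_workers = sum(counts.values())                # 125,000
commuters = total_workers - counts["work_at_home"]  # 120,000

# Mode shares are computed over commuters only...
shares = {mode: n / commuters
          for mode, n in counts.items() if mode != "work_at_home"}
# ...while work-at-home is a share of all workers.
work_at_home_share = counts["work_at_home"] / total_workers

# Cars used for commuting: each drive-alone is one car; carpoolers
# share cars, at an assumed average occupancy of 2.5 people per car.
AVG_CARPOOL_OCCUPANCY = 2.5
cars = counts["drove_alone"] + counts["carpool"] / AVG_CARPOOL_OCCUPANCY

print(f"{shares['transit']:.1%}")   # 5.0%
print(f"{work_at_home_share:.1%}")  # 4.0%
print(int(cars))                    # 104000
```

Note the two different denominators: leaving work-at-homes out of the commuter base is what keeps the mode shares summing to 100 percent.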

I’ve also posted similar files for 2016, 2015, 2014, 2010, 2007, and 2006. The formats of these files may differ slightly as I’ve posted them at various times in the past. Soon, I’ll post more files for commuting by income and other pertinent topics. 

Dan Cadman of the Center for Immigration Studies (CIS) has written a blog post purporting to identify issues in a short brief that I wrote about U.S. citizens in Texas for whom ICE filed detainers. In it, he makes numerous inaccurate and unsupported assertions. Cadman presents zero evidence to rebut the conclusion of the brief and instead accuses an ICE supervisory officer of perjury because his statements fail to support Cadman’s position.

My brief uses data from Travis County, Texas to identify people who claimed U.S. citizenship and presented Social Security Numbers to local authorities, but ICE submitted a detainer request for them anyway, only to later cancel or not execute it. Cadman responds:

While it’s true that people who later prove to be U.S. citizens sometimes find themselves in removal proceedings (something I’ve previously commented on and explained), most often this occurs because an individual doesn’t even know he is a U.S. citizen…

In support of his “most often” claim, he links to a single case where the person didn’t know he was a U.S. citizen, while we have documented dozens of individual cases in which detainers were filed for U.S. citizens who asserted their citizenship at the start of the process. In any case, every person in my brief asserted U.S. citizenship from the time of their booking by the Travis County Sheriff’s Office until ICE finally cancelled their detainer. Cadman continues:

[Bier] would have us believe that ICE agents actively “target” American citizens even though it is clear that they have no hand at all into what individuals are arrested by police and booked into Travis County (or any Texas) jail, and merely respond to the information passed to them as a consequence.

I never claimed that ICE agents “actively” seek out people who they know are American citizens. As I wrote in the executive summary of my brief, these are “mistakes” that ICE only belatedly attempts to correct. In any case, if a law enforcement agency arrests hundreds of innocent people, it is perfectly legitimate to say that hundreds of “innocent people” were targeted by that agency, even if the individual agents didn’t know or intend to target innocent people. Moreover, it is incorrect to claim that ICE agents “merely respond to information passed to them”—the Travis County Sheriff’s Office doesn’t make assessments of removability or citizenship, nor does it issue detainers. ICE makes those determinations.

Cadman attempts to argue that even though ICE canceled the detainers for these people, we cannot suppose that it was because they were U.S. citizens. He attempts to sketch out what he believes is happening:

ICE agents don’t, nor should they, always accept such assertions [of U.S. citizenship] at face value because they know the frequency with which false claims are made. One strategy they exercise is to immediately file the detainer while concurrently obtaining the release date of the individual being held by the police. They then work against the clock to either verify the claim or disprove it… . Keep in mind that when ICE agents withdraw a detainer, it doesn’t mean the claim isn’t false — it just means they couldn’t break it in the time frame they had to investigate.

If this is what ICE agents are doing, it would violate current ICE policies, which require agents to issue detainers based on what they believe to be “probable cause” of removability. A simple assertion of U.S. citizenship would never overcome a determination based on actual probable cause (such as a biometric record of a prior deportation). In the bad old days before even agent-determined probable cause was required, an assertion of U.S. citizenship would not have triggered cancelation either. Again, ICE would require the U.S. citizen to substantiate the claim first.

Cadman’s scenario implies that ICE agents are issuing detainers for people claiming U.S. citizenship based on their gut instincts and then hoping to prove that the person is lying before they are released. If this is what is occurring, it would indeed explain why U.S. citizens are regularly targeted by ICE, and it would show that the agency is breaking its own policy. That is a poor defense of ICE’s actions.

In any case, my brief quoted court testimony under oath from ICE Supervisory Detention and Deportation Officer John Drane from Rhode Island stating that, in fact, a detainer canceled for a person claiming U.S. citizenship was almost certainly canceled because the person was a U.S. citizen. Cadman responds:

while even ICE agents in the northeast would not be completely immune to the phenomenon of false claims, the claims would be of a significantly smaller scale and different character from those in Texas. This would certainly have had an impact on how Drane framed his response to the question of withdrawing a detainer, because his experiences would be nothing like those of ICE agents working in south or central Texas.

This is simply incorrect. The rate of U.S. citizenship claims overall was actually higher in Rhode Island around this time (7.2 percent) than in Travis County (5.7 percent), so Drane dealt with the same issue: some people do make false claims, while others, including the litigant in the case, make valid claims of U.S. citizenship when targeted with detainers. Cadman continues:

The time frame of Drane’s deposition (April 2015) is also significant. In November 2014, President Obama and then-Homeland Security Secretary Jeh Johnson announced a host of new “executive actions” that would govern how immigration agencies administered their responsibilities… many detainers were withdrawn as not meeting the new criteria of criminality drawn up by Secretary Johnson and his cohorts…

Cadman presents no data or even anecdotes to support the claim that many detainers were withdrawn due to the Jeh Johnson enforcement criteria. In fact, the Johnson policies changed the criteria for issuing a detainer, so detainers for people who were not subject to enforcement priorities were not issued to begin with, leading to a significant decline in detainers issued. In any case, 90 percent of the U.S. citizens identified in my brief were targeted before Johnson’s new enforcement priorities were in effect or after the Trump administration rescinded them. In addition, the rate of cancelations for people claiming U.S. citizenship actually decreased during those years. Cadman continues:

It’s not a surprise that Drane avoided speaking to these very real, very major reasons that many detainers were withdrawn by ICE. One can surmise that he sidestepped the issue of agents being obliged to cancel detainers under the imposed-from-above priority system for fear of his job.

Here, Cadman actually accuses an ICE supervisory agent of lying under oath to avoid disclosing the reasons for the detainer cancelations. I don’t understand how Cadman can have complete faith in ICE under some circumstances while assuming the worst about it in others without any evidence. More importantly, Cadman’s claims about Drane are simply false. Drane had zero incentive to lie. The Obama administration was not hiding its looser enforcement policies in 2015—it was bragging about them. Moreover, in the context of this case, Drane was admitting something that would place blame on his own office for wrongfully targeting U.S. citizens—something that the Obama administration would certainly not want to disclose. Lastly, why would he risk potential jail time by perjuring himself on this point? It simply makes no sense. Cadman concludes:

Bier has taken what are clearly dubious conclusions about the number of U.S. citizens against whom detainers were filed in the Travis County jail after arrest for criminal offenses, and then through extrapolation and aggregation, applied them to assert that, if this many were caught up in ICE “targeting” of citizens in the county, then as a matter of simple multiplication one can derive how many U.S. citizens must have been “targeted” statewide… Each county and each state is sufficiently unique in population and demographics that using any one of them to extrapolate to a whole is different entirely than using legitimate random sampling techniques.

Cadman is correct that a state-wide random sample would provide far more useful data. Every county in Texas should release this information if they have it. But the data that we do have allow us to learn something about Travis County, at a minimum. Maybe Travis County is an outlier in either direction; we simply don’t know. But I never claimed that my extrapolation from Travis County to the whole state of Texas is anything but an estimate.

Travis County, Texas, is the third-largest recipient of detainers in the state, providing a significant sample of the detainers issued there. Moreover, the dynamics in Travis County are substantially similar to those in other Texas counties—all are fairly close to the border and all are subject to Texas law with regard to immigration enforcement. Cadman takes issue with my hedging this extrapolation, but that is simply what prudent analysts do when the evidence is incomplete.

My brief shows that ICE often issues detainer requests for people who claim U.S. citizenship and present Social Security Numbers to local authorities, only to then cancel those requests. The best explanation—based on ICE policies and ICE testimony—is that ICE issued detainers for hundreds of U.S. citizens. It is noteworthy that ICE itself in a statement to the Washington Post did not use any of Cadman’s poor defenses, but only asserted that it works to improve its processes over time. That may be true, but severe deficiencies still remain.

Not long after the limited-government U.S. Constitution was ratified and the new government resumed operation, numerous political leaders began pushing to expand federal power. Leading politicians of the 1790s did not agree with each other about the proper scope of federal authority, either legally or practically.

Treasury Secretary Alexander Hamilton proposed ideas for top-down manipulation of the economy. And fellow Federalist President John Adams signed into law the infamous Alien and Sedition Acts in 1798, which among other things outlawed any “false, scandalous and malicious writing” against the government, the Congress, and the president.

An article in the Washington Post the other day discussed some interesting details regarding the enforcement of the sedition statute:

Adams and his Federalist Party supporters in Congress passed the Alien and Sedition Acts under the guise of national security, supposedly to safeguard the nation at a time of preparing for possible war with France. The “Alien” part of the law allowed the government to deport immigrants and made it harder for naturalized citizens to vote. But the law mainly was designed to mute backers of the opposition Democratic-Republican Party led by Thomas Jefferson, who also happened to be the vice president. Jefferson had finished second to Adams in the 1796 presidential election and again ran against him in 1800.

An early target of the new law was Rep. Matthew Lyon, who had accused Adams of “ridiculous pomp.” In the fall of 1798 the government accused the Vermont congressman of being “a malicious and seditious person, and of a depraved mind and a wicked and diabolical disposition.” He was convicted of sedition, fined $1,000 and sentenced to four months in prison. Lyon campaigned for reelection from jail and won in a landslide. On his release in February 1799, supporters greeted him with a parade and hailed him as “a martyr to the cause of liberty and the rights of man.”

… Another target was James Callender, a pro-Jefferson journalist for the Richmond Examiner and the man who had exposed Federalist Alexander Hamilton’s extramarital affair. In 1800, Callender wrote an election campaign pamphlet that said of Adams: “As President he has never opened his lips, or lifted his pen, without threatening and scolding; the grand object of his administration has been to exasperate the rage of contending parties … and destroy every man who differs from his opinions.” Callender was convicted of sedition, fined $200 and sent to federal prison for nine months. He continued to write from his prison cell, calling Adams “a gross hypocrite and an unprincipled oppressor.”

… The government also came after critics of some members of the Adams administration, such as Treasury Secretary Hamilton. In 1799, Charles Holt, editor of the New London Bee in Connecticut, published an article accusing Hamilton of seeking to expand the U.S. military into a standing army. He also took personal jabs at Hamilton, asking, “Are our young officers and soldiers to learn virtue from General Hamilton? Or like their generals are they to be found in the bed of adultery?” The government promptly charged Holt with being a “wicked, malicious seditious and ill-disposed person — greatly disaffected” to the U.S. government. He was fined $200 and sent to jail for three months.

The speech crackdown extended even to private remarks, as Luther Baldwin, the skipper of a garbage boat in Newark, discovered. In July 1798, while passing through Newark on his way to his summer home in Massachusetts, Adams rode in his coach in a downtown parade complete with a 16-cannon salute. When Baldwin and his buddy Brown Clark heard the cannon shots while drinking heavily at a local tavern, Clark remarked, “There goes the president, and they are firing at his arse.” Baldwin responded that he didn’t care “if they fired thro’ his arse.” The tavern owner reported the conversation, and both drinkers were fined and jailed for sedition.

Thomas Jefferson and James Madison led the opposition to the big government Federalist policies of the 1790s, and “in the end, widespread anger over the Alien and Sedition Acts fueled Jefferson’s victory over Adams in the bitterly contested 1800 presidential election.” Free speech was restored and the incoming president would focus on cutting the excess spending, taxes, and debt built up by the prior Federalist administrations.

Hardly a day goes by without a report in the press about some new addiction. There are warnings about addiction to coffee. Popular psychology publications talk of “extreme sports addiction.” Some news reports even alert us to the perils of chocolate addiction. One gets the impression that life is awash in threats of addiction. People tend to equate the word “addiction” with “abuse.” Ironically, “addiction” is a subject of abuse.

The American Society of Addiction Medicine defines addiction as a “chronic disease of brain reward, motivation, memory and related circuitry…characterized by the inability to consistently abstain, impairment in behavioral control, craving” that continues despite resulting destruction of relationships, economic conditions, and health. A major feature is compulsiveness. Addiction has a biopsychosocial basis with a genetic predisposition and involves neurotransmitters and interactions within reward centers of the brain. This compulsiveness is why alcoholics and other drug addicts will return to their substance of abuse even after they have been “detoxed” and despite the fact that they know it will further damage their lives.

Addiction is not the same as dependence. Yet politicians and many in the media use the two words interchangeably. Physical dependence represents an adaptation to the drug such that abrupt cessation or tapering off too rapidly can precipitate a withdrawal syndrome, which in some cases can be life-threatening. Physical dependence is seen with many categories of drugs besides drugs commonly abused. It is seen, for example, with many antidepressants, such as fluoxetine (Prozac) and sertraline (Zoloft), and with beta blockers like atenolol and propranolol, used to treat a variety of conditions including hypertension and migraines. Once patients are properly tapered off the drug on which they have become physically dependent, they do not feel a craving or compulsion to return to it.

Some also confuse tolerance with addiction. Like dependence, tolerance is a form of physical adaptation: it refers to the decrease in one or more of a drug’s effects on a person after repeated exposure, requiring increases in the dose.

Science journalist Maia Szalavitz, writing in the Columbia Journalism Review, ably details how journalists perpetuate this lack of understanding and fuel misguided opioid policies.

Many in the media share responsibility for the mistaken belief that prescription opioids rapidly and readily addict patients—despite the fact that Drs. Nora Volkow and Thomas McLellan of the National Institute on Drug Abuse point out that addiction is very uncommon, “even among those with preexisting vulnerabilities.” Cochrane systematic reviews of chronic pain patients in 2010 and 2012 found addiction rates in the 1 percent range, and a report on over 568,000 patients in the Aetna database who were prescribed opioids for acute postoperative pain between 2008 and 2016 found a total “misuse” rate of 0.6 percent.

Equating dependence with addiction has caused lawmakers to impose opioid prescription limits that are not evidence-based, and it is making patients suffer needlessly after being tapered too abruptly or cut off entirely from their pain medicine. Many, in desperation, seek relief in the black market, where they get exposed to heroin and fentanyl. Some resort to suicide. There have been enough reports of suicides that the US Senate is poised to vote on opioid legislation that “would require HHS and the Department of Justice to conduct a study on the effect that federal and state opioid prescribing limits have had on patients — and specifically whether such limits are associated with higher suicide rate.” And complaints about the lack of evidence behind present prescribing policy led Food and Drug Administration Commissioner Scott Gottlieb to announce plans last month for the FDA to develop its own set of evidence-based guidelines.

Now there is talk in media and political circles about the threats of “social media addiction.” But there is not enough evidence to conclude that spending extreme amounts of time on the internet and with social media is an addictive disorder. One of the leading researchers on the subject stresses that most reports on the phenomenon are anecdotal and peer-reviewed scientific research is scarce. A recent Pew study found the majority of social media users would not find it difficult to give it up. The American Psychiatric Association does not consider social media addiction or “internet addiction” a disorder and does not include it in its Diagnostic and Statistical Manual of Mental Disorders (DSM), considering it an area that requires further research.

This doesn’t stop pundits from warning us about the dangers of social media addiction. Some warnings might be politically motivated. Recent reports suggest Congress might soon get into the act. If that happens, it could threaten freedom of speech and freedom of the press. It could also generate billions of dollars in government spending on social media addiction treatment.

Before people see more of their rights infringed or are otherwise harmed by unintended consequences, it would do us all a great deal of good to be more accurate and precise in our terminology. It would also help if lawmakers learned more about the matters on which they create policy.

As Hurricane Florence spins toward the Carolina coast, the nation’s attention will be on the disaster readiness and response of governments and the affected communities. Have lessons been learned since the deeply flawed government response to Hurricane Katrina back in 2005?

I examined FEMA and the Katrina response in this study, discussing both the government failures and the impressive private-sector relief efforts.

Last year, Hurricane Maria devastated Puerto Rico, again exposing all sorts of government failures. Well-known chef José Andrés has a new book on the Maria response. He had an eye-opening experience on the island volunteering on relief efforts with his World Central Kitchen.

The Washington Post’s review of the book says that Andrés saw the flaws of top-down bureaucratic relief efforts and embraces more of a spontaneous order view of effective disaster relief:

With We Fed an Island, chef-and-restaurateur-turned-relief worker José Andrés doesn’t just tell the story about how he and a fleet of volunteers cooked millions of meals for the Americans left adrift on Puerto Rico after Hurricane Maria. He exposes what he views as an outdated top-down, para-military-type model of disaster relief that proved woefully ineffective on an island knocked flat by the Category 4 hurricane.

… ‘My original plan was to cook maybe ten thousand meals a day for five days, and then return home,’ Andrés writes. Instead, Andrés and the thousands of volunteers who composed Chefs for Puerto Rico remained for months, preparing and delivering more than 3 million meals to every part of the island. They didn’t wait for permission from FEMA.

… These grass-roots culinary efforts didn’t always sit well with administration officials or with executives at hidebound charities, in part because Andrés was no diplomat. He trolled Trump on Twitter over the situation on Puerto Rico. He badgered FEMA for large contracts to ramp up production to feed even more hungry citizens. He infamously told Time magazine that the “American government has failed” in Puerto Rico. A chef used to fast-moving kitchens, Andrés had zero patience for slow-footed bureaucracy, especially in a time of crisis.

… After dealing with so much red tape and mismanagement (remember the disastrous $156 million contract that FEMA awarded to a small, inexperienced company to prepare 30 million hot meals?), Andrés wants the government and nonprofit groups to rethink the way they handle food after a large-scale natural disaster. He wants them to drop the authoritarian, top-down style and embrace the chaos inherent in crisis. Work with available local resources, whether residents or idle restaurants and schools. Give people the authority and the means to help themselves. Stimulate the local economy.

‘What we did was embrace complexity every single second,’ Andrés writes. ‘Not planning, not meeting, just improvising. The old school wants you to plan, but we needed to feed the people.’

Andrés and World Central Kitchen have embraced complexity. 

Hail to the chef!


As of this writing, Tuesday, September 11, Hurricane Florence is threatening millions of folks from South Carolina to Delaware. It’s currently forecast to be near the threshold of the dreaded Category 5 by tomorrow afternoon. Current thinking is that its environment will become a bit less conducive as it nears the North Carolina coast on Thursday afternoon, but that it will still hit as a major hurricane (Category 3+). It’s also forecast to slow down or stall shortly thereafter, which means it will dump disastrous amounts of water on southeastern North Carolina. Isolated totals of over two feet are possible.

At the same time that it makes landfall, there is going to be the celebrity-studded “Global Climate Action Summit” in San Francisco, and no doubt Florence will be the poster girl.

There’s likely to be the usual hype about tropical cyclones (the generic term for hurricanes) getting worse because of global warming, even though their integrated energy and frequency, as published by Cato Adjunct Scholar Ryan Maue, show no warming-related trend whatsoever.

Maue’s Accumulated Cyclone Energy index shows no increase in global power or strength.

Here is the prevailing consensus opinion of the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (NOAA GFDL): “In the Atlantic, it is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on hurricane activity.”

We’ll also hear that associated rainfall is increasing along with oceanic heat content. Everything else being equal (dangerous words in science), that’s true. And if Florence does stall out, hey, we’ve got a climate change explanation for that, too! The jet stream is “weirding” because of atmospheric blocking induced by Arctic sea-ice depletion. This is a triple bank shot on the climate science billiards table. If that seems a stretch, it is, but climate models can be and are “parameterized” to give what the French climatologist Pierre Hourdin recently called “an anticipated acceptable range” of results.

The fact is that hurricanes are temperamental beasts. On September 11, 1984, Hurricane Diana, also a Category 4, took aim at pretty much the same spot where Florence is forecast to make landfall—Wilmington, North Carolina. And then—34 years ago—it stalled and turned a tight loop for a day, upwelling the cold water that lies beneath the surface, and it rapidly withered into a Category 1 before finally moving inland. (Some recent model runs for Florence have it looping over the exact same place.) The point is that what is forecast to happen on Thursday night—a major (Category 3+) landfall—darned near happened over three decades earlier… and exactly 30 years before that, in 1954, Hurricane Hazel made a destructive Category 4 landfall just south of the NC/SC border. The shape of the Carolina coastlines and barrier islands makes the two states very susceptible to destructive hits. Fortunately, this proclivity toward taking direct hits from hurricanes has also taught the locals to adapt—many homes are on stilts, and there is a resilience built into their infrastructure that is lacking further north.

There’s long been a running research thread on how hurricanes may change in a warmer world. One thing that seems plausible is that the maximum potential power may shift a bit further north. What would that look like? Dozens of computers have cranked away at thousands of years of simulations, and we have a mixture of results: but the consensus is that there will be slightly fewer but more intense hurricanes by the end of the 21st century.

We actually have an example of how far north a Category 4 can make landfall: on August 27, 1667, in the tidewater region of southeast Virginia. The storm prompted the publication of a pamphlet in London called “Strange News from Virginia, being a true relation of the great tempest in Virginia.” The late, great weather historian David Ludlum published an excerpt:

Having this opportunity, I cannot but acquaint you with the Relation of a very strange Tempest which hath been in these parts (with us called a Hurricane) which began on Aug. 27 and continued with such Violence that it overturned many houses, burying in the Ruines much Goods and many people, beating to the ground such as were in any ways employed in the fields, blowing many Cattle that were near the Sea or Rivers, into them, (!!-eds), whereby unknown numbers have perished, to the great affliction of all people, few escaped who have not suffered in their persons or estates, much Corn was blown away, and great quantities of Tobacco have been lost, to the great damage of many, and the utter undoing of others. Neither did it end here, but the Trees were torn up by their roots, and in many places the whole Woods blown down, so that they cannot go from plantation to plantation. The Sea (by the violence of the winds) swelled twelve Foot above its usual height, drowning the whole country before it, with many of the inhabitants, their Cattle and Goods, the rest being forced to save themselves in the Mountains nearest adjoining, where they were forced to remain many days in great want.

Ludlum also quotes from a letter from Thomas Ludwell to Virginia Governor Lord Berkeley about the great tempest:

This poore Country…is now reduced to a very miserable condition by a continual course of misfortune…on the 27th of August followed the most dreadful Harry Cane that ever the colony groaned under. It lasted 24 hours, began at North East and went around to Northerly till it came to South East when it ceased. It was accompanied by a most violent raine, but no thunder. The night of it was the most dismal time I ever knew or heard of, for the wind and rain raised so confused a noise, mixed with the continual cracks of falling houses…the waves were impetuously beaten against the shores and by that violence forced and as it were crowded the creeks, rivers and bays to that prodigious height that it hazarded the drownding of many people who lived not in sight of the rivers, yet were then forced to climb to the top of their houses to keep themselves above water…But then the morning came and the sun risen it would have comforted us after such a night, hat it not lighted to us the ruins of our plantations, of which I think not one escaped. The nearest computation is at least 10,000 house blown down.

It is too bad that there were no anemometers at the time, but the damage and storm surge are certainly consistent with a Category 4 storm. And this was in 1667, at the nadir of the Little Ice Age.
