Policy Institutes

As we enter the summer of Crump (or Trinton, take your pick), many Americans are dissatisfied with the two-party oligopoly that has produced the two most unpopular presidential candidates in modern memory. While some of these will nevertheless hold their noses and pick whomever they see as the “lesser evil,” others can’t fathom pulling that proverbial lever. Of these, some are “feeling the Johnson” and look forward to becoming part of what will likely be the best showing for a Libertarian Party candidate. Still others don’t find the Johnson satisfying and so, like Bill Kristol at his Rolodex, are hoping for an as-yet unannounced candidate of whatever ideological stripe.

Not to rain on anybody’s parade, but as a lawyer – or at least someone who plays a lawyer on TV – I have to ask the question of whether this is even legally possible (forget the political and financial practicalities). During the primary season, when Donald Trump was lumbering towards the GOP nomination, we heard nervous #NeverTrumpers discussing ballot-access deadlines in Texas and elsewhere.

And indeed, the Lone Star State’s deadline for an independent candidate to collect and file the requisite signatures – 79,939 for those counting at home – came and went on May 9. We’re now past seven other states’ deadlines, with a further six being hit this week. These 13 states account for 178 of the 538 electoral votes, and include red, blue, and purple states. (There are separate, generally earlier deadlines for so-called “minor” parties, but I’ll stick to analyzing the rules for independent candidates because the logistics of having a theoretical white knight “take over” an existing third party with already-qualified ballot access are even more complicated.)

Then 17 more state deadlines arrive by August 2, accounting for 168 more electoral votes. That means that we are now three weeks away from an independent candidate’s being mathematically prevented from winning an Electoral College majority – though of course at that point Twelfth Amendment scenarios are still in play.

Or does it mean that? I haven’t yet researched this deeply, but I’ve read enough to know that there are colorable legal arguments to be made that these state ballot-access laws – signature thresholds, due dates, verification procedures, etc. – violate the First Amendment right of free association and, to the extent they privilege the major parties, the Fourteenth Amendment’s Equal Protection Clause. (Interestingly, in election-law-related litigation, the Libertarian Party’s interests are sometimes aligned with the big two, and sometimes against them.) The putative independent candidate or his/her supporters would likely have standing to force federal courts to resolve such constitutional questions.

Of course, the states would argue that their ballot-access laws serve a compelling interest in orderly election administration and related concerns: there have to be procedures in place so states can prepare the ballots and prevent fraud. The Supreme Court’s ruling in Anderson v. Celebrezze (1983) governs when weighing the rights of voters and candidates against such state interests. There, the Court rejected the use of any “litmus-paper test” for separating valid from invalid restrictions. Instead, there’s a holistic balancing test – so nobody can tell you off the bat what’s legally kosher and what’s not. But regardless, the Anderson Court noted that a state’s interest in regulating presidential elections is less important because the outcome of that election “will be largely determined by voters beyond the State’s boundaries” and “the pervasive national interest in the selection of candidates for national office … is greater than any interest of an individual State.”

As a practical matter, ballot-access challenges would involve battles of the experts, with both sides relying on political scientists to explain why the balance tilts in their favor. The challengers’ experts would point to the practical and financial hurdles associated with gathering signatures, and to the unusual 2016 election cycle, in which the major parties didn’t settle on presumptive nominees until late May/early June. They would argue, for example, that a 5,000-signature requirement (not 80,000 or Florida’s 119,316) and a deadline of Sept. 1 (or Sept. 9, as the three latest states currently have it) is adequate to further the state’s interest in keeping “frivolous candidates” off the ballot while still providing members of the public an opportunity to associate with like-minded people. (Richard Winger of Ballot Access News has done yeoman’s work in this area.)

At its core, the argument is one about fundamental fairness, so should appeal to Sandernista Democrats as much as #NeverTrump Republicans. In an election cycle where most Americans would rather vote for someone other than the nominees of the two major parties, it just might work as a legal strategy. And an argument that the ballot box rather than ballot-access rules should dictate voters’ choices could well resonate as a matter of political messaging.

All this depends on whether a worthy nominee can be found, perhaps by Better for America, an organization working to find an attractive independent candidate and enable such a campaign. I’m not saying that I’d vote for whoever emerges from this woodwork – or that I will or won’t vote for Gary Johnson, or write in someone else – but it’s an intriguing idea that’s worth exploring in this crazy political year.

Members of NATO are meeting in Warsaw. They are dragging the U.S. back into its traditional role of guaranteeing the security of Europe, even though the continent is well able to defend itself.

The North Atlantic Treaty Organization was a necessary part of Containment, preventing the Soviet Union from dominating or conquering Western Europe. But even after recovering from World War II, the Europeans remained dependent on America.

NATO lost its raison d’être once the Warsaw Pact disbanded and the Soviet Union collapsed. Alliance officials eventually added “out of area” activities, that is, wars of choice irrelevant to Europe’s defense (the Balkans, Libya, the Mideast, Afghanistan). Such conflicts have wasted lives and resources with no benefit to Europe or America.

Still, as bad as enabling ethnic cleansing in Kosovo and spawning chaos in Libya were, worse was returning to a quasi-Cold War with Russia. Vladimir Putin is a nasty character, but upon taking office he appeared to bear the West little animus. However, the allies did their best to change that, expanding the alliance up to Russia’s borders, lawlessly dismembering historic Russian friend Serbia, backing Georgia in 2008 after it started the shooting, and supporting a street putsch against an elected Ukrainian president friendly to Moscow.

None of that justified the Putin government’s response, but Moscow’s ambitions have been limited. Putin shows no interest in conquering the NATO countries that so fear him; rather, he has unsettled them by treating non-members Georgia and Ukraine much as NATO treated Russia’s Belgrade friends.

In Georgia, Moscow backed separatist-minded territories whose estrangement predated Tbilisi’s independence. In Ukraine, it focused its ill attention on heavily ethnic-Russian areas.

He’s done nothing to the Baltic States: the fact that Moscow could overrun them doesn’t mean it has any rational reason to do so. Putin is no Hitler or Stalin, just a garden-variety thug.

Which makes Europe’s behavior all the more disappointing. For decades the allies have been cutting military outlays. Collective defense spending by NATO’s European members continued to fall last year, when they devoted an anemic 1.45 percent of GDP to the military.

The Europeans enjoy around eight times the total GDP, devote more than three times as much to military spending, and have about three times the population of Russia. Yet they are demanding that America, with a smaller economy and population, defend them.

Many of today’s difficulties stem from NATO expansion, which treated the alliance like an international social club, bringing in countries of little strategic interest and no military value to America. For instance, Montenegro has been invited to join. With a military of precisely 2,080 personnel, the best that can be said of Podgorica is that it is irrelevant to almost everything geopolitically.

But Beke Kiria of Georgia’s Ministry of Defense recently asked, if Montenegro, why not Tbilisi? Luke Coffey of the Heritage Foundation and Daniel Kochis of the Davis Institute recently called on the alliance to “deepen its partnership with Ukraine” and keep the door open “for potential future Ukrainian membership.”

Including Tbilisi and Kiev would be reckless and dangerous, greatly increasing the risk of confrontation with a nuclear-armed power over minimal stakes. Of course, NATO’s security guarantee is supposed to deter Moscow. However, deterrence often has failed in Europe.

Russia’s interest in securing its borders is far greater than America’s and Europe’s interest in protecting Georgia’s and Ukraine’s borders. If deterrence failed, which NATO members would support a war far from home and be prepared to escalate to nuclear weapons in order to safeguard … Georgia and Ukraine?

Washington should say no to new NATO members. But that should merely be the starting point at the meeting in Warsaw for a spirited debate about the alliance’s future.

As I ask on Forbes online: “Why, seven decades after the end of World War II, is Europe still helplessly dependent on America? Washington should announce that it plans to allow the Europeans to finally take over responsibility for their own security.”

In my last post in this series, I observed that an economy’s “base” money serves as the “raw material” that commercial banks and other private-market financial intermediaries employ in “producing” deposits of various kinds that can themselves serve as means of exchange.

If they could do so profitably, these private intermediaries would, by making their substitutes more attractive than base money itself, collectively gain possession of every dollar of base money in existence. In some past monetary arrangements, most notably that of Scotland before 1845, banks came very close to achieving this ideal, thanks to their freedom to supply their customers with circulating paper banknotes as well as with deposits, and to the fact that between them these two substitutes could serve every purpose coins might serve, and do so more conveniently than coins themselves.

All save a handful of commercial banks today are, in contrast, able to supply deposits only, so that only base money itself can serve as currency, that is, circulating money. The extent to which national money stocks have been “privatized,” in the sense of being made up mainly of private IOUs of various kinds rather than officially-supplied base money, has been correspondingly limited, as has the extent to which private money holdings have served as a source of funding for bank loans.

When banks and other private-market intermediaries acquire base money, they do so, not for the sake of holding on to it, as they might were they mere warehouses, but in order to lend or otherwise invest it. More precisely, they do so in order to lend or invest most of the base money that comes their way, while keeping some on hand for the sake of either meeting their customers’ requests for currency, or for settling accounts with other banks, as they must do at the end of each business day, if not more frequently. While banks’ own substitutes for base money may serve in place of base money for all sorts of transactions among non-banks, those substitutes won’t suffice for settling banks’ dues to one another, for every bank is anxious to grab for itself as large a share of the market for money balances as possible, while contributing as little as possible to the shares possessed by rival banks.

Bankers have also learned, through hard experience, that by accumulating the IOUs of other banks, and thereby allowing other banks to accumulate their own IOUs in turn, they expose themselves to grave risks, which are best avoided by taking part in regular interbank settlements. Because even the most carefully managed banks cannot perfectly control or predict the value of their net dues at the end of any particular settlement period, all would tend to equip themselves with a modest cushion of cash reserves even if they did not have to do so for the sake of stocking their ATMs or accommodating their customers’ over-the-counter requests for cash.

Because banks typically receive fresh inflows of reserves every day, as a result of ordinary deposits, loan repayments, or maturing securities, a responsible banker, having set aside a reasonable cushion of reserves, has only to see to it that the lending and investment that his or her bank engages in just suffices to employ those inflows, in order to keep it sufficiently liquid. Greater reserve inflows, if judged likely to be persistent, will inspire increased lending or investment, while reduced ones will have the opposite effect. The banker’s general goal is to keep funds that come the bank’s way profitably employed, while holding on to just so many reserves as are needed to accommodate fluctuations of net reserve drains around, and therefore often above, their long-run (zero) mean. More precisely, bankers strive to keep enough reserves on hand to reduce the probability of drains exceeding available reserves to a (usually modest) level, one reflecting both the opportunity cost of holding reserves, which is to say the interest that might be earned on other assets, and the costs involved in making up for reserve shortages by borrowing from other banks or otherwise.
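The reserve-cushion problem just described can be caricatured in a few lines of code. This is only an illustrative sketch, not anything from the original argument: it assumes, purely hypothetically, that daily net reserve drains are normally distributed around their zero long-run mean, so that the chosen cushion is simply a quantile of the drain distribution.

```python
# Stylized sketch of the banker's reserve-cushion choice described above.
# Hypothetical assumption: daily net reserve drains are normally distributed
# around their long-run mean of zero.
from statistics import NormalDist

def reserve_cushion(drain_std: float, shortfall_prob: float) -> float:
    """Smallest cushion such that P(daily net drain > cushion) <= shortfall_prob."""
    return NormalDist(mu=0.0, sigma=drain_std).inv_cdf(1.0 - shortfall_prob)

# Tolerating a shortfall on 1 percent of settlement days, with drains whose
# standard deviation is a made-up $5 million:
cushion = reserve_cushion(drain_std=5e6, shortfall_prob=0.01)
print(round(cushion / 1e6, 2))  # cushion expressed in millions of dollars
```

A lower tolerance for shortfalls, or cheaper access to borrowed reserves, would shift the chosen cushion accordingly; the point is simply that the cushion is set against the variability of net drains, not as a fixed fraction of deposits.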

Throughout the history of banking, and despite laws that have suppressed commercial banknotes while often imposing minimum (but never maximum) reserve ratios on banks, bank reserves have generally constituted a very modest part of banks’ total assets, and therefore a modest amount compared to their total liabilities. Indeed, bank reserves have generally been but a small fraction of banks’ readily redeemable or “demandable” liabilities. England’s early “goldsmith” banks are supposed to have held reserves equal to a third of their demandable liabilities – a remarkably high figure, given the circumstances; at the other extreme, Scottish banks at the height of that nation’s pre-1845 “free banking” episode often managed quite well with gold or silver reserves equal to between one and two percent of their outstanding notes and demand deposits. Modern banks, even prior to the recent crisis, have generally had to keep somewhat higher reserve ratios than their pre-1845 Scottish counterparts, mainly owing to the fact, mentioned above, that they must stock their cash machines and tills with base money, instead of being able to do so using their own circulating banknotes.

Between 1997 and 2005, for instance, U.S. depository institutions’ reserves ranged, with rare exceptions, between 11 and 14 percent of their demand deposits (see figure below). Put another way, for every dollar of base money “raw material” they acquired, U.S. commercial banks were able to “manufacture” (that is, to create and administer) just under 10 dollars of demand deposits. That figure, the inverse of the banking system reserve ratio, is what’s known as the reserve-deposit “multiplier.”
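The arithmetic of that inverse relationship can be sketched as follows; the 12 percent ratio used here is just a hypothetical mid-range value for illustration, not a figure taken from the data:

```python
# The reserve-deposit multiplier is the inverse of the banking system's
# reserve ratio (reserves held per dollar of demand deposits).

def reserve_deposit_multiplier(reserve_ratio: float) -> float:
    """Dollars of demand deposits supported per dollar of reserves."""
    return 1.0 / reserve_ratio

# A hypothetical 12% ratio, inside the 11-14% range cited above:
print(round(reserve_deposit_multiplier(0.12), 2))  # prints 8.33
```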

Depository Institutions’ (Demand Deposit) Reserve Ratio, 1997-2005

The multiplier’s significance to monetary policy is, or used to be, straightforward: it indicated the quantity of additional bank deposits that monetary authorities could expect to see banks produce in response to any increment of new bank reserves supplied them by means of either open-market operations or direct central bank loans. If, around 2000 (when the reserve-demand deposit multiplier was about 9), the Fed wanted to see banks’ demand deposits increase by, say, $10 billion, it had only to see to it that they acquired $(10/9) billion in fresh reserves, which meant creating a somewhat larger quantity of new base dollars – the difference serving to make up for the tendency of some of any newly created bank reserves to be converted into currency. The ratio of the total amount of new money, including both currency and bank deposits, generated in response to any new increment of base money, to that increment of base money itself, is known as the “base money multiplier.”
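That back-of-the-envelope calculation can be sketched as below. Every number here is an assumed round figure for exposition (a multiplier of 9, a 10 percent currency drain, a $10 billion deposit target), not Fed data, and the simple leakage formula is itself a stylization:

```python
# Back-of-the-envelope reserve arithmetic, with made-up round numbers.

def required_reserves(deposit_target: float, multiplier: float) -> float:
    """Fresh reserves needed to support a given rise in demand deposits."""
    return deposit_target / multiplier

def base_money_needed(reserves: float, currency_drain: float) -> float:
    """New base money required when a fraction leaks into circulating currency."""
    return reserves / (1.0 - currency_drain)

new_deposits = 10e9  # desired $10 billion rise in demand deposits (assumed)
m = 9.0              # assumed reserve-deposit multiplier
drain = 0.10         # assumed share of new base money ending up as currency

reserves = required_reserves(new_deposits, m)
base = base_money_needed(reserves, drain)
print(round(reserves / 1e9, 2), round(base / 1e9, 2))  # prints 1.11 1.23
```

The gap between the two printed figures is the extra base money that leaks into currency rather than remaining as bank reserves, which is why the central bank must create somewhat more base money than the reserves it wants banks to end up holding.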

Since the recent crisis, all sorts of nonsense has been written about the “death” of the reserve-deposit and base-money multipliers, and even (in some cases) about how we ought to be glad to say “good riddance” to them. I say “nonsense” because, first of all, the multipliers in question, being mere quotients arrived at using values that are themselves certainly very much alive, cannot themselves have “died,” and because the working of these multipliers does not, as some authorities suppose, depend on the particular operating procedures central banks employ. Finally, although it’s true that, until the recent crisis, economists were inclined, with good reason, to take the stability of the reserve-deposit and base-money multipliers for granted, it doesn’t follow that they (or the good ones, at least) ever regarded these values as constants, as opposed to variables whose values depended on various determinants that were themselves capable of changing.

What certainly has happened since the crisis is, not that the reserve-deposit and base-money multipliers have died, but that their determinants have changed enough to cause them to plummet. U.S. bank reserves, for example, have (as seen in the next picture) gone from being equal to a bit more than a tenth of demand deposits to being about twice the value of such deposits! The base-money (M2) multiplier, shown further below, has, at the same time, fallen to below half its pre-crisis level, from about 8 to 3.5 or so (not long ago it was less than 3).

The Reserve Demand-Deposit Multiplier Since 2005

The Base-Money (M2) Multiplier Since 2005

These are remarkable changes, to be sure. But they are hardly inexplicable. Banks’ willingness to accumulate reserves depends, as I’ve already noted, on the cost of holding reserves, which itself depends on the interest yield of reserves compared to that of other assets banks might hold instead. Before the crisis, bank reserves earned no interest at all. On the other hand, banks had all sorts of ways in which to employ funds profitably, especially by lending to businesses both big and small. Consequently, banks held only modest reserves, and bank reserve and base-money multipliers were correspondingly large.

The crisis brought with it several changes that are more than capable of accounting for the multipliers’ collapse. The recession itself has, first of all, resulted in a general reduction in both nominal and real interest rates on loans and securities, to the point where some Treasury securities are now earning negative inflation-adjusted returns. Regulators have in turn responded to the crisis by cracking down on all sorts of bank lending, making it costly, if not impossible, for banks, and smaller banks especially, to make many of the higher-return loans, including small business loans, they would have been able to make, and to make profitably, before the crisis. (More here.) U.S. bank regulators have also begun to enforce Basel III’s new “Liquidity Coverage Ratio” rules, compelling banks to increase the ratio of liquid assets, meaning reserves and Treasury securities, to less-liquid ones on their balance sheets. Finally, since October 2008, the Fed has been paying interest on bank reserves, at rates generally exceeding the yield on Treasury securities, thereby giving banks reason to favor cash reserves over government securities for all their liquidity needs.

Whatever their cause, today’s very low money multiplier values mean that commercial banks have ceased to contribute as they once did to the productive employment of scarce savings. Instead, those savings have been shunted to the Fed, and to other central banks, which use them to purchase government securities, and also for other purposes, but never, with rare exceptions (and with good reason), to fund potentially productive enterprises. Although discussions of monetary policy since the crisis have mainly had to do with the quantity of money, and central banks’ efforts to expand that quantity so as to stimulate spending, the effects of the crisis, and of governments’ response to it, on the quality of money, and especially on the investments its holders have been funding, deserve at least as much attention.

A recent outbreak of measles at the Eloy Detention Center has raised some concerns over disease and immigration.  The disease was carried in by an immigrant who was detained, allowing it to spread among some of the guards who were not vaccinated.  The Detention Center has since claimed that it vaccinates all migrants who are there and is working on getting all of its employees vaccinated.  Regardless, how much should we worry about measles brought in by unvaccinated immigrants?  Very little.

First, measles vaccines are highly effective at containing the disease.  There are two primary measles vaccinations.  The first is the MCV-1, which should be administered to children between the ages of nine months and one year.  The MCV-2 vaccination is administered later, at the age of 15 to 18 months, in countries where measles actively spreads.  In countries with very few cases of measles, like the United States, the MCV-2 is optional and is not typically administered until the child begins schooling.

Second, the nations that send immigrants tend to have high vaccination rates.  The World Health Organization (WHO) and UNICEF report the measles vaccination rates for most countries.  Figure 1 shows those rates for 2014 by major immigrant-sending country.  For the MCV-1, the United States is in the middle of the pack with 92 percent coverage and no data reported for MCV-2.  El Salvador, Costa Rica, Mexico, Vietnam, Cuba, China, and South Korea all have higher MCV-1 vaccination rates than the United States.

Six countries do have lower vaccination rates than the United States, although Indian and Filipino immigrants are more highly educated than their former countrymen, suggesting that their vaccination rates are higher than average before they begin the immigration process.  Furthermore, legal immigrants must show they are vaccinated, meaning that the relatively low vaccination rates in some of those countries of origin don’t reflect vaccination rates among the population of immigrants here.

However, the U.S. government’s vaccination requirements indicate that unauthorized immigrants are possibly less likely to be immunized than legal immigrants.  One way to increase vaccination rates among all immigrants, legal and illegal, would be to make green cards available to immigrants who are more likely to come unlawfully, thus guaranteeing that they are vaccinated.


Figure 1

MCV-1 Vaccination Rates, 2014


Source: WHO-UNICEF

In a more worrying trend, vaccine refusal rates are up among the native-born Americans in wealthy enclaves in California.      

North Korea is likely to be one of the worst headaches, or maybe nightmares, for the next president. He or she “must find a way to neuter Mr. Kim’s outlandish and frightening peril,” intoned the Washington Post.


Of course, four successive presidents have sought to do so. Yet nothing they tried worked. Experience suggests that “neutering” Pyongyang is beyond the power of the U.S. president, at least at a cost Americans are willing to bear.


The United States should try a different approach. Washington should withdraw from the Korean vortex. Then the Democratic People’s Republic of Korea would be primarily a problem for its neighbors, who have the most at stake.


Washington’s military presence is an anachronism. Today the Republic of Korea outmatches the DPRK on every measure of national power save military, and the latter deficiency is a matter of choice.


With twice the population and around 40 times the GDP, South Korea could do whatever is necessary to deter and defeat its northern antagonist. Seoul doesn’t do so because America continues to spend the resources and risk the lives of its citizens on the ROK’s behalf.


That made sense during the Cold War, but no longer. The United States is militarily stretched, economically embattled, and fiscally endangered. It no longer can afford to subsidize the defense of prosperous and populous friends.


Absent its military commitment to the ROK, America would be of no concern to the latest Kim scion to rule over the impoverished land to the north. For instance, Kim Jong-un was recently quoted expressing his “great satisfaction” with the test of the mid-range Musudan missile. As a result, he explained, “We have the sure capability to attack in an overall and practical way the Americans in the Pacific.”


Even more dramatic have been tests, most recently in April, of a long-range missile capable of hitting North America. Such a missile is useful only for threatening the United States.


At the same time, the DPRK is thought to be continuing to expand its nuclear capabilities. The Institute for Science and International Security recently estimated North Korea’s arsenal at 13 to 21 weapons. It may be adding four to six weapons a year.


North Korea’s threats do not occur in a vacuum. Pyongyang is targeting America with weapons as well as rhetoric because America is over there, threatening the North with war. In contrast, Kim does not spend his time denouncing Mexico or threatening to turn Toronto into a lake of fire.


This doesn’t mean Kim is a victim or innocent, of course. Nevertheless, in this case, he is behaving rationally.


The United States, which enjoys an overwhelming military advantage and imposes regime change whenever convenient, does threaten his rule. Washington’s attack on Moammar Khadafy’s regime, which had negotiated away its missile and nuclear programs, demonstrated that American officials cannot be trusted. A nuclear deterrent is the most obvious and perhaps the only sure defense.


This raises the obvious question of whether Pyongyang would behave so provocatively if America were not on the scene. No one should expect a kinder, gentler Kim to emerge.


But his “byungjin” policy of pursuing both nuclear weapons and economic growth faces a severe challenge. As I contend in National Interest online: “With the U.S. far away he would have more reason to listen to China, which long has advised more reforms and fewer nukes. Since nothing else has worked, an American withdrawal would be a useful change in strategy.”


The justification for U.S. troops in Korea disappeared decades ago. Bringing them home and shrinking America’s military accordingly would ease an increasingly unaffordable defense burden.


Moreover, getting out of Korea would undercut Pyongyang’s justification for its overwhelming military spending. It’s rare to find such a win-win policy for the Korean peninsula.

In a post last week, I critiqued a prominent article about Islamic State in the Washington Post, in which Carol Morello and Joby Warrick suggest that, by massacring people in various locales, the group was growing in appeal or “allure.” I argued that there was considerable evidence, on the contrary, that the appeal (or allure) of the vicious group actually is, like the scope of the territory it holds in Syria and Iraq, in severe decline.

The Post article also seeks to refute the plausible suggestion of Secretary of State John Kerry that terrorist attacks like those in Turkey, Iraq, and Bangladesh are a sign of the group’s desperation as it is pushed back in Iraq and Syria. It does so with what I consider to be an irrelevant argument, noting that the group has “recently” issued its own currency in the territory it controls.

However, the Post article seems to be wrong about the currency issue as well.

According to the Economist, nearly a year ago (not “recently”), the Islamic State did try to create a currency that it called the “Gold Dinar.” In a 55-minute video, it made what the Economist calls “a bizarre sales pitch” for the new currency.

Covering a dizzying range of topics, from the importance of gold as a medium of exchange to “the dark rise of bank notes, born out of the satanic conception of banks”, it argues America has been able to avoid hyperinflation and maintain its military hegemony largely thanks to the petrodollar system. Islamic State hopes that with the introduction of what it is calling the dinar, all oil will be paid for with gold instead of being priced in dollars, which would “mark the death of this oppressive banknote” and bring America “to her knees”. Charts showing the gradual increase of the American money supply and the devaluation of the dollar are provided as evidence of the dangers of printing money.

The Economist found the whole scheme quixotic, outlining “three rather obvious problems”:

First, its yellow currency is no different to any other version of the gold standard. The dinar’s worth will be determined by the supply and demand for gold, exposing the currency to fluctuations in the price of the yellow stuff. A fall in the gold supply would be like a tightening of monetary policy; it could cause a recession.

Second, a coin backed by a terrorist organisation has obvious credibility problems: aside from the fact that its coins cannot be traded legally, few would presumably use this currency to trade oil if it was only available in coin form, or other forms that relied on trusting Islamic State.

Finally, ending the petrodollar system would first require Islamic State to seize a far larger share of oil production in the Middle East and then persuade countries to agree to trade with it. Even if Islamic State were successful in becoming a major oil exporter, the strength of the dollar depends on far more than its use in the oil trade.

The conclusion: “American capitalism seems safe for a while yet.”

However, more recently, Islamic State seems to have scrapped its fanciful new currency and is now relying on US dollars. All utility bills, extortion payments, fines for dressing improperly, and inducements to obtain the release of detainees must be tendered in that “oppressive” and “satanic” currency.

In other belt-tightening, not only have salaries been halved, but the regime no longer supplies free energy drinks and Snickers bars to its followers. It looks like hard times. One of its top leaders has attempted to cheer the legions up by arguing that “a drowning person does not fear getting wet.” Not a terribly reassuring way of putting it, I should think, particularly to people with diminished salaries who now have to pay for their Snickers bars with the enemy’s currency.

Imagine that your company’s board chairman, against the wishes of the board of directors and in contravention of the corporate charter, hires an interim CEO. Despite that illegal action, the interim CEO disciplines you in some manner. Would that discipline be any more legitimate if, two years later, the board finally agreed to hire the CEO, who then retroactively approved his own previous actions?

This is what’s happened at the highest levels of government. When Congress created the Consumer Financial Protection Bureau (CFPB) as part of the larger Dodd-Frank financial reform, it specified that the director was to be appointed by the president “by and with the advice and consent of the Senate.” This placed what’s called an Appointments Clause limitation on the director’s position. Four years ago, President Obama named Richard Cordray the CFPB director—after Elizabeth Warren’s expected appointment met significant political resistance—during what the president erroneously believed was a Senate recess. (You’ll recall that the Supreme Court unanimously invalidated the National Labor Relations Board appointments Obama made at the same time.)

Cordray was only confirmed as the director, in a larger compromise with the Senate, nearly two years later. In the interim, the CFPB filed an enforcement action against Chance Gordon regarding his provision of mortgage-relief services, and Cordray later ratified it. Gordon challenged the enforcement action as emanating from an unconstitutional authority, but the lower courts ruled against him, finding that the post hoc ratification resolved any Appointments Clause deficiencies. Curiously, though, the U.S. Court of Appeals for the Ninth Circuit then invited Gordon to move for rehearing en banc (normally the full court, but in that sprawling circuit, 11 judges), which request Cato is now supporting with an amicus brief.

Congress created the CFPB with the advice-and-consent requirement for a reason: the agency has vast power with virtually no accountability mechanisms, such that the Appointments Clause provision is one of the few meaningful checks on its activities. Furthermore, Congress did not authorize the CFPB to bring enforcement actions without a duly appointed, Senate-confirmed director. Advice and consent is “more than a matter of etiquette or protocol,” the Supreme Court held in Edmond v. United States (1997); it is a structural safeguard intended to curb executive power.

Also, when Dodd-Frank first gave the CFPB its sweeping authority to define unfair, deceptive, or abusive acts or practices, it specified that these enforcement powers could not be exercised before a director had been validly appointed. Cordray’s later ratification of his own actions cannot cure the original unconstitutional sin of an unsanctioned prosecution. Only Congress could authorize the CFPB’s use of its awesome powers without first having a fully confirmed boss in place—which Congress purposely did not do.

Allowing Cordray to ratify the agency’s otherwise illegal past conduct would prejudice Gordon’s rights (and those of many other similarly situated individuals and companies). In situations like these, courts should presume that harm was unquestionably present and not require litigants to demonstrate that they “received less favorable treatment than if the agency were lawfully constituted and otherwise authorized to discharge its functions.” Committee for Monetary Reform v. Board of Governors of the Federal Reserve (D.C. Cir. 1985).

The Ninth Circuit should indeed now rehear Gordon v. CFPB and reverse its earlier panel decision.

Last night Dallas police officers used a bomb robot to kill the suspected perpetrator of a shooting that left five Dallas-area police officers dead and seven others wounded. Two citizens were also wounded in the shooting. While police have used robots to deliver chemical agents and pizza, it looks as if the deployment of the robot bomb last night was the first time American police officers have used a robot to kill someone.

Police reportedly used the robot after hours of negotiation with the suspect broke down. According to Dallas Police Chief David Brown, “We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was.” He went on to say, “Other options would have exposed our officers to grave danger.”

The death of the alleged shooter in Dallas should prompt us to think carefully about how new technologies will be used by police to deliver lethal force. Robots like the one used by Dallas police last night are used by police departments across the country as part of bomb squads. But it’s worth keeping in mind that these robots will continue to improve, making it easier for police to use them in situations like the standoff in Dallas.

Other tools such as drones could also potentially be used to kill suspects. In a McGeorge Law Review article examining police drones and use of force, Eric Brumfield pointed out that while the FAA Modernization and Reform Act of 2012 does outline requirements for law enforcement agencies that wish to use drones, it does not explicitly prohibit or allow these drones to be armed. In addition, while a federal regulation does prohibit pilots from dropping objects from aircraft, this regulation applies to civil rather than public aircraft. In fact, North Dakota has legalized the use of armed drones in some circumstances, and Florida law defines a police drone as one that can “carry a lethal or nonlethal payload.”

While new and improving police tools might pose interesting technological questions, it’s not clear that, when it comes to lethal use of force, they ought to prompt a radical rethinking of the law.

Seth Stoughton, a former police officer and assistant professor of law at the University of South Carolina, outlined this point to The Atlantic:

But while there are likely to be intense ethical debates about when and how police deploy robots in this manner, Stoughton said he doesn’t think Dallas’s decision is particularly novel from a legal perspective. Because there was an imminent threat to officers, the decision to use lethal force was likely reasonable, while the weapon used was immaterial.

“The circumstances that justify lethal force justify lethal force in essentially every form,” he said. “If someone is shooting at the police, the police are, generally speaking, going to be authorized to eliminate that threat by shooting them, or by stabbing them with a knife, or by running them over with a vehicle. Once lethal force is justified and appropriate, the method of delivery—I doubt it’s legally relevant.”

However, as technology improves using tools such as robots to kill dangerous suspects will become easier, and we shouldn’t be surprised if they proliferate. Amid such changes we should keep a careful eye on how and when police use remote devices, especially in cases not as clear cut as the recent standoff in Dallas seems to have been. 

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

At the top of our list of things you ought to have a look at this week is a pair of blog posts by Dr. Roy Spencer updating the recent post-El Niño evolution of the satellite-observed temperature record of the earth’s lower atmosphere. In Roy’s first post, he updates the satellite record through June 2016, noting the big drop in temperatures as the effect of the recent big El Niño wanes. The take home figure looks like this:

Figure 1. Global average temperature of the lower atmosphere as derived and compiled by researchers at the University of Alabama at Huntsville, January 1979 through June 2016.

Roy notes a “2-month temperature fall of -0.37 deg. C, which is the second largest in the 37+ year satellite record.”

In a follow-on post, Roy looks to see what the prospects are for the 2016 annual temperatures being the highest in the 38-year satellite temperature history. In late June, Roy had concluded that “2016 Will Likely See Record Global Warmth in Satellite Data.” But with the big drop in June temperatures, he is now reconsidering, writing that his previous prediction “looks…well…premature.”

Be sure to check out all Roy’s analysis and keep tuning in to see how the year’s temperatures are progressing. We surely will be.

Next up is a piece from Retraction Watch that notifies us that a (controversial) paper published last year that found that “Air pollution from fracking operations may pose an under-recognized health hazard to people living near them” and led to headlines like “Fracking could increase risk of cancer, new study finds,” well, was wrong. Turns out that the authors had used the wrong units in some of their calculations, and when they made the necessary correction, air pollution levels near fracking wells were well within EPA’s acceptable risk levels and below those that would increase cancer risk. Oops!

Be sure to check out the entire Retraction Watch article; it is chock-full of interesting tidbits.

And finally, the bottom of our post brings us to the bottom of the world and a look at the increase in sea ice there.

The growth of sea ice around Antarctica that has been ongoing for at least the past 25 or so years is a somewhat inconvenient fact for global warming cheerleaders in that climate models predict that the region should be losing its sea ice (see for example the findings of Shu et al.).

For a long time, the U.N.’s IPCC did its best to downplay (deny?) the increase was indeed real (e.g., https://www.masterresource.org/climate-science/yet-another-incorrect-ipcc-assessment-antarctic-sea-ice-increase/). Finally, they capitulated under the weight of the growing (!) evidence, including in their 2013 Fifth Assessment Report: “It is very likely that the annual mean Antarctic sea ice extent increased at a rate in the range of 1.2 to 1.8% per decade (range of 0.13 to 0.20 million km2 per decade) between 1979 and 2012.”

Even so, when climate change worriers get together to talk impacts, they rarely mention the goings on with the sea ice around Antarctica, preferring instead to hyperventilate about Arctic sea ice declines (which, by the way, have no impact on sea level rise—like the floating ice in your cocktail glass, when it melts, it doesn’t affect the level of your drink).

That is, until now. Just this week a paper published by Gerald Meehl and colleagues makes it safe to finally discuss Antarctic sea ice trends. Why? Because, Meehl & Co. link the increase to…wait for it…natural variability! Which means that while the poor climate models have been getting the sign of the trend wrong, it is not because of their inability to correctly ascertain the impacts of human-caused greenhouse gas increases, but rather that random processes—which the models are generally not expected to get the timing of—have been acting to overwhelm the model-projected decline. Whew!

Somehow, as is his inimitable fashion, Chris Mooney of the Washington Post manages to use this finding to proclaim that this is “bad news for climate change doubters.” We helped Valerie Richardson of the Washington Times get it right.

The bottom line is that if natural variability is leading to a slowdown of the warming around Antarctica (and to sea ice growth there), then when it swings back the other direction, it should help prime the atmosphere with moisture (warmer air can hold more moisture than colder air). That should result in greater snowfall over Antarctica and a growing ice load on land—which does impact global sea levels, acting to lower them (that is, to partially offset the rise from warming oceans). This sea level rise offsetting effect has long been expected but has been slow to manifest (although there is some evidence that it is underway).

We’ll have to see how all of this develops in the future, but regardless of the outcome, it won’t be “bad news” for climate skeptics.

Dealing with North Korea brings to mind Sisyphus, the mythological Greek king condemned for eternity to roll a stone up a hill, only to watch it roll back down. Whatever the U.S. does, Kim Jong-un again will fire missiles, test nukes, and threaten to lay waste to his enemies.

Now the Obama administration has applied sanctions to him personally, though for human rights violations, not security concerns. The State Department explained that Kim was “ultimately responsible” for what it termed “North Korea’s notorious abuses of human rights.”

There are many, of course. The so-called Democratic People’s Republic of Korea ain’t a nice place for anyone other than the Kim family and friends.

Now any property owned by Kim and ten of his top officials in the U.S. will be frozen. And Americans will be prohibited from doing business with them. The administration predicted that “Lifting the anonymity of these functionaries may make them think twice from time to time when considering a particular act of cruelty.” Seriously?

The North’s abuses are great and the American frustrations are real. Unfortunately, imposing penalties without impact won’t turn Kim into a born-again humanitarian. And his subordinates more likely fear a god-king who has executed some 400 of his own officials, including his uncle, than the prospect of their name ending up on a list in Foggy Bottom.

This is feel good policy at its worst.

Kim isn’t the only foreign dictator who can’t do business in America. Others have been blacklisted. But, unsurprisingly, to no effect.

If the principle makes sense, the list should start with the leaders of China and Russia. Sanctions also belong on the rulers of U.S. allies Saudi Arabia and Egypt. And many more states.

Sanctions can be a useful policy tool. But they should serve a purpose. They also should have a possibility of achieving their objective.

Moreover, there is a downside. It will be harder for both the U.S. and North to shift course and begin a dialogue. In the case of North Korea, there is no military solution and sanctions so far have not changed DPRK nuclear policy. With Pyongyang responding to isolation by continuing its missile and nuclear programs, a bilateral conversation at some point seems necessary and inevitable.

But that now would require the U.S. to engage a regime headed by someone under direct sanction. And Kim would have to swallow his pride and accept the appearance of being a supplicant seeking favor from those threatening him. Negotiating something useful always would be difficult without adding a new obstacle.

While targeted sanctions avoid punishing a largely helpless population in the hopes of influencing a regime which cares little about its people, they have yet to actually bring any government to heel. Such penalties appear to be most effective in allowing officials to satisfy critics and feel good by doing something without actually doing anything—useful, anyway.

Doing so actually encourages policymakers to ignore problems as they worsen. Like North Korea. The North is steadily adding to its nuclear weapons and improving its missiles, as well as abusing its population.

Current policy, essentially to isolate him, has failed. As I wrote in the National Interest: “Instead of trying something new, Washington will confiscate Kim’s nonexistent bank account and consider its work done.”

Policymakers must grapple with the tough issues. What should U.S. policy be toward the North as a de facto nuclear power? Is it worth negotiating with Pyongyang over issues other than denuclearization? There are no easy answers, but Washington’s time would be better spent addressing these issues than in concocting fanciful punishments for the North’s leader.

No one can blame President Barack Obama for not wanting to end up like Sisyphus. However, imposing personal sanctions on Kim looks like an act of desperation: nothing else has worked, so why not try this? Unfortunately, these sanctions may make it even harder to find a workable solution to the North Korea problem.

The Economist reports on a phenomenon I’ve been covering all year, how lawyers are beginning to churn out assembly-line complaints against businesses over their websites’ lack of Americans with Disabilities Act, or ADA, accessibility:

[Texas attorney Omar Weaver] Rosales says extending ADA rules to websites will allow him to begin suing companies that use color combinations problematic for the color-blind and layouts that are confusing for people with a limited field of vision.

While as I noted in January the Obama administration has declined to issue long-anticipated regulations prescribing web accessibility, its Department of Justice has taken the less visible route of supporting private lawsuits intended to accomplish many of the same goals, including (to quote The Economist again):

a National Association of the Deaf lawsuit against Harvard for not subtitling or transcribing videos and audio files posted online. As such cases multiply, content may be taken offline. Paying an accessibility consultant to spot the bits of website coding and metadata that might trip up a blind user’s screen-reading software can cost $50,000 for a website with 100 pages.

And of course brick-and-mortar businesses continue to be exposed to the full force of opportunistic complaints from ADA filing mills:

The hundreds of pages of technical requirements [relating to ADA’s Title III] have become so “frankly overwhelming” that a good 95% of Arizona businesses haven’t fully complied, says Peter Strojnik, a lawyer in Phoenix. He has sued more than 500 since starting in February, and says he will hit thousands more in the state and hire staff to begin out-of-state suits. … Violators must pay all legal fees [and courts ordinarily find violations]. 

With the Obama administration committed to pushing a view of the ADA inconsistent with online liberty, it is up to Congress to act, both by cutting off funds for DoJ adventurism and, more fundamentally, by amending the law to curtail theories that leave most current online content vulnerable to being chased off the web as non-compliant. While it’s at it, why doesn’t it address the ADA plight of brick-and-mortar Main Street businesses too? 

[expanded from an earlier version at Overlawyered]

America’s Bank, Roger Lowenstein’s 2015 book on the founding of the Fed, is, as I said in reviewing it for Barron’s, both well-written and well-researched.  Few pertinent details of the story appear to have escaped Lowenstein’s notice. However, in assembling and interpreting these details, Lowenstein appears not to have entertained the slightest doubt that the Federal Reserve Act, for all the political maneuvering that led to it, was the best of all possible means for ending this nation’s periodic financial crises.

Instead of turning a critical eye toward the 1913 Act, Lowenstein writes as if history itself were a reliable judge.  What it has condemned he condemns as well; and what it has favored he favors.  Consequently he treats all those persons who contributed to the Federal Reserve Act’s passage as right-thinking progressives, while regarding those who favored other solutions to the nation’s currency and banking ills as so many reactionary bumpkins.

That some strains of triumphalism should have found their way into Lowenstein’s account of the Fed’s origins is hardly surprising.  Though research by economic historians and others supplies precious little support for it, the view that the Fed has been a smashing success is, after all, a well-established element of conventional wisdom, and one that Fed officials themselves never cease to promote.  Nor have those officials ever devoted more effort to doing so than in the course of celebrating the Fed’s recent centennial.  Even a much more hard-bitten journalist than Lowenstein could hardly have been expected to resist setting considerable store by an institution so universally (if undeservedly) hallowed.

Still, one might have expected a note of skepticism, if no more than that, to have found its way into America’s Bank.  Lowenstein was, after all, writing about an institution that was supposed to end U.S. financial crises once and for all, and doing so in the wake of a crisis at least as bad, in many respects, as those that inspired its creation.  (Those who suppose that the Fed did all it could and should have done to combat the recent cataclysm are encouraged to read this, this, this, and this.)  He had, furthermore, encountered the many arguments — and most were far from being plainly idiotic — of pre-1913 experts who favored other reforms, as well as those of some of the pending Federal Reserve Act’s critics, who predicted, correctly, that it wouldn’t be long before its results would acutely disappoint those of its champions who sincerely yearned for financial and economic stability.

Perhaps most importantly, Lowenstein knew very well that Nelson (“Admit nothing.  Explain nothing”) Aldrich, whom he (following Elmus Wicker) rightly regards as the man most responsible for clearing the way for the Fed’s establishment, was the  outstanding crony capitalist politician in an epoch when such politicians were thicker on the ground than ever before or since.  Although Aldrich presented the plan known by his name, much of which ended up being incorporated into the Federal Reserve Act, as a product of the collective efforts of the National Monetary Commission’s 16 members, the plan was actually one he himself drafted, with the help of several Wall Street bankers, in secret at Jekyll Island. The commission’s other members contributed nothing save their rubber stamp.

It’s wise to view Lowenstein’s assessment of Aldrich’s contribution in light of what other journalists have had to say about the long-serving Rhode Island Senator.  Consider, for starters, Lincoln Steffens’ opinion, as expressed by him in a 1908 letter to Teddy Roosevelt.  “What I really object to in him,” Steffens wrote, “is something he probably does honestly, out of general conviction. … He represents Wall Street; corrupt and corrupting business; men and Trusts that are forever seeking help, subsidies, privileges from government.”

Bad as this sounds, it’s nothing compared to the portrait muckraking journalist David Graham Phillips drew of Aldrich in The Treason of the Senate, his sulphurous 1906 exposé of an upper-house rife with corruption:

Various senators represent various divisions and subdivisions of this colossus.  But Aldrich, rich through franchise grabbing, the intimate of Wall Street’s great robber barons, the father-in-law of the only son of the Rockefeller — Aldrich represents the colossus.  Your first impression of many and conflicting interests has disappeared.  You now see a single interest, with a single agent-in-chief to execute its single purpose — getting rich at the expense of the labor and the independence of the American people.

“Aldrich’s real work,” Phillips went on to write, consisted of “getting the wishes of his principals, directly or through their lawyers, and putting these wishes into proper form if they are orders for legislation or into the proper channels if they are orders to kill or emasculate legislation.”  The work was “all done, of course, behind the scenes.”  As chairman of the Senate Finance Committee Aldrich labored to “concoct and sugar-coat the bitter doses for the people — the loot measures and the suffocating of the measures in restraint of loot.”

Although the opinions of Steffens and Phillips might be dismissed as yellow journalism, the same cannot be said for similar verdicts reached by academic historians, including that of Jerome Sternstein, in his article “Corruption in the Gilded Age Senate: Nelson W. Aldrich and the Sugar Trust.”  According to Sternstein, “far from insulating the legislative process from big business and reducing the incentives for corruption, the concentration of institutional authority in the hands of senators like Nelson W. Aldrich had precisely the opposite effect”:

Aldrich was wedded ardently to the concept that legislation affecting businessmen should be drawn up in close collaboration with businessmen. … America’s productive economy was not the work of politicians and theorists, but of innovative businessmen making business decisions in a most practical, efficient way.  Members of Congress, therefore, had an obligation to clear appropriate legislation with them.  Effective lawmaking, he held, especially that required to carry out the Republican gospel of prosperity and economic growth through vigorous state action in the form of protective tariffs and subsidies, was next to impossible otherwise. …

Thanks to his success in achieving the legislative goals of his corporate clients, Aldrich “found money and favors flowing to him.  Businessmen did not bribe him, they did not dominate him — they simply rewarded and supported him.”  In return for his efforts to shunt monetary reform onto a spur favoring the big Wall Street banks, for instance, Aldrich earned a token of gratitude from Henry P. Davison, a partner in J. P. Morgan & Company, who arranged and took part in the Jekyll Island meeting:

The enclosed [Davison wrote to Aldrich] refers to the stock of the Bankers Trust Company, of which you have been allotted one hundred shares.  You will be called upon for payment of $40,000… It will be a pleasure for me to arrange this for you if you would like to have me do so.

I am particularly pleased to have you have this stock, as I believe it will give a good account of itself. It is selling today on the basis of a little more than $500 a share.  I hope, however, you will see fit to put it away, as it should improve with seasoning.  Do not bother to read through the enclosed, unless you desire to do so.  Just sign your name and return to me.

In view of Aldrich’s notoriety, Lowenstein might have suspected that, whatever its merits, the Aldrich Plan was bound to be compromised by its authors’ desire to look after Wall Street’s interests.  He might therefore have entertained the possibility that neither it nor the Federal Reserve Act that drew so heavily from it was ideally suited to putting a stop to financial crises.  But rather than proffer a revised (and not-so-triumphant) view of the Fed’s origins, Lowenstein elected instead to revise the record concerning Aldrich himself, turning him into his story’s unlikely hero.  Just as some bolting horses supposedly turned Pascal into a religious mystic, the Panic of 1907 “jolted” Aldrich sufficiently, according to Lowenstein, to inspire his conversion, from Wall Street’s Man in Washington to high-minded proponent of monetary reform.

But did it?  The facts suggest otherwise.  Of the many shortcomings of the pre-Fed currency and banking system, none struck sincere reform proponents of all kinds as being in more dire need of correction than the tendency of the nation’s bank reserves to flow into the coffers of a handful of New York banks during seasons apart from the harvest, combined with the annual (and occasionally mad) harvest-time scramble for those same reserves.  That ebb and flow of reserves from countryside to New York City and back again was the sine qua non of the crises that periodically rocked the U.S. economy.  Unfortunately, that same ebb and flow was, so far as New York’s major banks themselves were concerned, good business, for it was the source of funds they lent on call to stock investors, by which they made a tidy profit.  Any reform that might undermine their status as the ultimate custodians, for most of the year, of the nation’s bank reserves was, so far as they were concerned, anathema.

Until the Panic of 1907, Aldrich was able to satisfy his Wall Street clients simply by blocking — with the help of fellow standpatters — every monetary reform measure that came his committee’s way. The panic changed things, not by convincing Aldrich to clean up his act, but by forcing him to change his tactics.  Realizing that reform could no longer be held off, he resolved to assume control of the reform movement, and to have it result in changes that, however sweeping, would nonetheless preserve, and even enhance, both the dangerous “pyramiding” of reserves in New York and his Wall Street chums’ bottom lines.

Just how Aldrich managed to achieve this goal — and to do so despite the rejection of his own plan in favor of a Democratic alternative — is a story too long to be told here.  Interested readers will find it, and many other details besides, in my recent Cato Policy Analysis, “New York’s Bank: The National Monetary Commission and the Founding of the Fed.”  The information there will, I hope, allow them to conclude that, to gain a proper understanding of Aldrich’s part in the Fed’s establishment, one needn’t alter a single brushstroke in muckrakers’ portraits of him.

The United States’ immigration system favors family reunification – even in the so-called employment-based categories.  The family members of immigrant workers must use employment-based green cards to enter the United States.  Instead of a separate green card category for spouses and children, they get a green card that would otherwise go to a worker. 

In 2014, 56 percent of all supposed employment-based green cards went to the family members of workers (Chart 1).  The other 44 percent went to the workers themselves.  Some of those family members are workers, but they should have a separate green card category or be exempted from the employment green card quota altogether. 

Chart 1: Employment-Based Green Cards by Recipient Type

Source: 2014 Yearbook of Immigration Statistics, author’s calculations

If family members were exempted from the quota or there was a separate green card category for them, an additional 84,089 highly skilled immigrant workers could have entered in 2014 without increasing the quota.

Of the 151,596 green card beneficiaries in 2014, 86 percent were already legally living in the United States (Chart 2).  They were able to adjust their immigration status from another type of visa, like an H-1B or F visa, to an employment-based green card.  Exempting some or all of the adjustments of status from the green card cap would almost double the number of highly skilled workers who could enter.  Here are some other exemption options:

Chart 2: Adjustment of Status vs. New Arrivals

Source: 2014 Yearbook of Immigration Statistics, author’s calculations

  • Workers could be exempted from the cap if they have a higher level of education, like a graduate degree or a PhD.
  • A certain number of workers who adjust their status could be exempted in the way the H-1B visa exempts 20,000 graduates of American universities from the cap.
  • Workers could be exempted if they show five or more years of legal employment in the United States prior to obtaining their green card.
  • Workers could be exempted based on the occupation they intend to enter.  This option is problematic because it involves the government choosing which occupations are deserving, but so long as it leads to a general increase in the potential number of skilled immigrant workers without decreasing them elsewhere, the benefits will outweigh the harms.

2014 Employment-Based Green Cards

|                        | EB-1   | EB-2   | EB-3^  | EB-4^ | EB-5~  | All EB  | Percent |
|------------------------|--------|--------|--------|-------|--------|---------|---------|
| Workers`               | 16,913 | 23,694 | 20,746 | 2,211 | 3,922  | 67,486  | 44.52%  |
| Workers Adjusted`      | 16,274 | 23,076 | 18,258 | 1,780 | 700    | 60,088  |         |
| Workers New Arrivals`  | 639    | 618    | 2,488  | 431   | 3,222  | 7,398   |         |
| Family                 | 23,641 | 25,107 | 22,410 | 6,143 | 6,788  | 84,089  | 55.47%  |
| Family Adjusted        | 22,539 | 23,796 | 17,330 | 5,150 | 739    | 69,554  |         |
| Family New Arrivals    | 1,102  | 1,311  | 5,080  | 993   | 6,049  | 14,535  |         |
| Adjustment of Status   | 38,813 | 46,872 | 35,588 | 6,933 | 1,439  | 129,645 | 85.52%  |
| New Arrivals           | 1,741  | 1,929  | 7,568  | 1,429 | 9,284  | 21,951  | 14.48%  |
| Total                  | 40,554 | 48,801 | 43,156 | 8,362 | 10,723 | 151,596 |         |

Shares of each category entering by adjustment of status versus new arrival:

|                       | EB-1   | EB-2   | EB-3^  | EB-4^  | EB-5~  |
|-----------------------|--------|--------|--------|--------|--------|
| Workers Adjusted      | 96.22% | 97.39% | 88.01% | 80.51% | 17.85% |
| Workers New Arrivals  | 3.78%  | 2.61%  | 11.99% | 19.49% | 82.15% |
| Family Adjusted       | 95.34% | 94.78% | 77.33% | 83.84% | 10.89% |
| Family New Arrivals   | 4.66%  | 5.22%  | 22.67% | 16.16% | 89.11% |

*Some data on spouses and children withheld.
^Some data on spouses, children, and workers withheld.
`Investors for the EB-5.
~Some data on spouses, children, and investors withheld.
Source: 2014 Yearbook of Immigration Statistics
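The headline percentages cited above follow directly from the table totals. As a quick sketch of the arithmetic (all figures taken from the table; only the percentage calculation is illustrative):

```python
# Sanity check of the 2014 employment-based green card shares cited above.
# All figures come from the table (2014 Yearbook of Immigration Statistics);
# only the rounding/percentage arithmetic here is illustrative.

workers = 67_486        # principal workers, all EB categories
family = 84_089         # spouses and children of workers
adjustments = 129_645   # beneficiaries already in the U.S. who adjusted status
new_arrivals = 21_951   # beneficiaries arriving from abroad
total = 151_596         # all EB green cards issued in 2014

# Adjustments and new arrivals partition the total exactly; the
# worker/family split falls slightly short because some data were withheld.
assert adjustments + new_arrivals == total

def share(part):
    """Percentage of all 2014 EB green cards, rounded to two decimals."""
    return round(100 * part / total, 2)

print(share(workers))       # 44.52 -> "44 percent went to the workers themselves"
print(share(family))        # 55.47 -> "56 percent ... went to the family members"
print(share(adjustments))   # 85.52 -> "86 percent were already legally living in the U.S."
```

Note that exempting the 84,089 family-member green cards from the quota would free exactly that many slots for workers, which is where the "additional 84,089 highly skilled immigrant workers" figure comes from.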

At The Health Care Blog, Jeff Goldsmith and Bruce Henderson of Navigant Healthcare offer a grim assessment of ObamaCare’s performance that is worth quoting at length:

The historic health reform law passed by Congress and signed by President Obama in March, 2010 was widely expected to catalyze a shift in healthcare payment from “volume to value” through multiple policy changes. The Affordable Care Act’s new health exchanges were going to double or triple the individual health insurance market, channeling tens of millions of new lives into new “narrow network” insurance products expected to evolve rapidly into full risk contracts.

In addition, the Medicare Accountable Care Organization (ACO) program created by ACA would succeed in reducing costs and quickly scale up to cover the entire non-Medicare Advantage population of beneficiaries (currently about 70% of current enrollees) and transition provider payment from one-sided to global/population based risk. Finally, seeking to avoid the looming “Cadillac tax” created by ACA, larger employers would convert their group health plans to defined contribution models to cap their health cost liability, and channel tens of millions of their employees into private exchanges which would, in turn, push them into at-risk narrow networks organized around specific provider systems. 

Three Surprising Developments
Well, guess what? It is entirely possible that none of these things may actually come to pass or at least not to the degree and pace predicted. At the end of 2015, a grand total of 8.8 million people had actually paid the premiums for public exchange products, far short of the expected 21 million lives for 2016. As few as half this number may have been previously uninsured. It remains to be seen how many of the 12.7 million who enrolled in 2016’s enrollment cycle will actually pay their premiums, but the likely answer is around ten million. Public exchange enrollment has been a disappointment thus far, largely because the plans have been unattractive to those not eligible for federal subsidy. 

Moreover, even though insurers obtained deep discounts from frightened providers for the new narrow network exchange products (70% of exchange products were narrow networks), the discounts weren’t deep enough to cover the higher costs of the expensive new enrollees who signed up. Both newly launched CO-OP plans created by ACA and experienced large carriers like United and Anthem were swamped in poor insurance risks, and lost hundreds of millions on their exchange lives. As for the shifting of risk, it looks like 90% plus of these new contracts were one-sided risk only, shadowing and paying providers on the basis of fee-for-service, with bonuses for those who cut costs below spending targets. Only 10% actually penalized providers for overspending their targets.

The Medicare Accountable Care Organization/Medicare Shared Savings Program, advertised as a bold departure from conventional Medicare payment policy, has been the biggest disappointment among the raft of CMS Innovation Center initiatives. ACO/MSSP enrollment appears to have topped out at 8.3 million of Medicare’s 55 million beneficiaries. The first wave, the Pioneer ACOs, lost three-fourths of their 32 original participating organizations, including successful managed care players like HealthCare Partners, Sharp Healthcare, and Presbyterian Healthcare of New Mexico and others. The second, much larger wave of regular MSSP ACO participants lost one third of their renewal cohort. Only about one-quarter of ACO/MSSP participants generated bonuses, and those bonuses were highly concentrated in a relative handful of successful participants. 

Of the 477 Medicare ACO’s, a grand total of 52, or 11%, have downside risk, crudely analogous to capitation. As of last fall, CMS acknowledged that factoring in the 40% of ACO/MSSP members who exceeded their spending targets and the costs of the bonuses paid to the ACOs who met them, the ACO/MSSP programs have yet to generate black ink for the federal budget. And this does not count the billions care systems have spent in setting up and running their ACOs. It is extremely unlikely that the Medicare ACO program will be made mandatory, or voluntarily grow to replace DRGs and the Medicare Part B fee schedule. 

And the Cadillac Tax, that 40% tax imposed by ACA on high cost employee benefit plans, a potentially transformative event in the large group health insurance market, which was scheduled to be levied in 2018, was “postponed” for two years (to 2020) by an overwhelming Congressional vote. In the Senate, a 90-10 bipartisan majority actually voted to kill the tax outright, strongly suggesting that strong opposition from unions and large employers will prevent the tax from ever being levied. Presumptive Democratic nominee Hillary Clinton has announced her support for killing the tax. So the expected transformative event in the large group market has proven too heavy a lift for the political system. 

As a result, the enrollment of large group workers in private health exchanges, the intended off-ramp for employers with Cadillac tax problems, has arrested at about 8 million, one-fifth of a recent forecast of 40 million lives by 2018. Thus, the conversion of the enormous large group market members to narrow network products seems unlikely to happen. As a recent New York Times investigation revealed, the reports of the demise of traditional group health insurance coverage (based on broad network PPO models) have been greatly exaggerated.

At the end of this week, leaders from the United States and Europe will convene in Warsaw, Poland, for a NATO summit. The meeting – only the second summit since Russia’s 2014 invasion of Ukraine – will include high-level strategic discussions, and will likely see the announcement of an increased NATO troop presence in the Baltic States to counter potential Russian aggression there.

The biggest question leaders intend to address in Warsaw is how to deter Russian aggression towards NATO members in Eastern Europe following its seizure of Crimea and involvement in the conflict in Eastern Ukraine. In effect, leaders will try to find a compromise solution which reassures NATO’s eastern members, provides additional deterrence, but does not provoke further military buildup and distrust from Russia. They will almost certainly fail in this endeavor.

In fact, the expected announcement of the deployment of four additional battalions to the Baltics has already produced heated rhetoric from Russia. These deployments will likely prompt a Russian response, ratcheting up tensions and increasing the risk of inadvertent conflict in the region. In other words, they will contribute to a classic security spiral of mistrust and overreaction. The irony is that such deployments are largely symbolic, not strategic. Even four battalions will not change the fact that Russia could likely conquer the Baltics quickly if it so chose. And even though some would argue that their deterrent value is largely as a ‘tripwire,’ it isn’t clear why the existing Article V guarantee is insufficient for that purpose.

To be frank, in the focus on how to defend the Baltics, leaders have largely overlooked the low likelihood of a conflict in that region. For one thing, there is a qualitative difference between attacking Ukraine and attacking a NATO treaty member; Vladimir Putin certainly knows this. For another, Russia’s force posture simply doesn’t indicate that it has any intentions on the Baltics.

But while leaders at the Warsaw summit focus on these issues, they continue to ignore larger strategic questions about the alliance’s mission and future. My colleague Brad Stapleton raises one of these questions today in an article at War on the Rocks. He questions why NATO – faced with a Russian threat in Europe – is still so committed to building its capacity for ‘out-of-area’ missions, such as the NATO contribution to Afghanistan. Unlike the 1990s, he notes, NATO “does not need that mission to justify its continued existence.” The Warsaw summit would be an ideal place to raise this discussion.

A bigger question is the issue of NATO expansion, which will probably be on the back burner at the summit. Nonetheless, various officials have called for NATO to demonstrate that the alliance’s door remains open to countries like Bosnia, Ukraine, and Georgia. But as I argued in an article several weeks ago, such arguments only serve to highlight the alliance’s fundamental identity crisis. NATO cannot simultaneously act as a defensive alliance and a tool for spreading western values; the two missions have become contradictory. As NATO’s open door policy ratchets up tensions with Russia, it undermines existing members’ security. Yet there is little expectation that leaders will address this challenging question at the Warsaw summit.

The full articles on NATO’s ‘out-of-area’ follies and on NATO’s identity crisis can be found here and here. Sadly, leaders in Warsaw are likely only to address the unimportant questions, while leaving key ones like these unanswered, kicking the can on decisions about NATO’s future down the road again.

In the early morning hours of last Tuesday, police officers in Baton Rouge, Louisiana, shot and killed Alton Sterling, a 37-year-old black man who was reportedly selling CDs outside a convenience store. The shooting was filmed by at least two citizens. The two officers involved in the shooting, who were wearing body cameras, are on administrative leave, and the Department of Justice has launched an investigation. The shooting raises a range of questions concerning police use of force, body cameras, and police procedure.

According to an unnamed senior law enforcement official, Sterling presented a gun to a homeless man, who then called 911. During the scuffle between Sterling and the officers, which ended with Sterling on his back and both officers on top of him, one of the officers yelled “He’s got a gun!” Shortly afterwards Sterling was shot numerous times at point blank range. Footage shows that Sterling did not have a gun in his hand when he was shot. 

Cato research associate Jonathan Blanks wrote about the shooting at Policemisconduct.net, highlighting (among other things) the “cooling off” period granted to many officers after they are involved in a shooting and before they answer questions.  

Today, I discussed the shooting with Caleb O. Brown, the Cato Institute’s multimedia director. 

It’s the 50th anniversary of the legendary Coleman Report, as George Will discusses today in the Washington Post. Will summarizes what experts in 1966 believed about education, and what additional experience revealed:

The consensus then was that the best predictor of a school’s performance was the amount of money spent on it: Increase financial inputs, and cognitive outputs would increase proportionately. As the postwar baby boom moved through public schools like a pig through a python, almost everything improved — school buildings, teachers’ salaries, class sizes, per-pupil expenditures — except outcomes measured by standardized tests.

Andrew Coulson put that key fact in a handy chart:

Politicians, experts, and the education establishment still aren’t willing to accept the lesson demonstrated by this chart.

But if money doesn’t work, what does? Coleman emphasized cultural factors, notably strong families. Coulson believed that schools could improve, and that competition could help us discover best educational practices. This fall, public television stations will broadcast his documentary asking why educational innovations are so rarely tested and replicated.

Papayas are spherical or pear-shaped fruits known for their delicious taste and sunlit color of the tropics. Upon his arrival to the New World, Christopher Columbus apparently could not get enough of this exotic fruit, reportedly referring to it as “the fruit of angels.” And the fruit of angels it may indeed be, as modern science has confirmed its value as a rich source of important vitamins, antioxidants, and other health-promoting substances.

Papaya production has increased significantly over the past few years to the point that it is now ranked fourth in total tropical fruit production after bananas, oranges and mango. It is an important export in many developing countries and provides a livelihood for thousands of people. It should come as no surprise, therefore, that scientists have become interested in how this important food crop might respond to increasing levels of atmospheric CO2 that are predicted for the future.

Such interest was the focus of a recent paper published in the scientific journal Scientia Horticulturae by Cruz et al. (2016). Therein, the team of five researchers examined “the effect of the elevated CO2 levels and its interaction with Nitrogen (N) on the growth, gas exchange, and N use efficiency (NUE) of papaya seedlings,” noting that no publications to date had examined these effects in this species. To accomplish their objective, Cruz et al. grew Tainung #1 F1 Hybrid papaya seeds in 3.5 L plastic pots in a climate-controlled greenhouse at the USDA-ARS Crops Research Laboratory in Fort Collins, Colorado, under two CO2 concentrations (390 or 750 parts per million) and two N levels (8 mM NO3- or 3 mM NO3-). CO2 fumigation was performed for 12 hours per day (06:00 h to 18:00 h), and N treatments were applied to the pots weekly as a nutrient solution to reach the desired levels. The experiment concluded 62 days after treatment initiation.

In discussing their findings, Cruz et al. report that compared to ambient levels of CO2, elevated CO2 increased photosynthesis by 24 and 31 percent in the low and high N treatments, respectively. Plant height, stem diameter and leaf area in the high N treatment were also enhanced by 15.4, 14.0 and 26.8 percent, respectively, and by similar amounts for the height and stem diameter in the low N treatment. Elevated CO2 also increased the biomass of leaf, stem plus petiole, and root dry mass of papaya plants regardless of N treatment, leading to total dry mass enhancements of 56.6 percent in the high N treatments and 64.1 percent in the low N treatments (see figure below).

Figure 1. Total dry mass of papaya plants grown in controlled chambers at two different CO2 concentrations (High and Low; 750 and 390 ppm) and two different N treatments (High and Low; 8 mM NO3- or 3 mM NO3-). Adapted from Cruz et al. (2016).

 

Cruz et al. also report that “significant, but minor, differences were observed in total N content (leaf plus stem + petiole plus roots) between plants grown at different CO2 concentrations, but the same N levels.” Consequently, plant Nitrogen Use Efficiency (NUE) – the amount of carbon fixed per N unit – was around 40 percent greater in the CO2-enriched environments, regardless of the N level in the soil.
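NUE as defined here is simply the dry mass (carbon fixed) produced per unit of plant nitrogen. A minimal sketch of the calculation, using hypothetical masses rather than values from Cruz et al.:

```python
# Nitrogen-use efficiency: grams of dry mass produced per gram of plant N.
# The input masses below are illustrative only, not data from the study.
def nue(dry_mass_g: float, total_n_g: float) -> float:
    """Dry mass accumulated per unit of nitrogen taken up."""
    return dry_mass_g / total_n_g

ambient = nue(dry_mass_g=20.0, total_n_g=0.50)   # plant grown at 390 ppm CO2
elevated = nue(dry_mass_g=31.0, total_n_g=0.55)  # plant grown at 750 ppm CO2

# With similar N content but much greater dry mass, NUE rises sharply
# under CO2 enrichment, on the order of the ~40% the authors report.
increase_pct = 100 * (elevated / ambient - 1)
print(f"NUE increase: {increase_pct:.0f}%")
```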

Commenting on their findings, Cruz et al. write that contrary to some other studies, which have suggested that low N reduces plant responses to increased CO2 levels, they found no such decline. In fact, their data indicate that elevated CO2 “alleviated the effect of low N on dry matter accumulation in papaya,” which they surmised is at least partially explained by a larger leaf area and higher rate of photosynthesis per leaf area unit observed under elevated CO2.

In light of all of the above, Cruz et al. conclude that “an increase in the atmospheric CO2 concentration [is] beneficial for dry mass production of papaya and alleviate[s] the negative effects of N reduction in the substrate on papaya growth.” Thus, in the future, those who cultivate this fruit of angels should find an angel in the ongoing rise in atmospheric CO2.

 

Reference

Cruz, J.L., Alves, A.A.C., LeCain, D.R., Ellis, D.D. and Morgan, J.A. 2016. Interactive effects between nitrogen fertilization and elevated CO2 on growth and gas exchange of papaya seedlings. Scientia Horticulturae 202: 32-40.

Numerous media stories have reported the first fatality in a self-driving car. The most important thing to know is that the Tesla that was involved in the crash was not a self-driving car, that is, a car that “performs all safety-critical functions for the entire trip” or even a car in which “the driver can fully cede control of all safety-critical functions in certain conditions” (otherwise known as “level 4” and “level 3” cars in the National Highway Traffic Safety Administration’s classification of automated cars). 

Instead, the Tesla was equipped with an Advanced Driver Assistance System (ADAS) that performs some steering and speed functions but still requires continuous driver monitoring. In the NHTSA’s classification, it was a “level 2” car, meaning it automated “at least two primary control functions,” in this case, adaptive cruise control (controlling speeds to avoid hitting vehicles in front) and lane centering (steering within the stripes). BMW, Mercedes, and other manufacturers also offer cars with these functions, the difference being that the other cars do not allow drivers to take their hands off the wheel for more than a few seconds while the Tesla does. This may have given some Tesla drivers the impression that their car was a level 3 vehicle that could fully take over “all safety-critical functions in certain conditions.”

The next most important thing to know about the crash is that the Florida Highway Patrol’s initial accident report blamed the accident on the truck driver’s failure to yield the right-of-way to the Tesla. When making a left turn from the eastbound lanes of the highway, the truck should have yielded to the westbound Tesla. Still, it is possible if not likely that the accident would not have happened if the vehicle’s driver had been paying full attention to the road.

Mobileye, the company that made the radar system used in the Tesla, says that its system is designed only to prevent a car from rear-ending slower-moving vehicles, not to keep them from hitting vehicles laterally crossing the car’s path. Even if the sensors had detected the truck, automatic braking systems typically can come to a full stop only if the vehicle is traveling no more than 30 miles per hour faster than the object. Since the road in question is marked for 65 miles per hour, the system could not have stopped the Tesla.

Thus, the Tesla driver who was killed in the accident, Joshua Brown, probably should have been paying more attention. There are conflicting reports about whether Brown was speeding or watching a movie at the time of the accident. Neither was mentioned in the preliminary accident report, but even if true, neither would change the fact that the Tesla had the right of way over the truck.

Just two months before the accident, Duke University roboticist Missy Cummings presciently testified before Congress that auto companies were “rushing to market” before self-driving cars are ready, and “someone is going to die.” She didn’t mention Tesla by name, but since that is so far the only car company that allows American drivers to take their hands off the wheel for more than a few seconds, she may have had it in mind.

Tesla’s autopilot system relies on two forward-facing sensors: a non-stereo camera and radar. Tests by a Tesla owner have shown that the system using these sensors will not always stop a vehicle from hitting obstacles in the road. By comparison, the Mercedes and BMW systems use a stereo camera (which can more quickly detect approaching obstacles) and five radar sensors (which can detect different kinds of obstacles over a wider range). Thus, in allowing drivers to take hands off the steering wheel, Tesla may have oversold its cars’ capabilities.

The day before information about the Tesla accident became publicly known, the National Association of City Transportation Officials issued a policy statement about self-driving cars urging, among other things, that drivers not be allowed to use “partially automated vehicles” except on limited access freeways because “such vehicles have been shown to encourage unsafe driving behavior.” While this would have prevented the Tesla crash, it ignores the possibility that partial automation might have net safety benefits overall.

A few days after the accident became publicly known, NHTSA announced that traffic fatalities had increased by 7.7 percent in 2015, the largest increase in many years. As Tesla CEO Elon Musk somewhat defensively pointed out, partial automation can probably cut fatalities in half, and full automation is likely to cut them in half again. State and federal regulators should not allow one accident in an ADAS-equipped car to color their judgments about true self-driving cars that are still under development.

Trump’s call for a wall along the border reflects a common desire to control that supposedly lawless frontier.  As far as unauthorized immigration goes, the border is coming under increasing control.  Border Patrol apprehended 337,117 unauthorized immigrants in 2015, the lowest number since 1971 (Chart 1).  That number will likely rise this year but will still remain low.

Chart 1

Border Patrol Apprehensions

 

Source: Customs and Border Protection.

Like the rest of government, Border Patrol has grown considerably over the decades despite the fall in apprehensions.  In 2015 there were just over 20,000 Border Patrol agents, double the number in 2002 and 6.3 times as many as were employed in 1986 (Chart 2). 

Chart 2

Border Patrol Officers

 

Source: Customs and Border Protection.

The increase in the size of Border Patrol has likely decreased unauthorized immigration, although the precise amount is up for debate (read this excellent report for more information).  On the opposite side, there is consistent evidence that border security does not affect the number of illegal entries but can dissuade migrants from leaving once they make it in.  Although the effect of Border Patrol and security on illegal entries is not entirely clear, it is obvious that the average Border Patrol officer is apprehending fewer unauthorized immigrants than at any time in decades with the exception of 2011 (Chart 3).
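The per-agent figure behind Chart 3 is just total apprehensions divided by the size of the force; using the 2015 numbers quoted above:

```python
# Apprehensions per Border Patrol agent, FY2015 figures cited in the text.
apprehensions = 337_117   # total Border Patrol apprehensions in 2015
agents = 20_000           # approximate; the text says "just over 20,000"

per_agent = apprehensions / agents
print(f"{per_agent:.1f} apprehensions per agent")  # roughly 17 per agent
```

With a force 6.3 times larger than in 1986 and apprehensions at a 44-year low, the per-agent workload is a small fraction of what it once was.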

Chart 3

Apprehensions Per Border Patrol Agent

Source: Customs and Border Protection.

There is already too much corruption in Customs and Border Protection, exacerbated by rapid expansions in the size of its force.  New hiring binges would likely worsen those struggles with corruption.  Agency corruption and a low period in unlawful immigration are superb arguments against expanding the Border Patrol, and perhaps even for shrinking it back to a reasonable size.
