Feed aggregator

Today the Hamilton County, Ohio prosecutor’s office released body camera footage showing University of Cincinnati police officer Ray Tensing shoot and kill 43-year-old Samuel DuBose during a routine traffic stop on July 19th. Tensing will face murder and voluntary manslaughter charges. Speaking about the killing, Hamilton County prosecutor Joe Deters used strong and condemning language, calling the killing “senseless” and “asinine.” He also said that the body camera footage of the killing was “invaluable” and that without it, he would probably have believed Tensing’s erroneous account of the incident.  

DuBose’s death demonstrates once again that body cameras are not a police misconduct panacea. Tensing, who knew his body camera was on, shot an unarmed man in the head and then lied about being dragged down the street. Nonetheless, the tragic incident does provide an example of how useful body camera footage can be to officials investigating allegations of police misconduct.

Ahead of the release of the video, Cincinnati Police Chief Jeffrey Blackwell said that the video “is not good.” If convicted, Tensing faces life in prison.

I’ve seen many police body camera videos while researching and writing about the technology, and the video of DuBose’s death is certainly among the most disturbing that I have seen.

Watch the footage below.

Warning: this footage contains graphic violence.

Technology that highlights incidents of police misconduct ought to be welcomed by advocates of accountability and transparency in law enforcement. As Deters himself said in today’s press conference, the body camera led to Tensing’s murder indictment. 

But in order for police misconduct to be adequately addressed there need to be significant reforms of police practices and training, specifically related to the use of force. Indeed, Deters said in the press conference today that Tensing should never have been a police officer. A man who quickly resorts to shooting an unthreatening man in the head during a stop prompted by a missing license plate should not be given a gun and a badge. Yet, if it weren’t for body camera footage, Tensing would still be employed as a University of Cincinnati police officer rather than being behind bars.

The use of body cameras does raise a host of serious privacy concerns that should not be taken lightly. However, as DuBose’s killing has shown, the cameras can be instrumental in investigating police misconduct and getting dangerous police officers off the streets.

I hope I’m wrong to see it as racism returning to the mainstream. Indeed, I hope that the long, agonizingly slow erosion of racial fixations from our society will continue. But I found it interesting to see a Washington Post blog post explaining a recently minted epithet—“cuckservative”—chiefly with reference to the president of a “white nationalist” organization.

Apparently, we have such things in the United States, credible enough to get online ink from a major newspaper. I’m not against reporter Dave Weigel’s use of the source. I take it as confirmation that some of our ugliest politicians have even uglier supporters.

I don’t think it’s likely, but one can imagine a situation where these currents join a worsening economic situation to sow public distemper that gives actual political power to racists. Were some growing minority of political leaders to gain by advocating for ethnic or racial policies, do not count on the “good ones” standing against them. Public choice economics teaches that politicians will prioritize election over justice, morality, or any other high-minded concept.

It is poor civic hygiene to install technologies that could someday facilitate a police state. That includes a national ID system. I’ve had little success, frankly, driving public awareness that the U.S. national ID program, REAL ID, includes tracking of race and ethnicity that could be used to single out minorities. But that’s yet another reason to oppose it.

If the future sees no U.S. national ID materialize, and no political currents to exploit such a system for base injustice and tragedy, some may credit the favorable winds of history. Others may credit the Cato Institute and its fans. We’re working to prevent power from accumulating where it can be used for evil.

Speaking of myths about U.S. banking, another that tops my list is the myth that the Federal Reserve, or some sort of central-bank-type arrangement, was the best conceivable solution to the ills of the pre-1914 U.S. monetary system.

I encountered that myth most recently in reading America’s Bank, Roger Lowenstein’s forthcoming book on the Fed’s origins, which I’m reviewing for Barron’s. Lowenstein’s book is well-researched and entertainingly written. But it also suffers from an all-too-common drawback: Lowenstein takes for granted that those who favored having a U.S. central bank of some kind (whatever they called it and however they chose to disguise it) were well-informed and right-thinking, whereas those who didn’t were either ignorant hicks or pawns of special interests. He has, in other words, little patience with history’s losers, whether they be people or ideas. Like other “Whig” histories, his history of the Fed treats the past as an “inexorable march of progress towards enlightenment.”

Don’t get me wrong: I’m no Tory, and I certainly don’t think that the pre-Fed U.S. monetary system was fine and dandy. I know about the panics of 1884, 1893, and 1907. I know how specie tended to pile up in New York after every harvest season, and that by the time it got there not one but three banks were likely to reckon it, or make claims to it, as part of their reserves. I also know how, when the harvest season returned, all those banks were likely to try to get their hands on the same gold, and how this made for tight money, if it didn’t spark a full-scale panic. Finally, I know that one way to avoid such panics, on paper at least, was to establish a central bank, or “federal” equivalent, capable of supplying banks with emergency cash when they needed it.

Yet I still think that the Fed was a lousy idea. How come? My reason isn’t simply that the Fed turned out to be quite incapable of preventing financial crises, though that’s certainly true. It’s that there was a much better way of fixing the pre-Fed system. That alternative was perfectly obvious to many who struggled to reform the U.S. system in the years prior to the Fed’s establishment. It could hardly have been otherwise, since it was then almost literally staring them in the face. But it should be equally obvious even today to anyone who delves into the underlying causes of the infirmities of the pre-Fed National Currency system.

What were these causes? Essentially there were two. First, ever since the Civil War, state banks had been prohibited from issuing circulating notes, while National banks could issue notes only to the extent that they backed them with specified U.S. government bonds. Those bonds were getting harder to come by (by the 1890s National banks had already acquired almost all of them). What’s more, it didn’t pay for National banks to acquire the costly securities just to meet harvest-time currency needs, for that would mean incurring very high opportunity costs for the sake of having stacks of notes sitting idle in their vaults for most of the year.

The other, notorious cause of trouble was the fact that most U.S. banks, whether state or National, didn’t have branch networks of any kind. Instead, ours was for the most part a system of “unit” banks. This was so mainly owing to laws that prohibited banks from branching, even within their own states. But even had branching been legal, the restrictions on banks’ ability to issue notes would have made it less economical by substantially raising the cost of equipping bank branches with inventories of till money.[1]

That unit banking limited U.S. banks’ ability to diversify their assets and liabilities, and thereby made the U.S. banking system much more fragile than it might have been, is (or ought to be) well-appreciated. Unit banking also encouraged banks to deposit their idle reserves with “reserve city” correspondents, who in turn sent their own surplus cash to New York. The National Banking Acts actually encouraged this practice by letting correspondent balances satisfy a portion of banks’ legal reserve requirements. The set-up kept money gainfully employed when it wasn’t needed in the countryside; but it also made for a mad scramble when cash was needed back home.

Far less well appreciated is how unit banking also contributed to the notorious “inelasticity” of the pre-Fed U.S. currency stock. Before I explain why, I’d better first lay another myth to rest: the myth that complaints concerning the “inelasticity” of the pre-Fed currency stock were a hobbyhorse of persons who subscribed to the “real-bills” doctrine — that is, the view that the currency supply could and should wax and wane in concert with the total quantity of “real bills,” or short-term commercial paper, presented to banks for discounting.

It’s true that many persons who complained about the “inelastic” nature of the U.S. currency system, including many who were instrumental in designing (and later in managing) the Federal Reserve System, also subscribed to the real bills doctrine, and that that doctrine is mostly baloney. But that doesn’t mean that the alleged inelasticity of the U.S. currency stock was a mere bugbear. The real demand for currency really did vary considerably, especially by rising a lot — sometimes by as much as 50 percent — during the harvest season, when migrant workers had to be paid to “move” the crops. And U.S. banks really were unprepared to meet such increases in demand by issuing more notes, even if doing so was only a matter of swapping note liabilities for deposit liabilities, owing to the legal restrictions to which I’ve drawn attention. In short, you don’t have to have drunk the real-bills Kool-Aid to agree that the pre-Fed U.S. currency system wasn’t capable of meeting the “needs of trade.”

How, then, did unit banking contribute to the problem of an inelastic currency stock? It did so by considerably raising the cost banks had to incur to redeem rival banks’ notes, and thereby limiting the extent to which unwanted banknotes made it back to their issuers. In a branch-banking system, note exchange and redemption are mostly a local, and therefore cheap, affair; add a few regional clearinghouses to handle items not settled locally, and you’ve got all that’s needed to see to it that unwanted currency is rapidly removed from circulation.

In the U.S., on the other hand, banks had to bear substantial costs of sorting and shipping notes to their sources, or to distant clearinghouses, which costs were made all the greater by the sheer number of National banks — tens of thousands, eventually — and the resulting lack of economies of scale. These factors would normally have caused National banks to accept the notes of distant rivals at discounts sufficient to cover anticipated redemption costs, as antebellum state banks had been in the habit of doing. The authors of the 1863 and 1864 National Banking Acts were, however, determined to give the nation a “uniform” currency. Consequently they stipulated that every National bank had to accept the notes of all other National banks at par. That got rid of note discounts, sure enough. But it also meant that National banknotes would no longer be actively and systematically redeemed.[2] As I like to say, any fool can fix most any problem — so long as he ignores the others.

If my dog is limping, and I discover that she’s got a pebble wedged between her paw pads, I don’t think of calling for a team of stretcher bearers: I just pull the pebble out. In the same way, any reasonable person, knowing the underlying causes of the infirmities of the pre-Fed U.S. currency system, would first consider removing those causes. And that was precisely what many advocates of currency reform tried to do before any dared to suggest anything like a U.S. central bank. That is, they tried to get bills passed — there must have been at least a dozen of them — calling for some combination of (1) repeal of the bond-backing requirement for National banknotes; (2) allowing National banks to branch; and (3) restoring state banks’ right to issue currency. The restrictions on note issue had, after all, been put into effect for the sake of helping the Union government fund the Civil War — a purpose now long obsolete. The restrictions on branching, on the other hand, were widely understood to be another deleterious consequence of the unfortunate decision to model the National Banking Acts after earlier state “free banking” laws.

Might deregulation alone, as was contemplated in such “asset currency” reform proposals (so-called because they would have allowed banks to issue notes backed by general assets, rather than by specific securities), really have given the U.S. a perfectly sound and stable currency and banking system? Yes. How can I be so confident? Because it would have given the U.S. a currency system like Canada’s. And Canada’s system was, in fact, famously sound and famously stable.[3]

“Don’t mention the war!” is what Basil Fawlty tells his staff, out of concern for the sensibilities of his German guests. (Basil himself nevertheless can’t help referring to it again and again.) “Don’t mention Canada!” is what a Whig historian of the Fed must tell himself, assuming he knows what went on there, lest he should broach a topic that would muddle up his otherwise tidy epic. For to consider Canada is to realize that there was, in fact, no need at all for the elaborate proposals, hearings, secret meetings, and political wheeling and dealing that ultimately gave shape to the Federal Reserve Act, if all that was desired was to equip the United States with a currency system worthy of a nation already on its way to becoming an economic powerhouse. Like Dorothy’s ruby slippers, the solution to the United States’ currency ills had been at hand, or at foot, all along. Legislators had only to repeat to themselves, “There’s no place like Canada,” while taking steps that would tap obstructive legal restrictions out of the banking system.

Of course that didn’t happen, thanks mainly to a combination of banking-industry opposition to branch banking and populist opposition — spearheaded by William Jennings Bryan — to any sort of non-government currency. “Asset currency” was, if you like, “politically impossible.”

So reformers at length turned to the alternative of a central bank. And how was that supposed to work? Though buckets of ink have been spilled for the sake of offering all sorts of elaborate explanations of the “science” behind the Federal Reserve, the essence of that solution, once considered against the backdrop of the “asset currency” alternative, couldn’t have been simpler. It boils down to this: instead of allowing already existing U.S. banks to branch and to issue notes backed by assets other than government bonds, the government would leave the old restrictions in place, while setting up a dozen new banks that would be uniquely exempt from those restrictions. If National banks (or state banks, if they chose to join the new system) wanted currency, but lacked the necessary bonds, they still couldn’t issue more of their own notes no matter what other assets they possessed. But they might now take some of those other assets to the Fed, to exchange for Federal Reserve Notes. The Fed was, in short, a sort of stretcher corps for banks lamed by earlier laws.

To an extent, the more centralized reform resembled an asset currency reform one step removed. But there were two crucial differences. First, by setting the “discount rate” at which they would exchange notes for commercial paper and other assets, the Federal Reserve Banks could either encourage or discourage other banks from acquiring their notes. Second, because member banks could count not just gold and greenbacks but Fed liabilities as reserves, the Fed’s discount rates influenced the overall availability of bank reserves and, hence, of money and credit. These differences, far from having been innocuous, were, as we now realize, portentous.

Still, the Fed did have one incontestable advantage over previous reform proposals. For it alone was politically possible. It alone was a winning solution.

But the fact that the Fed won in 1913 doesn’t mean that other, rejected options aren’t worth recalling. Still less does it warrant treating the Fed as sacrosanct. History isn’t finished. Just a few years before the Federal Reserve Act was passed, most people still believed that Andrew Jackson had put paid once and for all to the idea of a U.S. central bank. Today most people still consider the Federal Reserve Act the last word in scientific monetary control. As for what most people will think tomorrow, well, that’s partly up to us, isn’t it?

___________________________
[1] Although they typically appreciate the debilitating consequences of unit banking, many U.S. economists and economic historians appear unaware of the crucial role that freedom of note issue played historically in facilitating branch banking. That banking systems involving relatively few restrictions on banks’ ability to issue banknotes, like those of Scotland before 1845 and Canada until 1935, also had extremely well-developed branch networks, was no coincidence.

[2] On the limited redemption of National banknotes and attempts to address it see Selgin and White, “Monetary Reform and the Redemption of National Bank Notes, 1863-1913.” Business History Review 68 (2) (Summer 1994).

[3] For a very good review of the features and performance of the Canadian system in its heyday, see R.M. Breckenridge, “The Canadian Banking System, 1817-1890,” Publications of the American Economic Association, v. X (1895), pp. 1-476. Not long ago, when I spoke favorably of Canada’s system at a gathering of economic historians, one asked afterwards, rather superciliously, whether I realized how large Canada’s economy had been back around 1913. Apparently my interrogator thought that Canada’s small size made its success irrelevant. I can’t see why. Nor, evidently, could the many persons who proposed and lobbied for various asset currency proposals over the course of a decade or so.

[Cross-posted from Alt-M.org]

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Perhaps no other climatic variable receives more attention in the debate over CO2-induced global warming than temperature. Its forecast change over time in response to rising atmospheric CO2 concentrations is the typical measure by which climate models are compared. It is also the standard by which the climate model projections tend to be judged; right or wrong, the correctness of global warming theory is most often adjudicated by comparing model projections of temperature against real-world measurements. And in such comparisons, it is critical to have a proper baseline of good data; but that is easier acknowledged than accomplished, as multiple problems and potential inaccuracies have been identified in even the best of temperature datasets.

One particular issue in this regard is the urban heat island effect, a phenomenon by which urban structures artificially warm background air temperatures above what they would otherwise be in a non-urbanized environment. The urban influence on a given station’s temperature record can be quite profound. In large cities, urban-induced heating can be as great as 10°C, as in Tokyo, making it all the more difficult to detect and discern a CO2-induced global warming signal in the temperature record, especially since the putative warming of non-urbanized areas of the planet over the past century is believed to be less than 1°C. Yet, because nearly all long-term temperature records have been obtained from sensors initially located in towns and cities that have experienced significant growth over the past century, it is extremely important that urbanization-induced warming – which can be a full order of magnitude greater than the background trend being sought – be removed from the original temperature records when attempting to accurately assess the true warming (or cooling!) of the natural non-urban environment. A new study by Founda et al. (2015) suggests this may not be so simple or straightforward a task.

Working with temperature records in and around the metropolitan area of Athens, Greece, Founda et al. set out to examine the interdecadal variability of the urban heat island (UHI) effect, since “few studies focus on the temporal variability of UHI intensity over long periods.” Yet, as they note, “knowledge of the temporal variability and trends of UHI intensity is very important in climate change studies, since [the] urban effect has an additive effect on long term air temperature trends.”

To complete their objective, the four Greek researchers compared long-term air temperature data from two urban, two suburban, and two rural stations over the period 1970-2004. The UHI was calculated as the difference between the urban and suburban (or rural) stations for monthly, seasonal, and annual means of air temperature (maximum, minimum, and mean).
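The metric the researchers describe is straightforward to express: for each averaging period, subtract the suburban or rural station’s mean temperature from the urban station’s mean. A minimal sketch of that calculation follows; the station values are invented for illustration and are not the paper’s data.

```python
# Sketch of the UHI-intensity calculation described above: the urban-minus-rural
# (or urban-minus-suburban) difference in mean air temperature for each period.
# All numbers below are invented for illustration, not taken from Founda et al.

def uhi_intensity(urban_means, rural_means):
    """Per-period UHI intensity (urban minus rural), in degrees C."""
    return [u - r for u, r in zip(urban_means, rural_means)]

# Hypothetical annual mean temperatures (deg C) at an urban and a rural station
urban = [18.2, 18.4, 18.9, 19.3]
rural = [17.5, 17.6, 17.8, 17.9]

print(uhi_intensity(urban, rural))  # a widening difference indicates a growing UHI
```

Founda et al. computed this same difference for monthly, seasonal, and annual means, and separately for maximum, minimum, and mean temperatures; the paper’s central finding is that the difference trended upward over 1970-2004 rather than holding constant.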

Among their several findings, the authors report notable differences in the UHI’s intensity across the seasons and depending on whether it is calculated from maximum, minimum, or mean temperatures. Of significance to the discussion at hand, however, the authors note that “the warming rate of the air temperature in Athens is particularly large during [the] last decades,” such that the “difference of the annual mean air temperature between urban and rural stations exhibited a progressively statistically significant increase over the studied period.” Indeed, as shown in the figure below for (a) the National Observatory of Athens (NOA) in the center of Athens paired with the rural station of Tanagra (TAN), approximately 50 km north of the city, and (b) the coastal urban station of Hellinikon (HEL) paired again with Tanagra, the anthropogenic influence of urbanization on temperatures at these two urban stations is growing in magnitude over time, such that “the mean values of UHI magnitude [calculated across the entire record] are not quite representative of the more recent period.”

 

Interdecadal variation and annual trends of the Athens, Greece UHI calculated between two urban stations and one rural station using mean annual temperatures over the period 1970-2004. The two urban stations were the National Observatory of Athens (NOA) in the center of Athens and Hellinikon (HEL), located near the urbanized coast. The rural station, Tanagra (TAN), was located approximately 50 km north of the city. Adapted from Founda et al. (2015).

Such findings as these are of significant relevance in climate change studies, for they clearly indicate the UHI influence on a temperature record is not static. It changes over time and is likely inducing an ever-increasing warming bias on the temperature record, a bias that will only grow as the world’s population continues to urbanize in the years and decades ahead. Consequently, unless researchers routinely identify and remove this growing UHI influence from the various temperature databases used in global change studies, there will likely be a progressive overestimation of the influence of the radiative effects of rising CO2 on the temperature record.

 

Reference

Founda, D., et al. 2015. Interdecadal variations and trends of the Urban Heat Island in Athens (Greece) and its response to heat waves. Atmospheric Research, 161-162, 1-13.

One of the themes in my new study, “Why the Federal Government Fails,” is that the federal government has grown too large to manage with any reasonable level of efficiency and competence. Even if politicians worked diligently to advance the general interest, and even if federal bureaucracies focused on delivering quality services, the vast size of the government would still generate failure after failure.

Here’s an astounding fact: the federal government’s 2014 budget of $3.5 trillion was almost 100 times larger than the average state government budget of $36 billion, as shown in the figure. The largest state budget was California’s, at $230 billion, but even that flood of spending was only one-fifteenth the magnitude of the federal spending tsunami. Total state spending in 2014 was $1.8 trillion, a figure that includes both general-fund and nongeneral-fund spending.
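Those comparisons are simple ratios, and they can be checked directly from the figures quoted above (a quick illustrative calculation, not part of the study):

```python
# Budget figures quoted above, in billions of dollars
federal = 3500      # federal government's 2014 budget ($3.5 trillion)
avg_state = 36      # average state government budget
california = 230    # largest state budget (California)

print(round(federal / avg_state))   # prints 97 -- "almost 100 times larger"
print(round(federal / california))  # prints 15 -- i.e., one-fifteenth
```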

The federal government is not just large in size, but also sprawling in scope. In addition to handling core functions such as national defense, the government runs more than 2,300 subsidy and benefit programs, which is double the number in the 1980s. The federal government has many more employees, programs, contractors, and subsidy recipients to keep track of than any state government.

So even if federal officials spent their time diligently scrutinizing programs to prune waste, the job would simply be too large for them. With much of their time spent fundraising, meeting with lobbyists, and giving speeches, members of Congress have little time left to study policy, and they routinely miss all or most of their committee hearings. Congress grabs for itself vast powers over nonfederal activities, but then members do not have the time to see that their interventions actually work.

A really sad thing about American democracy is that we are squandering a huge built-in advantage that could greatly improve the nation’s governance. I’m talking about federalism, or allowing local and state governments to handle the great majority of governmental activities. Instead, politicians of both parties, and at all levels, have done their best over the past century to crush federalism and centralize power in Washington.

They have done so for no sound policy reason: centralization benefits politicians, not citizens. Consider that Congress has created hundreds of new federal programs to supposedly help the public since the 1960s. Yet, ironically, polling shows that the public has not grown fonder of the federal government. Quite the opposite, polling shows that Americans have become more alienated from the federal government, and more disgusted by its corruption and dysfunction.

To learn more about the sad realities of our government, see Why the Federal Government Fails.

In the midst of bitter bailout negotiations between Greece and Europe, warnings proliferated of a possible Greek Fifth Column. The European Union and even NATO would collapse should Athens turn toward Russia. It is one of the stranger paranoid fantasies driving U.S. foreign policy.

For five years Athens has been arguing with its European neighbors over debts and reform. The issue doesn’t much concern the U.S. A European economic crisis would be bad for America, but Grexit is not likely to set off such a cataclysm.

Nevertheless, some analysts speculated that Athens might fall out of the European Union and NATO as well as the Eurozone, resulting in geopolitical catastrophe. Thus, the U.S. should insist that Europe pay off Greece. Despite an apparent bailout agreement, another crisis seems inevitable, in which case the specter of a Greek Trojan Horse likely will reemerge.

This fear betrays an overactive imagination. “You do not want Europe to have to deal with a Greece that is a member of NATO but which all of a sudden hates the West and is cozying up to Russia,” warned Sebastian Mallaby of the Council on Foreign Relations.

Worse, Athens might leave the transatlantic alliance. Warned Robert D. Kaplan of the Center for a New American Security: “Europe will be increasingly vulnerable to Russian aggression if its links to Greece are substantially loosened.”

It sounds like the Cold War redux.

In fact, this all appears to be a grand bluff. To start, Russia poses little threat to Europe. President Vladimir Putin is an unpleasant authoritarian, but he is no Hitler or Stalin. While Moscow has ignored human rights and international law, so far its aggressive interventions have reflected traditional Russian security concerns. Nothing suggests that Putin has lost his mind and hopes to rule over territory filled with Europeans.

Some worry about America’s access to naval and air bases. They are useful, not vital. After all, the Med is essentially a NATO lake and the Libya intervention was folly.

Bulgarian President Rosen Plevneliev raised another issue, complaining that “Russia uses every opportunity to divide and weaken the European Union.” Beyond a couple of friendly meetings, however, little has come from the supposed Athens-Moscow axis.

“There is fundamental value to Europe in having Greece as part of its orbit,” argued former NATO Supreme Allied Commander James Stavridis, but the reverse also is true. Irrespective of the debt negotiations and Eurozone membership, Greece will continue to have much at stake with Europe.

Despite past anti-American feeling, Greece has remained with the West. Moreover, the Tsipras government has not obstructed continuation of sanctions against Russia. In fact, Athens has consistently affirmed its participation in Europe.

Defense Minister Panos Kammenos, head of Syriza’s small coalition partner, threatened: “If Europe leaves us in the crisis, we will flood it with immigrants, and it will be even worse for Berlin if in that wave of millions of economic immigrants there will be some jihadists.” However, the Syriza government would not want to open its border to terrorists.

Athens has criticized sanctions against Russia. But Greece is not alone in taking this position. Obviously the penalties have failed to reverse Russian policy in Ukraine. Best would be to use possible sanctions repeal to negotiate an admittedly imperfect compromise deal. Such an approach would be entirely consistent with Greece remaining part of the West.

The Greek saga is far from over. The paranoid panic that Greece’s economic problems could destroy Europe’s and America’s geopolitical standing should generate a mix of scorn and laughter.

As I point out in Forbes online, “Washington should calm down, leaving the Greeks and other Europeans alone to solve their problems. Greece subsidized or not, in the Eurozone or out, really isn’t America’s business.”

This is from the New York Times editorial board: 

More than 50 countries agreed on Friday to eliminate tariffs on a wide range of technology goods like medical devices, navigation equipment and advanced semiconductors in a trade agreement that should benefit American manufacturers, consumers and the global economy.

Signatories to the Information Technology Agreement, which covers 201 product categories, include the United States, the European Union, China, South Korea and other members of the World Trade Organization. International trade in those goods totals about $1.3 trillion a year, or about 7 percent of all trade. 

I worry that I’m speaking too soon, but so far at least, I have not seen any of the usual trade critics complain about this deal. With trade negotiations such as the Trans-Pacific Partnership and the Transatlantic Trade and Investment Partnership, there are lots of groups who are fired up about protesting every stage of the process. But with this deal to eliminate tariffs on tech goods, these same folks have not had much to say. Which perhaps suggests a way forward for negotiating future trade deals: focus on lowering tariffs and other forms of pure liberalization, and stay away from “governance” issues such as intellectual property, labor, and the environment. The benefits are greater with this approach, and the controversy appears to be lower.

Yesterday, the Senate passed a six-year transportation bill that increases spending on highways and transit but only provides three years of funding for that increase. As the Washington Post commented, “only by Washington’s low standards could anyone confuse the Senate’s plan with ‘good government.’”

Meanwhile, House majority leader Kevin McCarthy says the House will ignore the Senate bill in favor of its own five-month extension to the existing transportation law. Since the existing law expires at the end of this week, the two houses are playing a game of chicken to see which one will swerve first and approve the other's bill.

As I noted a couple of weeks ago, the source of the gridlock is Congress’ decision ten years ago to change the Highway Trust Fund from a pay-as-you-go system to one reliant on deficit spending. This led to three factions: one, mostly liberal Democrats, wants to end deficits by raising the gas tax; a second, mostly conservative Republicans, wants to end deficits by reducing spending; and the third, which includes people from both sides of the aisle, wants to keep spending without raising gas taxes.

This third group is no doubt the largest because it is politically the easiest position to take, and is the one responsible for the Senate bill. Gas taxes and other federal highway user fees bring in about $40 billion a year, while Congress is currently spending about $52 billion a year and wants to increase it by at least the rate of inflation. To make up the difference, the Senate bill includes a hodge-podge of ideas such as increasing customs fees and selling oil from the strategic petroleum reserve. As the Post noted, the one thing these sources of funds all have in common is that “none is related to surface transportation.”
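The funding arithmetic here is simple enough to sketch. Here is a rough back-of-envelope calculation in Python using only the round figures above; the 2 percent inflation assumption is mine, for illustration, not a number from the bill:

```python
# Back-of-envelope sketch of the Highway Trust Fund gap, using the round
# figures cited above: ~$40 billion/year in user-fee revenue versus
# ~$52 billion/year in spending. The 2% annual spending growth is an
# illustrative assumption standing in for "at least the rate of inflation."
revenue = 40.0    # $ billions/year from gas taxes and other user fees
spending = 52.0   # $ billions/year of current spending
inflation = 0.02  # assumed annual growth in spending

# Accumulate the shortfall over the six-year life of the bill
gap_six_years = 0.0
for year in range(6):
    gap_six_years += spending * (1 + inflation) ** year - revenue

print(f"First-year gap: ${spending - revenue:.0f}B")
print(f"Cumulative six-year gap: ${gap_six_years:.0f}B")
```

Even under this simple assumption, the cumulative six-year shortfall runs to roughly $88 billion, which is why a bill that patches together only about three years of offsets leaves Congress looking for tens of billions more down the road.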

According to the Congressional Budget Office’s analysis, these funding schemes will only be enough to last through 2018, after which Congress will have to find another $51 billion to keep the spending going for another three years. That shortfall alone is probably what killed the bill in the House, though it would be nice to think that House members were also wary of a 1,000-plus-page bill sprung on them at the last minute (scroll down to “SA 2266” or search for “DRIVE Act”).

Naturally, the Senate bill does nothing to fix any of the perverse incentives found in the current law, such as the fund that encourages transit agencies to choose the most expensive, rather than the most effective, transit solution in any corridor. Instead, it rewards transit agencies that have neglected their infrastructure by creating a new “state of good repair fund” to help restore that infrastructure, effectively telling the agencies that they can continue spending on new transit lines they can’t afford to maintain and Congress will bail them out.

These games won’t end until Congress does what is right rather than what is easy by returning to a true, pay-as-you-go system. While I agree with fiscal conservatives who think that the federal government doesn’t need to be involved in most transportation issues in the first place, as long as it is involved, the deficit spending is doing more harm than good by making state and local transportation agencies increasingly reliant on the federal government rather than on user fees. Opponents of the current system need to do more than support immediate devolution; they need to find a strategic path from the current system to one that is more responsive to transportation users. 

Calls are mounting in Congress (and among some influential opinion groups) for escalating Washington’s military intervention against ISIS in Iraq and Syria and for possible military action against Iran if the new nuclear agreement with that country falls apart.  Caution lights should be flashing about both the extent and durability of such sentiment for military action.  As I note in a recent article in the National Interest Online, this country has an unfortunate history of launching ill-considered armed crusades, often initially with enthusiastic public support.  But that support has a tendency to evaporate and turn to bitter recriminations unless certain conditions are met.  Policymakers need to appreciate that history as they consider intensifying U.S. involvement in the Middle East’s turbulent affairs.

Because most Americans believe that the United States embodies the values of individual liberty, human rights, and government integrity, a foreign policy that seems to ignore or violate those values is almost certain to lose the public’s allegiance sooner or later. That is what happened with such missions as the Vietnam War, the Iraq War and, more recently, the counterinsurgency war in Afghanistan.  It is not merely that the ventures failed to achieve quick, decisive results, although that aspect clearly played a role.  It was also that the United States was increasingly seen as expending blood and treasure on behalf of odious clients and dubious causes that had little or nothing to do with the republic’s vital interests.  A disillusioned public turned against those missions, and that development created or intensified bitter domestic divisions.

To sustain adequate public support for military ventures, the objective must be widely perceived as both worthy and attainable.  Without those features, public support for a policy either proves insufficient from the outset or soon erodes, and either development is fatal in a democratic political system.

Preserving public support requires officials to make an honest assessment of the issues at stake.  Too often, both during the Cold War and the post–Cold War eras, U.S. policymakers have hyped threats to American interests.  The alleged dangers posed by such adversaries as North Vietnam, Serbia, Saddam Hussein, the Taliban, and Syrian dictator Bashar al-Assad bordered on being ludicrous.  At times, U.S. officials appear to have deliberately engaged in distortions to gin up public support for purely elective wars.  On other occasions, officials seem to have succumbed to their own propaganda.  In either case, public support dissipates rapidly when evidence mounts that the supposed security threat to America is exaggerated.

That troubling history should reinforce the need for caution as U.S. leaders consider new military interventions, especially in the Middle East.  None of the proposed missions is likely to produce quick, decisive results—much less results with modest financial outlays and minimal casualties.  Moreover, escalating America’s involvement in the region’s myriad troubles puts the United States in a close de facto partnership with Saudi Arabia and its Gulf allies—some of the most corrupt, brutal governments on the planet.  Publics in the Middle East and around the world are watching, and the potential for unpleasant blowback is extremely high.  And as we saw with the wars in Vietnam, Iraq, and Afghanistan, the reaction of the American people to associations with sleazy foreign clients can become one of profound revulsion.  The conditions are in place for new foreign-policy debacles, if U.S. officials have not learned the appropriate historical lessons.

BEIJING—China’s capital looks like an American big city. Tall office buildings. Large shopping malls. Squat government offices. Horrid traffic jams.

The casual summer uniform is the same: shorts, athletic shoes, skirts, t-shirts, sandals, blouses. Even an occasional baseball cap.

It is a country which the Communist revolutionaries who ruled only four decades ago would not recognize. True believers still exist. One spoke to me reverently of Mao’s rise to power and service to the Chinese people. However, she is the exception, at least among China’s younger professionals.

Indeed, younger educated Chinese could not be further from the Communist cadres once determined to create a revolution. They are socially active, desire the newest technologies, and worry about going to good schools and getting good jobs. Cynicism about corrupt and unelected leaders is pervasive.

If there is one common belief, it is hostility toward government Internet controls. Students have complained to me in class about their inability to get to many websites and readily shared virtual private networks to circumvent state barriers.

But such opinions are not held only by the young. A high school student told me that his father urged him to study in America because of Beijing’s restrictions on freedom.

While Chinese from all walks of life are comfortable telling foreigners what they think, sharing those beliefs with other Chinese is problematic. The media, of course, is closely controlled. Internet sites are blocked, deleted, and revamped. Unofficial intimidation, legal restrictions, and even prison time await those who criticize Communist officialdom on social media and blogs.

But increasingly globalized Chinese are aware of their online disadvantage compared to their peers in the West. Google, YouTube, and Twitter are verboten. Today Bloomberg and the New York Times are beyond reach.

Last week, as BBC television began to detail official abuses, my TV went black. A couple of minutes later the BBC was back, after the China report had finished.

While internet and media restrictions have not prevented rapid economic growth, barring the PRC's best and brightest from a world of information is likely to dampen innovation and entrepreneurship. Moreover, those denied their full freedoms are more likely to leave home. Many of China's wealthiest citizens have been departing an authoritarian system unbounded by the rule of law.

Repression also stultifies China’s political evolution to a more mature and stable political order. Democracy provides an important safety valve for popular dissent.

The Chinese Communist Party’s control may not be as firm as often presumed. The oppressive establishment which most Chinese have faced for most of their lives is Communist.

Indeed, for many if not most party members, Communism is a means of personal advancement, even enrichment. President Xi Jinping’s anti-corruption campaign is popular, but is widely seen as politically motivated.

Moreover, Xi has abrogated the well-understood “deal” of the last four decades, that rulers can retire and be immune from future prosecution. Will incumbents so readily yield power in the future?

Perhaps even more threatening for the CCP is the potential for an economic slowdown and consequent political unrest. Already protests are common against local governments, which tend to be ostentatiously rapacious. What if that antagonism shifts against the center?

A poorer PRC means a poorer world: China is a major supplier and increasingly important source of global demand. A politically unstable Beijing would have unpredictable effects on its neighbors.

As I wrote for Forbes online: “Since Mao’s death in 1976, the PRC has changed dramatically—and dramatically for the better. But this second revolution has stalled. Economic liberalization remains incomplete. Political reform never started. Individual liberty has regressed.”

The Chinese people deserve to be free. The Chinese nation would benefit from their freedom. The rest of the world would gain from a freer Chinese nation. Everyone desiring a peaceful and prosperous 21st century should hope for the successful conclusion of China’s second revolution.

For almost 50 years, Dr. Ronald Hines has been a licensed veterinarian in Texas. After a spinal cord injury prevented him from continuing to provide in-person services, Dr. Hines started a website to provide advice on pet care. He never tried to be an animal’s primary veterinarian—he noted a disclaimer to that effect—and did not prescribe medication. 

After Dr. Hines had practiced this way for a decade without any complaints or problems, the Texas State Board of Veterinary Medical Examiners charged him with violating state law by failing to be physically present at the location of the pets before providing veterinary services. The U.S. Court of Appeals for the Fifth Circuit upheld this restriction on Dr. Hines's speech because, according to the court, any speech by a professional within the scope of his profession directed toward an individual's circumstances isn't protected by the First Amendment.

Dr. Hines has asked the Supreme Court to review the case and Cato has filed a brief supporting that petition, joined by the Mackinac Center for Public Policy. 

The Fifth Circuit erroneously construed the Texas regulations as governing nonspeech conduct that only incidentally impacted speech. But everything that Dr. Hines did was speech; there was no nonspeech conduct to regulate. Even if the regulations were content-neutral restrictions that incidentally burdened speech, they should have been reviewed under heightened scrutiny—meaning that the government would need to show a strong justification for its enforcement action. But the restrictions at issue here are explicitly content-based: Dr. Hines could've talked about any topic he wanted, except the topic of veterinary care.

Under the lower court's logic, the following people would be unknowingly violating Texas law: Dr. Sanjay Gupta, who provides health information online; Loveline Radio, which provides relationship and drug-addiction advice; The Mutual Fund Show, which provides financial advice; and the hosts of radio talk shows on pet care. All these people, and many others, would be expected to know and follow the detailed regulations of every single state.

The physical examination requirement doesn’t even make sense as a matter of basic veterinary practice. It only requires that vets visit a location, not that they actually examine a particular animal. It prevents a vet’s colleague from relying on notes and records when the primary-care vet is unavailable. Dr. Hines couldn’t even tell a client that her pet’s condition sounded serious and so the owner should, say, not let the animal drink water and bring it to him right away. 

Moreover, someone who wasn’t a licensed veterinarian could have provided the same advice as Dr. Hines without a problem; the law prohibits good information from qualified individuals while allowing unqualified individuals to give bad advice. The regulation just ends up hurting the poor, who can’t afford to travel to Dr. Hines, and practically creates geographic limitations on speech. 

The Supreme Court should take up Hines v. Alldredge and protect basic First Amendment rights in the context of occupational regulation.

Late last year, Reason magazine’s crack legal correspondent Damon Root chronicled the rise of the modern libertarian legal movement in his important new book, Overruled: The Long War for Control of the U.S. Supreme Court. In it, he focused especially on the struggle that some of us have been engaged in for more than four decades to recast the terms of the debate over the proper role of the courts from “judicial activism” and “judicial restraint” to “judicial engagement” and “judicial abdication.” That shift has been crucial because it refocused the debate from judicial behavior to where it should have been all along, namely, on the proper interpretation of the law before the court.

The struggle to bring about that shift, although much further along than when it began decades ago, is far from finished: Witness hearings just two days ago before the Senate Judiciary Committee’s Subcommittee on Oversight, Agency Action, Federal Rights and Federal Courts. Called by Subcommittee Chairman Ted Cruz in the wake of last month’s Supreme Court decisions in King v. Burwell, upholding Obamacare’s subsidies for insurance purchased through exchanges established by the federal government, and Obergefell v. Hodges, which made same-sex marriage the law of the land, the hearings were titled “With Prejudice: Supreme Court Activism and Possible Solutions.”

As the title suggests, committee conservatives, in the majority, remain focused on what they see as the Court’s activism. Their witnesses were two professional friends of mine, former Chapman Law Dean and now Professor John Eastman and Ethics and Public Policy Center President Ed Whelan. Nominally representing the liberal activist side was Duke Law Professor Neil Siegel.

I say “nominally” because Professor Siegel took pains early in his testimony to expose problems with the very idea of judicial activism. If defined in opposition to judicial deference, he said, many of the recent decisions of the Court’s “conservatives” would have to be called “activist.” But if the term is defined as engaging in legal infidelity, then we’re arguing not about activism or restraint but about whether the judge read the law correctly.

That’s right. In fact, “judicial engagement” emerged in libertarian thought mainly in opposition to calls from conservatives like Robert Bork and Antonin Scalia for courts to be more deferential to the political branches. But it was animated by the contention that the basic problem with conservative deference was its misreading of the law. In particular, under our Constitution, as Bork put it, majorities were entitled to rule in “wide areas” simply because they were majorities, even if in “some areas” minorities were entitled to be free from majority rule—to which many of us responded that that had the law exactly backwards, turning the Constitution on its head.

But having put his finger on the real source of the differences between the activist and restraint schools, Siegel then went on to illustrate why conservatives called the hearings in the first place, arguing that the Court got it right in both King and Obergefell. In King, Siegel said, Chief Justice John Roberts was right to ignore both the text at issue in the case and the rationale for that text and instead “to read the statute in context and as a whole.” Those, of course, are the kinds of words that enable courts to reach almost any conclusion they wish—to engage in the “activism” conservatives rightly condemn. On reading the law correctly here, credit the conservatives.

Obergefell, however, is another matter. Here too conservatives believe the Court got the law wrong, but they're wrong. We see why in the two conservatives' statements. Focusing almost entirely on the “possible solutions” part of the hearings' title, Professor Eastman nonetheless noted almost in passing that the Constitution left most power with the states. That is true, but the Civil War Amendments made substantial changes to our federalism; the Fourteenth Amendment in particular, for the first time, provided federal remedies, through the courts, for state violations of our rights. Eastman appreciates that more than most conservatives, but he doesn't go far enough in recognizing the countless unenumerated rights we retained when we reconstituted ourselves in 1787, which the Fourteenth Amendment made good against the states in 1868.

Like Ed Whelan, he would have left it to the states to define marriage in a way that excluded same-sex couples from its benefits. But the problem with that approach surfaced when Whelan rested it on the methodology of original understanding. “Every state,” he said, “had defined marriage as the union of a man and a woman when the Constitution was first adopted and when the Fourteenth Amendment was ratified.” True, but several states practiced segregation when that amendment was ratified and all prohibited interracial marriage.

When the Supreme Court finally put an end to those practices, therefore, it didn't cite original understanding. It couldn't, because that understanding supported those practices. Instead, it relied on the original meaning of the words the drafters wrote. And fortunately, that meaning was better than their actions. By its plain text, the Equal Protection Clause prohibits states from discriminating in their dispensation of privileges and benefits—including those pertaining to marriage—unless they have a good reason. And in that regard, the states' policy reasons did not suffice, a point Siegel summarizes in his testimony.

Unfortunately, Justice Anthony Kennedy only touched on the equal protection rationale when he wrote for the Court in upholding same-sex marriage; nor did he draw a distinction between original understanding and original meaning. Had the Court drawn that distinction, it might have grounded Obergefell on the right foundation and reached the right result in King as well. For a fuller account of these issues, read the three statements in the link above for the hearings—and see here for a libertarian response. There is more work for modern libertarian legal theory to do.


This is a very interesting development—one that’s been coming for a long time: Your car is a computer, some cars can be hacked, and now we know they can be hacked in dangerous ways.

The correct public policy response is implicit in this very good Wired article describing the whole thing. “Automakers need to be held accountable for their vehicles’ digital security,” writer Andy Greenberg says, quoting auto hacker Charlie Miller thus: “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers.”

That’s two very important consumer protection systems in a couple of brief sentences: In one, carmakers suffer lost sales if their cars are hackable or perceived as such. The market feedback system—including the article itself—causes automakers to work to make their cars less hackable.

In the other, carmakers suffer monetary damages if their cars are actually hacked in ways that cause injury. The common law tort system causes automakers to work to make cars less hackable. (I don’t know if this is what Greenberg had in mind for accountability, but it’s the legal accountability that’s already in place.)

Yes, these systems cause carmakers to seek to control perceptions of hackability and to deny responsibility when a harmful hack occurs. But on the whole they promote good behavior on the part of automakers, and safety for drivers.

Speaking of the common law, we are on the threshold of a sea change in how liability for software defects is apportioned by contract. Software has typically been sold or licensed without any guarantee of its fitness, letting the risk of software failures fall entirely on the purchaser. That model can't apply where failures are dangerous, such as in driving controls and many implanted medical devices. There, software sellers are liable for failure.

As software grows more secure, and in applications where successful functioning is important, liability for flaws will shift to sellers. That should generally happen at the pace buyers demand, based on their willingness to pay.

As is typical, it is not the market processes and common law already husbanding automakers’ behavior that get the attention in Greenberg’s article. He writes of new legislation that would “set new digital security standards for cars and trucks.” Senators Markey (D-MA) and Blumenthal (D-CT) undoubtedly want drivers to be protected. What is open to question is whether any group of politicians in Congress and lawyers in federal agencies can set standards better than the myriad actors in the marketplace, allocating risks according to their desires and needs, under common obligations to protect others from harm.

My article in this week’s Washington Examiner magazine argues that because U.S. wars seem so cheap, they tempt us into making war too casually. I explain that while this tendency isn’t new, recent technology breakthroughs, which allowed the development of drones, have made it worse. We now make war almost like people buy movies or songs online, where low prices and convenience encourage purchase without much debate or consideration of value. I label the phenomenon one-click wars.

If we take occasional drone strikes as a minimum standard, the United States is at war in six countries: Pakistan, Somalia, Yemen, Syria, Afghanistan, and Iraq, with Libya likely to rejoin the list. In the first three, U.S. military action is exclusively the work of drones. Regular U.S. ground forces are present only in Iraq, where they avoid direct combat, and Afghanistan, where they mostly do.

There’s something remarkable in that combination of militarism and restraint. How can we be so willing to make war but so reluctant to take risks in making it?

My explanation starts with power. Wealth, technological prowess, and military might give the United States unique ability to make war around the world. But labor scarcity, liberal values, and an isolated geography that makes the stakes remote limit our tolerance for sacrificing lives, even foreign ones, in war. This reluctance to bear the human costs of war leads to reliance on long-range technology, especially airpower.

Airpower, despite its historical tendency to fail without help from ground forces, always offers hope that we are only a few bombs away from enemy capitulation. The promise of cheap, clean wars is always alluring: they would let us escape the choice between the bloody sacrifices war entails and the liberal values it offends.

Recent developments added to our proclivity to go for the quick military fix. Innovations in surveillance and targeting greatly enhanced airstrikes’ accuracy and paved the way for armed drones. Jihadists spread out among complex Islamist insurgencies. Meanwhile, the wars in Iraq and Afghanistan restored the U.S. public’s aversion to casualties, which the September 11 attacks had suppressed.

The belief that we can fight riskless war is a good problem to have. Its causes are good things: wealth, power, and safety. But it's still a problem, for a couple of reasons.

One is a tendency to corrode democratic government and encourage dumb decisions. Our government's division of war powers follows from the theory that conflict and debate about policy tend to improve it. That requires Congress to jealously guard its war powers. Unfortunately, it has tended to abdicate them where low costs keep the public uninterested. An engaged Congress is no antidote to dumb wars, but wars started by unchecked presidents are more likely to rely on dubious rationales and thus to be foolish.

The other problem is that wars are rarely as cheap as they initially seem. That’s especially true of drone strikes, I argue, because their costs are hard to see:

They initially either occur downrange, in the form of dead people whose families can’t vote, or in the future, as abstractions like resentment. Because these costs are slow to arrive and obscure, while the benefits are relatively concrete and immediate, drone strikes have a specious attraction. That makes them especially resistant to judicious debate.

I agree with those who argue that one of those risks is blowback, meaning delayed violence or diplomatic consequences. I also discuss the less-appreciated danger of escalation, where the strikes, by getting us involved in conflicts without winning them, create pressure for more costly measures.

My admittedly partial solutions to this problem of feckless war-making involve efforts to capture war costs up front to heighten debate about rationales. The piece mentions several ways to do so. It ends with the suggestion that because bombing people tends to produce unanticipated trouble, “those unwilling to pay much for wars should probably avoid them.”

Former Obama administration economist Jared Bernstein argues for higher taxes in a New York Times op-ed yesterday. His piece begins:

Like it or not, the campaign season is upon us, and that almost certainly means somebody is going to try to buy your vote with a tax cut — even though average federal tax rates are already low in historical terms, our tax code remains tilted in favor of the wealthy, and our children, neighborhoods and infrastructure desperately need public investment.

I tried to use my imagination and think of how a thoughtful and intelligent liberal like Bernstein might conceive of tax policy. But I could not come up with any scenario under which this statement might be considered true: “our tax code remains tilted in favor of the wealthy.”   

The plain fact of the matter is that the federal tax system is highly graduated, or what liberals call “progressive.” Lower-income households pay much smaller shares of their income in taxes than do higher-income households.

In his article, Bernstein uses data from the respected Tax Policy Center (TPC), as I do here. The first table shows TPC estimates of average federal tax rates (total taxes divided by income) for U.S. households (specifically, “tax units”) in five income groups.

Average Federal Tax Rates, 2015

Income Group    Income Tax    Payroll Tax    Other Taxes    Total Taxes
Lowest              -5.0%          6.4%           2.2%           3.6%
Second              -1.9%          7.6%           2.1%           7.8%
Middle               2.9%          7.9%           2.3%          13.1%
Fourth               6.1%          8.4%           2.5%          17.0%
Highest             15.6%          6.0%           4.1%          25.7%

Source: Tax Policy Center estimates.

The average household in the highest group will pay 25.7 percent of its income toward taxes in 2015, which compares to 3.6 percent in the lowest group. The average household in the middle group will pay a rate about half that of the highest group. I don’t see how this data can be reconciled with Bernstein’s claim.
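These TPC figures can be sanity-checked directly: the average rate is total taxes divided by income, so the component rates (income, payroll, other) should sum to the total for each group. A short Python sketch, using only the numbers from the table above:

```python
# Sanity check of the Tax Policy Center figures reproduced above.
# Each group's component rates (income, payroll, other) should sum
# to its total average federal tax rate, all in percent of income.
rates = {
    #  group:   (income, payroll, other, total)
    "Lowest":  (-5.0, 6.4, 2.2,  3.6),
    "Second":  (-1.9, 7.6, 2.1,  7.8),
    "Middle":  ( 2.9, 7.9, 2.3, 13.1),
    "Fourth":  ( 6.1, 8.4, 2.5, 17.0),
    "Highest": (15.6, 6.0, 4.1, 25.7),
}

for group, (inc, pay, other, total) in rates.items():
    # Allow a small tolerance for the published rounding
    assert abs(inc + pay + other - total) < 0.05, group

# The top quintile's average rate versus the middle quintile's
print(rates["Highest"][3] / rates["Middle"][3])  # ~1.96
```

The components check out, and the top-to-middle ratio of roughly 1.96 is what underlies the claim that the middle group pays about half the rate of the highest group.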

Data from other sources show the same tilt in tax burdens toward high earners. Actually, “piling on” high earners is more accurate than “tilt.” The following screenshot is from Table A-6 in this Joint Committee on Taxation report; I've circled the key column. Average tax rates rise rapidly as income rises. The highest earners in 2015 will pay an average federal tax rate of 33.1 percent, which is about twice the rate of those with middling incomes, and many times the rate of people at the bottom.


Perhaps Bernstein meant “tilted in favor of the wealthy” compared to other countries. But we have pretty solid data showing that is not correct either. Tax Foundation summarizes OECD data here showing that the U.S. has the most graduated, or progressive, tax system among the high-income nations.

Bernstein is right that the “campaign season is upon us.” But that doesn’t give him license to tilt tax data upside down to fit his policy narrative.

Criticizing my recent post-mortem on King v. Burwell, Scott Lemieux kindly calls me “ObamaCare’s fiercest critic” for my role in that ObamaCare case. Other words he associates with my role include “defiant,” “ludicrous,” “farcical,” “dumber,” “snake oil,” “ludicrous” (again), “irrational,” “aggressive,” “comically transparent,” and “dishonest.”

Somewhere amid the deluge, Lemieux reaches his main claim, which is that (somehow) I admitted: “the King lawsuit wasn’t designed to uphold the statute passed by Congress in 2010. It was intended to ‘enfranchise’ the people who voted against the bill.” I’m not quite sure what Lemieux means. But perhaps Lemieux doesn’t understand my point about how the Supreme Court helped President Obama disenfranchise his political opponents.

As all nine Supreme Court justices acknowledged in King, “the most natural reading of the pertinent statutory phrase” is that Congress authorized the Affordable Care Act’s premium subsidies, employer mandate, and (to a large extent) individual mandate only in states that agreed to establish a health-insurance “Exchange.” That is, all nine justices agreed that the plain meaning of the operative statutory language allows states to veto key provisions of the ACA—sort of like the Medicaid veto that has existed for 50 years and lets states destroy health insurance for millions of poor Americans. The Exchange veto includes the power to shield millions of state residents from the ACA’s least-popular provisions: the individual mandate and the employer mandate.

When 34 states exercised those vetoes by refusing to establish Exchanges, it was a repudiation of the ACA, driven by voters who elected ACA opponents to statewide offices in 2009, 2010, 2011, and 2012. In the 2010 elections, the first to be held after President Obama signed the ACA, Republicans netted six governorships and 22 state legislative chambers, taking outright control of more state legislatures than they had held since 1952. During 2012, when states had to make the crucial decision about whether to establish an Exchange, Republicans controlled 29 governorships, both legislative chambers (and thus the entire legislature) in 26 states, and one legislative chamber in a further five states. That doesn't even include Nebraska's legislature, which is unicameral, non-partisan, and also refused to establish an Exchange. Public opposition to the ACA was a major—if not the major—factor in these gains, as well as GOP gains in Congress. This should not be surprising, given the ACA's instant and enduring unpopularity.

So when President Obama chose to implement those subsidies and mandates in states that refused to establish Exchanges, he wasn’t just exceeding the powers Congress granted him under the ACA. He wasn’t just imposing taxes and spending money directly contrary to “the most natural reading of the pertinent statutory phrase.” He was actively disempowering his political opponents in the states by stripping state officials of a veto that “the most natural reading of the pertinent statutory phrase” granted them. He disenfranchised Republican and independent voters—individual Americans who put those state officials into office—by taking away the effect of their votes.

And when the Supreme Court strained to reach its counter-textual King ruling, it did not merely ratify these never-authorized taxes and entitlements. It also ratified the president’s sweeping attempt to disenfranchise his political opponents. Nixon had nothing on these guys.

In the face of the disenfranchisement of millions of voters for partisan advantage, Lemieux responds with hand-waving: “Allegedly, single special elections in 2009 and 2011 are supposed to be decisive repudiations of the Affordable Care Act.” I honestly don’t know what he’s talking about. The relevant 2009 and 2011 elections were for state offices in New Jersey and Virginia, and were not special elections. The only special election that really bears on any of this was Scott Brown’s special election to the U.S. Senate from Massachusetts—also a repudiation of the ACA—and that was in January 2010.

Lemieux claims the 2012 presidential race is a more accurate reflection of the demos than pesky state and congressional elections anyway, because “the man who signed the ACA [got] nearly five million more votes than his opponent.” That’s true, but it hardly means what Lemieux implies. Since the president’s opponent had supported an identical law while governor of Massachusetts, but then distanced himself from it, one could as credibly claim the voters would have preferred an ObamaCare opponent, but barring that they preferred sincerity to opportunism. That interpretation would be fortified by the fact that those same voters have consistently opposed the ACA, and elected a Congress (it’s a law-making body) devoted to repealing it.

As the high concentration of colorful descriptors in Lemieux’s blog post suggests, it could stand even more debunking. But I’ll leave off here to see if he and I can at least agree on where we disagree.

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

In case you missed it, the House Natural Resources Committee held a hearing this week examining the Administration’s determination of the social cost of carbon—that is, how much future damage (out to the year 2300) the Administration deems is caused by the climate change resulting from each emitted (metric) ton of carbon dioxide.

As you may imagine from this description, determining a value for the social cost of carbon is an extremely contentious exercise, made more so by the fact that the Obama Administration requires that the social cost of carbon, or SCC, be included in the cost/benefit analysis of all federal actions (under the National Environmental Policy Act, NEPA) and proposed regulations.

Years ago, we warned about how powerful a tool the SCC was in the Administration’s hands, and we have worked to raise the level of public awareness. To summarize our concerns:

The administration’s SCC is a devious tool designed to justify more and more expensive rules and regulations impacting virtually every aspect of our lives, and it is developed by violating federal guidelines and ignoring the best science.

The more people know about this the better.

Our participation in the Natural Resources Committee hearing helped further our goal.

That the hearing was informative, contentious, and well-attended by both committee members and the general public is a testament to the fact that we have been at least partly successful in elevating the SCC from an esoteric, “wonky” subject to one that is, thankfully, starting to get the attention it deserves.

In this edition of You Ought to Have a Look, we highlight excerpts from the hearing witnesses, who, along with our own Dr. Patrick Michaels, included Dr. Kevin Dayaratna (from The Heritage Foundation), Scott Segal (from the Policy Resolution Group), and Dr. Michael Dorsey (from US Climate Plan). The full written submissions by the witnesses are available here.

For what it’s worth, Michaels, Dayaratna, and Segal were majority witnesses, while Dorsey was the witness selected by the minority. During the hearing, US Climate Plan (the organization that Dorsey co-founded and for which he serves as Vice President of Strategy) sent out this tweet:

So in the name of fair play, we’ll focus on the testimony of the other three witnesses.

Scott Segal’s testimony detailed why the SCC is a misguided concept that is inappropriate as a basis for federal rulemaking and administrative action and, further, violates the Administrative Procedure Act. Basically, Segal thinks the current SCC determination procedure should be dispensed with. Here’s a tidbit:

[A]s the President’s Climate Action Plan comes further into focus, more and more regulations claiming to reduce carbon emissions as a primary or secondary benefit will use SCC to appear cost-beneficial when the truth might be otherwise. When actual environmental benefits fail to satisfy a skeptical audience, SCC should not be used as Hamburger Helper to make the dish look larger than it really is.

In his testimony, Kevin Dayaratna focused on the models used by the Administration to calculate the social cost of carbon and how sensitive they are to changes in the initial assumptions. Using a different set of assumptions, completely in keeping with other federal guidelines and mainstream science, Dayaratna reported:

Interestingly, under a reasonable set of assumptions, the SCC is overwhelmingly likely to be negative, which would suggest the government should, in fact, subsidize (not limit) carbon dioxide emissions. I do not use these results to suggest that the government should actually subsidize carbon dioxide emissions, but rather to illustrate the extreme sensitivity of these models to reasonable changes to assumptions.

He went on to reasonably conclude:

Our results clearly illustrate that the models used to estimate the SCC are far too sensitive to reasonable changes in assumptions to be useful tools for policymaking.

And finally, Pat’s testimony focused on the scientific shortcoming of the Administration’s SCC procedures. Here’s a juicy section from his written testimony:

Here, I address why this decision [not to alter the SCC in light of new science] was based on a set of flimsy, internally inconsistent excuses and amounts to a continuation of the [Administration’s] exclusion of the most relevant science—an exclusion which assures that low, or even negative values of the social cost of carbon (which would imply a net benefit of increased atmospheric carbon dioxide levels), do not find their way into cost/benefit analyses of proposed federal actions. If, in fact, the social cost of carbon were near zero, it would eliminate the justification for any federal action (greenhouse gas emissions regulations, ethanol mandates, miles per gallon standards, solar/wind subsidies, DoE efficiency regulations, etc.) geared towards reducing carbon dioxide emissions.

A video of Pat’s oral testimony is also available.

No matter how you look at it, whether from the science, the economics, the modeling, or the procedures involved, the Administration’s determination of the social cost of carbon is simply not up to snuff. Rather than exerting a major influence on virtually all federal actions, it ought simply to be discarded, as it is largely unfixable at this point.

Nearly a month ago Greek voters rejected more economic austerity as a condition of another European bailout. Today Athens is implementing an even more severe austerity program.

Few expect Greece to pay back the hundreds of billions of dollars it owes. Which means another economic crisis is inevitable, with possible Greek exit (“Grexit”) from the Eurozone.

Blame for the ongoing crisis is widely shared. Greece has created one of Europe’s most sclerotic economies. The Eurocrats, an elite of politicians, journalists, businessmen, and academics, were determined to create a United States of Europe irrespective of the wishes of European peoples.

European leaders welcomed Athens into the Eurozone in 2001 even though everyone knew the Greek authorities were lying about the health of their economy. Economics was secondary.

Unfortunately, equalizing exchange rates cemented Greece’s lack of international competitiveness. Enjoying an inflated credit rating, Greece borrowed wildly and spent equally promiscuously on consumption.

Greece could have simply defaulted on its debts. However, Paris and Berlin, in particular, wanted to rescue their improvident banks which held Athens’ debt.

Thus, in return for tough loan conditions, most of the Greek debt was shifted onto European taxpayers through two bailouts costing roughly $265 billion. Greece’s economy has suffered, and the left-wing coalition party Syriza won Greece’s January election. Impasse resulted at the end of June as the second bailout expired.

Athens denounced its creditors for insisting on repayment. Prime Minister Alexis Tsipras criticized “ultimatums, blackmail and fearmongering.”

But writing off Greek debt would require European governments to confess their financial folly to their taxpayers. Restructuring Greek debt also would set off similar demands from other heavily indebted states.

Moreover, Eurocrats committed to a consolidated continental government refused to consider a Grexit. For decades European elites have simply rolled over any opposition.

So now what?

Tsipras encouraged his people to reject their creditors’ best offer, but then almost immediately announced that he was forced to request a third bailout.

After bitter debate Euro leaders offered some $96 billion. But they insisted on even tougher conditions than before.

Alas, assuming a new deal is formalized, there is little chance that it will work. The European Commission admitted that success required “very strong ownership of the Greek authorities” of reforms. But that has never been the case.

While past Athens governments reduced easily measured outlays, they failed at more fundamental restructuring. In pressing parliament to approve the latest program, Tsipras announced: “The government does not believe in these measures.”

Understandably, European distrust of Athens is deep. Moreover, the IMF, a party to the first two bailouts, proclaimed that the latest agreement is not viable.

Germany suggested “reprofiling” the debt, that is, further lengthening maturities and reducing interest payments. However, Euro leaders insisted that any relaxation of Greece’s debt burden could only follow full implementation of the reform program. And even “a very substantial re-profiling,” noted the European Commission, “would still leave Greece with very high debt-to-GDP levels for an extended period.”

Perhaps even more significant, as I noted in Forbes online, “instead of advancing continental consolidation the common currency has become an obstacle to European political union. Populist parties are rising across Europe. Some oppose austerity and others criticize bailouts, but all appeal to people who feel ignored and victimized by the Eurocrats and other elites.”

Indeed, the latest plan is dividing long-time allies. Le Figaro reported on “extremely hard, even extremely violent” discussions among the European governments. German Finance Minister Schaeuble publicly challenged Chancellor Merkel.

Syriza could break apart. Tsipras won initial parliamentary approval for the new deal only with opposition support. On a second vote he lost 32 Syriza deputies, more than half of the party executive committee, and three cabinet members. New elections are likely this fall.

Europe’s leaders want to believe that they have solved the latest Greek crisis. However, the third bailout likely will not be the final word. For the first time in decades, the European Project is in serious doubt.

Like certain weeds and infectious diseases, some myths about banking seem beyond human powers of eradication.

I was reminded of this recently by a Facebook correspondent’s reply to my recent post on “Hayek and Free Banking.” “We had free banking in the US from 1830 until 1862,” he wrote. “It didn’t work out too well.” “During the Wildcat Era,” he added, “banks were unregulated and failed by the hundreds.”

Imagine the effect my critic must have anticipated — the crushing blow his revelations would surely deal to my cherished beliefs. Upon reading his words, my eyes widen; my jaw goes slack. Can this really be so? I ask myself. I read the ominous sentences again, more slowly, sub-vocalizing. Beads of sweat gather across my brow. Then, pursing my lips, my eyes downcast, I turn my head, first left, then right, then left again. If only I had known! All these years…no one ever…I mean, how was I supposed…it never occurred to me… DARNITALL! Why didn’t I think of looking at the U.S. experience before shooting my mouth off about free banking?

Well, that isn’t what happened. “What cheek this fellow has!” was more like it. (OK, it wasn’t exactly that, either.) Of course I’ve looked into the U.S. record. So has Larry White. And Kevin Dowd. And every other dues-paying member of the Modern Free Banking School. We’ve looked into it, and we’ve found nothing there to change our minds concerning the advantages of freedom in banking.

So what about all those “unregulated” wildcats? First of all, there’s never been a time in U.S. history when banking was truly unregulated, or anything close. Up until 1837, just getting permission to open a bank was a hard slog, when it wasn’t altogether impossible. Here’s Richard Hildreth’s tongue-in-cheek description of how one went about becoming a banker back in 1837:

The first thing is, to get a charter. One from the General Government, with exclusive privileges, and a clause prohibiting the grant of any other bank, is esteemed best of all. But such a charter is a non-such not easy to be got.[1]

Next best is a State Bank, in which the state government takes a portion of the stock, with a clause, if possible, prohibiting the grant of any other bank within the state. But if such a bank is not to be had, a bare charter, without any exclusive privileges, should be thankfully accepted.

It is very desirable however, that no other bank should be permitted in the county, city, town or village, in which the new bank is established; and all existing banks, are to join together upon all occasions, in a solemn protest against the creation of any new banks, declaring with one voice, that the multiplication of small banks, — which, by way of emphasis, may be denounced, as “little peddling shaving shops,” — is ruinous to the country, produces a scarcity of money, &c. &c. &c.

In order to obtain a charter, it is necessary to be on good terms with the legislature applied to. Obstinate opposers may be silenced by the promise of a certain number of shares in the stock, — which shares, if very obstinate, they must be allowed to keep without paying for.

This being properly prepared, a petition is to be presented to the legislature, representing that in the town of ——–, the public good requires the establishment of a bank. … The bank is to be asked for, solely on public grounds; not a whisper about the profits the petitioners expect to make by it.

If the petition is coolly received, it may be well to revise the private list of stock-holders, and to add the names of several of the legislators. …

If nothing better can be done, employ some influential politician to procure a charter for you, and buy him out at a premium.[2]

When Hildreth wrote, around 600 U.S. banks were in business. That may seem like plenty. But the fact that the vast majority of these were in the northeast, and that hardly any had branches, meant that most U.S. communities still had no banks at all. In most territories and states west of the Mississippi, becoming a banker wasn’t just difficult: it was illegal.

1837 was also, however, the year in which Michigan passed a “free banking” law, becoming the first of thirteen states that would pass similar laws over the course of the next two decades. The laws provided for something akin to a general incorporation procedure for banks, making it unnecessary for state legislators to vote on specific bank bills, and to that extent improved upon the former bank-by-bank charter or “spoils” system. But despite the name, which suggested, if not completely unregulated banking, at least the sort of lightly-regulated banking for which Scotland was then famous, the laws didn’t even come close to allowing American banks the freedoms that their Scottish counterparts enjoyed. Indeed, the restrictions imposed on U.S. “free” banks proved so onerous that the laws don’t even appear to have achieved a substantial overall easing of entry into the banking business.[3]

Two rules, common to all U.S. free banking laws, were to have especially important consequences. The first denied U.S. “free” banks the right to establish branches—something their Scottish counterparts were famous for doing, and that even some chartered U.S. banks could and did do. The other required them to secure their notes using specific securities, which were to be lodged for safekeeping with state banking authorities. U.S. “free” banks were not free, in other words, to decide how to employ the funds represented by their notes, which were in those days a more important source of bank funding than bank deposits. Such “bond deposit” requirements were also unknown in the Scottish system.

So U.S. “free” banks were hardly “unregulated.” They did, however, “fail by the hundreds” — 2.42 hundred, to be precise, which was no small portion of the total. The question is, why did so many American “free” banks fail? Was it because they weren’t regulated enough? No sir: it was because they were over-regulated: the free banking laws of several states forced banks to invest in very risky securities — and especially in risky state government bonds — while the rule against branching limited their ability to diversify around this risk, especially by relying more on deposits than on notes. It was owing to these restrictive components of U.S.-style free banking that scads of American free banks ended up going bust.

And that’s not just one kooky free banker’s opinion: it’s the opinion of every competent monetary historian who has looked into the matter.[4] According to Matt Jaremski, whose 2010 Vanderbilt U. dissertation is the most careful study to date, the bond-deposit requirements of antebellum free-banking laws “seem to be the underlying cause of the free banking system’s [sic] high failure rate relative to the charter banking system. While bond price declines were significantly correlated with free bank failures, they were not correlated with the failure rate of charter banks.” Moreover, it wasn’t the general level of bond prices that mattered, but only the prices of specific securities that banks were legally obliged to purchase.

And “wildcat” banking? It’s no coincidence that that expression appears to have first gained currency, so to speak, in Michigan in the 1830s, where it was used to refer to some of the more disreputable banks established under that state’s original free banking law.[5] That law proved such a fiasco that it was repealed just two years later, after inflicting heavy losses on innocent note holders.[6] The law appears to have encouraged more than a few bankers to throw large quantities of their notes onto the market, while situating their banks as remotely as possible, the better to avoid pesky redemption requests. But here, as with U.S. free bank failures generally, regulations were to blame. It just so happened that the securities banks were encouraged to hold under Michigan’s law were especially lousy, consisting as they did “either of bonds and mortgages upon real estate within this state or in bonds executed by resident freeholders of the state.”[7] Call it the Wild West version of Community Reinvestment.

Notwithstanding what happened in Michigan, and all the attention it received, “wildcat” banking, understood to mean banking of the fly-by-night sort, was actually quite rare. In Wisconsin, Indiana, and Illinois, whose free banking laws also proved disastrous, it was unimportant, if not altogether unknown; even in Michigan itself it doesn’t seem to have survived the first free-banking law.[8] Indeed, the all-around record of U.S.-style free banking improved significantly as the Civil War approached. Even banknote discounts — another consequence of unit banking that has been wrongly treated as a necessary consequence of having multiple banks of issue — had become almost trivial by the early 1860s. According to my own research, someone who, in October 1863, was foolish enough to purchase every non-Confederate banknote in the country for its full face value, in order to sell the notes to a broker in either Chicago or New York, would have suffered a loss on that transaction of less than one percent of his or her investment.[9] That’s less than the cost merchants incur today when they accept credit cards, or what people typically pay to withdraw cash from an ATM that doesn’t belong to their own bank.

The best reason I can think of for the persistence of the myth of rampant wildcat banking is simply that stories about it made for more titillating reading than ones about the mass of less colorful, if no less unfortunate, free-bank failures. Wildcat banking is to the history of banking what the O.K. Corral and Wild Bill Hickok are to the history of the far west.

Somewhat harder to account for is the fact that, in America at least, “free banking” has come to refer exclusively to the antebellum U.S. episodes (as well as to a similar — and mercifully short-lived — Canadian experiment). The expression was, after all, appropriated by U.S. state legislators for the sake of its appealing connotations, after having been in use for some time overseas, where it and its equivalents (“la liberté des banques,” “bankfreiheit,” etc.) continued to stand for genuinely unregulated banking, or something close to it. Sheer parochialism is, I’m afraid, partly to blame: many authorities on American banking, whether economists, historians, or economic historians, appear to be unfamiliar with European writings on free banking, or with the banking systems those writings regard as exemplary.

The limited interest that even some of the more painstaking authorities on U.S. style “free” banking have shown in free banking of the other sort seems to me a shame. After all, what could be more informative than to compare, say, Michigan’s experience with Scotland’s, so as to gain a better understanding of the consequences of laissez-faire banking on the one hand and of certain departures from laissez faire on the other? By failing, not only to make such comparisons, but (in some cases) to even recognize non-U.S.-style free banking and the literature concerning it, such experts have unwittingly encouraged people to confuse U.S.-style “free banking” with the real McCoy.

_________________

[1] Thanks to Andrew Jackson’s efforts, the Charter of the 2nd Bank of the United States had been allowed to expire the year before.

[2] Richard Hildreth, The History of Banks (Boston: Hilliard, Gray & Company, 1837), pp. 97-8.

[3] See Kenneth Ng, “Free Banking Laws and Barriers to Entry in Banking, 1838-1860,” Journal of Economic History 48 (4) (December 1988). Since the Scottish system was itself essentially a “charter” system, entry into it was also strictly limited. Limited entry was, indeed, the most important of several departures of pre-1845 Scottish banking from genuine laissez faire.

[4] See, among other works, Hugh Rockoff, The Free Banking Era: A Reexamination (New York: Arno Press, 1975); Arthur J. Rolnick and Warren E. Weber, “The Causes of Free Bank Failures: A Detailed Examination,” Journal of Monetary Economics 14 (3) (November 1984); Gerald P. Dwyer, “Wildcat Banking, Banking Panics, and Free Banking in the United States,” Federal Reserve Bank of Atlanta Economic Review, December 1996; Howard Bodenhorn, State Banking in Early America: A New Economic History (New York: Oxford University Press, 2003); and Matthew S. Jaremski, “Free Banking: A Reassessment Using Bank-Level Data” (PhD Dissertation, Vanderbilt University, August 2010).

[5] Dwyer, p. 1.

[6] Michigan took another, more successful stab at free banking in 1857.

[7] Dwyer, p. 6.

[8] Ibid., pp. 9-10, and the studies mentioned therein.

[9] See my article, “The Suppression of State Banknotes.” Economic Inquiry 38 (4) (October 2000).

[Cross-posted from Alt-M.org]

Today, the Justice Department indicted Dylann Roof on 33 federal hate crime charges for the killings of nine people at Emanuel A.M.E. church in Charleston last month. This indictment is entirely unnecessary.

Hard as it may be for some to imagine now, there was a long time in this country when racially and politically motivated violence against blacks was not prosecuted by state and local authorities. Or sometimes, as in the case of Emmett Till—the young boy from Chicago who was lynched in Mississippi for allegedly being too forward with a white woman—prosecution was a farce and the perpetrators were acquitted.

But in the present case, South Carolina authorities moved quickly and effectively to catch Roof and did not hesitate to charge him with nine counts of murder. This was South Carolina’s duty, and its law enforcement officers appear to have performed professionally and competently.

The Department of Justice should be more judicious with its funds and resources. A duplicative prosecution diverts resources from crimes that fall more appropriately within the federal purview, such as interstate criminal enterprises and government corruption. Today’s indictment is federal meddling in a case the state already has under control.

Even if some wholly unlikely chain of events led to Roof’s acquittal, the DOJ could push forward with its prosecution at that time. But, in reality, that isn’t going to happen, and no one at DOJ thinks it will. By not waiting for the outcome of the state’s prosecution, the DOJ strongly signals that it wants to assume jurisdiction over Roof’s prosecution. This indictment is thus an unabashed political move.

While the murders were rightly condemned as a national tragedy, they were above all a tremendous blow to the community of Charleston and to the state as a whole. As such, the primary responsibility for prosecuting Dylann Roof belongs to South Carolina. Neither national grief nor DOJ politics should stand in the way of South Carolina’s prerogative to deliver justice on its own terms.

UPDATE: Shortly after this post went live, U.S. Attorney General Loretta Lynch released a statement on the indictment. Notably, she referred to the state and federal cases as “parallel prosecutions.” But Roof cannot be in two courtrooms at the same time and so one proceeding will have to take place before the other.

It is hard to identify any interest of justice served by a federal prosecution. Rather, this indictment appears to serve the institutional interests of the Justice Department.
