Cato Op-Eds

Individual Liberty, Free Markets, and Peace
Subscribe to Cato Op-Eds feed

There’s no agreement on the most important variable for state tax competitiveness.

I’m sympathetic to the final option, in part because of my disdain for the income tax. And if an income tax is imposed, I prefer a simple and fair flat tax.

With that in mind, here’s a fascinating infographic I received via email. I don’t know if Reboot Illinois is left wing, right wing, or apolitical, but they did a very good job. I particularly like the map showing zero-income tax states (gray), flat tax states (red), and states with so-called progressive tax schemes (blue).

For what it’s worth, Illinois taxpayers should fight as hard as possible to preserve the state’s flat tax. If the politicians get the power to discriminate among income classes, it will just be a matter of time before all taxpayers are hit by higher rates.

Now let’s shift to the spending side of the fiscal ledger.

Like any good libertarian, I generally focus on the size of government. I compare France with Hong Kong and that tells me that big is bad and small is good.

But regardless of whether a government is large or small, it’s desirable if it spends money efficiently and generates some benefit. I shared, for instance, a fascinating study on “public sector efficiency” from the European Central Bank and was not surprised to see that nations with smaller public sectors got much more bang for the buck (with Singapore easily winning the prize for the most efficient government).

So I was very interested to see that WalletHub put together a report showing each state’s “return on investment” based on how effectively it uses tax monies to achieve desirable outcomes in education, health, safety, economy, and infrastructure and pollution.
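WalletHub’s exact formula isn’t reproduced here, but the general idea of a “return on investment” ranking, service quality delivered per tax dollar, can be sketched in a few lines. Everything below (the state labels, quality scores, and tax figures) is invented purely for illustration:

```python
# Hypothetical sketch of a bang-for-the-buck ranking: service quality
# relative to per-capita taxes. WalletHub's actual methodology differs;
# all figures here are made up for illustration.

states = {
    # label: (service_quality_score, per_capita_taxes_in_dollars)
    "A": (80.0, 4000.0),
    "B": (70.0, 7000.0),
    "C": (60.0, 3500.0),
}

def roi(quality, taxes, per=1000.0):
    """Quality points delivered per $1,000 of per-capita taxes."""
    return quality / (taxes / per)

# Best bang for the buck first.
ranked = sorted(states, key=lambda s: roi(*states[s]), reverse=True)
print(ranked)  # ['A', 'C', 'B']
```

Note that on this kind of measure a state can rank well either by delivering better services or by taxing less, which is why low-tax states can dominate the top of the list.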

I’m not completely comfortable with the methodology (is it a state government’s fault if the population is more obese and therefore less healthy, for instance, and what about adjusting for demographic factors such as age and race?), but I nonetheless think the study is both useful and interesting.

Here are the best and worst states.

One thing that should stand out is that the best states are dominated by zero-income tax states and flat tax states.

The worst states, by contrast, tend to have punitive tax systems (Alaska is a bit of an outlier because it collects - and squanders - a lot of revenue from oil).

P.S. WalletHub put together some fascinating data on which cities get a good return on investment (i.e., bang for the buck) for spending on police and education.

Over at Cato’s Police Misconduct web site, we have selected the worst case for the month of March. It’s the scandal-plagued Sheriff’s Office in Iberia Parish, Louisiana.

Sheriff Louis Ackal and Lt. Col. Gerald Savoy were indicted last month for criminal civil rights violations. Eight former deputies have already pled guilty to similar charges, including false testimony in court and false allegations in official documents. Hundreds of criminal cases are now being reopened because they could be tainted by corrupt acts. The now-former deputies admit that they lied in various reports, including search warrant applications.

The scope of this scandal is worth repeating: hundreds of cases will have to be reexamined.

Go here for the full story.

The Association of American Medical Colleges (AAMC) has released projections showing that we may have doctor shortages in coming years. The demand for doctor services is rising in our aging society, but various factors in the health care industry are hampering supply.

But policymakers should remember that high income tax rates inhibit the supply of top earners across all industries. America’s tax system is the most “progressive” or graduated among OECD nations, and that has consequences. If the government penalizes the most productive people, they will work fewer hours, retire earlier, and make other decisions to reduce their labor efforts.

Some politicians on the campaign trail want to raise tax rates on high earners, and they seem to consider them little more than economic leeches. The truth is that most high earners are very industrious people who add crucial skills to the economy. The nation’s 708,000 doctors and surgeons are a case in point.

The Bureau of Labor Statistics (BLS) reports that “physicians practicing primary care received total median annual compensation of $241,273 and physicians practicing in medical specialties received total median annual compensation of $411,852” in 2014.

That high pay makes sense because doctors are highly skilled, face substantial stress, and often work long hours. The BLS notes, “physicians complete at least 4 years of undergraduate school, 4 years of medical school, and, depending on their specialty, 3 to 7 years in internship and residency.” And after all that training, they often “work long, irregular, and overnight hours.”

So how does Congress reward that hard work? It imposes punitive marginal income tax rates on them of up to 40 percent, with state income taxes on top of that. Even lower-earning doctors can be pushed into the highest income tax brackets if their spouses work.
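The arithmetic of stacked marginal rates is simple but worth making concrete. This sketch uses hypothetical round numbers (a 40 percent federal rate plus a 5 percent state rate) and ignores real-world wrinkles such as the deductibility of state taxes and payroll taxes:

```python
# Illustrative only: the combined marginal tax rate on one extra dollar
# earned, using a simple additive combination of hypothetical federal and
# state rates. Not a model of any actual tax code.

def combined_marginal_rate(federal, state):
    """Fraction of an additional dollar taken in tax at the margin."""
    return federal + state

rate = combined_marginal_rate(0.40, 0.05)
kept = 1.0 - rate
print(f"keeps {kept:.2f} of each extra dollar")  # keeps 0.55
```

The point of the paragraph is the margin: a doctor deciding whether to work an extra weekend shift weighs the after-tax fraction, not the gross pay.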

Doctors are exactly the type of workers who have large negative responses to high tax rates because they have substantial flexibility in managing their careers. With high tax rates, fewer people will want to go into this difficult profession, stay in it, and work the long hours—and that ends up hurting all of us who use the nation’s health care system.

Ted Cruz’s campaign has produced a new ad targeting Wisconsin voters in the lead-up to that state’s primary election tomorrow. As images switch back and forth between farms and factories, Cruz lists a number of generic demographics and blue-collar occupations that his campaign “is for.” He also complains about international trade.

Here’s the substantive part of the ad:

We will repeal Obamacare, peel back the EPA and all the burdensome regulations that are killing small businesses and manufacturing. 

I’m going to stand up for fair trade and bring our jobs back from China. 

We will see wages going up. 

We’ll see opportunity again.

Senator Cruz doesn’t tell us what he means by “fair trade” or promote a specific trade policy. The term “fair trade” is usually used by politicians as a euphemism for “protectionism.” In the past, Cruz has noted that the value-added tax (VAT) he has proposed is “like a tariff” because it imposes a greater burden on imports. Perhaps this is what he means by “fair trade.”

In any event, some simple facts about trade might be helpful to explain the problems with Cruz’s approach. For example, nearly 60% of imports are materials and capital goods used by American companies. So, Cruz’s “fair trade” is a tax on the very same “small businesses and manufacturing” whose burdens he wants to lift. Oh, and reduced employment in America’s thriving manufacturing sector is not due primarily to trade with China.

The biggest difference between Donald Trump and Ted Cruz on trade is that Trump has been more specific. Trump has singled out specific trade deals he opposes and has promised to tax specific companies specific amounts. Also, Trump’s disdain for trade has been apparent from the beginning of his campaign, while Cruz’s rhetoric and positions have been getting gradually worse in response.

While Trump has been getting lots of attention for his anti-trade rhetoric, it’s worth remembering that other candidates are not offering better policy proposals. They are simply less sensational in how they present the same flawed message.

This morning, the unanimous Supreme Court ruled that Texas was constitutionally justified in drawing state electoral districts based on total population, even if this meant that great disparities result among districts in numbers of voters. This was the case of Evenwel v. Abbott, in which Cato had filed a brief arguing that the plaintiff-voters’ proposed “citizen of voting age population” (CVAP) metric was a much better one to use when applying the “one-person, one-vote” standard.

While the eight-justice Court managed to achieve rare unanimity in an election-law case, at least in judgment, it did so only by declining to address the elephant in the voting booth. The Court failed to fill the gaping hole in its voting-rights jurisprudence: the question whether the venerable “one-person, one-vote” principle requires equalizing people or voters (or both) when crafting representational districts.

Still, the ruling leaves open to the states the ability to experiment further with populations considered in drawing district lines both for their own legislatures and federal House seats. Some states already exclude aliens, nonpermanent residents, nonresident military personnel, inmates who were not state residents prior to incarceration, and other non-permanent or non-voting populations.

States like Texas where total-population allocations continue to diverge from eligible-voter allocations—resulting in great disparities of voters between districts—should indeed try to ensure that each vote has the same relative weight, forcing the Supreme Court’s hand in some future case. Regardless of the outcome in that eventual case, however, jurists and political scientists should take heed of Justice Alito’s concurring opinion, which concisely explains why the “federal analogy” to the Constitution’s apportionment of House seats among states is inapposite to the question posed in Evenwel regarding redistricting.

For more background on the case, see my SCOTUSblog essay.

With so much medical research funded by pharmaceutical companies and others with a financial interest in the outcome, it can be hard to avoid conflicts of interest. Years ago, Harvard Medical School revamped its policy on professors reporting potential conflicts of interest after critics, including many students, claimed the old rules were too lax and hid the financial ties many professors had to the manufacturers of the drugs they researched and discussed in class. In an article about a new study published in JAMA on how statins do in fact lead to muscle pain in some patients, the Washington Post gives recognition to Dr. Steven E. Nissen’s approach to minimizing such conflicts.

One can see the potential for conflict in how JAMA describes the role of one of the drugs’ manufacturers:

This study was funded by Amgen Inc.[, which] was involved in the design and conduct of the study, selected the investigators, monitored the trial, and collected and managed the trial data. The sponsor participated in the decision to publish the study and committed to publication of the results prior to unblinding the trial. The sponsor maintained the trial database and transferred a complete copy to the Cleveland Clinic Center for Clinical Research and the sponsor to facilitate independent analyses. The sponsor had the right to comment on the manuscript, but final decisions on content rested with the academic authors.

It’s nice that the Cleveland Clinic’s researchers had the final say, but one can still see the potential for Amgen to influence the study.

The study’s lead author, Dr. Nissen, voluntarily imposed on himself an additional safeguard. Nissen is chair of the department of cardiovascular medicine at the Cleveland Clinic. According to the Post:

Nissen has worked with many pharmaceutical companies to determine the efficacy of heart therapies but requires the companies to donate any payments to charity so that he receives no compensation or tax breaks, according to a bio accompanying the study.

Nissen’s bio at the Cleveland Clinic’s web site clarifies that this is his personal policy, not one imposed on him by his employer:

As a physician/scientist, Dr. Nissen is often called on by pharmaceutical companies to consult on the development of new therapies for cardiovascular disease. He maintains a long-standing personal policy that requires these companies to donate all related honoraria directly to charity.

There is probably no sure-fire way to eliminate conflicts of interest in medical research. Nissen’s rule could still allow for conflicts if, for example, researchers ask that funders direct their honoraria to a charity in which the researcher has a strong financial or personal stake. Conflicts can arise when government funds medical research, too. The best we can hope for is to minimize the potential for conflicts that impede the growth of medical knowledge.

Kudos to Nissen for taking a confidence-inspiring step in that direction.

When new competition arises, government-protected monopolies often seek further regulation to help suppress the competition.

In a refreshing twist, however, a recent court ruling may cut the other way:

A group of Boston taxi operators sued the city last year alleging they are being treated unfairly, in part because they have to buy expensive medallions to operate, and ride-hailing company drivers do not.

U.S. District Judge Nathaniel Gorton on Thursday dismissed many of the taxi companies’ claims, but allowed claims that their equal protection rights had been violated to move forward, saying ride-hailing services are no different than taxis.

So competition from Uber and Lyft might generate less regulation of the taxi industry, which would benefit everyone except existing medallion owners. Fingers crossed.

From time to time we hear calls for withdrawing today’s lowest and highest denominations of US currency, the penny and the $100 bill, from circulation.  In the last year a growing chorus has been calling for prohibition of the $100 bill.

The rhetoric of the anti-high-denomination gang has gotten increasingly shrill.  Erstwhile Bank of England economist Charles Goodhart in September called the European Central Bank and the Swiss National Bank “shameless” for issuing “vastly high-denomination notes,” namely the €500 and CHF 1000, “which are there to finance the drug deals.”  Last month former Treasury Secretary Larry Summers, writing in the Washington Post and citing a working paper by Harvard colleague Peter Sands and student co-authors, extended the indictment to the US $100 bill: it too is used by criminals, so let’s get rid of it.

I have an alternative suggestion for removing $100 bills from the illegal drug trades:  Legalize the trade.  Your local pharmacy doesn’t pay cash to its wholesale suppliers.  My suggestion would reduce the demand for high-denomination currency.  Today’s high-denomination-currency prohibitionists, like today’s drug prohibitionists and yesterday’s alcohol prohibitionists, only think about the supply side.  But does anyone think that banning the $100 bill during Prohibition (when it had a purchasing power more than 11 times today’s, as evaluated using the CPI) and even higher denominations would have put a major dent in the rum-running business, if an army of T-Men couldn’t?[1]

The indictment needn’t stop with facilitation of drug trafficking, of course: opponents add tax evasion, corruption, terrorism, human smuggling, and any number of other activities to the list of crimes whose perpetrators find large-denomination currency convenient.  Sands et al. argue:  “By eliminating high denomination, high value notes we would make life harder” for such criminal enterprises.  No doubt.  But we would also make life harder for everyone else.  The rest of us also find high-denomination notes convenient now and again for completely legal and non-controversial purposes, like buying automobiles and carrying vacation cash compactly.  A serious survey of Eurozone currency use finds that “in Italy, Spain and Austria, … almost one-third of the interviewees always or often use cash for purchases between €200 and €1,000.”

The currency prohibitionists aren’t doing any serious cost-benefit analysis (at least I can’t find any), however.  In a 2014 paper Ken Rogoff enumerates various pros and cons of prohibiting paper currency, but he makes no attempt to attach weights to them.  (He nonetheless hints that he favors moving toward prohibition.)  Goodhart actually assigns a zero value to the cost of lost convenience for non-criminals, asserting that there is “no value whatsoever, except in seigniorage receipts to a number of small Swiss cantons,” from the SNB or ECB issuing the high-denomination notes.  Much less do the prohibitionists consider the effects on personal liberties.  Summers, at least rhetorically, seems to consider the mere suggestion that terrorists use high-denomination currency to be a clinching argument against letting anyone use it:  “The fact that — as Sands points out — in certain circles the 500 euro note is known as the ‘Bin Laden’ confirms the arguments against it.”

In the last few years the prohibitionists have added a second argument: abolishing high-denomination currency would raise the cost of storing currency for everyone.  You might think that raising the costs of a legal activity sounds prima facie like a bad thing, but then you’d be considering the matter from the point of view of ordinary citizens.  The prohibitionists regard it as a good thing because they consider it from the point of view of macroeconomic policy-makers.  If the cost of storing currency is (let’s say) only 0.1% per year[2], then the one-year interest rate can’t go below -0.1%.  It can only be barely negative, in other words.  Why?  At any lower interest rate people will store cash rather than hold one-year bonds, so demand for the bonds vanishes.  Abolish the $100 bill, and the cost of storing a given value in cash (now in the form of $20 bills) increases five-fold: storing bulk cash requires five times as many lockers or safes.  This allows the central bank more leeway to lower rates, in the example to -0.5%.  Thus Goodhart calls the abolition of the €500 note a move that “might also prove beneficial by trimming interest rates.”
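The paragraph’s arithmetic can be made explicit. Assuming, as the argument does, that the cost of storing cash scales with the number of notes and sets the floor under nominal interest rates:

```python
# Sketch of the effective-lower-bound argument: if cash can be stored at an
# annual cost of c (as a fraction of the value stored), nominal rates cannot
# fall much below -c, because below that point cash dominates bonds.
# Assumes storage cost is proportional to the number of notes stored.

def rate_floor(storage_cost):
    """Lowest sustainable nominal interest rate given the cost of holding cash."""
    return -storage_cost

base_cost = 0.001               # 0.1% per year with $100 bills
cost_in_twenties = base_cost * 5  # $20 bills: five times as many notes

print(rate_floor(base_cost))        # -0.1% per year
print(rate_floor(cost_in_twenties)) # -0.5% per year
```

The linear scaling is the crux of the prohibitionists’ second argument: bulkier cash means a deeper floor, and abolishing cash entirely would make the floor arbitrarily deep.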

Both arguments are made by BOE chief economist Andrew Haldane, who in a widely-cited speech takes the arguments to their logical conclusion that society would maximize both (purported) benefits by prohibiting not only high-denomination currency but all currency:

A more radical proposal still would be to remove the ZLB constraint [the Zero Lower Bound on nominal interest rates] entirely by abolishing paper currency.  This, too, has recently had its supporters (for example, Rogoff (2014)).  As well as solving the ZLB problem, it has the added advantage of taxing illicit activities undertaken using paper currency, such as drug-dealing, at source.

Among the prohibitionists, then, one might say that those who want to prohibit only high-denomination notes are the moderates.  The benefit of prohibiting all cash in order to give central banks more leeway to conduct negative-interest-rate policy has, to put it mildly, not been shown to be worth the cost.

Finally, I note that currency prohibitionists too often regard those who defend high-denomination notes not as intellectually honest but mistaken opponents, but rather as morally suspect characters.  Larry Summers goes out of his way to smear an ECB executive from Luxembourg (who has had the temerity to ask for better evidence before accepting the case for prohibiting high-denomination notes), and by extension to impugn the country’s entire set of policy-makers:

I confess to not being surprised that resistance within the ECB is coming out of Luxembourg, with its long and unsavory tradition of giving comfort to tax evaders, money launderers, and other proponents of bank secrecy and where 20 times as much cash is printed, relative to gross domestic [product], compared to other European countries.

In this one sentence Summers misrepresents several issues.  Luxembourg does protect depositor privacy (aka “bank secrecy”) in general, but not in cases where it would conflict with OECD rules on anti-money-laundering.  Contrary to Summers’ insinuation, banker confidentiality and financial privacy are valuable practices, not per se grounds for suspicion.  Regarding taxation, the former Treasury Secretary Summers, like many current European fiscal authorities from high-tax jurisdictions, blurs the distinction between sheltering illegal activity (“giving comfort to tax evaders”) and allowing legal tax avoidance through competitive tax policies.  Even on its critics’ evidence, Luxembourg’s less onerous tax laws (and specially negotiated tax deals) allow corporations and individuals to legally avoid taxes that they would have to pay if domiciled in other European jurisdictions.  Such competition is naturally not welcomed by authorities from the higher-tax jurisdictions.  As for disproportionate currency issuance (not literally printing) by Luxembourg banks, it should be noted that they also hold a disproportionate volume of bank deposits (about 2.8% of total EU bank deposits, despite the country generating only about 0.36% of EU GDP).

The case for prohibiting large-denomination currency, to summarize, is largely based on guilt by association or on wishful thinking about the benefits of allowing greater range of action to discretionary monetary policy.  A case based on serious evaluation of costs and benefits has not been made.  Indeed, not even attempted.

Personally, I favor the same policy with regard to large, small, and medium denominations of currency: we can withdraw all the denominations that the Federal Reserve and the Treasury issue so long as we let competing private financial institutions issue dollar-redeemable notes and token coins in any denominations they wish.  Then we will have a market test and not mere hand-waving regarding which denominations are worth having in the eyes of their users.


[1] According to the Bureau of Engraving and Printing, $500, $1000, $5000, and $10,000 denomination notes were last printed in 1945, and the Federal Reserve began retiring rather than re-issuing those deposited into banks beginning in 1969.  The Bureau also produced $100,000 “Gold Certificates” as part of its 1934 series.  Gold Certificates were, however, used only for transactions between the various Federal Reserve banks.

[2] Back of the envelope time:  educated estimates of the value of the huge cash block that drug kingpin Walter White kept in a rented storage locker in Albuquerque, New Mexico, on the popular TV series Breaking Bad, range from $40 to $80 million.  Let’s take the midpoint of $60 million.  An interior-access climate-controlled storage locker in Albuquerque with “state of the art security” that is large enough to store that block (let’s say 10’x12’) currently runs around $1800 per year.  That makes the storage cost only 0.003% per year, roughly one-thirtieth of 0.1%. The cost to rent a safety deposit box in a bank is of course much higher per unit volume.
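The footnote’s division can be checked directly (the figures are the footnote’s own estimates):

```python
# Footnote 2's back-of-the-envelope: annual storage rent divided by the
# value stored gives the storage cost as a fraction of value per year.

cash_value = 60_000_000   # midpoint estimate of the stored cash, dollars
annual_rent = 1800        # storage-locker rent per year, dollars

cost_rate = annual_rent / cash_value
print(f"{cost_rate:.4%} per year")  # 0.0030% per year
```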

[Cross-posted from]

Instances of civil forfeiture abuse are common. Indictments against law enforcement are not.

However, this week an Oklahoma grand jury returned an indictment against Wagoner County Sheriff Bob Colbert and one of his deputies in a civil asset forfeiture case. The indictment charges Sheriff Colbert and Deputy Jeffrey Gragg with extortion and bribery, among other things, stemming from the stop, arrest, and subsequent release of Torrell Wallace and a 17-year-old passenger.

According to the indictment, during a traffic stop of Wallace’s car, Deputy Gragg discovered $10,000 in cash.  When asked about the money, Wallace and his passenger both claimed it belonged to them and were subsequently arrested for “possession of drug proceeds.”

At the jail, the indictment alleges that Sheriff Colbert and Deputy Gragg told the men that they’d be allowed to go free if they signed over the $10,000 to the office’s asset forfeiture fund, which they did.

If this fact pattern sounds familiar, it’s because cases like this are not isolated incidents. The case of Javier Gonzalez sounds eerily familiar. In 2013 Sarah Stillman at The New Yorker documented several similar cases of seemingly extortive forfeiture actions. Our friends at the Institute for Justice have compiled many more.

Because the burden of proof for civil forfeitures is so low, even perfectly benign behaviors can result in seizures. These include carrying too much money, carrying money in an envelope or some other “unorthodox” fashion, or even “traveling to or from a known drug source city,” which seems to include virtually every major city in America.  Such a deficient process for government seizures of cash and property makes abuse inevitable, especially when the seizing agency is permitted to keep most or all of the proceeds for itself.

Sheriff Colbert’s lawyer, however, insists that the real culprit here is the push for forfeiture reform, and was quick to condemn the prosecution as politically motivated:

“We note that there is pending legislation in the Senate hoping to drastically change drug interdiction laws. Their accusations remain politically motivated.”

The “pending legislation” can only be a reference to efforts by State Senator Kyle Loveless to reform Oklahoma’s civil asset forfeiture laws.  Oklahoma has become one of the flashpoints in the national debate over civil forfeiture, and I’ve written previously on the hyperbolic rhetoric that Oklahoma law enforcement is using to shut down reform in the state.

As for the political motivations of reformers, it’s worth noting that support for civil asset forfeiture reform comes from all corners of the political spectrum, from the conservative Heritage Foundation, to the liberal ACLU, to libertarian organizations like the Institute for Justice. The bipartisan coalition in favor of reform has grown so massive that it seems the only factions still opposed to reform are the government agencies that directly profit from the seizures.

Sheriff Colbert’s lawyer also argued that the sheriff was simply following routine procedure:

“The underlying accusation involves a routine drug interdiction (lawful cash seizure of drug funds) by the sheriff’s department. The funds are not missing and are accounted for. The Sheriff’s Office timely deposited every cent of this money in the county’s treasurer account as required by state law. This money was earmarked for fighting drug trafficking to help protect the citizens.

“The basis for this interdiction as reported includes the accusation of deleted records. That accusation cannot be supported at trial. We are looking forward to being released from the gag order such that we can release the counter-evidence exonerating Sheriff Colbert.”

The lawyer’s statement makes no mention of the alleged arrangement to let the men go free in exchange for signing over their money, which seems to be the crux of the indictment. Even that tactic, however, has routinely taken place in other jurisdictions.

But to be fair, it’s understandable that law enforcement officers may sometimes struggle to distinguish civil asset forfeiture from extortion and bribery. Many reform advocates suffer from the same problem.

That is precisely why civil asset forfeiture should be abolished.

For more on civil asset forfeiture and the need for reform, check out our explainer on the topic.

Two articles in the same section of the Washington Post remind us of how government actually works. First, on page B1 we learn that it pays to know the mayor:

D.C. Mayor Muriel E. Bowser has pitched her plan to create family homeless shelters in almost every ward of the city as an equitable way for the community to share the burden of caring for the neediest residents.

But records show that most of the private properties proposed as shelter sites are owned or at least partly controlled by major donors to the mayor. And experts have calculated that the city leases would increase the assessed value of those properties by as much as 10 times for that small group of landowners and developers.

Then on B5 an obituary for Martin O. Sabo, who was chairman of the House Budget Committee and a high-ranking member of the Appropriations Committee, reminds us of how federal tax dollars get allocated:

Politicians praised Mr. Sabo, a Norwegian Lutheran, for his understated manner and ability to deliver millions of dollars to the Twin Cities for road and housing projects, including the Hiawatha Avenue light-rail line and the Minneapolis Veterans Medical Center.

Gov. Mark Dayton (D) said Minnesota has important infrastructure projects because of Mr. Sabo’s senior position on the House Appropriations Committee.

We all know the civics book story of how laws get made. Congress itself explains the process to young people in slightly less catchy language than Schoolhouse Rock:

Laws begin as ideas. These ideas may come from a Representative—or from a citizen like you. Citizens who have ideas for laws can contact their Representatives to discuss their ideas. If the Representatives agree, they research the ideas and write them into bills….

When the bill reaches committee, the committee members—groups of Representatives who are experts on topics such as agriculture, education, or international relations—review, research, and revise the bill before voting on whether or not to send the bill back to the House floor.

If the committee members would like more information before deciding if the bill should be sent to the House floor, the bill is sent to a subcommittee. While in subcommittee, the bill is closely examined and expert opinions are gathered before it is sent back to the committee for approval.

When the committee has approved a bill, it is sent—or reported—to the House floor. Once reported, a bill is ready to be debated by the U.S. House of Representatives.

Ah yes … an idea, from a citizen, which is then researched, and studied by experts, and debated by representatives, and closely examined and carefully considered. But it does help if you know the mayor, or if your representative has enough clout to slip goodies for his constituents into a bill – often without being researched, and studied by experts, and closely examined, and debated.

I wrote about this in The Libertarian Mind, in a chapter titled “What Big Government Is All About” – not the civics book version, but the way laws actually get made and money actually gets spent. 

The New York Times reports today that five key members of the US women’s national soccer team have filed a complaint with the Equal Employment Opportunity Commission charging U.S. Soccer, the private federation that oversees soccer in the United States, with wage discrimination. It seems that, on average (see the article for details), the federation pays women players considerably less than players on the men’s team, and that may be a problem under current law.

If Thomas Jefferson only knew what would follow from writing “All men are created equal.” What he meant, of course, was only that we all have equal rights to “life, liberty, and the pursuit of happiness,” and we’re free to pursue happiness however we think best. Most of us do that through voluntary association with others, which can result in all kinds of inequalities, yet violate the rights of no one. After all, whose rights are violated if Mia Hamm negotiates a salary with the team that is higher than a lesser player negotiates?

“Equality” hasn’t been pressed that far yet, but its life is still unfolding. In higher education, for example, we have Title IX of the Education Amendments of 1972, as amended over the years, which prohibits discrimination on the basis of sex. And that has led in stages to everything from the abolition of countless college men’s athletic programs, due to a paucity of female participants in equivalent programs, to sexual harassment charges against even female professors who write articles that some students find offensive, to college kangaroo-court trials of students charged with sexual assault, and much more.

Here at issue is the Equal Pay Act of 1963, part of the Fair Labor Standards Act of 1938, as amended and as administered and enforced by the EEOC. As one might imagine, the very idea of enforcing equal pay for “equal” work is fraught with peril, as the reams of exceptions in the Act only hint at. (Don’t take my word for that; read the Act.) Not surprisingly, the Act has become a full employment scheme for lawyers and a hammer for special-interest politicians.

When the EEOC is called on to see that women in the WNBA are paid the same as LeBron James, Kevin Durant, and other NBA stars, maybe we’ll see this “fairness” fiasco seriously called into question. But don’t bet on it.

The federal government spends about $30 billion a year on the war on drugs. Much of the spending is wasteful and counterproductive. This week, for example, an auditor’s report revealed how the drug bureaucracy flushed $86 million down the drain on an anti-drug aircraft that was never used.

The Washington Post described this Drug Enforcement Administration (DEA) and Department of Defense (DOD) boondoggle:

The plan was for DOD to modify a DEA plane to be used in counter-narcotics operations in a combat zone. … The Justice Department’s Office of the Inspector General (IG) determined “collectively, the DEA and DOD spent more than $86 million to purchase and modify a DEA aircraft with advanced surveillance equipment to conduct operations in the combat environment of Afghanistan, in what became known as the Global Discovery Program. We found that more than 7 years after the aircraft was purchased for the program, it remains inoperable, resting on jacks in Delaware, and has never flown in Afghanistan.”

The IG found that the “program has cost almost four times its original anticipated amount of $22 million.” Sadly, this sort of failure is par for the course when it comes to federal capital investments.  

Thank goodness for the IGs who uncover such waste, but what will come of these findings? Will anyone be fired? Will policymakers begin to rethink the drug war? Not yet, it seems. When the Washington Post asked the DEA and DOD about the report, “the Pentagon did not reply and the DEA response was short boilerplate.”

For more on the government’s drug war, see Jeff Miron’s work here.

Marijuana is now legal under the laws of four states and the District of Columbia, but not under federal law. And this creates huge headaches for marijuana businesses: 

Two years after Colorado fully legalized the sale of marijuana, most banks here still don’t offer services to the businesses involved.

Financial institutions are caught between state law that has legalized marijuana and federal law that bans it. Banks’ federal regulators don’t fully recognize such businesses and impose onerous reporting requirements on banks that deal with them.

Without bank accounts, the state’s burgeoning pot sector—2,500 licensed businesses with revenue of $1 billion a year, paying $130 million in taxes—can’t accept credit or debit cards from customers, Colorado officials say.

Marijuana-related businesses instead use cash to pay their employees, purchase equipment or pay taxes to the state. Reports abound of business owners refurbishing retired armored bank trucks to transport money and hiring heavily armed security guards.

The best solution is repeal of federal prohibition. This is not on the policy table yet, but if more states legalize marijuana in November (at least five states are likely to vote on the issue), the pressure on federal policy might just boil over.

Plant pathogens have long been a thorn in the side of the agricultural industry, reducing crop production by 10 to 16 percent annually and costing an estimated $220 billion in economic losses (Chakraborty and Newton, 2011). What is more, there are concerns that such damages may increase in the future if temperatures rise as predicted by global climate models in response to CO2-induced global warming. Noting these concerns, Sabburg et al. (2015) write that “to assess potential disease risks and improve our knowledge of pathogen strengths, flexibility, weakness and vulnerability under climate change, a better understanding of how pathogen fitness will be influenced is paramount.”

In an attempt to obtain that knowledge, the team of four Australian researchers set out to investigate the impact of rising temperatures on Fusarium pseudograminearum, the “predominant pathogen causing crown rot of wheat in Australia” that is responsible for inducing an average of AU$79 million in crop losses each year. More specifically, they examined “whether the pathogenic fitness, defined as a measure of survival and reproductive success of F. pseudograminearum causing crown rot in wheat, is influenced by temperature under experimental conditions.”

The experiment was conducted in controlled-environment glasshouses at the Queensland Crop Development Facility in Queensland, Australia, where eleven lines of wheat were grown under four day/night temperature treatments (15/15°C, 20/15°C, 25/15°C and 28/15°C for 14-hour days and 10-hour nights). The first three treatments were representative of “the range of average maximum temperatures of the various wheat-growing regions across Australia,” whereas the fourth (28/15°C) treatment was intended to simulate a future warming scenario. The minimum temperatures of all treatments were kept at 15°C because “night-time temperatures over the last 50 years in the large majority of wheat-growing regions across Australia have not shown an increasing temperature trend in all seasons.” The eleven wheat lines were selected based on known susceptibilities and resistances to crown rot. Fourteen days after sowing, a portion of each line was infected with F. pseudograminearum and then grown to maturity.

So what did the researchers find?

With respect to disease severity, Sabburg et al. report it was highest under the lowest temperature treatment and declined with increasing temperature (Figure 1a), and this general reduction was noted in all eleven wheat lines. Similarly, pathogen biomass was also reduced as treatment temperature increased (Figure 1b). According to the researchers, “on average, warming reduced pathogen biomass in stem base (PB-S) by 52% at either 25/15°C or 28/15°C compared with the biomass at 15/15°C.” And it also decreased the amount of relative pathogen biomass from the stem base to flag leaf node. (The flag leaf is the top leaf on the plant.)

A third fitness measure of F. pseudograminearum – deoxynivalenol (also known as “vomitoxin,” for an obvious reason) content (DON) – was also reduced in the stem base and flag leaf node tissue as the treatment temperature increased. And the significance of this finding was noted by the authors as “an encouraging result if we consider temperature rises in the future,” because “DON can make food sources including wheat grains unsafe for human or animal consumption.” That’s putting it mildly!

Figure 1. Effect of temperature on (Panel A) disease severity as expressed by the length of stem base browning (cm) and (Panel B) relative pathogen biomass in stem base (PB-S) and flag leaf node tissue (PB-F) as measured by Fusarium DNA relative to wheat DNA. All measurements in wheat plants were made at maturity following stem base inoculation by Fusarium pseudograminearum. Adapted from Sabburg et al. (2015).

In light of the above results, Sabburg et al. conclude that “this study has clearly established that temperature influences the overall fitness of F. pseudograminearum,” and that “based on our findings, warmer temperatures associated with climate change may reduce overall pathogenic fitness of F. pseudograminearum.” And given the annual production and monetary damages inflicted by this pathogen on wheat, this is news worth both reporting and celebrating!



Chakraborty, S. and Newton, A.C. 2011. Climate change, plant diseases and food security: an overview. Plant Pathology 60: 2-14.

Sabburg, R., Obanor, F., Aitken, E. and Chakraborty, S. 2015. Changing fitness of a necrotrophic plant pathogen under increasing temperature. Global Change Biology 21: 3126-3137.


Many worry about international trade and the increased competition to which it leads, while overlooking trade’s incredible benefits. In a refreshing Wall Street Journal article, the founder and CEO of FedEx, Fred Smith, reflects on how trade and deregulation have improved American living standards over the course of his lifetime. He recalls how many luxuries enjoyed by few during his youth plummeted in price and became accessible to more people than ever before. 

“Foreign travel was exotic, expensive and rare among the population as a whole” during the 1960s, Smith reminds us. Industry deregulation and international Open Skies agreements changed that. “Long-distance telephone calls were expensive, international calls prohibitively so,” and cell phones did not even exist yet. “From furniture to TVs and appliances, and especially automobiles, American brands dominated consumer spending” across the United States, and were often out of reach to the less affluent. Then trade worked its magic: 

[Trade] has rewarded Western consumers with low-cost products that have substantially improved standards of living. [Today] Americans and Europeans don’t need to be affluent to afford cell phones, digital TVs, furniture and appliances.

The moral of Smith’s story is clear: competition, which trade and deregulation facilitate, has an extraordinary tendency to enhance efficiency and bring down prices.

As we have documented, the falling cost of living improves the lives of ordinary people irrespective of how fast incomes rise. The few areas where costs have gone up instead of down—education, housing, and healthcare—have been subject to severe market distortions. Subjecting education, housing, and healthcare to more competition could have salutary results. Falling cosmetic procedure prices, for example, provide insight into how deregulated healthcare might affect healthcare costs. 

Smith’s article also recounts how increased competition has helped to move technology forward. Technology, in turn, has furthered expansion of trade—a virtuous cycle: 

During the 1970s and 1980s, while container ships and planes became increasingly efficient with each successive model, newly developed fiber-optic cables (patented in 1966) began running underseas, connecting the world at the speed of light, lowering voice and data-communication costs by orders of magnitude. Financial markets became globally integrated and transactions multiplied at an astounding rate.

In addition to improving lives in the United States, trade has also helped lift billions of people out of extreme poverty around the world, notably in East Asia.

While the vast majority of Americans are made better off by trade, that is of small comfort to those working in industries that are having trouble competing with the rest of the world. Their disappointment contributes to the popularity of anti-trade political figures like Donald Trump and Bernie Sanders. 

It is important to acknowledge the “destructive” part of “creative destruction”—and trade contributes to creative destruction—but also to put trade in a proper perspective. The positives markedly outweigh the negatives. As Fred Smith concludes: 

More than three billion people are now connected to the Internet. Billions more have aspirations for a better life and are likely to come online as global consumers. The odds are good, therefore, that today’s remarkable transport systems and technologies will continue to improve and facilitate an even larger global economy as individual trade is becoming almost “frictionless.”

History shows that trade made easy, affordable and fast—political obstacles notwithstanding—always begets more trade, more jobs, more prosperity. From clipper ships to the computer age, despite economic cycles, conflict and shifting demographics, humans have demonstrated an innate desire to travel and trade. Given this, the future is unlikely to diverge from the arc of the past.

MetLife notched an important win this week, securing a ruling from a federal court that it is not a systemically important financial institution (SIFI) under Dodd-Frank. Like much of the Dodd-Frank Act, the SIFI designation has been controversial since its introduction in 2010. The designation is intended to help the Financial Stability Oversight Council (FSOC, another Dodd-Frank creation) to monitor companies whose demise could destabilize the country’s financial system. Putting aside the question of whether a group of regulators in Washington could see and stop a crisis more quickly than those in the trenches at the nation’s financial giants, the designation triggers a host of regulatory requirements that many companies would prefer to avoid. 

One of the most controversial aspects of the SIFI designation is its black box nature. There is no publicly available SIFI check-list. The rationale for following a more principles- than rules-based approach may be that the definition needs to remain flexible. Companies may be motivated to avoid the letter of such a rules-based approach without avoiding the spirit, leaving FSOC without the ability to monitor a company that, despite not triggering the SIFI designation, still poses a risk to the financial system. But this has left companies in a bind. The SIFI designation has real and substantial ramifications for any company that triggers it, but companies have been unable both to avoid designation and to challenge designation once applied.  It’s hard to argue that you don’t fit a certain definition if you don’t know what the definition is.

Of course, not all companies want to avoid SIFI status. Although some have argued that FSOC and other aspects of Dodd-Frank will prevent future bailouts, it seems naïve to think that the government could designate a company as a risk to the entire financial system and then sit idly by as it burns.  SIFI designation is a wink and a nod, all but assuring government support if the designated company founders in rocky times.

In its case against the government, MetLife argued that its designation as a SIFI was “arbitrary and capricious.” This is the famously deferential standard by which actions by federal agencies are judged. Rarely will a court set aside an agency’s decision for failing to clear this very low bar. And yet this is exactly what Judge Rosemary Collyer found. Although the government may appeal the decision, the ruling will likely spur other SIFIs to challenge their designations as well.

More importantly, however, Judge Collyer’s decision may provide the check-list that companies have been seeking. Judge Collyer issued her opinion under seal, meaning that its details are not currently public. She has asked the parties to weigh in on whether any portions of it should remain hidden, signaling her interest in making the opinion public in the near future. Once the opinion, and Judge Collyer’s legal analysis, is made known, I foresee many lawyers scrambling to set their clients on MetLife’s path.

My previous post, inquiring as to the actual impact of the Dodd-Frank Act on bank capital, elicited some comments, mainly in the form of tweets, from Dodd-Frank’s defenders.  Here are some of the issues raised, along with my response.

A tweet from former Schumer staffer and current law professor David Min suggests that, while my analysis referred to depositories, the real issue is consolidated bank holding companies. I don’t think the distinction matters much, because the capital requirements in Section 2(o)(1) of the Bank Holding Company Act mirror those in Section 38 of the Federal Deposit Insurance Act, which I had discussed. In any case, the fact remains that bank regulators had more than sufficient authority before Dodd-Frank to set almost any capital standards they wanted, whether for bank holding companies or their depository subsidiaries.

That said, the question remains whether  bank holding companies’ capital levels  have in fact gone up since Dodd-Frank.  Let’s see.  In 2010, the year  Dodd-Frank was passed, the largest bank holding companies, on a consolidated basis, had a tier 1 leverage ratio of 9.05%.  By 2015 the ratio had risen to 9.69% — a mere 64 basis points.  Again, color me unimpressed.  Other leverage measures for holding companies show slightly different numbers, but the magnitudes are all similar.

In another tweet, Mike Konczal suggests that the 64 basis-point increase, although slight, is nevertheless the result of Dodd-Frank. But that conclusion may well be doubted. As I’ve observed, regulators might have insisted on a similar, if not greater, increase before Dodd-Frank. In fact, I believe the increase is most likely due to the Basel process, rather than to domestic regulatory pressures.

Konczal, in contrast, suggests that the increase may be due to Dodd-Frank’s capital “surcharge” on very large banks. Were that the case, one would expect the largest holding companies to have witnessed the greatest increases in capital, with smaller banks not subject to the surcharge experiencing smaller gains, or none at all. As mentioned above, the leverage ratio for the largest bank holding companies increased by 64 basis points since Dodd-Frank. But holding companies just below this in size ($3B – $10B), which are not subject to the Dodd-Frank surcharge, also witnessed an increase of 64 basis points. If we go further down the list, and look at the community banks between $500MM – $1B, the increase in the capital ratio is actually higher, at 127 basis points. This runs counter to Konczal’s suggestion. Now other things may be going on here, so my evidence doesn’t necessarily amount to a refutation. But it does at least cast doubt upon the claim that Dodd-Frank capital surcharges were an important cause of the overall (modest) increase in post-2010 bank capital ratios.
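The basis-point comparison above is simple arithmetic. As a quick sketch, using only the tier 1 leverage-ratio figures quoted in this post (1 basis point = 0.01 percentage point):

```python
# Basis-point arithmetic behind the comparison in the text.
# 1 basis point (bp) = 0.01 percentage point.

def basis_point_change(start_pct, end_pct):
    """Change between two percentage ratios, expressed in basis points."""
    return round((end_pct - start_pct) * 100)

# Largest bank holding companies: tier 1 leverage ratio of 9.05% in 2010
# versus 9.69% in 2015 (the figures quoted in the post).
print(basis_point_change(9.05, 9.69))  # 64
```

The point of doing it explicitly: a 64 bp rise on a 9.05% base is roughly a 7% relative increase in capital, which is why the post calls it modest.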

Although my post specifically concerned bank capital, both Min and Konczal also say that the real issue concerns non-banks (a view nicely consistent with candidate Clinton’s position). They suggest that, even if I’m correct about the impact of Dodd-Frank on bank capital, Dodd-Frank has nonetheless made a big difference by allowing bank regulators to extend both capital and liquidity requirements to designated large non-banks.

So, let’s consider that possibility. So far, only four non-banks have been designated under Title I of Dodd-Frank. Three of these non-banks are predominantly insurance companies. That’s important for a number of reasons. For starters, the Dodd-Frank liquidity requirements explicitly exclude nonbank financial companies that “have substantial insurance activities.” It follows that three of the four non-banks cannot possibly have had their capital holdings raised as a result of Dodd-Frank’s requirements. Also, despite what Konczal’s tweets seem to suggest, insurance companies were not “unregulated” prior to Dodd-Frank. State insurance regulators monitor insurance companies’ liquidity, impose liquidity requirements, and enforce capital standards, as they’ve been doing to some extent since at least 1837.

In fact it makes little sense to imagine, as Min appears to do, that imposing bank-like regulation on non-banks will raise their capital levels, since non-banks have always been far less leveraged than banks. Even Goldman already met the standards of Dodd-Frank before that law was passed. Yes, Title I of Dodd-Frank subjects insurance companies, hedge funds, and some other non-banks to additional regulatory oversight; but it doesn’t follow that it will cause them to hold more capital, for they already hold more than that law requires — as should not be surprising given that they also do not enjoy the level of government guarantees enjoyed by the banks. In fact, the only non-banks that are more leveraged than banks, and notoriously so, are Fannie Mae and Freddie Mac. Yet neither Dodd-Frank nor any other recent legislation attempts to impose meaningful capital requirements on either.

Two much-ballyhooed arguments for Dodd-Frank are that it has given regulators needed tools they lacked beforehand, and that it has made the financial system safer by subjecting certain non-banks to bank-like regulation. Both arguments are false, and dangerously so. If we truly wish to avoid future financial crises, we had better assess Dodd-Frank’s consequences honestly, instead of confusing what many claimed it would accomplish with what it has accomplished in fact.


William Galston’s Wall Street Journal column, “Why Trade Critics Are Getting Traction,” asks why U.S. employment in manufacturing fell from 17.2 million in December 2000 to 12.3 million last year.    He suggests that “import penetration from China [not Mexico] has been responsible for up to 20% of U.S. job losses.” But “up to” 20% explains very little, and that figure is at the high end of a range of estimates about 1999-2011 from a working paper by David Autor, David Dorn and Gordon Hanson. They speculate that “had import competition not grown after 1999” then there would have been 10% more U.S. manufacturing jobs in 2011.  In that hypothetical sense, “direct import competition [would] amount to 10 percent of the realized job loss” from 1999 to 2011.  Since 2007, however, the study’s authors find “a marked slowdown in import expansion following the onset of the global financial crisis, which halted trade growth worldwide.”

Deep recession and weak recovery, not imports, are what slashed manufacturing jobs after 2007. In reality, imports always fall in recessions. Although Autor, Dorn and Hanson emphasize imports of consumer goods (clothing and furniture), nearly half of U.S. goods imports (47.7% last year) are industrial supplies and capital goods, which are essential inputs into expanding U.S. production. That is a big reason why imports rise when U.S. industry expands and fall in slumps.

Even if “up to” 20% of manufacturing jobs lost since 2007 could be blamed on imports from China, as Galston claims, that need not mean the overall numbers of U.S. jobs were reduced. “There is no evidence,” writes Galston, “that increased competition from China has produced offsetting employment increases in other industries whose products are traded internationally [emphasis added].” Confining overall employment effects to “traded goods,” as Autor, Dorn and Hanson do, arbitrarily excludes services – such as financial and legal services, accounting, advertising, travel, telecom and insurance. Services account for 32% of U.S. exports, and the U.S. runs a large and growing surplus in services trade with China ($28 billion in 2014) and with the world ($233 billion). Dollars foreign firms earn by exporting goods to the U.S. are commonly used to import services from the U.S. or to invest in U.S. real and financial assets; both those activities create U.S. jobs. Hollywood, Madison Avenue and Wall Street are big, high-wage U.S. exporters.

Confining the job impact to traded goods also excludes U.S. jobs in transporting, wholesaling and retailing Chinese goods (Walmart, Amazon…), as well as shipping U.S. exports to China and Hong Kong. Incidentally, the U.S. ran a $30.5 billion trade surplus with Hong Kong last year, which isn’t counted as trade with China though it really is.

Galston acknowledges that “rising productivity” [output per worker] is “part of the story” about manufacturing jobs.  In fact, it is essentially the whole story from 1987 to 2007, when U.S. manufacturing output nearly doubled.  The deep recession and slow recovery explain what happened to manufacturing jobs over the past ten years, not foreign trade.  

This morning, I attended an interesting speech by Jack Lew, Secretary of the Treasury, on the future of economic sanctions. The speech was notable in that Lew made not only a defense of the effectiveness of sanctions, but also highlighted their potential costs, a variable that is too often missing from debates over sanctions policy.

Some of the points Lew made – like the argument that multilateral sanctions are better than unilateral ones – were hardly novel. Yet others were more interesting, including the argument that sanctions implementation should be based on cost/benefit analysis and an assessment of whether they are likely to be successful. Though such an approach sounds like common sense, it has not always been the rule.

He also focused on the importance of lifting sanctions once they’ve achieved their ends. This is a rebuke to some, particularly in Congress, who have argued for reintroducing the sanctions on Iran lifted by the nuclear deal through some other mechanism. As he pointed out, refusing to lift these sanctions now means that they will be less effective in the future: if states know sanctions will remain in place regardless of their behavior, what incentive do they have to change it?

Perhaps most interestingly, Lew argued for the ‘strategic and judicious’ use of sanctions and against their overuse. This is an interesting argument from an administration for whom sanctions have often been the ‘tool of first resort.’ In doing so, he referenced both growing concerns about the costs of sanctions from the business community, and the broader strategic concern that overuse of sanctions could weaken the U.S. financial system or dollar in the long-run.

I still disagree with the Secretary on several points. While he is correct that nuclear sanctions on Iran have broadly been a success, he dramatically overstates the effectiveness of sanctions in the more recent Russian case. Much of the economic damage in that case was the result of falling oil prices, and sanctions have produced little in the way of coherent policy change inside Russia.

He also overstates the extent to which today’s targeted sanctions avoid broad suffering among the population. In fact, evidence suggests that modern sanctions still suffer some of the same flaws as traditional comprehensive trade sanctions, allowing the powerful to deflect the impact of sanctions onto the population, and reinforcing, not undermining, authoritarian dictators.

Despite this, it is refreshing to hear concerns about the long-term implications of runaway sanctions policy expressed by policymakers. In alluding to these concerns – many of which have been noted for some time now by researchers – the Treasury Secretary may help to spark a broader policy discussion of the benefits and costs of sanctions. If we wish to retain sanctions as an effective tool of foreign policy moving forward, such discussion is vital.

For more on some of the big issues surrounding sanctions policy, you can read some of Cato’s recent work on sanctions policy here and here, or check out the video from our recent event on the promises and pitfalls of economic sanctions.

California lawmakers and labor unions have reportedly reached a deal to increase the minimum wage to $15 an hour by 2022, and index it to inflation after that. If this deal becomes a reality, California would be the first statewide experiment with the $15 minimum wage. The ratio of the minimum wage to the median wage in California would be one of the highest in the world among high-income countries. California’s minimum wage deal brings with it unprecedented risks, and any resulting adverse effects will be borne primarily by younger workers, people with limited job skills, and people living outside of major cities.

Ratio of Minimum Wage to Median Wage, California (2022) and High-Income Countries (2014)


Source: OECD.

Note: European OECD countries, with the addition of Australia, Canada and the United States. California projection assumes two percent real wage growth.

Unlike the recent deal in Oregon, which included a tiered minimum wage with lower levels in smaller cities and rural areas, California’s increase would apply uniformly throughout the state. While major cities like San Francisco or San Jose that generally have higher wages might be able to absorb some of the adverse effects of this increase, non-metro areas will be the most impacted by this deal. The New York Times estimates that in 2022 the $15 minimum will be 40 percent of the median wage in San Jose, but 74 percent in Fresno, significantly higher than France and approaching the 77 percent seen in Puerto Rico. Arindrajit Dube, a prominent minimum wage researcher who has found relatively small disemployment effects from past increases, acknowledged that “In rural areas like Fresno, a majority of workers will be affected.” There is also more slack in the labor market in places away from the major urban metros: ten of the thirteen metropolitan statistical areas (MSAs) with the highest unemployment rates in the country are in California, and people in these places will find it even harder to deal with these minimum wage increases.
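The Note under the chart says the California projection assumes two percent real wage growth per year. A minimal sketch of how such a projection works, using hypothetical current median hourly wages chosen purely for illustration (the $33 and $18 figures below are my assumptions, not official data):

```python
# Sketch of the minimum-to-median wage ratio projection described above.
# Assumes 2% real wage growth per year, as in the chart's note.
# The 2016 median wages below are hypothetical, illustrative figures.

MINIMUM_WAGE_2022 = 15.00

def project_median(median_today, years, real_growth=0.02):
    """Project a median hourly wage forward at a constant real growth rate."""
    return median_today * (1 + real_growth) ** years

def min_to_median_ratio(minimum, median):
    """Kaitz-style ratio of the minimum wage to the median wage."""
    return minimum / median

# Hypothetical current median hourly wages (illustrative only).
medians_2016 = {"San Jose": 33.00, "Fresno": 18.00}

for city, median in medians_2016.items():
    median_2022 = project_median(median, years=6)
    ratio = min_to_median_ratio(MINIMUM_WAGE_2022, median_2022)
    print(f"{city}: projected median ${median_2022:.2f}, ratio {ratio:.0%}")
```

With these illustrative inputs the ratios come out near the 40 percent (San Jose) and 74 percent (Fresno) figures cited above; the key point is how sensitive the ratio is to the local median wage.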

Another potential pitfall of the new deal is that it would effectively lock California into a rigid trajectory for the next six years, which will limit the state economy’s ability to adapt to changing circumstances. With a phase-in stretching to 2022, it’s likely that a recession will hit at some point during this period. This could cause wage growth to stall out, which would push the ratio of the minimum to the median wage even higher. Even Governor Brown recognized the implicit trade-offs in minimum wage increases of this magnitude, warning in his most recent budget proposal that in this scenario “such an increase would require deeper cuts to the budget and exacerbate the recession by raising businesses’ costs, resulting in more job loss.” The Sacramento Bee reports that the tentative deal includes a provision giving the governor the ability to temporarily halt future increases during a recession, but given how quickly Governor Brown has reversed course in the face of pressure from unions and activists, it is hard to see a future governor actually halting increases.

The increases do not end in 2022. After that, the minimum wage would be indexed to inflation. One of the arguments employed for increases now is that the real value of the minimum wage has eroded over time, because it has not been linked to inflation. This leads to a “sawtooth pattern” in the real value of the minimum wage. Businesses might be less likely to respond to minimum wage increases that they perceive as temporary in nature. Shifting investment and hiring decisions is disruptive and costly for employers, so if the impact of the minimum wage increase will be attenuated by inflation and broader wage growth, the adjustments will be more muted. Responses to a minimum wage increase of this magnitude, one that will then be indexed to inflation, will be significantly larger than in most previous cases that have been analyzed.

Sawtooth Pattern: Real Value of Federal Minimum Wage


Source: Federal Reserve Bank of St. Louis, Federal Reserve Economic Data.

Studies focusing on discrete job levels over a short time frame might be failing to accurately measure where these adjustments are taking place.  Jonathan Meer and Jeremy West suggest that the impact of a minimum wage increase is primarily driven by reduced job creation, rather than companies firing people. Over a longer time period, they estimate that a 10 percent increase in the minimum wage leads to a 0.8 percent reduction in total employment, with these effects concentrated among lower-skilled workers. California’s much greater minimum wage increase would lead to slower job growth in the future, which would disproportionately harm people at the lower end of the skills spectrum.
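To see the rough magnitudes implied by the Meer and West estimate, here is a naive linear extrapolation. Their 10-percent-increase, 0.8-percent-reduction finding corresponds to an employment elasticity of about -0.08; applying it to an increase of California's size goes well outside the range the study examined, so treat the result as illustrative only, not a prediction:

```python
# Back-of-envelope application of the Meer and West estimate cited above:
# a 10% minimum wage increase is associated with roughly a 0.8% reduction
# in total employment over time (an elasticity of about -0.08).
# Linear extrapolation to a 50% increase is out of sample; illustrative only.

EMPLOYMENT_ELASTICITY = -0.08  # pct employment change per pct wage change

def employment_effect(old_minimum, new_minimum,
                      elasticity=EMPLOYMENT_ELASTICITY):
    """Naive linear extrapolation of the long-run employment effect."""
    pct_increase = (new_minimum - old_minimum) / old_minimum
    return elasticity * pct_increase

# California's minimum wage in 2016 was $10.00; the deal phases in $15.00.
effect = employment_effect(10.00, 15.00)
print(f"Implied long-run employment change: {effect:.1%}")  # about -4%
```

Even as a crude illustration, the sign and rough size of the effect explain why the phase-in's concentration on lower-skilled workers matters.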

With this deal, California ventures into largely uncharted waters for the United States experience with the minimum wage, and the ratio of minimum wage to median wage would be one of the highest in the world. While other cities have passed a $15 minimum, and Oregon recently enacted a significant increase, California would impose uniform minimum wage hikes throughout the entire state, which could especially harm people outside major cities. After reaching $15, the minimum wage will be indexed to inflation, which could lead to disemployment effects larger than many recent studies have found. Young workers and people with limited job skills will bear the brunt of any negative consequences from California’s minimum wage experiment, while rural areas and smaller towns will see the most disruption.