Cato Op-Eds

Individual Liberty, Free Markets, and Peace

In the wake of their successful Tax Cuts and Jobs Act last year, Republicans are now considering Tax Reform 2.0.

For individuals, the 2017 law trimmed tax rates and changed deductions and exemptions. But it did not fix the tax code’s bias against personal savings, which is a serious problem given that many Americans save so little. 

One idea the GOP is mulling for 2.0 is the creation of Universal Savings Accounts (USAs). Such accounts were considered last year but were not included in the final bill.

USAs would be like vastly improved Roth IRAs. Individuals would contribute up to, say, $10,000 a year of their after-tax income, and then the account earnings would grow tax-free.

Account assets could be withdrawn tax- and penalty-free at any time for any reason, which would make the accounts simple, flexible and liquid.

You can read the rest in this new oped in The Hill.

Breaking News: Ways and Means Committee Republicans today released their framework for Tax Reform 2.0, and it includes Universal Savings Accounts.

You can read more about this revolutionary savings vehicle in this Cato study co-authored with Ryan Bourne.


I have written here and here about how patients have become the civilian casualties of the misguided policies addressing the opioid (now predominantly fentanyl and heroin) crisis. The policies have dramatically reduced opioid prescribing by health care practitioners and have pressured them into rapidly tapering or cutting off their chronic pain patients from the opioids that have allowed them to function. More and more reports appear in the press about patients becoming desperate because their doctors, often fearing they may lose their livelihoods if they are seen as “outliers” by surveillance agencies, under-treat their pain or abruptly cut them off from their pain treatment regimen.

A story in the July 23 Louisville (KY) Courier Journal illustrates the harm this is causing in Kentucky. “Doctors say the federal raids on medical clinics lead to unintended consequences — patients thrust into painful withdrawals and left vulnerable to suicide or dangerous street drugs,” states the article. Dr. Wayne Tuckerson, President of the Greater Louisville Medical Society, said, “[When investigators] go in with a sledgehammer and shut down a practice without consulting community physicians, suddenly we have patients thrown loose.” He went on to say, “Docs are very much afraid when it comes to writing pain medications…We don’t want patients to become addicted. And we don’t want to have our licenses — and therefore our livelihoods — at stake.” And if pharmacists in the area learn of a police raid or investigation of a medical practice—regardless of the outcome of that investigation—many of them refuse to fill legal prescriptions presented by patients of those practitioners.

Last week Oregon regulators announced plans for a “forced taper” of chronic pain patients in its Medicaid system. This contradicts, and is far more draconian than, the recommendations of the 2016 guidelines issued by the Centers for Disease Control and Prevention, which themselves have been criticized as not evidence-based. The Oregon Health Evidence Review Commission announced:


The changes include a forced taper for all chronic pain patients on opioids (within a year), no exceptions. Opioids will be replaced with alternative treatments (cognitive behavior therapy (CBT), acupuncture, mindfulness, pain acceptance, aqua therapy, chiropractic adjustments, and treatment with non-opioid medications, such as NSAIDS, Acetaminophen).


This proposal has sparked an outcry from patients and patient advocacy groups in Oregon. While this policy proposal only applies to Medicaid patients, they fear it will soon become the standard adopted by all third-party payers in the state.

University of Alabama Medical School Associate Professor Stefan Kertesz, an addiction medicine specialist at the Birmingham VA Medical Center, tweeted in reaction to this proposal:


I cannot imagine a more violent rejection of the CDC Guideline on Prescribing Opioids of 2016 than the plan current before Oregon Medicaid : forced taper to 0 mg of all opioid receiving pain patients.



These policies are based on the false narrative that the overdose problem is primarily the result of doctors prescribing opioids to their patients in pain and getting them hooked. In fact, the problem has always been a product of drug prohibition—non-medical users accessing opioids on the black market. To illustrate, an often-overlooked study published in the American Journal of Psychiatry in 2009 followed more than 27,000 OxyContin addicts entering rehab programs from 2001-2004. It found 78 percent said they had never obtained a prescription for OxyContin for any medical reason, 86 percent said they used the drug because they liked the “buzz” or “high,” and 78 percent reported prior treatment for substance abuse disorder.

There have been well-documented cases of unscrupulous doctors teaming up with dishonest pharmacists to operate “pill mills”—gaming the third-party payment system to receive compensation for running drug-dealing operations. But these bad apples have largely been rolled up by law enforcement and represented an exception to the rule of how doctors treat pain. Nevertheless, these stories continue to feed the narrative.

Dr. Charles Argoff, a professor of neurology at Albany Medical College and Director of its Comprehensive Pain Center, recently surveyed colleagues in a report for the medical education website entitled “Readers Respond: Stop Stigmatizing Opioids.” The majority of clinicians dealing with pain bemoan the hysteria driving the governments’ response to the overdose problem. One clinician emphasized, “Dependence is the rule, addiction is the exception.” Another complained about the “misinformation, distortion of evidence-based research, political influence, and even mainstream media sensationalism-style reporting, which together has deteriorated to such an extent that it is beyond belief…A person should review all available information that is opposing the arrogantly forgotten patient.”

Dr. Argoff concluded his survey with the following comment:


In summary, I hope these comments further epitomize and suggest how complicated opioid therapy is. But what I am struck by is how much these comments point to identifying that subset of individuals for whom these medications are successful and also outlining the risk of so many other medical treatments, both interventional and noninterventional, that we consider for our patients with chronic pain.


Meanwhile the civilian casualties mount. Dr. Thomas Kline, a physician in North Carolina, maintains a growing list of patients who have committed suicide after being cut off from their pain medication. Expect the deaths—of patients as well as non-medical users—to continue until policymakers come to the realization that the root cause of the problem is drug prohibition.

Lately the old-timers here at Cato’s Center for Monetary and Financial Alternatives — which is to say, Jim Dorn and I — have been talking a lot about the Phillips Curve, which seems to be playing a part in monetary policy discussions today almost as big as the one it played in the 1970s. And you can bet that, because both Jim and I actually remember what happened in the 70s, and afterwards, neither of us has a good word to say about the concept, except as a very reduced-form means for describing very transient relationships.

Because Jim has a CMFA Policy Briefing on Phillips Curve reasoning in the works, I won’t belabor here his — and my — general objections to it. My main concern is to draw attention to a current example of that reasoning at work, in the shape of a recent New York Times op-ed by Jared Bernstein, entitled “Why Real Wages Still Aren’t Rising.”

Mr. Bernstein notes that, despite the low and still-falling U.S. unemployment rate, real wage rates for workers in factories and the service industries have been stagnant for several years, and he finds this stagnancy puzzling: According to the BLS, he writes, as of this June money “wages” (presumably meaning hourly wage rates) grew at an annual rate of 2.7 percent, whereas “looking at the historical link between wages and unemployment, wage growth should have been rising about a percentage point faster.” The “historical link” to which Mr. Bernstein refers is based partly on the Phillips Curve — a negative relation between the unemployment rate on one hand and the rate of either nominal “wage” or price inflation on the other — and partly on the historical tendency for the rate of nominal wage inflation to exceed that of price inflation. In the present instance, prices have failed to rise as rapidly as the decline in unemployment suggests they should, while wages — factory workers’ wages especially — have been rising still less rapidly.

How to account for this recent failure of reality to conform to the implications of the Phillips Curve? For Mr. Bernstein, this development

is mainly the outcome of a long power struggle that workers are losing. Even at a time of low unemployment, their bargaining power is feeble… . Hostile institutions — the Trump administration, the courts, the corporate sector — are limiting their avenues for demanding higher pay.

Eventually Mr. Bernstein also points a finger at “the increased concentration of companies and their unchecked ability to collude against workers.”

We’ve No Need for These Hypotheses

The least unfavorable thing that can be said about such shadowy conjectures is that one ought not to resort to them without first exhausting more prosaic possibilities.

To his credit Mr. Bernstein himself recognizes one such possibility: the well-known, general slowdown in productivity growth since the recession, which he allows to have placed “another constraint on wages.” He recognizes as well the possibility that the recent Trump-initiated trade war may have exacerbated the decline, though he dismisses it on the grounds that “ ‘final products’ — things that consumers buy versus intermediate materials used for production — have so far been spared.” Here Bernstein is surely mistaken: when intermediate materials get more expensive, so do final products produced at home. Yet the trade war does nothing to boost nominal wages. So real wages may already have been adversely affected, not by a Trump administration anti-labor conspiracy, informed by its hostility to ordinary workers, but by one of that administration’s avowed policies, informed by its ignorance of rudimentary trade theory.

In fact, as we’ll see, the general decline in productivity since the Great Recession and the more recent trade war are alone quite capable of accounting for a considerable decline in the once substantial difference between the rate of wage inflation and that of output price inflation, and hence in the growth rate of real wages.

A Phillips Curve Refresher Course

Any historical Phillips Curve relationship is just that: a historical relationship. Whatever it was then, it may have shifted around since. Consequently a decline in real wages that might, for any given Phillips Curve relationship, point to a weakening of the labor market, may point to other developments, including declining productivity, when the (short run) Phillips Curve itself has been on the move.

To overlook the ever-shifting nature of Phillips Curves is to neglect something that was driven home, painfully, to an entire generation of economists during the 1970s, as they witnessed the baneful consequences of attempts to exploit the “historical link” between inflation and unemployment represented by the 1960s vintage Phillips Curve. In particular, it’s to neglect the fact that any short-run Phillips Curve relationship depends on some underlying state of aggregate supply. When that state changes, either because workers come to anticipate future inflation, as they did in the 70s, or because productivity declines, as it has recently, the former Phillips Curve breaks down, and a new one takes its place.

For the sake of readers seeking a more explicit explanation of the logic behind naive Phillips Curve reasoning, and why such reasoning goes awry when Phillips Curves don’t stand still, I offer here a quick review, starting with some simple supply and demand diagrams representing the markets for goods and services (left) and labor (right).

Given some state of aggregate supply, as reflected by fixed, upward-sloping short-run aggregate supply (SAS) and labor supply (SLS) schedules, changes in nominal spending on goods (AD, for aggregate demand) and labor (LD) will cause prices and wages (or their respective inflation rates), employment, and output to increase together, with no tendency for wages to fall behind prices.

The standard Phillips Curve, portraying a negative relationship between the rate of price inflation and the unemployment rate, is just a reduced-form representation of these more involved relationships, showing the set of alternative, equilibrium values of inflation, π(P), and unemployment (L-N, where L is the size of the labor force and N is employment) consistent with different levels of spending (AD and LD), given short-run, upward-sloping SAS and SLS schedules. By noting that one can also express the relationship in question as one between the rate of wage inflation, π(w), and the unemployment rate, one arrives at “the historical link between wages and unemployment” to which Mr. Bernstein refers. What that relationship really means is that, for any given short-run labor supply schedule, as aggregate and labor demand schedules shift out, unemployment declines, while wage rates go up.
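For readers who prefer symbols, the reduced-form relationship just described can be written in standard textbook (expectations-augmented) form. The notation below is conventional macro shorthand, not anything drawn from Mr. Bernstein’s op-ed:

```latex
% Short-run Phillips Curve, wage-inflation form:
%   \pi_w : nominal wage inflation    \pi^e : expected price inflation
%   u     : unemployment rate         u^*   : natural rate of unemployment
\pi_w = \pi^e - \beta\,(u - u^*), \qquad \beta > 0

% The real-wage corollary: the gap between wage and price inflation,
% which in equilibrium tracks labor productivity growth g
\pi_w - \pi_p \approx g
```

The second line is why a productivity slowdown, by shrinking g, can flatten real wages without any change in workers’ bargaining power.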

But a historical Phillips Curve relation, whatever it may be, ceases to hold once short-run aggregate or labor supply schedules themselves start shifting. In particular, if there’s a general productivity setback, due to a trade war or for any other reason, the aggregate supply schedule shifts in, or at least fails (in a dynamic setting) to shift out as fast as the aggregate demand, labor demand, and labor supply schedules. That difference is all it takes to cause a growing gap between the equilibrium nominal wage rate and the equilibrium price level, so that real wages stagnate, assuming they don’t actually decline.

As our diagrams are necessarily static, getting from them to a more accurate, dynamic account of recent labor market developments takes a little imagination. In reality, for starters, all of the schedules tend to be shifting outwards over time. Typically the AS schedule shifts out faster than the LS schedule, thereby providing for a general increase in real wages. Since the Great Recession, however, although AS never actually shifted to the left, the difference between the growth rate of AS and that of LS has shrunk. Consequently, instead of actually declining, real wages have merely ceased to increase as quickly as they once did.
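The dynamic story above reduces to very simple accounting. The sketch below is purely illustrative — the growth rates are hypothetical numbers chosen for the example, not estimates of actual U.S. data — but it shows how a productivity slowdown alone can flatten real wage growth even when nominal wage inflation is unchanged:

```python
# Illustrative accounting only: if output prices rise by roughly wage
# inflation minus productivity growth, then real wage growth collapses
# to productivity growth -- no change in bargaining power required.

def real_wage_growth(wage_inflation: float, productivity_growth: float) -> float:
    """Real wage growth = nominal wage inflation - price inflation,
    with price inflation = wage inflation - productivity growth."""
    price_inflation = wage_inflation - productivity_growth
    return wage_inflation - price_inflation

# Pre-slowdown: wages up 2.7%, productivity up 2.0% (hypothetical)
pre = real_wage_growth(0.027, 0.020)
# Post-slowdown: same nominal wage inflation, productivity up only 0.5%
post = real_wage_growth(0.027, 0.005)

print(f"real wage growth before slowdown: {pre:.3f}")   # 0.020
print(f"real wage growth after slowdown:  {post:.3f}")  # 0.005
```

In this accounting the entire decline in real wage growth is explained by the slower outward shift of aggregate supply, which is the point of the static diagrams made dynamic.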

The Real Puzzle: Labor’s Fallen Share of Productivity Gains

There remains, however, one real wage rate puzzle that post-crisis aggregate supply developments alone can’t explain. The puzzle consists, not of the breakdown of the “historical link between wages and unemployment” to which Mr. Bernstein refers, but of the breakdown of the historical link between the real wages of workers, apart from managerial-level workers, and overall productivity growth. The puzzle is that, while productivity growth has slowed down considerably, it’s still positive, whereas real wages for many sorts of workers have been altogether flat. Labor’s share of national income has, in other words, fallen. And so far it seems down for the count.

But this more genuine puzzle, which has nothing to do with the relation between wage inflation and the unemployment rate, is itself hard to attribute to some relatively recent rise of “hostile institutions,” either within the Trump administration or elsewhere in the United States. For one thing, labor’s share of income started to decline long before the recent crisis, let alone the most recent presidential election! By most accounts, labor’s share began to drift downward in the 1980s, and reached its nadir just before the 2008 crisis. For another, the decline has occurred, not just in the U.S., but in many other developed and emerging economies — despite large differences in all these countries’ governments, court systems, and collective bargaining arrangements. As Loukas Karabarbounis and Brent Neiman show in a 2017 NBER Report on “Trends in Factor Shares: Facts and Implications,” “Country-specific changes in policies … might be important for specific countries but are unlikely to account for much of the overall trend that the world has experienced.”

And there’s no shortage of explanations for the global decline in labor’s share of income that are far more compelling than vague references to “hostile institutions.” Karabarbounis and Neiman attribute half of it to “progress with IT-related technologies” that has “induced firms to produce with greater capital intensity.” A San Francisco Fed Study  by Mary C. Daly, Bart Hobijn, and Benjamin Pyle attributes the stagnation of real wages to “secular shifts in the composition of the labor force.” In particular, while baby boomers earning relatively high wages have been retiring, younger workers “sidelined” during the recession have had to settle for relatively low-paying full-time jobs. A 2017 MIT working paper argues that an increase in product market concentration, particularly as manifested in the rise of “superstar firms,” may also have contributed to the reduction in labor’s share of total income. But the reason isn’t superstar firms’ “unchecked ability to collude against workers”: it’s just that “there is a fixed amount of overhead labor … needed for production” in the industries in question, so that greater concentration means less labor-intensive production.

In a still more recent working paper Princeton’s Gene Grossman and several coauthors suggest that the productivity slowdown may also account for part of the decline, because “when human capital is more complementary with physical capital than with raw labor” such a slowdown “can itself lead to a shift in the functional distribution of income away from labor and toward capital.” Finally, some part of the decline in labor’s share may be a figment of the data. According to a Brookings study published in 2013, “about a third of the decline in the published labor share appears to be an artifact of statistical procedures used to impute the labor income of the self-employed that underlies the headline measure.”

While none of these alternative explanations for the stagnation of workers’ earnings may alone suffice as an alternative to Mr. Bernstein’s more sinister explanations, several could easily do so. And these are but some of many plausible possibilities. For some others, along with a good general discussion of the topic, I recommend this pair of posts by Timothy Taylor.

In short, there’s no need to suppose that the courts, the Executive Branch, and “the corporate sector” have been conspiring — or conspiring more than usual, to be precise — to deprive workers of some portion of their already meager share of the real GDP pie. And even if they were trying, it couldn’t account for the actual historical and global behavior of worker’s earnings.

The moral of the story is that it’s unwise for economists to put too much faith in historical relationships — whether between inflation and unemployment or between total income growth and workers’ real wage rates — and to conclude, when these relationships “break down,” that some conspiracy must be afoot. That courts, corporations, and presidential administrations are capable of perfidy no one can deny. But historical macroeconomic relationships are themselves untrustworthy, for reasons unconnected to goings-on in smoke-filled rooms.

[Cross-posted from]

Last week Mark Zuckerberg gave an interview to Recode. He talked about many topics including Holocaust denial. His remarks on that topic fostered much commentary and not a little criticism. Zuckerberg appeared to say that some people did not intentionally deny the Holocaust. Later, he clarified his views: “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.” This post will not be about that aspect of the interview.

Let’s recall why Mark Zuckerberg’s views about politics and other things matter more than the views of the average highly successful businessman. Zuckerberg is the CEO of Facebook, which hosts the largest private forum for speech. Because Facebook is private property, Facebook’s managers and their ultimate boss, Mark Zuckerberg, are not bound by the restrictions of the First Amendment. Facebook may and does engage in “content moderation,” which involves removing speech from the platform (among other actions).

Facebook F8 2017 San Jose Mark Zuckerberg by Anthony Quintano is licensed under CC BY 2.0

What might be loosely called the political right is worried that Facebook and Google will use this power to exclude them. While their anxieties may be overblown, they are not groundless. Zuckerberg himself has said that Silicon Valley is a “pretty liberal place.” It would not be surprising if content moderation reflected the dominant outlook of Google and Facebook employees, among others. Mark Zuckerberg is presumably setting the standards for how Facebook exercises this power to exclude. How might he exercise that oversight?

Mark Zuckerberg’s comments on Holocaust denial suggest an answer to this question. Holocaust denial is the ultimate fake news. No decent person believes the Holocaust did not happen. And yet Holocaust denial also draws a line between narrow and broad, between European and American, visions of the freedom of speech. Europeans see censoring such speech as a militant defense of democracy rather than a lack of liberal conviction. The United States sets the limits of speech broadly enough to include even false and vile speech like Holocaust denial.

In this conflict of ideals, it would have been easy and rather conventional for Mark Zuckerberg to endorse censoring Holocaust denial. Who would have criticized him for that? After all, many people equate tolerating extreme speech with advocating it. And yet, against his interests, Zuckerberg decided to subscribe to an essentially American view of the limits of speech.

Why did he do so? In the interview, he discusses dealing with “false news”:

There are really two core principles at play here. There’s giving people a voice, so that people can express their opinions. Then, there’s keeping the community safe, which I think is really important. We’re not gonna let people plan violence or attack each other or do bad things. Within this, those principles have real trade-offs and real tug on each other.

How should that tradeoff be resolved? He notes: “Look, as abhorrent as some of this content can be, I do think that it gets down to this principle of giving people a voice.” Zuckerberg continues: “Our bias tends to be to want to give people a voice and let people express a wide range of opinions. I don’t think that’s a liberal or conservative thing; those are the words in the U.S. Constitution.”

But Zuckerberg does recognize limits to free speech as measured by a typically American test:

Let me give you an example of where we would take [speech] down. In Myanmar or Sri Lanka, where there’s a history of sectarian violence, similar to the tradition in the U.S. where you can’t go into a movie theater and yell ‘Fire!’ because that creates an imminent harm.

In U.S. law speech directly inciting violence is an exception to the First Amendment. This broad limit sanctions tolerance of extreme speech, even speech which might buttress bigoted views of the world. It draws a line at speech likely to incite imminent violence, words tending to spark specific acts of intolerance, rather than those that might feed a more generalized grievance.

Another part of the interview increases my confidence in Zuckerberg’s judgment about free speech. In responding to a confused question about data privacy, he says, “Well facts do matter.” Passions like fear and anger threaten freedom of speech when they move politics. People who focus on facts and problem solving – engineers are a good example – are unlikely to act on such passions. When the practical-minded also support free speech in principle, our rights are even more secure.

The interview is not wholly reassuring. Zuckerberg at times seems too willing to accommodate European approaches to extreme speech. He also tends to see only the benefits (and not also the costs) of transparency.

On the whole, however, conservatives and libertarians should be reassured about the future of online speech. Zuckerberg took a risk he did not have to take by endorsing a broad conception of free speech. In an age of populisms of the left and right, Mark Zuckerberg seems a better bet for protecting free speech than current and future politicians.

The Pakistani public is headed to the polls on July 25 to vote in the third consecutive election since 2008. While it remains difficult to predict which political party will emerge victorious, one thing is clear: Pakistan’s youth will most likely determine the winner.

Pakistan is in the middle of a youth bulge. According to Pakistan’s National Human Development Report, 64 percent of the population is between the ages of 15 and 29. These young people are concerned with completing their education; securing a job and achieving financial stability; being able to change jobs if needed (indicating a desire for not only a strong economy but also a diverse one); being able to marry and have children; affording a house, a car, and other material comforts; and being able to emigrate and/or study abroad.

But do Pakistan’s major political parties have the capacity to address the youth’s concerns? Not really.

All major political parties—Pakistan Muslim League–N (PML–N), Pakistan Tehreek-e-Insaf (PTI), and Pakistan Peoples Party (PPP)—have long understood the importance of the youth, and have tried various techniques to appeal to young voters. When campaigning for the 2013 general elections, PML–N introduced a program that provided free laptops to poor students to increase their access to technology as part of a larger initiative to improve the quality of education. PPP sought to engage the youth in policymaking by creating youth councils, while PTI appealed to the youth directly, urging young people to join PTI and create a “Naya (New) Pakistan” free of corruption. The 2018 campaign season has also been filled with appeals to the youth, with political parties (even religious ones) hiring DJs to “raise the passion of people.” But the political parties’ manifestos don’t match the passion of the rallies.

PML–N’s 2018 manifesto describes: a self-employment scheme for youths that includes low-interest loans and increased access to community banks; the creation of low- to medium-skilled jobs in the agricultural sector; and an emphasis on vocational training. The manifesto states that PML–N is making youth representation in democratic forums a top priority. Yet the manifesto is blatantly Punjab-centric. For example, the vocational training programs it cites are all based in Punjab, such as TEVTA, the Technical Education and Vocational Training Authority in Punjab; the PSDF, the Punjab Skills Development Fund, which is designed to provide free vocational training to poor and vulnerable populations; and the PVTC, the Punjab Vocational Training Council, which focuses on vocational teacher training. What about the youth in other provinces and tribal areas?

PPP’s 2018 manifesto has a broader scope. While it goes into more detail on reforming and modernizing education, improving access to quality education, revitalizing sports, and increasing technical and vocational programs, it fails to provide actual policies and programs that can achieve these lofty goals. For example, the manifesto states that PPP aims to extend regulated internship programs to all young people to increase their work experience, making them more appealing candidates when they enter the workforce. Yet no details have been provided on this regulation program. Will it be based on a quota system? Will students be able to get university credit for internships?

Similar to PPP’s manifesto, PTI’s 2018 manifesto lists a number of noteworthy goals but fails to provide any implementation details. For example, PTI’s manifesto calls for doubling the size of existing skill development and vocational training programs but fails to explain how. The manifesto states that PTI will launch a national program to provide practical training to graduates in public and private organizations but fails to name any specific organizations it has been in touch with regarding such a program. PTI also wants to establish a liaison under the Ministry of Foreign Affairs to promote foreign placement of Pakistani talent but does not discuss what a PTI-led government would do to reduce the visa restrictions that Pakistani nationals face worldwide.

Pakistan’s National Human Development Report found that 80 percent of Pakistan’s youth has voted in the past, and reports indicate that Wednesday’s election won’t be much different. While youth involvement in Pakistan’s political processes has evolved over time, one thing is clear: Pakistan’s political parties need to not only engage the youth but also focus on how they can meet the youth’s demands in a fiscally responsible way. For now, none of the parties seem to have a clear idea of how to deal with the country’s youth bulge. 

Last week, the Washington Post picked up on an article in Police Quarterly that showed clearance rates for property and violent crimes increased in Colorado and Washington following their legalization of marijuana for recreational purposes. The clearance rate is the percentage of reported crimes that result in an arrest for those crimes. These data support what Cato and other pro-legalization advocates have been saying for years: if the government ends the drug war, it frees up police resources to solve other crimes and perform other functions more necessary to public well-being than prosecuting drug crimes. Of course, these data are not conclusively causal, and different agencies may react differently to legalization in their jurisdictions, but they are a good sign for reform, and one that academics can measure as more states legalize.
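For readers unfamiliar with the metric, here is a minimal sketch of the clearance-rate calculation; the counts below are hypothetical, not the Colorado or Washington figures from the study:

```python
# A minimal sketch of the clearance-rate calculation.
# All counts here are hypothetical, for illustration only.

def clearance_rate(crimes_reported: int, crimes_cleared: int) -> float:
    """Share of reported crimes that result in an arrest (are 'cleared')."""
    if crimes_reported <= 0:
        raise ValueError("need a positive number of reported crimes")
    return crimes_cleared / crimes_reported

# Hypothetical jurisdiction: 1,000 burglaries reported in a year.
before = clearance_rate(1000, 130)  # 13% cleared before legalization
after = clearance_rate(1000, 170)   # 17% cleared once police time is freed up

print(f"change in clearance rate: {after - before:+.1%}")  # +4.0%
```

A study like the one described would then test whether such before-and-after changes are systematic across agencies rather than noise.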

On a related note, my colleague Jeff Miron published a piece today examining the budgetary impact of ending drug prohibition. You can find that here.

Over the weekend Treasury Secretary Steven Mnuchin made some remarks that could be interpreted as positive for trade liberalization:

Treasury Secretary Steven Mnuchin is “very hopeful” the US can make progress brokering separate free trade deals with the European Union and Japan during a weekend summit in Buenos Aires.

“I’m encouraged by the EU’s trade agreement with Japan,” Mnuchin said Saturday in an interview with CNN at the sidelines of the G-20 meeting in Argentina.

The EU and Japan signed a massive trade deal earlier this week, cutting or eliminating tariffs on nearly all goods. The deal is in contrast to escalating trade disputes between the US and several of its major allies, including the European Union.

The EU-Japan agreement, which covers 600 million people and almost a third of the global economy, will remove tariffs on European exports such as cheese and wine. It will also reduce barriers on Japanese automakers and electronic firms in the European Union.

President Donald Trump has imposed tariffs on a range of foreign goods from Europe, Canada, Mexico and other trading partners, and is threatening even more action.

Mnuchin said he is still reviewing the details of the EU-Japan agreement, but stressed that any free trade deal with the EU would have to go beyond cutting tariffs on goods.

“This has to be about dropping non-tariff barriers and subsidies as well. This has to be a deal with its entirety,” he said.

Elsewhere, it was reported that he said: “If Europe believes in free trade, we’re ready to sign a free trade agreement.”

If you haven’t been following trade policy for the last two years, you might see this as a positive and constructive approach by the Trump administration towards trade liberalization. But the broader context makes clear that this is not the case. Among other things, the Trump administration has imposed new tariffs on the EU, Japan, and others; and while there have been offhand remarks about trade liberalization (see similar remarks from President Trump and National Economic Council Director Larry Kudlow here), the administration has not made any formal efforts to get such a process started. In short, and contrary to Mnuchin’s statements, the Trump administration does not seem the least bit ready to sign a new free trade agreement, with the EU or anyone else (it is, however, revisiting some older trade agreements).

Of course, the Trump administration could, if it wanted to, negotiate free trade agreements with the EU, Japan, and others. These agreements are not a panacea for eliminating protectionism, but they do achieve significant liberalization. As long as expectations on both sides are kept at reasonable levels (in terms of timing and scope), deals are possible. Through these agreements, most tariffs on trade between the parties could be eliminated, and some non-tariff barriers could be reduced (subsidies, by contrast, are rarely addressed in bilateral deals).

However, aside from occasional offhand remarks, the Trump administration is not taking any steps towards starting these negotiations, and instead is making the possibility of deals less likely through its confrontational and unjustified Section 232 tariffs on steel and aluminum (and possibly soon, on cars). As the EU and Japan have just shown, these trade deals are possible. It remains to be seen if the Trump administration is willing and able to negotiate them.

The federal government spends an unreal amount of taxpayer money cleaning up nuclear weapons sites. In this study at Downsizing Government, I noted that between 1990 and 2016, Congress spent $152 billion on nuclear cleanup, with about $6 billion more every year.

Where does the money go? About $5 billion has been spent at a facility in South Carolina called the Savannah River Site. In the study, I said, “The facility has a negligent safety culture, and environmental issues such as water contamination plagued it for years. Cleanup costs have soared. The construction of a mixed oxide fuel facility at the site was supposed to cost $5 billion, but the price tag has soared to $17 billion.”

The Wall Street Journal provided an update on the Savannah River boondoggle today:

The U.S. Energy Department says it is spending $1.2 million a day on a partially built South Carolina nuclear facility that it wants to abandon due to soaring costs.

Congress has continued funding construction of the plant, which would be used to dispose of surplus weapons-grade plutonium, despite a series of reviews casting doubt on the financial logic involved.

… The recent jousting marks the latest twist for the troubled Mixed-Oxide Fuel Fabrication Facility. In 2007, U.S. officials said the so-called MOX plant would cost $4.8 billion and be completed by 2016. DOE officials today estimate it would cost $17.2 billion and take until 2048, assuming $350 million a year in federal funding.

… In 2014, the Energy Department concluded that plutonium could be disposed far more cheaply using a different method, known as “dilute and dispose.” The shift is opposed by South Carolina officials and members of the state’s congressional delegation, including Republican Sen. Lindsey Graham.

… From 2014 to 2016, Congress gave the Energy Department the same message: Keep building the MOX plant. Last year, Congress authorized the energy secretary to stop construction if evidence showed another method would cost less than half as much.

In May, Energy Secretary Rick Perry invoked the provision and prepared to halt construction in June. South Carolina sued, and U.S. District Judge J. Michelle Childs granted a preliminary injunction June 7 in the state’s favor, pending further litigation.

For more on energy spending, see


On numerous occasions, President Trump has described America’s asylum laws as the most accepting—or, in his words, “dumbest”—in the world. “When people, with or without children, enter our Country, they must be told to leave… only country in the World that does this!” he tweeted this month. But many other countries are much more accepting of asylum seekers than the United States is. In fact, the United States ranks 50th in the world in net increase in asylees, refugees, and people in similar situations as a share of its population since 2012.

The United Nations High Commissioner for Refugees (UNHCR) publishes data on the number of refugees and asylum seekers in each country. From 2012 to 2017, UNHCR finds that the United States accepted a net increase of 654,128 asylees, refugees, and people in similar circumstances. That amounted to 0.2 percent of the U.S. population in 2017. As the Figure below shows, 49 other countries had higher rates of acceptance than the United States did. The average rate of acceptance for the top 50 countries was 1.2 percent of the population—six times higher than the U.S. rate.
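The acceptance-rate arithmetic above can be verified in a few lines. As a minimal sketch, assuming a U.S. population of roughly 325 million in 2017 (the population figure is my assumption, not stated in the text):

```python
# Sketch of the acceptance-rate arithmetic; the U.S. population figure
# (~325 million in 2017) is an assumption, not from the original text.
net_increase = 654_128          # net increase in asylees/refugees, 2012-2017
us_population = 325_000_000     # approximate 2017 U.S. population (assumed)

us_rate = net_increase / us_population * 100
print(f"U.S. acceptance rate: {us_rate:.1f}% of population")

top50_avg_rate = 1.2            # average rate for the top 50 countries
print(f"Top-50 average is {top50_avg_rate / us_rate:.0f}x the U.S. rate")
```

This reproduces the 0.2 percent U.S. rate and the roughly six-fold gap cited in the paragraph above.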

Figure: Top 50 refugee-asylee receiving nations

In absolute terms, the United States does rank in the top 10, but it is important to control for the size of the population of the receiving country both to understand the likely effects of the absolute numbers on the country and to allow a legitimate comparison across countries. This is the same reason why per capita Gross Domestic Product (GDP) is a better measure of how wealthy people in a country are than just aggregate GDP. The Chinese are not seven times wealthier than Canadians because China’s GDP is seven times larger. In fact, Canadians are five times wealthier because Canada’s per capita GDP is five times larger. To understand how wealthy or how accepting a country is, the population of the country is as relevant as the size of its aggregate wealth or the absolute number of immigrants it accepts.

The more accepting nations include Australia and most of Western and Northern Europe—Sweden, Austria, Germany, Denmark, Switzerland, Italy, Norway, Finland, Belgium, the Netherlands, and France. The average rate for these countries was 0.7 percent—3.3 times the U.S. rate. But the list also includes many countries that are much less wealthy than the United States. Lebanon, which has accepted asylees equal to an astounding 14 percent of its population just since 2012, has a per capita GDP of $8,400—roughly one-seventh the U.S. level—yet it has accepted asylees at 73 times the rate of the United States.

President Trump is simply incorrect that other countries don’t accept refugees and asylees, including those who come in unannounced. In fact, four dozen other countries are dealing with more significant asylee populations than the United States is. Some of the difference between the United States and other countries could be explained by UNHCR shifts in methodology in who is counted as a refugee or asylee. As I have explained before, however, the United States has been one of the least welcoming wealthy countries in terms of net total immigration as a share of the country’s population in recent years. America should reform its immigration laws, but it should do so to make them more welcoming, not less.

Table: Countries with net increases in refugee-asylee populations

Venezuelans are fleeing their home country in large numbers due to the economic failure of socialism as well as the increasing authoritarianism of the Venezuelan government.  The economic collapse there (inflation reached tens of thousands of percent this year) and the escalating brutality of the Maduro dictatorship are creating a crisis unlike any faced in South America in decades – if ever.  This blog post will provide some information on the scale of the Venezuelan exodus and some suggestions for what other countries can do to mitigate problems caused by the flow of refugees and asylum seekers.


The roots of the current collapse of Venezuela run deep. Hugo Chavez became the president of Venezuela in 1999 and immediately set about concentrating economic power in the government and political power in himself personally.  He instituted tight government controls on capital and exchange rates and pursued an increasingly irresponsible monetary policy, creating chaotic financial market conditions that he used to further justify his nationalizations of businesses and confiscations of private property.  Revenues from the Venezuelan oil industry helped keep the government and economy afloat while the private economy suffered under increasingly harsh and punitive restrictions.  Chavez died in 2013 and was succeeded by Nicolas Maduro, who continued Chavez’s economic policies and accelerated the concentration of political power in himself.  The collapse of oil prices beginning in 2014 exposed the economic damage wrought by Chavez and Maduro as inflation took off, GDP shrank, and Maduro’s regime responded with increasingly brutal police crackdowns that continue to this day.  Most watchers of Venezuela conclude that the current death spiral began in 2015, the year after the decline in oil prices.

The Scale of the Exodus

The number of people who have left Venezuela is staggering.  Estimates of how many Venezuelans have left their home country usually range from 1.6 million to 4 million.  The International Organization for Migration (IOM) estimates that about 2 million Venezuelans were living outside of Venezuela as of June 2018, a number that has increased by more than a million since 2015 but is still likely an underestimate.  For instance, the number of Venezuelans living in Colombia, Peru, Chile, Brazil, Ecuador, Argentina, and Uruguay in June 2018 was over 1.85 million, up by a little less than one million since 2017.

To try to reconcile conflicting and confusing estimates, I combined a few different sources and made some simple assumptions.  First, I made a few conservative assumptions when estimating the number of Venezuelans in Argentina, Uruguay, and Brazil.  I estimated that the 2018 number of Venezuelans in Argentina and Uruguay was unchanged from 2017.  For Brazil, I relied on recent news reports to estimate that there was a net 22,000 increase in the number of Venezuelans there in 2018 over 2017.  I then added the additional one million Venezuelans living in those countries to the 1.64 million Venezuelans who were estimated to be living outside of their home country in 2017.  Thus, I estimate that 2.61 million Venezuelans are living abroad in mid-2018 (Figure 1).
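The arithmetic behind this estimate can be sketched simply. The increment of roughly 0.97 million is inferred here so the components sum to the 2.61 million total stated above; it is my reading of the text’s “additional one million,” not a figure from the underlying sources:

```python
# Minimal sketch of the mid-2018 estimate's arithmetic; the ~0.97 million
# increment is inferred so the components sum to the stated 2.61 million.
baseline_2017 = 1.64    # millions of Venezuelans estimated abroad in 2017
increase_2018 = 0.97    # millions, net addition in the listed countries (inferred)

total_mid_2018 = baseline_2017 + increase_2018
print(f"Estimated Venezuelans abroad, mid-2018: {total_mid_2018:.2f} million")
```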

Figure 1: Venezuelans living abroad

The emigrant Venezuelan population is equal to about 7.6 percent of all Venezuelan nationals (Figure 2).  The economic collapse in Venezuela began in 2015, the year after the oil price started declining.  The percent of Venezuelans living abroad increased from 2.2 percent in 2015 to 7.6 percent in 2018 – a 3.5-fold increase.  The Syrian refugee crisis, which began with the start of the Syrian civil war in 2011, is the biggest in recent history.  It boosted the number of Syrians living abroad by 4.3-fold after four years of civil war.
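The fold-increase comparison above is just a ratio of the two shares:

```python
# Checking the fold-increase figure cited in the text.
share_2015 = 2.2   # percent of Venezuelan nationals living abroad, 2015
share_2018 = 7.6   # percent of Venezuelan nationals living abroad, 2018

fold = share_2018 / share_2015
print(f"Increase: {fold:.1f}-fold")   # matches the ~3.5-fold figure in the text
```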

Figure 2: Venezuelans and Syrians living abroad as a percent of their respective populations

Venezuela has a much larger population than Syria, so it will take longer for a fifth of its people to flee the country if it ever gets to that point.  However, the number of Venezuelans living outside of their country could meet or exceed the number of Syrians in a similar position in the next couple of years if trends continue (Figure 3).  According to a recent poll, about half of Venezuelans between ages 18 and 24 said they wanted to leave Venezuela, as did 55 percent of upper-middle-class respondents.  If those polls are accurate, then the duration of the economic crisis in Venezuela will determine whether the exodus reaches Syrian refugee-level proportions.

Figure 3: Syrians and Venezuelans Living Abroad

As of mid-2018, I estimate that about 71 percent of the Venezuelans who have fled are in other South American countries (Figure 4).  About 12 percent have made it to Canada or the United States, 5 percent are in Central America, Mexico, or the Caribbean, and 13 percent are in other parts of the world.

Figure 4: Destination countries

In the United States, 65,621 Venezuelans have applied for asylum at ports of entry since February 2014, picking up substantially in 2016 and 2017 (Figure 5).  The U.S. federal government reacted to this by cutting the number of tourist B-visas that it issues to Venezuelans, aided most recently by additional restrictions put on Venezuelans through President Trump’s so-called travel ban, but the number of asylum seekers continued to grow at least through the end of 2017 (Figure 6).

Figure 5: Venezuelan asylum seekers
Figure 6: Venezuelan asylum seekers and B-Visa issuances

How Venezuela’s Neighbors are Reacting

About 71 percent of Venezuelans who have fled have gone to other countries in South America.  These countries have reacted in myriad ways to the influx of Venezuelans, mainly by issuing work and residency permits to some of them while nations bordering Venezuela are stepping up border security and deploying troops.  Other nations not mentioned do not have a special policy for admitting Venezuelans.   

While I was writing this blog post, the Migration Policy Institute published a wonderful short paper by Luisa Feline Freier and Nicolas Parent on the Venezuelan emigration crisis.  Many of my comments in this section are based on their excellent work.


Colombia initially offered a Special Stay Permit to Venezuelans as well as Border Mobility Cards which allowed free travel between the two countries.  In February 2018, Colombia stopped issuing both permits due to worries that the influx of Venezuelans was too great.  Now, many are entering illegally in dangerous circumstances.


Brazil created a temporary residency program for Venezuelans in 2017.


Peru created the Temporary Stay Permit (PTP) for Venezuelans in January 2017.  The administrative backlog for the PTP is huge so many Venezuelans are applying for asylum instead. 


The Venezuelan emigration crisis is going to worsen before it improves.  If the labor market and economic integration of Syrian refugees outside of Syria since 2011 can offer any lessons to South America, they are:

  1. Allow Venezuelans to legally work in host countries so that their employment and labor force participation rates rise.
  2. Deregulate labor markets generally, because more legal work opportunities will reduce Venezuelan labor market competition with locals. 
  3. Recognize that legal employment reduces the net cost of social services and charity, and increases feelings of belonging and contentment among the emigrants.

Special thanks to Maria Rey for her help on this.

We do not need another rift between communities in our divided nation. But that is what Congress gave us with a provision in last year’s tax bill that imposed a patchwork of divisions spread across every state.

The Tax Cuts and Jobs Act created a complex new tax structure called “Opportunity Zones.” The law tasked governors with carving up their states into tax-favored O zones and tax-disfavored areas we can call NO zones. If investors and developers put a hotel in an O zone, they receive a federal capital gains tax break, but if they put the same project in a NO zone, no such luck.

Vanessa Brown Calder and I discuss Opportunity Zones in The Hill. But pictures are better than words in showing what an unfair mess Congress has created. The U.S. Treasury has posted a national map accessible here, but you get a better idea with these maps of various cities from Bloomberg.

On their way to work, members of Congress pass powerful lettering on the Supreme Court, “Equal Justice Under Law.” So why did they think it was OK to impose unequal tax rules on neighborhoods across the nation?

Since the 1960s, the federal government has made a hash of micromanaging local development through HUD and other spending bureaucracies. I fear O zones will accelerate federal meddling into local affairs on the tax side. Will the government start tying social-engineering regulations to the O zone tax rules like they have with spending aid to local governments?

Some features of federal tax law have differential effects on the states as a byproduct of the tax system’s structure. But the O zones are purposeful geographic discrimination. Aside from the unfairness, the new tax loopholes will fuel a 50-state lobbying frenzy by landowners and developers to be included in the O zones rather than the NO zones. Is it just coincidence that the founder of Quicken Loans owns lots of property in Detroit’s new O zones?

Below is the new O zone map for Washington, D.C. with the favored zones in yellow. If you own property at 5300 East Capitol St NE, federal tax law has just made you a winner. If you own property across the street at 5300 East Capitol St SE, you are a loser. Local governments make lots of such winner/loser decisions, but we don’t need the federal government compounding the problem with its powerful and corrupting tentacles.

The best parts of the Republican tax law were a step forward for equal treatment, such as the capping of state and local tax deductions. It is unfortunate that a big new loophole goes in the opposite direction.    

Vanessa has further thoughts on O zones here.

Federal tax rules inducing local corruption? Check out the LIHTC.

A few weeks ago, President Trump surpassed his 500th day in office. That’s a good vantage point to appraise his economic policies to Make America Great Again.

Over at the Library of Economics and Liberty’s Econlog, I offer my assessment. It’s not good.

This may seem surprising, given current economic conditions. But economic policy isn’t merely about the current moment; it is predominantly about improving economic conditions long-term. Aside from a couple of provisions in the December 2017 tax law, President Trump has done precious little in that regard and much to harm the economy long-term, from borrow-and-spend fiscal policy, to harmful trade and immigration policies, to disinterest in serious regulatory reform, to his refusal to face the country’s dreary long-term fiscal challenges.

From my conclusion:

MAGAnomics appears to be little more than an impulsive dislike of free trade and immigration, a hazy desire for less regulation, disinterest in (or perhaps a lack of courage to face) the nation’s long-term fiscal problems, and a desire to temporarily lower taxes without making the hard choices necessary to fiscally balance those cuts and make them enduring. In other words, MAGAnomics is a slogan supporting a few weak and many harmful initiatives, not a serious collection of policies thoughtfully designed to strengthen the nation’s economic health.

Take a look and see if you agree.

In a Regulation article in 2013, Jonathan Lesser described how subsidies to renewable energy generators could actually increase electricity prices by reducing the profits, and thus the long-run supply, of unsubsidized conventional alternatives like natural gas generators.

According to Catherine Wolfram of the University of California, Berkeley Haas School of Business, Lesser’s predictions have become reality. Natural gas generators in the Pennsylvania-New Jersey-Maryland (PJM) regional electricity market have not received revenues sufficient to cover their capital costs in most years since 2009. Under such circumstances, existing plants eventually will cease operation and no new plants will be built. Higher prices and uncertain supply are inevitable.

Calpine, an operator of natural gas plants, asked the Federal Energy Regulatory Commission (FERC) to require PJM to fix the generation capacity market—a government created market that pays firms for reserve generation capacity—to account for the subsidized competitors. Last month, FERC agreed with Calpine that the capacity market is currently “unjust and unreasonable” and issued an order requiring PJM to extend a price floor, which so far only applies to natural gas generators, to all resource types.

However, the FERC order falls short of the first best option: eliminating subsidies to all resources. Federal regulators, Congress, and states should work to repeal the regulations, mandates, and subsidies that complicate the capacity market. An even bolder move would be to mimic Texas, which has no capacity market; generators are paid only for the energy they generate. 

Written with research assistance from David Kemp.

Yesterday, Chris Edwards and I co-authored a piece for The Hill on “opportunity zones.” Opportunity zones were one element of last year’s tax reform law.

They’re more or less what would happen if the Low-Income Housing Tax Credit (LIHTC) and Community Development Block Grant (CDBG) produced offspring: opportunity zones both aim at generating economic development in declining areas (similar to CDBG) and use the tax code to incentivize public-private partnerships (like LIHTC).

There are other similarities to CDBG and LIHTC. Opportunity zones may benefit investors and developers more than they benefit the poor, which makes them like LIHTC.

The law has no provision to measure opportunity zones’ effectiveness, and measuring effectiveness would be hard anyway, which makes opportunity zones like CDBG. Currently, advocates simply cite the number of projects built with CDBG or LIHTC funding, which doesn’t tell a savvy information-consumer whether these programs are meeting their objectives.

As a result, opportunity zones will likely run on auto-pilot, while special interest groups claim they are effective based on the number of projects funded through the new tax mechanism. We won’t know how many of those projects would have been built anyway.

Lawyers, accountants, and financial advisors will make money advising investors and developers on program rules; those investors and developers will then make money deferring and reducing their capital gains taxes.

There’s nothing wrong with cutting taxes, but opportunity zones are the wrong way to accomplish that. And national policy shouldn’t play favorites or pretend Congress or even state governors know where businesses or people should locate. (Hint: the best places for businesses and poor people to locate probably aren’t declining areas.)

Rather than federal “help”, states can create their own state-wide opportunity zones by reforming their own tax codes and fixing their zoning, occupational licensing, and childcare regulations. Zoning regulations keep low-skilled workers trapped in declining places and excluded from economic opportunity, and occupational licensing makes it harder to relocate to new economic opportunities. 

Local reforms would really help poor workers, and regardless of whether they brought declining places back, they would improve poor workers’ ability to locate in non-declining places where the jobs are. Opportunity zones? Not so much.

Last month, we summarized evidence for the long-term stability of Greenland’s ice cap, even in the face of dramatically warmed summer temperatures. We drew particular attention to the heat in northwest Greenland at the beginning of the previous (as opposed to the current) interglacial. A detailed ice core shows around 6,000 years of summer temperatures averaging 6-8°C (11-14°F) warmer than the 20th century average, beginning around 118,000 years ago. Despite six millennia of temperatures likely warmer than anything we could produce in a mere 500 years, Greenland lost only about 30% of its ice. That translates to only about five inches of sea level rise per century from meltwater.

We also cited evidence that after the beginning of the current interglacial (nominally 10,800 years ago) it was also several degrees warmer than the 20th century, but not as warm as it was at the beginning of the previous interglacial.

Not so fast. Work just published online in the Proceedings of the National Academy of Sciences by Jamie McFarlin (Northwestern University) and several coauthors now shows July temperatures averaged 4-7°C (7-13°F) warmer than the 1952-2014 average over northwestern Greenland from 8 to 10 thousand years ago. She also had some less precise data for maximum temperatures in the last interglacial, and they are in agreement (maybe even a tad warmer) with what was found in the ice core data mentioned in the first paragraph.

Award McFarlin some serious hard duty points. Her paleoclimate indicator was the assemblage of midges buried in the annual sediments under Wax Lips Lake (we don’t make this stuff up), a small freshwater body in northwest Greenland between the ice cap and Thule Air Base, on the shore of the channel between Greenland and Ellesmere Island. Midges are horrifically irritating, tiny biting flies that infest most high-latitude summer locations. They’re also known as no-see-ums, and they are just as nasty now as they were thousands of years ago.

Getting the core samples from Wax Lips Lake means being out there during the height of midge season.

She acknowledges the seeming paradox of the ice core data: how could it have been so warm even as Greenland retained so much of its ice? Her (reasonable) hypothesis is that it must have snowed more over the ice cap—recently demonstrated to be occurring for the last 200 years in Antarctica as the surrounding ocean warmed a tad. 

The major moisture source for snow in northwesternmost Greenland is the Arctic Ocean and the broad passage between Greenland and Ellesmere. The only way it would snow so much as to compensate for the two massive warmings that have now been detected is for the water to have been warmer, increasing the amount of moisture in the air. As we noted in our last Greenland piece, the Arctic Ocean was periodically ice-free for millennia after the ice age.

McFarlin’s results are further consistent, at least in spirit, with other research showing northern Eurasia to have been much warmer than previously thought at the beginning of the current interglacial.

Global warming apocalypse scenarios are driven largely by the rapid loss of massive amounts of Greenland ice, but the evidence keeps coming in that, in toto, it’s remarkably immune to extreme changes in temperature, and that an ice-free Arctic Ocean has been common in both the current and the last interglacial period. 

Federal Reserve Chairman Jerome Powell was before the Senate Banking Committee today to present the semiannual Monetary Policy Report to Congress. Unfortunately, there was little discussion of monetary policy during the proceedings.

The Senators spent nearly all of their time asking the Chairman about the recent stress tests, changes to the tax code, and concerns over additional tariffs. On tariffs, Powell deserves credit for plainly stating that “in general, countries that have remained open to trade and haven’t erected barriers, including tariffs, have grown faster, have had higher incomes, [and] higher productivity, and countries that have…gone in a more protectionist direction have done worse.”

While many Senators ignored monetary policy, the one notable exception came when Senator Pat Toomey asked whether the flattening yield curve on bonds would cause the Fed to adjust either its path for interest rate increases or the pace of its balance sheet reduction.

A flattening yield curve means the difference, or spread, between short- and long-term bond yields is narrowing. When short-term bond yields rise above those on long-term bonds, the yield curve has inverted. The concern that Toomey’s question points to is that, in the past, an inverted yield curve has typically signaled a coming recession.
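The spread and inversion terminology can be made concrete with a minimal sketch; the yields below are purely illustrative numbers, not actual Treasury data:

```python
# Illustrative only: hypothetical 2-year and 10-year yields, in percent.
short_yield = 2.60   # 2-year (hypothetical)
long_yield = 2.95    # 10-year (hypothetical)

spread = long_yield - short_yield
print(f"Spread: {spread:.2f} percentage points")

if spread < 0:
    print("Yield curve is inverted - historically a recession signal")
elif spread < 0.5:
    print("Yield curve is flattening")
else:
    print("Yield curve has a normal upward slope")
```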

Rather than directly addressing what the flatter yield curve potentially means for normalizing monetary policy, Powell delivered his weakest answer of the day. He admitted that the Fed has discussed yield curve dynamics in policy meetings, that “different people think about it different ways,” and that he tries to understand the yield curve in terms of what it says about neutral interest rates. He ignored the part of the question about whether or not the narrowing spread was signaling a potential economic slowdown—something not lost on seasoned Fed watchers.

While the Senators’ questions left a lot to be desired on the monetary front, the Chairman’s prepared remarks were a bit more encouraging. There, as David Beckworth notes, Powell once again highlighted the FOMC’s use of monetary policy rules when setting policy. It was only a year ago that the Fed added a new section to its semiannual report on monetary policy rules. That the Fed has continued to update and expand that section in subsequent reports is welcome news. However, Powell discusses monetary policy rules as useful insofar as they guide FOMC decisions on the path of interest rates. Because interest rates alone do not accurately reflect the stance of monetary policy, this laser focus on them can be problematic.

To truly improve the Fed’s performance, Powell should move beyond policy rules that fixate on interest rates and instead explore a monetary regime that would enhance macroeconomic stability.

Powell will be on the Hill again tomorrow, before the House Committee on Financial Services.

The heat and humidity are now on the rise again after a quite pleasant respite. But the last heatwave was exceedingly uncomfortable and prompted an examination of just how miserable Mid-Atlantic summers can be. My own weather equipment, in Marshall, VA, showed that the heat index—a weighted combination of temperature and humidity that serves as a measure of heat stress—topped out at an astounding 125°F late in the afternoon of July 3.

This wasn’t a nationwide event, unlike the dust-bowl summers of 1934 and 1936. Instead, as shown on climatologist Roy Spencer’s blog, the unusual heat was rather circumscribed, with a fairly even distribution of above- and below-normal temperatures across North America.

It’s worth having a look at the national history of very hot temperatures, shown below:

Figure 1. Despite warmer global average temperatures, there’s no trend in extremely hot days in the US record.

The heat of the 1930s has yet to be topped. In our region, none of the recent heat holds a melting candle to the summer of 1930, which was also exceedingly dry. Except for a few locations that got hit-or-miss thunderstorms, much of the Mid-Atlantic saw less than an inch of rain between June 20th and the end of August, with reports of a mere 10% of normal rain being common.

Here’s how hot it was. Leander McCormick Observatory is Charlottesville’s long-term climate station. For 23 days, beginning on July 19, 1930, the high temperature averaged 100°. Most Mid-Atlantic stations see about one such day per year. During that heatwave, on July 20, Woodstock, in the heart of the Shenandoah Valley, set the all-time credible Virginia record with 109°. (There is a 110° reading at Balcony Falls VA in 1954, but it’s not consistent with nearby temperatures.)

Urban Washington, DC was largely without air conditioning, and residents took to the parks to sleep. But that’s not a safe option now, and it’s also not clear that we have enough grid power to handle that much heat. The hottest days in the eastern U.S. come perilously close to bringing down the electrical grid.

Lack of, or loss of, air conditioning in a major urban heatwave kills people. This happened in Chicago in 1995, with 739 excess deaths as the heat index went astronomical. Nearby southern Wisconsin and eastern Iowa saw values above 130°, and one location (Appleton, WI) hit an astounding 148° at 5pm on July 13, the most uncomfortable heat ever measured in the western hemisphere. That was an official airport reading made on calibrated instruments.

A peculiarity of urban heatwaves, at least in the continental U.S., is that as they become more frequent—which they must, thanks mostly to urban sprawl, as well as a slight nudge from carbon dioxide—heat-related deaths begin to decline. This was noted both in Chicago, post-1995, and in France, post-2003, as subsequent temperature extremes resulted in far fewer fatalities than would have been expected by heat/death models.

The response to extreme heat is both political and personal. Because of the Chicago tragedy, cities nationwide developed heat emergency plans, which include both publicity and cooling centers. The French decided that—très gauche—American-style air conditioning wasn’t so bad after all, as they descended in droves upon big-box stores to buy units for granny’s room.

The decline in heat-related mortality is therefore a function of adaptation. Two of the hottest cities in the US are Phoenix and Tampa, and they also have some of the oldest (and therefore most susceptible) populations. Only in Seattle, where heatwaves are very rare, is there increasing heat-related mortality. And as urban heat becomes more frequent nationwide, heat-related mortality should decline as long as the power stays on.

As a historian of the Cold War, I have a passing knowledge of a number of meetings between Soviet/Russian leaders and U.S. presidents. Some are famous for getting relations off on the wrong foot (e.g. Kennedy and Khrushchev at Vienna in 1961); others set the stage for great breakthroughs, but were seen as failures at the time (e.g. Reagan and Gorbachev at Reykjavik in 1986); still others are largely forgotten (e.g. Johnson and Kosygin at Glassboro, NJ in 1967). It is impossible to predict how we will remember the first substantive meeting between Donald Trump and Vladimir Putin.

We can see, however, what President Trump wants us to remember. “I think we have great opportunities together as two countries that, frankly,…have not been getting along very well for the last number of years,” Trump said at the opening of the meeting in Helsinki. “I think we will end up having an extraordinary relationship.” 

President Trump has long said, going back to his campaign, that it is important to have good relations with Russia. I agree. I’ve never seen meetings between American leaders and senior government officials and their foreign counterparts as a “reward” for good or bad behavior. It’s called diplomacy. If this first meeting does set a tone for cooperation between the two countries, historians might eventually judge it worthwhile.

Unfortunately, the context surrounding this meeting is not conducive to long-term success. Credible evidence of Russian interference in the 2016 election, affirmed in detail as recently as Friday, casts a long shadow, and makes it very difficult to make progress on matters of mutual interest. Any genuine breakthrough will immediately run afoul of U.S. domestic politics. That President Trump continues to dismiss the Mueller investigation as a “rigged witch hunt” and mostly blames his predecessor for failing to call the Russian election hack to the attention of the American people merely confirms a widespread perception that he doesn’t take it seriously.

In addition, on the heels of last week’s NATO summit, and the G-7 meeting last month, there is the unsettling fact that President Trump seems to prefer meeting with autocrats rather than with leaders of democracies. We saw that again today, with President Trump praising Vladimir Putin effusively days after he humiliated European leaders. He also spoke warmly of their mutual friend, China’s Xi Jinping. Last month, the president joked about how North Koreans “sit up at attention” when Kim Jong Un speaks, and he would like “my people to do the same.” He seems particularly impressed by how others are able to stifle domestic dissent. This behavior and rhetoric play into his critics’ warnings about Donald Trump’s authoritarian instincts, and today’s meeting does nothing to ease such concerns.

President Trump’s idiosyncrasies notwithstanding, however, I will be paying attention to what, if anything, emerges from his meeting with Vladimir Putin. Possible outcomes include an agreement to discuss nuclear arms control, tamping down the civil war in Syria, and perhaps some resolution on Ukraine. But we’d all be advised to wait a bit before rendering a definitive judgment.

As regular Alt-M readers know, I’ve been saying for over a year now that, despite their promise to “normalize” monetary policy, Fed officials have been determined to maintain the Fed’s post-crisis “floor” system of monetary control, in which changes to the Fed’s monetary policy stance are mainly achieved by means of adjustments to the rate of interest the Fed pays on banks’ excess reserve balances, or the IOER rate, for short.

Until recently the Fed’s intentions had to be inferred by reading between the lines of its official press releases, or by referring to personal preferences expressed by leading Fed officials. But with today’s release of the Fed’s official Monetary Policy Report by the Board of Governors, it’s no longer necessary to speculate. The section “Interest on Reserves and Its Importance for Monetary Policy,” on pp. 44-46, leaves hardly any room for doubt that the Board of Governors still regards the IOER rate as “the principal tool the FOMC [sic] uses to anchor the federal funds rate,” and that it plans to keep on doing so after it “normalizes” monetary policy by completing its ongoing balance sheet unwind and by further raising its fed funds rate target upper limit by another percentage point or so.[1]

An Awkward Start

Having already spilled several gallons of ink criticizing the Fed’s floor system, on these pages and in Floored!, my forthcoming book on the subject, I don’t see the point of reviewing those criticisms here, by way of a comprehensive reply to the Board’s recent remarks defending that arrangement. Still, I can’t resist pointing out some especially galling aspects of those remarks, starting with this opening passage:

The financial crisis that began in 2007 triggered the deepest recession in the United States since the Great Depression. In response, the Federal Open Market Committee (FOMC) cut its target for the federal funds rate to nearly zero by late 2008. Other short-term interest rates declined roughly in line with the federal funds rate. Additional monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation. The FOMC undertook other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.

These policy actions made financial conditions more accommodative and helped spur an economic recovery that has become a long-lasting economic expansion.

Although the passage itself doesn’t refer to interest on reserves, its purpose is to introduce a discussion devoted to singing the praises of that policy instrument. It’s in light of that intention that the passage raises my hackles. For what the Fed’s report doesn’t say is that, when the Fed introduced IOER in early October 2008, it did so, not because it thought “monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation,” but because it was determined to prevent its then-ongoing emergency lending from having any stimulus effect, and from thereby becoming a source of unwanted upward pressure on inflation! IOER was, in other words, originally intended to serve as a contractionary monetary policy measure, just when monetary expansion was desperately needed.

And boy did it work! NGDP, which had been growing, albeit at a snail’s pace, went into a tailspin. Nor was that all. Because the Fed’s IOER rate — first set at 75 basis points, briefly lowered to 65 bps, then quickly raised to 100 basis points, and finally lowered again (in early December 2008) to 25 basis points, where it remained for the duration of the crisis — was designed to prop up the fed funds rate by encouraging banks to accumulate excess reserves, when the Fed finally determined that the U.S. economy could use a little stimulus after all, it had no choice but to resort to “other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.”

But we mustn’t be too hard on the authors of the report. After all, it would have been awkward for them to laud the Fed’s floor system after first pointing out how, during the last months of 2008 and the start of 2009, that system played an important part in bringing the U.S. economy to its knees.

Not a Popular System

Another irksome passage in the Board’s report is the one declaring that “Interest on reserves is a monetary policy tool used by all of the world’s major central banks.” Yes, and no. Plenty of central banks pay interest on bank reserves. But the policy the report defends isn’t simply that of paying interest on bank reserve balances, including excess reserve balances. It’s that of using the IOER rate as the Fed’s chief instrument of monetary control, which is the essence of a “floor” operating system. And that means setting an IOER rate high enough to encourage banks to stock up on excess reserves, instead of trading them for other assets.

Although the central banks of several other nations have employed floor systems in the past, today, besides the Fed itself, only the Bank of England and the ECB still rely on floor systems — or something close. Most central banks now rely on “corridor” systems of some kind, in which the central bank’s IOER (“deposit”) rate sets a lower bound on movements in its policy rate, and open-market operations are routinely employed to keep the actual policy rate at a target set somewhere between that lower bound and an upper bound consisting of the central bank’s own lending rate. Finally, a number of other central banks that either used floor systems before the crisis or adopted such systems during it, including the Swiss National Bank, the Bank of Japan, Norges Bank, and the Reserve Bank of New Zealand, switched to “tiered” or “quota” systems afterwards. In a tiered system, reserves may earn interest at a rate that makes them seem attractive relative to other safe assets, but they do so only up to a fixed limit. Beyond that limit they earn only a relatively modest return — if not a zero or negative return. Because the marginal opportunity cost of reserves remains positive in tiered systems, such systems operate more like corridor systems than like a floor system.
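The corridor-versus-floor distinction above can be made concrete with a stylized sketch. All the rates below are hypothetical round numbers of my own choosing, not any central bank's actual settings:

```python
# Stylized contrast between "corridor" and "floor" operating systems.
# All rates are hypothetical percentages, chosen only for illustration.

def overnight_rate(market_rate: float, deposit_rate: float,
                   lending_rate: float) -> float:
    """Arbitrage keeps the overnight rate inside the band set by the
    central bank's deposit (floor) and lending (ceiling) rates: no bank
    lends below what the central bank pays on reserves, and none borrows
    above what the central bank charges at its lending facility."""
    return min(max(market_rate, deposit_rate), lending_rate)

# Corridor: with scarce reserves the market-clearing rate sits inside the
# band, so open-market operations can steer it toward a midpoint target.
assert overnight_rate(1.4, deposit_rate=1.0, lending_rate=2.0) == 1.4

# Floor: reserves are so abundant that the market-clearing rate would fall
# below the deposit rate, so the overnight rate settles at (or near) IOER
# and the deposit rate itself becomes the policy instrument.
assert overnight_rate(0.2, deposit_rate=1.0, lending_rate=2.0) == 1.0
```

The second case is the heart of a floor system: once the deposit rate binds, changing the quantity of reserves barely moves the overnight rate, and policy works almost entirely through the administered IOER rate.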

Just How Low Has the Fed Really Gone?

But of all the irritating claims of the Board’s report, the one that has gone furthest in putting me in high dudgeon is this one:

The rate of interest the Federal Reserve pays on banks’ reserve balances is far lower than the rate that banks can earn on alternative safe assets, including most U.S. government or agency securities, municipal securities, and loans to businesses and consumers. Indeed, the bank prime rate — the base rate that banks use for loans to many of their customers — is currently around 300 basis points above the level of interest on reserves.

To which the following footnote is appended:

The Congress’s authorization allows the Federal Reserve to pay interest on deposits maintained by depository institutions at a rate not to exceed the “general level of short-term interest rates.” The Federal Reserve Board’s Regulation D defines short-term interest rates for the purposes of this authority as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments.” The rate of interest on reserves has been well within a range of short-term interest rates as defined in Board regulations.

Where to begin?

It’s absurd, first of all, to treat interest rates on “loans to businesses and consumers,” the prime rate included, as rates on safe assets. But don’t take my word for it: consider what two Fed senior economists, one of whom works at the Board of Governors, have to say on the subject, in a Liberty Street Economics post entitled, “What Makes a Safe Asset?” Safe assets, they write,

are those with a very high likelihood of repayment, and are easy to value and trade …. As a result, safe assets typically trade at a premium, known in the academic literature as a “convenience yield,” which reflects the nonpecuniary benefits investors receive for holding them …

In today’s financial system, the prime example of a safe asset is U.S. Treasury securities. These securities are considered to have zero credit risk, can be easily sold, and can be used as collateral either to raise funding or to post as margin in derivatives positions. … Treasuries’ safe asset status translates into an average yield reduction of 73 basis points. This yield spread can be interpreted as a measure of the convenience yield embedded in Treasuries.

However, Treasuries differ significantly in maturity and that affects their safe asset characteristics. Treasury bills (T-bills) have the shortest maturities and are often thought of as “money-like” assets, that is, assets similar to physical currency. Because of this moneyness, yields on short-term T-bills are typically lower than those on comparable assets….

The private sector can also create safe assets. For example, many of the benefits ascribed to public safe assets are also attributed to private short-term debt of certain issuers. An important difference between public and private safe assets, however, is that the reliability of private safe assets can come into question.

Stretch the notion as much as you like; you will never get “safe assets” to include even the safest bank loans. That is, you won’t be able to do it unless you are a Fed official trying to claim that the Fed’s IOER rate has been “far lower than the rate that banks can earn on alternative safe assets.”

Nor is it possible to justify comparing the Fed’s IOER rate — a rate on assets (reserves) of essentially zero maturity — to rates on otherwise safe longer-term assets. Instead, to sustain the claim that the Fed’s IOER rate has been low relative to that on assets of comparable safety, including comparably low exposure to interest-rate (or duration) risk, Fed officials would have to show that the IOER rate is below rates on safe assets with very short (if not zero) maturities. That rules out comparisons to Treasury and agency bonds and notes, leaving only Treasury bills. Even then the comparison is a bit unfair, as even the shortest-term Treasury bills have longer terms — and are therefore less liquid and safe — than bank reserves.

But let that pass. Instead, let’s just consider how the report’s assertion that the Fed’s IOER rate “is far lower than the rate that banks can earn on alternative safe assets” stacks up against the record regarding yields on various Treasury bills. Let FRED do the talking:

As the chart shows, throughout most of its existence the IOER rate has been well above not just rates on shorter-term Treasury bills but also those on 1-year T-bills; indeed, for a long interval banks had to hold T-bills of 2-year maturities or longer to earn as much interest as excess reserves paid. And while the situation isn’t nearly so bad today, it remains the case that reserves pay more than one-month Treasury bills. That’s not “far lower than the rate that banks can earn on alternative safe assets.” It’s not even a little lower. It’s higher. Nor could things be otherwise, because having a floor system means having an IOER rate that’s high enough “to remove the opportunity cost to commercial banks of holding reserve balances,” which it wouldn’t be if it were really “far lower than the rate that banks can earn on alternative safe assets.”

“D” for Deception

And what about that footnote? It just adds insult to injury by showing the lengths to which the Fed has been willing to go to twist and bend the law authorizing it to pay interest on bank reserves. As the note correctly observes, that law requires that the Fed’s IOER rate not exceed “the general level of short-term interest rates.” Since the IOER rate is itself, as we’ve seen, a rate on a riskless zero-maturity asset, any reasonable interpretation of the statute would have it refer to the general level of rates on other short-term, riskless assets, such as 4-week Treasury bills or, perhaps, overnight Treasury-secured repos.

So, in preparing Regulation D, how did the Fed define short-term rates for the purpose of implementing the statute? Why, as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments” (my emphasis). If you can’t see how self-serving — not to say dishonest — the Fed’s definition is, please read it again, carefully, bearing in mind what “term” rates are and that the Fed’s “primary credit rate” is what’s more commonly known as its “discount” rate — that is, “the interest rate charged to commercial banks and other depository institutions on loans they receive from their regional Federal Reserve Bank’s lending facility–the discount window.”

That Regulation D refers to “term” rates rather than overnight rates, when the latter are obviously more appropriate, is the least of it. The inclusion of the Fed’s primary credit rate on the list of comparable rates is the real kicker. First of all, that rate isn’t a market rate but one that the Fed itself administers. What’s more, the Fed has long had a policy of setting it well “above the usual level of short-term market interest rates” (my emphasis again). These days, for example, it sets it “at a rate 50 basis points above the Federal Open Market Committee’s (FOMC) target rate for federal funds.” Because the IOER rate once defined the upper limit of the FOMC’s fed funds target rate range, and is now set 5 basis points below that limit, any interest rate that the Fed pays on reserves is bound to be lower than the Fed’s primary credit rate. Thus the Fed has cleverly interpreted and implemented the statute in a manner that allows it to claim that it is obeying the law requiring that its IOER rate not exceed “the general level of short-term interest rates” no matter how it sets that rate, including when it sets it well above truly comparable market-determined short-term rates!
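The arithmetic here is simple enough to lay out explicitly. Using a hypothetical 2.00% upper limit on the fed funds target range purely for illustration, the spreads described above make the comparison tautological:

```python
# Illustrative basis-point arithmetic (a hypothetical 2.00% upper limit
# on the fed funds target range; not the Fed's actual setting on any
# particular date).

target_upper_bps = 200                      # upper limit of target range
ioer_bps = target_upper_bps - 5             # IOER set 5 bps below the limit
primary_credit_bps = target_upper_bps + 50  # discount rate 50 bps above

# By construction, IOER sits 55 bps below the primary credit rate no
# matter where the target range itself is set, so a legal test that
# includes the primary credit rate among "short-term interest rates"
# can never bind.
assert ioer_bps < primary_credit_bps
print(ioer_bps, primary_credit_bps)  # 195 250
```

Because both spreads are fixed offsets from the same target, the "comparison" holds identically at any level of the target range, which is precisely the author's complaint.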

Now I hope you’re at least starting to see why the Fed’s report has got my goat.

[1] “Sic” because it is the Board of Governors, rather than the FOMC, that sets the IOER rate. Concerning this anomalous exception to the rule assigning responsibility for the conduct of monetary policy to the FOMC, see my January 10, 2018 testimony before the Monetary Policy and Trade Subcommittee of the House Financial Services Committee.

[Cross-posted from]

As a physician licensed to prescribe narcotics, I am legally permitted to prescribe the powerful opioid methadone (also known by the brand name Dolophine) to my patients suffering from severe, intractable pain that hasn’t been adequately controlled by other, less powerful painkillers. Most patients I encounter who might fall into that category are likely to be terminal cancer patients. I’ve often wondered why I am approved to prescribe methadone to my patients as a treatment for pain, but I am not allowed to prescribe methadone to taper my patients off of a physical dependence they may have developed from long-term opioid use, so as to help them avoid the horrible acute withdrawal syndrome. I am also not permitted to prescribe methadone as a medication-assisted treatment for addiction. These last two uses of the drug require special licensing and permits and must comply with strict federal guidelines.

The synthetic opioid methadone was invented in Germany in 1937. By the 1960s, methadone was found to be effective as medication-assisted treatment for heroin addiction, and by the 1970s methadone treatment centers were established throughout the US, providing specialized and highly structured care for patients suffering from substance use disorder. The Narcotic Addict Treatment Act of 1974 codified the methadone clinic structure. Today, methadone clinics are strictly regulated by the Drug Enforcement Administration, the National Institute on Drug Abuse, the Substance Abuse and Mental Health Services Administration, and the Food and Drug Administration. These regulations establish guidelines for the establishment, structure, and operation of methadone clinics, in most cases requiring patients to obtain their methadone in person at one fixed site. After a period of time, some of these patients are allowed to take methadone home from the facility to self-administer while they remain closely monitored. This onerous regulatory system has led to an undersupply of methadone treatment facilities for patients in need. Furthermore, the need for patients to travel, often long distances, each day to the clinic to receive their daily dose has been an obstacle to their obtaining and complying with the treatment program.

Earlier this month addiction specialists from the Boston University School of Medicine and Public Health and the Massachusetts Department of Public Health argued in the New England Journal of Medicine that community physicians interested in treating substance use disorder should be allowed to prescribe methadone to the patients they see in their offices and clinics. Doctors have been allowed to prescribe the opioid buprenorphine for medication-assisted treatment of addiction for years, and in recent years nurse practitioners and physicians’ assistants have been able to obtain waivers that allow them to engage in medication-assisted treatment as well.

The authors noted that methadone has been legally prescribed by primary care providers to treat opioid addiction in other countries for many years: in Canada since 1963, in the UK since 1968, and in Australia since 1970, for example. They state,

Methadone prescribing in primary care is standard practice and not controversial in these places because it benefits the patient, the care team, and the community and is viewed as a way of expanding the delivery of an effective medication to an at-risk population.

Policymakers serious about addressing the ever-increasing overdose rate from (mostly) heroin and fentanyl afflicting our population should take a serious look at reforming the antiquated regulations that hamstring the use of methadone to treat addiction.