Tuesday, January 23, 2018

The Regulatory Subsidy for Extreme Leverage: A Reply to Mike Konczal

Mike Konczal has written a thoughtful, highly critical review of The Captured Economy that focuses on our analysis of financial regulation. Although we are not sure how much Konczal would agree with us once all misunderstandings are resolved, his criticisms are based on a misreading of our position. No doubt all failures of communication are on us, so we welcome this chance to straighten things out and clarify and elaborate our argument.

Although Konczal raises other issues, for purposes of space we’re going to focus just on our contention that the combination of a formal and informal safety net for financial institutions and low capital requirements for those institutions amounts to a massive subsidy for excessive risk-taking—and that, consequently, the U.S. financial sector is both too big and too prone to crisis. Konczal argues that the data don’t back up our contention. As to the formal safety net, he looks at deposit insurance and notes that both the rapid growth of the financial sector in recent decades and the housing bubble meltdown of a decade ago were driven by the “shadow banking” sector, not depository institutions. As to the informal safety net, Konczal points to the absence of any significant spread in borrowing rates between large financial institutions and the rest of the pack in the run-up to the crisis. If “too big to fail” subsidies were a major reason behind both financialization and the financial crisis, where was the funding advantage for firms with access to those subsidies? Furthermore, if those institutions were piling up risk because their creditors would be held harmless regardless, why didn’t we see a buildup in equity risk (“beta”) for those firms in the years before the crisis (since equity holders remained exposed to losses)?

The fundamental problem with Konczal’s critique is that it is pitched at the wrong level of analysis. Konczal looks for the specific effects of discrete policies and comes up empty. But our argument is focused, not on specific policies and how they work in isolation, but rather on the whole underlying regulatory model. We contend that extreme reliance on debt financing is inherently destabilizing, making financial firms highly vulnerable to both insolvency and liquidity crises. Unfortunately, the regulatory system is premised on the assumption that extreme leverage is natural, unavoidable, and even desirable. So rather than eliminating this root cause of financial instability, policymakers have chosen to try to regulate around it with detailed controls on the risks that financial institutions can take. As to how that works out, pick your metaphor. Sometimes, financial regulation resembles the game of whack-a-mole: clamp down on risk-taking somewhere in the system, and soon it crops up somewhere else. Over the long run, regulation resembles putting a lid on the pot while leaving the burner on high: sooner or later, the lid will get knocked off and the pot will boil over.

To be clear, we do not argue—although Konczal suggests we do—that the problem with financial regulation is a dearth of “economic liberty” that can be remedied by “getting government out of the way.” The modern state and modern finance have been inextricably connected since the origins of both. Accordingly, our analytical starting point is to take as given an active government role in overseeing the financial sector. The question, therefore, is entirely one of choosing which institutional arrangements the state uses to facilitate and structure financial markets, with a view to the different effects of various possible arrangements on stability, growth, and inequality. Our contention is that the United States has adopted—typically at the behest of the financial industry—institutional arrangements that generate high system instability while redistributing income and wealth upwards. Nothing in our argument should be understood to suggest that the problem is “too much regulation” and the answer is “deregulation,” for the simple reason that, at the margin where policy change occurs, those terms are basically meaningless. Reducing regulation on the one hand (say, by reducing limits on leverage) may just increase the role of the state (through bailouts) on the other.

Let’s flesh out our story with a little potted history, starting with the creation of the Federal Reserve system in the early twentieth century. At this time, U.S. banks were much less dependent on leverage: debt-to-asset ratios were in the range of 75 percent, as opposed to contemporary ratios well in excess of 90 percent. Yet the financial system was notoriously crisis-prone, thanks to the original sin of U.S. banking regulation: unit banking, or strict limits on interstate and intrastate branching. The financial system that resulted was one composed of thousands of small, under-diversified local banks, for which merely local downturns could be ruinous.

Instead of shoring up the system by allowing mergers and branching, policymakers created the Fed and in particular its discount window. With the Fed in place as the lender of last resort, banks were now better able to ride out liquidity crunches. The unit banking system was somewhat stabilized as a result—and leverage gradually increased, with average debt-to-asset ratios climbing to 85 to 90 percent by the 1920s.

Then came the Great Depression and waves of ruinous bank failures. Policymakers could have responded by redesigning the U.S. financial system in ways that would eliminate the fundamental sources of instability. First, they could have allowed more liberal branching, as some states had begun experimenting with during the 1920s. Second, they could have mandated greater reliance on equity financing, as the “Chicago Plan” put forward by Irving Fisher and others would have done in dramatic fashion by requiring that banks hold reserves equal to 100 percent of deposits (thus ensuring that bank lending would be financed much more heavily with equity). Instead, we got deposit insurance—inserted into the Banking Act of 1933 by Henry Steagall in a bid to save unit banking (it worked too, as state-level branching liberalization all but stopped for decades). Deposit insurance made runs less likely by promising depositors they would never lose their money; at the same time, of course, it made reliance on short-term debt more attractive by reducing its attendant risks. By the 1940s, debt-to-asset ratios had moved above 90 percent, where they have stayed ever since.
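To see why those debt-to-asset ratios matter, here is a minimal sketch (the specific figures are illustrative points along that historical range, not data for any particular bank). Since assets equal debt plus equity, the equity cushion is simply one minus the debt-to-asset ratio, and it determines how far asset values can fall before a bank is insolvent.

```python
# Illustrative arithmetic: how far can asset values fall before equity is wiped out?
# Balance-sheet identity: assets = debt + equity, so the equity cushion, as a share
# of assets, is simply 1 - (debt / assets).

def loss_absorbable(debt_to_assets: float) -> float:
    """Fraction by which asset values can decline before the bank is insolvent."""
    return 1.0 - debt_to_assets

for ratio in (0.75, 0.85, 0.92):
    print(f"debt/assets = {ratio:.0%}: assets can fall {loss_absorbable(ratio):.0%} before insolvency")
```

At 75 percent debt-to-assets, a bank can absorb a 25 percent decline in the value of its assets; above 90 percent, even a single-digit decline wipes out equity entirely.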

Deposit insurance did help to stabilize unit banking—for a few decades at least. Against a backdrop of unprecedentedly robust growth and interest rate stability, the New Deal regulatory model worked fine. With the Great Inflation of the 1970s, however, the model came under severe stress—as evidenced most spectacularly by the savings-and-loan meltdown of the 1980s. The policy response was what has come to be known as financial liberalization: decontrol of interest rates and gradual removal of restrictions on branching and investment banking activities. What didn’t change was regulatory acquiescence in, and affirmative enabling of, extreme leverage. Although capital requirements were imposed and refined, they were aimed merely at pruning outliers, not challenging the systemic reliance on extreme leverage.

The competitive environment for financial firms grew much more challenging and complex in this new era. In the sleepy, staid environment of the postwar decades, bankers followed the “3-6-3” rule: borrow at 3 percent, lend at 6 percent, out on the golf course by 3pm. Now, however, interest rates could move freely—as, increasingly, could exchange rates and international capital flows. The end of unit banking brought dramatic consolidation and created new banking giants. “Financial innovation” ushered in a host of new and increasingly exotic financial instruments, touted as sophisticated tools for managing and minimizing risk—if used correctly. The prolonged stock market boom and the rise of mutual funds and 401(k) plans—a move pushed strongly by the government, at the behest of parts of the financial industry—led to a huge increase in assets under active management. And as financial intermediation’s share of national economic output steadily rose, almost all of that growth occurred outside of traditional banking—in the so-called “shadow banking” sector that emerged to serve the rapidly expanding market for securitized assets.

In short, while under-diversification, the oldest source of market risk and financial instability, was reduced by consolidation, other sources of risk and instability were growing by leaps and bounds. Yet in this more dynamic, complex, volatile, and unpredictable market environment, reliance on extreme leverage persisted unchallenged. The old premise on which this reliance was based—namely, that regulators could keep tabs on financial firms to make sure they didn’t do anything too risky—grew increasingly unrealistic as the range and complexity of financial activities made the necessary oversight impossible.

Repeatedly during this era of financialization, the sector swerved toward a catastrophic reckoning—only to be saved by a combination of ad hoc bailouts and turn-a-blind-eye regulatory forbearance (i.e., not enforcing the rules against struggling institutions). Continental Illinois in 1984, the Latin American debt crisis of the 1980s, the peso crisis of 1994, the Asian financial crisis of 1997-1998, Long-Term Capital Management in 1998, and of course the financial crisis of 2007-2009 – again and again the U.S. government has intervened with emergency assistance to prop up financial institutions deemed too big or too important to fail. This implicit safety net has extended far beyond the traditional banks covered by deposit insurance to include investment banks, the government-sponsored enterprises Fannie Mae and Freddie Mac, hedge funds, money-market mutual funds, and insurance companies.

To understand the effect of these repeated bailouts, imagine the counterfactual: what would have happened if the government had reacted to those pre-2007 episodes by doing nothing and letting the market consequences of excessive risk-taking run their course? The more you think contagion effects are a big problem, the worse the aftermath would have been—and, consequently, the bigger the shift in risk perceptions. Call it a reversal of “animal spirits,” or a “Minsky moment,” or the workings of the availability heuristic on a mass scale, but after the crisis, investors would have seen the world as a much riskier and more unpredictable place than it had seemed just before. And as a consequence, the extreme levels of leverage that had created the exposure to ruin would suddenly look like heedless folly. It seems extremely unlikely that the relentless, debt-fueled expansion of the financial sector we’ve experienced since the 1970s would have reached the levels it has without those bailouts – or that the scale of dysfunction revealed by the bursting of the housing bubble would ever have been attained.

Note that this explanation departs from efficient-markets thinking and embraces a “behavioral finance” approach. We believe that bubbles are a regular feature of asset markets, and that it is possible for everyone to take on too much risk at the same time because they are blind to how risky their behavior really is. By regularly intervening to stave off reality checks that would make people more risk-averse, the government has actively abetted the inflation of the biggest bubble of them all: the U.S. financial sector bubble.

For an analogy from everyday life, think about the last time you were involved in an accident while driving. The more serious the accident, the more likely you were to drive with an abundance, perhaps an over-abundance, of caution in the days that followed. But as memories of the accident receded, and the reassuring normality of accident-free driving reasserted itself, you gradually reverted to the way you drove before.

Now let’s imagine that you’re an aggressive driver who’s prone to excessive risk-taking on the road—lots of tailgating, rapid lane changes, and accelerating through yellow lights. The way you drive, you would get into accidents fairly regularly—but you have a guardian angel (our stand-in for the government) that regularly slams on other drivers’ brakes or yanks their steering wheels to avoid those collisions. Your natural reaction will be to think your driving is much less dangerous than it really is, and you will have no idea that you are a menace on the roads—until one day your guardian angel is preoccupied and you plow through a just-turned-red light into a school bus full of kids.

For the perfect quote that both describes and gives an example of this dynamic, consider this observation made by none other than Robert Rubin back in 1999: “It is in the nature of markets, and probably ultimately in human nature…, to become ever more careless about adequately analyzing and weighing risk as good times continue.” As Treasury secretary, Rubin, lionized as one of the architects of the “Great Moderation” that fostered the widespread view that major U.S. economic crises were a thing of the past, helped to extend good times, and deepen the ensuing complacency, by dutifully delivering bailouts when the need arose. Then, as a director and senior counselor at Citigroup, he had a front row seat for the meltdown. Hubris, meet nemesis.

Our understanding of the roots of the financial crisis thus accords an important role to moral hazard—but understood somewhat differently from how it is typically portrayed. Most descriptions of moral hazard assume that risks are known and properly judged but that perverse incentives lead actors astray. Picture a financial executive knowingly engaging in long-shot investments with a “heads I win, tails you lose” mentality. He knows the chance of a bad outcome is high, but he just doesn’t care: if the investment pays off, his firm makes a huge gain; if it doesn’t, the government will take the loss.
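To make the arithmetic behind that mentality concrete, here is a stylized sketch with entirely hypothetical numbers: a bet that destroys value on average can still look attractive to the firm once the downside is absorbed by someone else.

```python
# Stylized payoff arithmetic (hypothetical numbers): a long-shot bet that destroys
# value overall but looks attractive to the firm once the loss is socialized.

p_win, gain, loss = 0.30, 100.0, 60.0   # 30% chance of a big payoff, 70% chance of a loss

ev_overall = p_win * gain - (1 - p_win) * loss   # counting everyone's losses: about -12
ev_to_firm = p_win * gain                        # the firm's view if the government eats the loss: about +30

print(f"expected value of the bet overall:         {ev_overall:+.0f}")
print(f"expected value to the firm with a bailout: {ev_to_firm:+.0f}")
```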

If this were the only way moral hazard worked, it would be hard to conclude that moral hazard was an important factor in what went wrong. After all, the long bookshelf of crisis postmortems makes clear that terrible blunders were regularly made by people who had plenty to lose. Far from cold-bloodedly gambling with other people’s money, most had no idea how recklessly they were gambling: they were confident that all the risk-management techniques offered up by financial innovation were working well, and they regarded what actually ended up happening as unthinkable.

But moral hazard doesn’t just incentivize consciously aggressive risk-taking. Once you’ve taken the behavioral turn, you see that moral hazard also causes people to underestimate the risks they’re taking. This is what the formal and informal safety nets constructed for the financial sector combine to do: they make the high-wire act of extreme leverage seem a lot closer to the ground than it really is.

Consider again the record of the two decades leading up to the financial crisis. Every few years, policymakers were faced with the prospect of a catastrophic collapse of the U.S. financial system. Read that sentence again, and let it sink in: a system that threatens to implode every few years is not a stable system! By intervening regularly, though, policymakers were able to maintain the appearance of stability—and, thus, the widespread delusion that all was well and the Masters of the Universe really did know what they were doing. Meanwhile, they never took the one step that would have prevented future crises: mandating that financial institutions adopt a thick equity cushion to protect themselves from market downturns and taxpayers from future bailouts.

For readers patient enough to make it through this explanation, we can now return to Konczal’s evidence against our position. As to deposit insurance, we never argued that it was a proximate cause of the financial crisis or the direct driver of financialization. Rather, we see deposit insurance as one component of an overall regulatory structure that works to enable and perpetuate an otherwise unsustainable reliance on extreme leverage by financial institutions both inside and outside that regulatory structure.

Deposit insurance, while it succeeded in stabilizing unit banking, also normalized reliance on extreme leverage. The New Deal system functioned serviceably well for a few decades under the exceptionally favorable circumstances of the postwar boom. But even as circumstances altered and policy liberalized considerably, policymakers never reconsidered extreme leverage as an industry norm. And when nonbank institutions today take on similar leverage, they are simply following long-standing industry practice. Shadow banking grew to prominence in an era when regulators were convinced that contemporary risk management strategies made such leverage perfectly safe.

As to ad hoc bailouts, we have already explained how they propped up reliance on extreme leverage in the face of repeated brushes with disaster. In our understanding, they accomplished this, not by reducing known risks for large and systemically important institutions, but by suppressing risk aversion generally. In the years leading up to the financial crisis, market participants generally had no idea of the risks they were running—in significant part because policymakers kept stepping in and absorbing the downside. Accordingly, we are not surprised that there was no clear funding advantage, or elevated beta, for TBTF firms; a position at the front of the line for government support in the event of a system-wide collapse was not considered especially advantageous when nobody took seriously the prospect of that collapse, and equity holders didn’t generally believe that they were running heightened risks.

The evidence for the massive subsidy for excessive leverage—and, therefore, for an excessively large financial sector—lies in the heightened risk of financial crises and attendant bailouts that the current regulatory system perpetuates, institutionalizes, and normalizes. According to analysis by the Federal Reserve Bank of Minneapolis, the regulatory system as of 2007, on the verge of the crisis, stood an 84 percent chance of spawning a financial crisis over the next century. Dodd-Frank reforms in the aftermath have done a bit of good, but even so, the chance of a crisis in the next 100 years has only dropped to 67 percent. As this analysis and other studies have found, higher capital standards offer the prospect of significantly higher output over the long term, as the costs in terms of higher lending spreads (caused by higher financing costs for banks) are more than offset by avoiding the enormous losses that financial crises inflict.
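To translate those century-horizon figures into more familiar terms, here is a rough back-of-the-envelope conversion into implied annual odds. The simplifying assumption (ours, not the underlying study’s) is that every year carries the same independent probability of a crisis.

```python
# Back-of-the-envelope conversion: if each year carries the same independent crisis
# probability p, then P(at least one crisis in 100 years) = 1 - (1 - p)**100.
# Inverting that relationship recovers the implied annual probability.

def implied_annual_probability(century_prob: float, years: int = 100) -> float:
    return 1.0 - (1.0 - century_prob) ** (1.0 / years)

for label, century_prob in (("pre-crisis (2007) rules", 0.84), ("post-Dodd-Frank rules", 0.67)):
    annual = implied_annual_probability(century_prob)
    print(f"{label}: {century_prob:.0%} over a century is roughly {annual:.1%} per year")
```

On that rough reckoning, the pre-crisis rules implied close to a 2 percent chance of crisis every single year, and even the post-Dodd-Frank rules leave the odds above 1 percent a year.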

Accordingly, the combination of the financial safety net and low capital requirements constitutes a large-scale, negative-sum transfer of resources to the financial sector—reducing its borrowing costs, boosting return on equity (and therefore pay for financial professionals, which is often based on ROE), and expanding the overall size of the system.
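For readers who want the mechanics spelled out, here is a minimal sketch, using hypothetical numbers rather than estimates for any real institution, of how a thinner equity cushion and subsidized borrowing costs each inflate return on equity.

```python
# Hypothetical numbers (not estimates for any real bank) showing how thin equity
# and cheap, safety-net-subsidized debt each inflate return on equity (ROE).

def roe(assets: float, equity: float, return_on_assets: float, debt_rate: float) -> float:
    debt = assets - equity
    profit = return_on_assets * assets - debt_rate * debt
    return profit / equity

# The same 2% return on assets throughout; only the funding mix and debt cost change.
print(f"25% equity, 1.5% debt cost: ROE = {roe(100, 25, 0.02, 0.015):.1%}")  # modest leverage
print(f" 8% equity, 1.5% debt cost: ROE = {roe(100, 8, 0.02, 0.015):.1%}")   # leverage alone roughly doubles ROE
print(f" 8% equity, 1.3% debt cost: ROE = {roe(100, 8, 0.02, 0.013):.1%}")   # cheaper debt pushes it higher still
```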
