Global economic downturn linked with at least 260,000 excess cancer deaths

The economic crisis of 2008-10, and the rise in unemployment that accompanied it, was associated with more than 260,000 excess cancer-related deaths, including many considered treatable, within the Organisation for Economic Co-operation and Development (OECD), according to a study from Harvard T.H. Chan School of Public Health, Imperial College London, and Oxford University. The researchers found that excess cancer burden was mitigated in countries that had universal health coverage (UHC) and in those that increased public spending on health care during the study period.

The study will be published May 25, 2016 in The Lancet. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2816%2900577-8/abstract

— source eurekalert.org

Business Bankruptcies Soar 38%

Something funny happened on the way to the bank: In August, commercial and industrial loans outstanding at all banks in the US fell for the first time month-to-month since October 2010, which had marked the end of the collapse of credit during the Financial Crisis.

In October 2008, the absolute peak of the prior credit bubble, there was $1.59 trillion in commercial and industrial loans outstanding. As the Great Recession chewed into the economy, C&I loans plunged. Many of them were cleansed from bank balance sheets via charge-offs. But then the Fed decided what the US needed was more debt to fix the problem of too much debt, thus kicking off what would become the greatest credit bubble in US history. By July 2016, C&I loans had surged to $2.064 trillion, 30% above their prior bubble peak.

But in August, something stopped working: C&I loans actually fell 0.3% to $2.058 trillion, according to the Federal Reserve Board of Governors. That translates into an annualized decline of 3.8%, after an uninterrupted six-year spree of often double-digit annualized increases. It was the first month-to-month dip since October 2010.
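As a rough check on the arithmetic, a one-month change compounds into an annualized rate as sketched below. The loan figures are the article’s rounded ones, so the result lands near, but not exactly on, the quoted 3.8%, which presumably reflects the Fed’s unrounded underlying data.

```python
# Sketch: how a single monthly decline compounds into an annualized rate.
# Figures are the rounded ones quoted in the article, in $ trillions.

def annualized_change(start, end, periods_per_year=12):
    """Compound a one-period change over a full year."""
    return (end / start) ** periods_per_year - 1

july, august = 2.064, 2.058

monthly = august / july - 1              # roughly -0.3%
annual = annualized_change(july, august)

print(f"monthly change:    {monthly:+.2%}")
print(f"annualized change: {annual:+.2%}")
```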

It’s still too early to tell how significant this dip is. It’s just the first one. It could have occurred because companies borrow less because they need less money as there’s less demand, and expansion is no longer on the table. Or it could have occurred because banks are beginning to tighten their lending standards, with one hand on the money spigot. And all this is occurring while banks write off more nonperforming loans (and thus remove them from the C&I balances) that have resulted from mounting defaults and bankruptcies by their customers.

The ugliest credit stories in terms of bonds, according to Standard & Poor’s Distress Ratio, are the doom-and-gloom categories of “Energy” and “Metals, Mining, and Steel.” Next down the line are two consumer-facing industries: brick-and-mortar retailers and restaurants.

But these metrics by credit ratings agencies are based on companies that are big enough to be rated by the ratings agencies and that are able to borrow in the capital markets by issuing bonds. The 18.9 million small businesses in the US and many of the 182,000 medium-size businesses don’t qualify for that special treatment. They can only borrow from banks and other sources. And they’re not included in those metrics.

But when they go bankrupt, they are included in the overall commercial bankruptcy numbers, and those numbers are getting uglier by the month.

In September, US commercial bankruptcy filings soared 38% from a year ago to 3,072, the 11th month in a row of year-over-year increases, according to the American Bankruptcy Institute.

For the first nine months of 2016, commercial bankruptcy filings jumped 28% compared to the same period in 2015, to 28,789. Most of those are not the bankruptcies we hear about in the financial media. Most of them are small businesses that go that painful route – painful for their creditors too – in the shadows of the hoopla on Wall Street.

By comparison, just over 100 oil and gas companies in the US and Canada have gone bankrupt since the beginning of 2015. About a dozen retail chains have filed over the past year, along with about 12 restaurant companies, representing 14 chains.

Commercial bankruptcy filings skyrocketed during the Financial Crisis and peaked in March 2010 at 9,004. Then they fell on a year-over-year basis. In March 2013, the year-over-year decline in filings reached 1,577. Filings continued to fall, but at a slower and slower pace, until November 2015, when for the first time since March 2010, bankruptcy filings rose year-over-year. That was the turning point. And there has been no “plateauing” since.

In September this year, bankruptcies exceeded those from a year ago by 855 filings – the 38% jump. March and May saw similar year-over-year increases. So this looks like the beginning of a new and long trend that is not going to fit into the rosy scenario.

Rising bankruptcies are an indicator that the “credit cycle” has ended. The Fed’s policy of easy credit has encouraged businesses to borrow – those that could. But by now, this six-year debt binge has created an ominous debt overhang that is suffocating these businesses as they find themselves, against all promises, mired in an economy that’s nothing like the escape-velocity hype that had emanated from Wall Street, the Fed, and the government.

— source wolfstreet.com

Still Feeling the Great Recession?

Do you enjoy riding on roller coasters? Do you like rising, screaming, and then suddenly plummeting, never quite knowing exactly what scary sensation lurks around the next bend?

Many folks crave that sort of experience. Many others don’t. And if you fall in the latter category, you don’t have to worry about roller coasters — because you have a choice. You can choose not to ride. You can avoid all the precipitous ups and downs.

In our modern market economies, we have no such choice. Ups and downs — booms and busts — come with the territory.

Over recent years, in the United States, we’ve lived the angst of this reality. In fact, we still haven’t fully recovered from the Great Recession.

Why not? A global team of economists recently took the time to ponder that question. And their answer revolves around a key choice that — even in a market economy — we can make. We can choose to be more equal. The more equal an economy, the less severe and long-lasting economic downturns will be.

How does inequality make downturns worse? The Great Recession offers a telling case study, and the new research from Stockholm University’s Kurt Mitman, the University of Pennsylvania’s Dirk Krueger, and the University of Minnesota’s Fabrizio Perri walks us through it.

In 2007, the year the Great Recession officially began, the United States was experiencing its highest level of household economic inequality since the 1930s. America’s poorest 40 percent of households had more debts than assets. Taken together, these households essentially held 0 percent of the nation’s wealth.

The richest 20 percent of households, by contrast, held 82.7 percent of America’s wealth.

Household income figures told that same basic inequality story. The bottom two fifths of households were taking in 19.9 percent of national income. The top one fifth was pulling down over double that share, 41.2 percent.

But the story changes a bit when we look at consumption. The richest fifth may have had over 80 percent of the nation’s wealth. But their personal spending made up only 37.2 percent of what the nation consumed.

The poorest two fifths of households, on the other hand, may have had zero wealth. But they did have income, and they were spending almost all that income, month after month, on the goods and services they needed to get by. Their personal spending accounted for nearly a quarter of the nation’s total consumption, 23.7 percent.

All these consumption numbers matter. In an economic downturn, consumption levels determine how rapidly and how well an economy will recover. If people aren’t spending, businesses aren’t going to be hiring.

So what happened after the Great Recession hit? People with little or no wealth started spending less. Households in the bottom fifth decreased their spending at twice the rate of households in the top fifth.

In their new research, economists Mitman, Krueger, and Perri take pains to emphasize why exactly poorer people spend less when an economy goes south. The reason that at first glance might appear to be the most obvious — that poorer households in hard times simply have less income to spend — turns out not to be the key driver.

Yes, low-wealth households that have lost jobs and paychecks will spend less when hard times hit. But low-wealth households that have not lost jobs and paychecks will also spend less. They’ll spend less because they don’t have the resources, as Mitman, Krueger, and Perri put it, “to self-insure against idiosyncratic risk.”

In other words, low-wealth households don’t have enough cash available to tide them over if they lose a paycheck. So these households, once hard times arrive, “drastically reduce their expenditure rates, even if their income has not dropped yet.”

The more low-wealth households in a society, the more devastating the impact of these spending reductions on the economy as a whole, and the longer downturns linger.

If, on the other hand, we had a more equal distribution of wealth in the United States, more households would be able to keep spending at the onset of a downturn. The rough times would end sooner.

So what should we do? Short-term, we ought to be doing our best to give poor households more income security. Our woefully inadequate system of unemployment insurance needs a total makeover. Families need to see that the loss of a job will not mean a devastating loss of income.

And in the longer term? We simply need to become more equal. We need a total makeover of the policy decisions — on everything from taxes and trade to labor rights and business regulation — that have left the distribution of household wealth in the United States so incredibly top-heavy.

— source toomuchonline.org By Sam Pizzigati

Cryptocurrency Crash

One of the more profitable trades this year was in the cryptocurrency Bitcoin.

For those unfamiliar, Bitcoin is a digital asset and payment system — a virtual currency. It’s considered a cryptocurrency because cryptography, rather than a central bank, secures its transactions. The whole system is self-contained: a ledger of every transaction is cryptographically recorded across a distributed network of computers. This technology is called the blockchain.

The benefit of blockchain technology comes from its transparency. Everyone can see every transaction. The whole system is also decentralized. There’s no single institution or bank that controls the transferring of assets back and forth. This (advocates claim) removes the possibility of corruption, theft, and a whole host of other common problems that come with your standard financial system.
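The ledger idea described above can be sketched in a few lines: each block commits to the hash of the previous block, so tampering with any earlier entry breaks every link after it. This is a toy illustration of hash chaining, not Bitcoin’s actual data structures; the transaction strings and block layout are invented for the example.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block whose hash covers its transactions and its link
    to the previous block."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Every block must point at the hash of the block before it."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
second = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])
print(chain_is_valid([genesis, second]))   # True

# Tamper with the first block: its hash changes, so the second
# block's back-link no longer matches and the chain is rejected.
genesis = make_block(["alice pays bob 500"], prev_hash="0" * 64)
print(chain_is_valid([genesis, second]))   # False
```

The transparency the article mentions follows from every participant holding a copy of this chain and being able to re-run the same validity check.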

Bitcoin and its fellow cryptocurrencies (a number have been launched since) have become popular as alternatives to the standard fiat currencies of governments around the world. In some ways they’re treated in a similar way to gold and other precious metals. Don’t trust the government? Scared of inflation or other market problems? Then pile into these alternative currencies.

Our team at Macro Ops likes the idea of these virtual currencies. Their technology is impressive and can be used in a variety of different applications.

But the advocates of these currencies have come to the point of pushing fantasies. Their long-term goal is to create a system completely free of human intervention — with machines operating everything. In their minds, the humans are the problem and rigid automation is the solution to create a “perfect” system.

A large percentage of cryptocurrency investors believe in this vision to some extent. And this belief is part of what fuels massive speculative runs and subsequent crashes in the prices of these assets.

We saw this happen just recently in the Ethereum market, another cryptocurrency.

The story of the crash starts with the creation of a new “revolutionary” kind of venture capital firm — the Decentralized Autonomous Organization (DAO). Its goal? To be the first VC with no executives. Computers would manage everything.

The firm used Ethereum technology to run its operations. Investors would join the fund by submitting Ether, and once they bought in, they would receive voting rights in proportion to their investment. Companies that wanted to be funded by the VC would submit their proposals which all DAO investors would vote on. Whichever proposal won the voting round would be accepted and funded. All this was facilitated through Ethereum technology.
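The funding process described above amounts to weighted voting. Here is a minimal sketch, assuming (as the article describes but does not specify precisely) that an investor’s voting weight is simply their share of the Ether contributed; the investor names, amounts, and proposal labels are invented, and this is not the DAO’s actual contract code.

```python
# Toy model of proportional voting: weight = share of Ether contributed,
# and the proposal with the most weighted votes wins funding.

contributions = {"inv_a": 120.0, "inv_b": 50.0, "inv_c": 30.0}  # in ETH

votes = {  # which proposal each investor backs
    "inv_a": "proposal_1",
    "inv_b": "proposal_2",
    "inv_c": "proposal_2",
}

total = sum(contributions.values())
tally = {}
for investor, proposal in votes.items():
    weight = contributions[investor] / total
    tally[proposal] = tally.get(proposal, 0.0) + weight

winner = max(tally, key=tally.get)
print(tally)
print(winner)   # proposal_1 (60% of contributed Ether backs it)
```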

It was a decentralized, democratic system with full transparency — a brand new kind of investment firm. Investors considered it a beautiful extension of the technology that underpinned cryptocurrencies. It excited them. And they piled in. The DAO quickly raised $152 million from investors around the world.

But then the unthinkable happened. The fund was robbed. A hacker exposed weaknesses in DAO’s Ethereum construct and stole over $50 million.

The hacking successfully put an end to the DAO, and what’s more, it cast doubt on the security and durability of the entire Ethereum system. The beliefs of cryptocurrency investors took a beating. And that beating transferred to virtual currency prices. The price of Ether was nearly cut in half by the incident.

But soon after the DAO robbery, Ethereum developers were actually able to catch the hacker and freeze the funds he stole.

Great… problem solved right?

Nope.

This caused a giant debate to erupt within the Ethereum community. Returning the stolen money to investors would require a manual change to Ethereum’s underlying technology. This is a huge deal because it would require human intervention, which would defeat the whole purpose of a completely autonomous system. It would ruin the sanctity of the currency and fly in the face of the principles it was built on. This made the decision a polarizing one. It’s ironic because the community is now stuck in a political battle, exactly the kind it hated and created cryptocurrencies to avoid.

There are a few lessons to be learned from this. One is the need for regulation.

Cryptocurrency creators believed that a completely machine-based system wouldn’t need regulation like standard banks do. This would lead to fewer costs and far better efficiency, creating a new and improved financial system.

This is a nice sentiment. But in reality, regulation is necessary. Now, we agree overregulation is bad, and much of the financial system is suffering from it now, but zero regulation is just as dumb. To think cryptocurrencies could somehow avoid any type of regulation is stupid. It comes back to cases of fraud and stolen assets. There need to be rules in place so that the right people are prosecuted and victims are compensated.

It’s funny because the cryptocurrency community is starting to realize this. They’re starting to see why the original banking system is there in the first place with all its rules. It turns out not all parts of the system are worthless and in need of “disruption”. Surprise, surprise…

We’ve now seen various members of the cryptocurrency community call for the SEC to step in, claiming that “the current ‘wild-west’ environment presents dangerous pitfalls for potential investors, as the DAO attack has shown.”

Back to regulation we go…

The second lesson here is in the unrealistic expectations of investors causing booms and busts. A lot of the price run-ups in these virtual currencies have been due to investors’ beliefs in utopian fantasies of perfect financial systems without regulations. When beliefs stretch that far out into left field, any small trip up in the investment narrative (such as a system hack) will cause prices to come tumbling down.

This is why we’ve seen multiple large crashes in these cryptocurrencies in just the few years of their existence. These are dangerous markets and investors should be wary of getting involved. They may be a good investment in the future, but now is not the time. It would be best for most investors to sit on the sidelines and wait for the numerous problems that come with the creation of a new currency to be solved before jumping in.

Investors are in for a lot of pain as the Sharpe ratio reverts to the mean.
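For reference, the Sharpe ratio mentioned above is the mean excess return of an asset divided by the volatility of those returns. A minimal computation is sketched below; the return series is invented for illustration, not real cryptocurrency data.

```python
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Mean excess return divided by the sample standard deviation
    of excess returns (per period, not annualized)."""
    excess = [r - risk_free_rate for r in returns]
    return mean(excess) / stdev(excess)

# Illustrative monthly returns
returns = [0.04, -0.02, 0.03, 0.05, -0.01, 0.02]
print(f"Sharpe ratio: {sharpe_ratio(returns):.2f}")
```

A speculative run-up followed by a crash shows up as a high trailing Sharpe ratio collapsing toward (or below) zero, which is the mean reversion the author is warning about.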

— source wolfstreet.com By Alex M.

How bank networks amplify financial crises

How financial networks propagate shocks and magnify recessions is of interest to both scholars and policymakers. The financial crisis of 2007-8 convinced many observers that financial networks were fragile, and while reforms are underway, much remains to be learned about how and why connections between financial firms matter for the macroeconomy. Indeed, the complexity and sheer number of linkages has made it particularly challenging to formulate empirical estimates of their role in amplifying downturns.

Economic theory suggests many channels through which networks may transmit shocks (Allen and Gale 2000, Caballero and Simsek 2013) and empirical research has provided some evidence of contagious failures flowing through interbank markets, particularly for the recent financial crisis in the US and Europe (Puhr et al. 2012, Fricke and Lux 2012). History should have a lot to say about the role of networks in contributing to the severity of financial crises, but it is a surprisingly lightly studied aspect of earlier periods of financial turmoil – even for well-researched episodes such as the Great Depression. This lacuna exists despite the fact that financial networks of the past may be simpler in structure, thus making it somewhat easier to identify empirically how aggregate variables, such as lending, were affected when linkages were disrupted.

In a recent paper, we document how the interbank network transmitted liquidity shocks through the US banking system and how the transmission of these shocks amplified the contraction in real economic activity during the Great Depression (Mitchener and Richardson 2016). The paper contributes to the growing literature on financial networks and the real economy, illuminating both a mechanism for transmission (interbank deposits) as well as a source of amplification (balance-sheet effects). It also introduces an additional channel through which banking distress deepened the Great Depression and complements existing research on how bank distress during the Great Depression influenced the real economy.

We describe how a pyramid-like structure of interbank deposits developed in the 19th century, how the founding of the Fed altered the holdings of these deposits, and how this structure then influenced real economic activity during periods of severe distress, such as banking panics (Mitchener and Richardson 2016).

The interbank network that existed on the eve of the Great Depression linked large money centre banks in New York and Chicago to tens of thousands of smaller rural banks throughout the US. The money centre banks served as correspondents holding deposits from institutions in the countryside. Interbank balances exposed correspondent banks to shocks afflicting banks in the hinterland. Interbank deposits were a liquid source of funds that could be deployed to meet sudden demands by depositors to convert claims to cash, and the removal of these deposits from correspondent banks peaked during periods that contemporary commentators described as – and that our detailed statistical analysis of bank suspensions confirms were – banking panics.

Although the pyramided system of interbank deposits could handle idiosyncratic bank runs, when runs clustered in time and space (i.e. when panics occurred) the system became overwhelmed in the sense that banks higher up the pyramid were forced to adjust to these changes in liabilities by changing their assets (i.e. lending).
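The balance-sheet adjustment at the top of the pyramid can be made concrete with a stylized example. This is our own illustrative sketch, not the authors’ model; the reserve ratio and dollar amounts are invented.

```python
# Stylized correspondent bank: when country banks withdraw interbank
# deposits in a panic, the correspondent pays out cash and then must
# shrink its loan book to rebuild reserves - the "interbank amplifier".

def correspondent_after_withdrawal(reserves, loans, interbank_deposits,
                                   withdrawal, reserve_ratio=0.10):
    """Return (reserves, loans) after an interbank withdrawal, cutting
    loans as needed to keep reserves at the required ratio of the
    remaining deposits."""
    deposits = interbank_deposits - withdrawal
    reserves -= withdrawal                  # cash out the door first
    required = reserve_ratio * deposits
    if reserves < required:                 # rebuild reserves by
        loans -= required - reserves        # calling in loans
        reserves = required
    return reserves, loans

# Correspondent: $20m reserves, $80m loans, against $100m of interbank
# deposits. A panic pulls $15m of those deposits at once.
reserves, loans = correspondent_after_withdrawal(20.0, 80.0, 100.0, 15.0)
print(f"reserves: {reserves}m, loans: {loans}m")
```

In this toy example a $15m withdrawal forces a $3.5m cut in lending; when such withdrawals cluster across many correspondents at once, the lending contraction aggregates, which is the amplification mechanism the paper quantifies.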

We use the timing and location of these panics to statistically identify the causal relationship between panics, deposit withdrawals, and the decline in lending that occurred in banks in reserve and central reserve cities throughout the US. During periods identified as panics, withdrawals of interbank deposits forced correspondent banks to reduce lending to businesses. These interbank outflows led to a substantial decline in aggregate lending, equal to approximately 15% of the total decline in commercial bank lending in the US, from the peak in 1929 to the trough in 1933.

Ironically, the Federal Reserve System had been created with the purpose of preventing crises such as those that had regularly plagued the banking system in the 19th century. We help to explain why the Fed failed to fulfil this basic responsibility. Because the Fed failed to convince roughly half of all commercial banks to join the system, a pyramided structure of reserves persisted into the third decade of the 20th century and created a channel through which interbank deposits could influence real economic activity. In theory, pyramided reserves could have been deployed to help troubled banks, but during the banking panics of the 1930s, just as in the panics of the late 19th century, the total size of these withdrawals overwhelmed correspondent banks, leaving those banks with the choice of either saving themselves, contracting on the asset side of their balance sheets, or borrowing from the Fed. With the Fed unable or unwilling to provide sufficient liquidity to support distressed correspondent banks, they were forced to react to interbank outflows by reducing lending, thus amplifying the decline in investment spending. Although the mechanism is new, our results corroborate other studies on the Depression, which emphasise how banking distress reduced loan supply (Bernanke 1983, Calomiris and Mason 2003b).

What might have alleviated this problem? One solution would have been for the Federal Reserve to extend sufficient liquidity to the entire financial system. The Fed could have done this by lending funds to banks in reserve centres. In turn, those banks could have loaned funds to their interbank clients. To do this, banks in reserve centres would have had to accept as collateral loans originated by non-member banks. Banks in reserve centres would, in turn, need to use those assets as collateral at the Federal Reserve’s discount window. However, leaders of the Federal Reserve disagreed about the efficacy and legality of such action.

Another potential solution would have been to compel all commercial banks to join the Federal Reserve System and require all commercial banks to hold their reserves at a Federal Reserve Bank. Due to powerful political lobbies representing state and local bankers, however, Congress was unwilling to contemplate legislation that would have effected such changes. Had they done so, the pyramid structure of required reserves would have ceased to exist, and the interbank amplifier, as defined here, would have been dramatically diminished. That said, given the inaction of some Federal Reserve Banks during the 1930s, had such changes taken place, they may have magnified banking distress as more banks would have depended on obtaining funds through Federal Reserve Banks that adhered to the real bills doctrine. As we show, the costs of the pyramid in terms of a contraction in lending were substantial, but banks still met some of their short-term needs through this structure during the turbulent periods of banking distress.

— source voxeu.org By Kris James Mitchener , Gary Richardson

Too Big to Fail From the Eyes of a Specialist

Andrew Ross Sorkin has written a column lamenting that “For a Generalist, ‘Too Big to Fail’ May Be Too Tricky to Judge” about the district court opinion finding in favor of MetLife on the question of whether it would pose a systemic risk were it to fail. Sorkin runs the NYT’s “Deal Book,” which is supposed to represent the paper’s specialized expertise with regard to Wall Street. His column demonstrates that one of the areas of expertise required to understand Wall Street is the law, and that it is beyond his understanding despite having “read hundreds of pages of legal briefs from both sides, and talked to company and government officials and outside experts….”

I will start with his description of the judge, Rosemary M. Collyer, which ignores vital information and misinterprets other information.

She’s also a member of the United States Foreign Intelligence Surveillance Court and once worked as the general counsel of the National Labor Relations Board. In other words, she’s a legal rock star.

Well, no. It does mean her specialty is employment law. Her appointment to the FISC by Chief Justice Roberts means (1) she was appointed to the federal judiciary by a Republican president (Roberts appointed only Republicans to the FISC, which is outrageous) and (2) Roberts thinks she is disposed to vote to allow the mass surveillance of Americans by the NSA. Republican appointees to the judiciary are materially more hostile to government actions – except in the case of supposed national security.

Similarly, Sorkin gives a naïve description of a scholar who claims that specialized economic courts are desirable.

Joshua D. Wright, a former commissioner of the Federal Trade Commission, co-wrote a 2011 study that determined that in antitrust cases, a judge’s expertise had a significant impact on the validity of the ruling. “Decisions of judges trained in basic economics are significantly less likely to be appealed than are decisions by their untrained counterparts,” the study says. “Our analysis supports the hypothesis that some antitrust cases are too complicated for generalist judges.”

Perhaps, but a reader should be informed that Wright is a professor at the ultra-right wing George Mason University School of Law. More importantly, a judge who has been “trained in basic economics” is likely to have been trained that market power is of trivial importance. This point should be particularly clear to Wright because George Mason University (GMU) ran the leading propaganda program for the federal judiciary in economics, which focused on hostility to government regulation and antitrust enforcement by purporting to teach “basic economics.” Wright and his co-author not only admit this point – they cite the hostility of the modern judiciary to antitrust actions as evidence of the wondrous role that economics has played in changing the law and allowing the modern economy in which market power is celebrated.

This ideological domination of the judiciary, particularly in the appellate courts, means that district judges who rule against antitrust plaintiffs are more likely to be upheld on appeal. Appeals are expensive, so plaintiffs and the government are less likely to appeal such cases, which makes Wright’s empirical study circular (and demonstrates how poor empirical work is passed off as science). Even Wright admits that GMU’s programs are “controvers[ial].”

Judges also perceive economic training to be beneficial; as discussed below, hundreds of judges have already sought out basic economic training. One reason judges might take time away from heavy dockets to receive such training is because doing so improves their decisions, thereby reducing appeals, reversals, or other potentially deleterious effects of economic complexity that could damage their reputations. Training judges in antitrust economics is not without controversy, however. Some have even criticized educational programs designed to teach judges basic economics. The George Mason University Law and Economics Center (LEC) has been the focus of much of the criticism, at least in some part because it is the largest of the judicial training organizations. The LEC began training judges in 1976 and has trained hundreds of federal judges currently on the bench. Teles (2008) notes that, by its height in 1990, the LEC Economic Institute for federal judges had trained 40 percent of the federal judiciary, including two Supreme Court Justices and 67 members of the federal courts of appeals.4

Critics claim that the programs amount to junkets designed to influence judicial decision-making, and are a thinly disguised attempt at indoctrinating judges with a particularly conservative, free-market oriented style of economics. Opposition to these programs recently led to proposed legislation that would effectively prohibit privately funded training programs for federal judges (Teles 2008).

4 The George Mason Law and Economics Center claims that more than 50 percent of the current federal Article III bench has attended LEC programs….

The largest financial sponsor of judicial propaganda programs is the Koch brothers, and the other major sponsors are also ultra-right wing entities dedicated to their hostility to government regulation and effective antitrust law. Note that in the quoted passage Wright and his co-author inadvertently admit the key problem with their empirical study.

One reason judges might take time away from heavy dockets to receive such training is because doing so improves their decisions, thereby reducing appeals, reversals, or other potentially deleterious effects of economic complexity that could damage their reputations.

Another reason for a district court to both take the GMU propaganda course and rule in accordance with its ideology is not to “improve” their decisions, but to “conform” their decisions to the dominant beliefs of the appellate judges – “thereby reducing appeals, reversals, or other [results] that could damage their reputations.” That is outrageous – and specialized economic courts would make it even worse, but Sorkin spots none of the empirical errors, biases, or dangers with Wright’s proposals.

The greatest problem with the GMU propaganda, however, is that it has long been falsified by reality. That, however, never penetrates the ideological barriers. Naturally, the firms with massive market power love the results of the ideology.

Sorkin does not understand the legal system and its treatment of large firms.

About two years ago, I was speaking with an executive at MetLife who floated the idea that the company should sue the government to overturn its designation as a firm that was too big to fail. The company believed that it was being unfairly labeled, and that the regulations that came with the designation were hindering its business.

My initial reaction, I distinctly remember, was to say: “That’s a terribly risky idea. The government always wins.”

Boy was I wrong.

“The government always wins?” What world does Sorkin inhabit? In the real world, we went through an enormous legislative battle precisely because that is not true. Any federal rule can be challenged in the District of Columbia, so virtually any federal rule can be blocked by the D.C. Circuit. A majority of the Court had been appointed by Republican presidents and many of them were exceptionally hostile to government programs and frequently declared new rules invalid.

Republicans viewed this judicial hostility in the D.C. Circuit to be of such extraordinary value to their Party and its corporate donors that Republican Senators refused to allow President Obama to fill vacancies in the U.S. Court of Appeals for the District of Columbia. This was outrageous, and the Democrats (to their shame) put up with it for years before adopting a version of the so-called “nuclear option” to allow a Senate majority to approve the appointment of members of the judiciary.

Why the District Court’s MetLife Decision is in Error and Dangerous

Sorkin shows that he does not understand the statute or the concept of what the statute provides as to when the Financial Stability Oversight Council (FSOC) designates an institution as systemically dangerous. (In a telling euphemism, they are actually designated “systemically important.”)

I have no idea if MetLife is too big to fail. I’ve read hundreds of pages of legal briefs from both sides, and talked to company and government officials and outside experts, and I’m still not sure. I’ve tried to make sense of it, but it is a highly complicated puzzle and to make such a determination with any degree of certainty requires mathematically projecting how money will flow between hundreds of institutions around the globe.

Well, no. The first and last sentences quoted above make no sense. Let us begin with reality. MetLife reported that at yearend 2015 it had total assets of $878 billion. That means that it poses a massive risk to the global system should it fail. Maybe, if it had $500 billion less in reported assets, it might be worthy of debate. It has over $200 billion more in reported assets than Lehman claimed when it failed – and Lehman triggered a global crisis.

The last sentence demonstrates Sorkin’s failure to understand the legal test and the concept of posing a systemic risk. Sorkin thinks the regulators must “make such a determination” with “certainty” through “mathematically projecting how money will flow between hundreds of institutions around the globe.” The statute does not require any of that. No one can predict, perhaps a decade in advance, any of these elements. Indeed, the impossibility of knowing any of these things is one of the reasons why it is essential to get rid of the systemically dangerous financial institutions. What one can determine is that a financial institution is so massive and so interconnected with the global financial system that its failure would create a substantial risk of causing substantial disruption. The regulators amply demonstrated that point.

The judge’s opinion is premised on a very different statute, the one MetLife’s lobbyists wished Congress had enacted.

Judge Collyer’s decision may well be entirely valid, but at least in certain places she appears off base. For example, she said the government “never projected what the losses would be, which financial institutions would have to actively manage their balance sheets or how the market would destabilize as a result.”

Well, the government appears to have done much of that in its report, but you’d need a pretty sophisticated understanding of finance to understand exactly how their numbers were calculated.

Judge Collyer either decided to ignore those numbers or decided they were chosen arbitrarily.

The conundrum in the case of the oversight council is that determining which companies pose a systemic risk can’t be done with a straight formula. The nature of financial crises means that, as a regulator, you’re playing against a 100-year storm that you can’t fully foresee.

Sorkin’s discussion of Judge Collyer’s decision shows that she and he do not understand the statute or the concept of systemic risk. It is, of course, impossible for FSOC to “project” (a) the losses that MetLife will sustain over the next decade or (b) the losses that MetLife’s failure would impose on other entities during some year over, say, the next decade. How can the FSOC know the counterparties that MetLife will have three weeks from now, much less a decade from now? Only a fool would believe that they could predict the mechanism three or ten years from now by which MetLife’s failure would destabilize a particular market, particularly because MetLife may be a critical counterparty three years from now to an entity that does not even exist now. I am a strong critic of Dodd-Frank, but that does not mean that every provision (or even most provisions) of the Act was drafted by fools to be absurd. The Act does not require the impossibility that Judge Collyer demanded – that FSOC quantify “the actual loss” that would result from MetLife’s failure.

But Sorkin and the judge are also wrong (as are the FSOC officials who make the systemic risk determinations) in their reliance on statistics and probabilities – and in the absurd belief that we ran, randomly, into the equivalent of “a 100-year storm.” The probability of a global crisis is increased enormously if (a) we continue to create and worsen the criminogenic environments that produce the increasingly severe fraud epidemics that drive our financial crises and (b) we continue to allow systemically dangerous institutions to exist rather than shrinking them. The econometric techniques being relied on by FSOC (and demanded by judges) are based on statistically invalid assumptions of a fixed distribution of risk. When we create perverse financial incentives to engage in widespread fraud we create a vastly increased risk of systemic failure.
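The point about a “fixed distribution of risk” can be illustrated numerically. The sketch below is my own illustration, not anything from FSOC’s or Better Markets’ analysis: it compares the tail probability of a five-sigma loss under a fixed normal distribution with the same tail when, some fraction of the time, a regime shift (e.g., a fraud epidemic) multiplies volatility. The `p_crisis` and `vol_mult` numbers are arbitrary assumptions chosen only to show the order-of-magnitude effect.

```python
import math

def normal_tail(k: float) -> float:
    """P(Z > k) for a standard normal variable, via the complementary
    error function: P(Z > k) = 0.5 * erfc(k / sqrt(2))."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def mixture_tail(k: float, p_crisis: float, vol_mult: float) -> float:
    """Tail probability when, with probability p_crisis, the system is in
    a high-volatility regime (vol_mult times normal volatility).
    A k-sigma move in calm terms is only a (k / vol_mult)-sigma move
    in the crisis regime, so the tail fattens dramatically."""
    return ((1 - p_crisis) * normal_tail(k)
            + p_crisis * normal_tail(k / vol_mult))

# A 5-sigma loss under the fixed-distribution assumption...
fixed = normal_tail(5.0)
# ...versus a 10% chance of a regime with 4x volatility (hypothetical).
shifted = mixture_tail(5.0, p_crisis=0.1, vol_mult=4.0)

print(f"fixed-distribution tail probability: {fixed:.2e}")
print(f"regime-shift tail probability:       {shifted:.2e}")
print(f"ratio: {shifted / fixed:,.0f}x")
```

Under these (assumed) parameters the “100-year storm” becomes tens of thousands of times more likely – which is the sense in which models assuming a fixed distribution understate the risk that criminogenic environments create.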

Sorkin and other readers should read Better Markets’ analysis of the district court opinion. It would have allowed him to understand the issues and the district court’s two other major errors in addition to its inventing a requirement that FSOC divine the future and quantify the “actual loss.”

First, the court erroneously held that FSOC had to prove that MetLife was “vulnerable” to failure. The statute has no such requirement, for a logical reason. If you could not designate a financial entity as systemically dangerous until it had a demonstrated, major problem that could lead to its failure, it would be far too late to do the things that the statute is designed to do to reduce the risk of failure and the severity of the failure. The statute asks: if the entity fails, “could” that failure pose a material risk of disrupting the economy?

Second, the court invented a requirement for a cost-benefit study. The statute has no such requirement, because such a study would be a farcical exercise.

The three central errors that the court made have nothing to do with the judge’s lack of specialized finance training. They are all easily understood errors of law, and they all arise from the judge’s extreme ideological hostility toward government regulation of the systemically dangerous financial institutions that will again blow up the global economy unless we shrink them to the point that they no longer create that danger. The issue is when the next systemically dangerous entity will fail – not “if.”

One of the reasons we, the Bank Whistleblowers United, proposed getting rid of the systemically dangerous institutions through the banking regulators’ powers to set individual minimum capital requirements is that this allows vastly quicker remedial action than the cumbersome FSOC procedure, which took over two years to designate MetLife as posing a systemic risk.

— source neweconomicperspectives.org By William Black