
1AC

1AC – TBTF
Advantage One is Too Big to Fail.

Global systemically important banks’ (G-SIBs) size ensures markets treat them as
“Too Big to Fail” (TBTF). The resultant moral hazard will cause financial crisis.
Lesche 21 (Tom Filip Lesche – Professor of Management and Economics @
Witten/Herdecke University. <KEN> “Chapter 4: TBTF Causal Chain: Explicit and Implicit
Government Guarantees” and “Chapter 5: Public Costs and Benefits of TBTF,” in “Too
Big to Fail in Banking: Impact of G-SIB Designation and Regulation of Relative Equity
Valuations,” Springer Global. https://link.springer.com/book/10.1007%2F978-3-658-34182-4) *G-SIB = global systemically important bank, IGG = implicit government guarantee, EGG = explicit government guarantee
An IGG extends deposit insurance to uninsured bank liabilities without payment of any insurance premium by the insured G-SIB. This is why the fundamental consequences of deposit insurance (see Sect. 2.5) are also applicable here, only more strongly. The first two subsections explain why (see Sect. 4.2.1) and how (see Sect. 4.2.2) banks receive IGGs and are able to shift the liability for their potential losses to the state. This expected government intervention on a selective basis in a free-market economy results per definitionem in the distortion of market forces and incentives—more precisely, in moral hazards. The subsequent sections discuss how the behaviour of various bank stakeholders changes—namely, that of the creditors (see Sect. 4.2.3), the bank (management) (see Sect. 4.2.4), and the shareholders (see Sect. 4.2.5). Empirical evidence, where available, complements the findings.

4.2.1 IGG Origin

An IGG has two possible origins:

1. An official government statement designates a bank TBTF for two possible reasons: a. to pre-emptively give certainty to bank stakeholders and other market participants, and to stabilise the overall banking system, or

b. to impose special regulatory requirements on the bank.

2. The market perceives a bank to be TBTF based on the expectation of potential public bailout measures. This, in contrast, is the case even if a bank is not officially designated as TBTF. The market participants that would potentially benefit from an EGG know what motivates policymakers to opt for a bailout (Sect. 4.1.1). Hence, market participants will treat a bank as such if they are aware of its systemic importance and react to it by reasonably anticipating the EGG.40

4.2.2 IGG Strength

Even if a bank has been officially designated TBTF, the scope of any potential bailout will rarely be defined ex ante. This said, the strength of the IGG—and so the moral-hazard effect—depends in general again on market expectations: i.e., on the expected probability and scope of EGGs (see Sect. 4.1.2). These expectations are usually derived from past public interventions and bailout experiences.41

The value of such an IGG for a bank and its counterparties is not only dependent on the strength of the expected bailout, but also on the condition of the financial system. The more uncertainty or volatility there is in a market (such as during a banking crisis), the higher the value of a potential protection. This free insurance works like a put option getting closer to the money.42
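[Analyst note: a minimal numerical sketch of the put-option analogy above, not from Lesche. It uses the standard Black-Scholes put formula with hypothetical figures to show that the same protection is worth far more when market volatility is high, i.e., the option moves closer to the money.]

```python
# Hypothetical illustration: an at/near-the-money put is worth much more when
# volatility is high, mirroring the claim that an IGG is most valuable in a crisis.
# All inputs are made-up round numbers; this is not the author's model.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot: float, strike: float, rate: float, vol: float, t: float) -> float:
    """Black-Scholes value of a European put."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-rate * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

calm = bs_put(spot=100.0, strike=95.0, rate=0.02, vol=0.10, t=1.0)
crisis = bs_put(spot=100.0, strike=95.0, rate=0.02, vol=0.40, t=1.0)
print(f"protection value, calm market:   {calm:.2f}")    # roughly 1.3
print(f"protection value, crisis market: {crisis:.2f}")  # roughly 12.1
```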

It is worthwhile to note several factors that mitigate the strength and value of IGGs:

1. TBTF regulation: TBTF regulations constitute additional regulation and supervision of G-SIBs. These may include legislation concerning the contractual liability write-offs during a bailout—a so-called bail-in (see Sect. 6.3).43

2. ‘Too-big-to-save’: The public finance capacity of some countries is insufficient to credibly protect G-SIBs. In such situations, banks may be called ‘too-big-to-save’, which implies that TBTF failures can cause national insolvency.44

3. ‘Too-many-to-fail’: This term names a general weakness of an entire banking sector that implies that a government is less likely to protect one bank because it cannot protect all similarly weak G-SIBs.45 This scenario is also known as ‘too-many-to-fail’.46

Due to the above-named complexities, the extent of IGGs differs across countries and across banks within one country.47

4.2.3 Creditor Moral Hazard

The creditor moral hazard (see Sect. 2.5) extends simply from the depositors, who are already covered by the statutory deposit insurance, to all liability holders that are expected to be protected under a bailout of a G-SIB. Ultimately, creditor moral hazard leads to lower funding costs and
larger counterparty positions for G-SIBs. This is the result of the lower return requirements of the creditors and is driven by the following:

• Lower default probability of bank liabilities: In the event of a bank bankruptcy, liabilities are generally repaid out of the insolvency estate. For the creditors, the IGG works like a double bottom and results in a downward shift in the probability of the default of the respective liabilities, including
counterparty risk of derivative contracts.48 Bank creditors not only lend at a lower rate according to the fundamental risk/return tradeoff, but bank counterparties are also willing to accept larger positions and to price in lower counterparty risk. Two main approaches are analysed in the
literature to support the foundation of the lower default probability:49

– An ‘objective argument’, mostly measured by market CDS spreads, and

– A ‘subjective argument’, measured by credit rating differences.

• Lower bank monitoring costs: The IGG partly replaces the necessity of monitoring the counterparty bank, which results in a decrease in associated costs.
Economic costs, or negative public economies, accrue when investors come to regard a bank as TBTF. These equal the total costs the bank and its creditors save due to its TBTF status: viz., the asserted argument of lower funding costs and lower bank-monitoring costs. What follows are the empirical results of different studies and methods analysing the above theoretical assertions with regard to the lowered default probability measured by credit ratings and CDS spreads. The lower monitoring costs and the larger counterparty positions are not as well analysed empirically. It also seems unclear to what degree the creditors versus the banks benefit from creditor moral hazard. Studies of the overall funding cost advantage of G-SIBs dominate this research field. All studies, regardless of the method applied, find very large and significant funding cost advantages of G-SIBs.

Stronger Credit Ratings


Rating agencies publish a variety of credit ratings about banks’ creditworthiness, the issuer itself, and certain (classes of) financial obligations. Credit ratings represent the probability of default on the rating agency’s own rating scale. Such credit ratings are a subjective assessment and do not always prove accurate. Nevertheless, to some degree, the credit rating also reflects and influences the market view of a bank’s solvency because debt holders often base their investment and pricing decisions to a significant degree on such ratings. Hence, a better rating generally leads to cheaper funding conditions. Moreover, external ratings of bank debt are often a benchmark for central banks and wholesale operations and define minimum collateral requirements. This also means that better ratings indirectly result in better funding conditions in this case as well.

The three major credit-rating agencies—Standard & Poor’s (S&P), Moody’s, and Fitch—calculate and publish two (or more) separate issuer ratings that are of particular interest for our purposes: (i) a stand-alone issuer rating,50 which reflects a bank’s intrinsic capacity to repay its obligations, and (ii) an overall issuer rating,51 which reflects a bank’s capacity to repay its obligations with potential external support.

In order to measure the TBTF effect, several studies simply compare both ratings.52 The difference reflects the impact or value of the possible external support, primarily by the government. All studies find that banks considered TBTF receive overall rating uplifts—i.e., credit rating upgrades—compared with other banks.53 This rating “bonus” varies: it is stronger after government interventions54 and ranges from one to four notches. Furthermore, it is found that higher IGGs are driven by a lower stand-alone rating of a bank,55 a larger domestic market share of the bank, and greater solvency of the bank’s sovereign.56

Lower CDS Spreads

Credit default swaps (CDSs) are credit derivatives used to insure against the default of debt instruments. That means that CDSs securitise and reflect default risk, while debt instruments also comprise interest rate risks in their market prices. Because IGGs only affect default risk, CDSs are an intuitive measure for teasing out the insurance costs of an IGG. CDS investors might also rely on credit ratings; however, CDS markets are dominated by institutional investors that are potentially able to independently and accurately assess a bank’s probability of default.57 Moreover, market discipline in the CDS market is usually strong.

Many studies have illustrated that TBTF status affects CDS prices.58 One study using regression analyses finds that ‘a one percentage point increase in the size of a bank reduces the CDS spread by about two basis points’. However, scholars agree that IGGs have a threshold, above which some banks are considered ‘too-big-to-be-rescued’59. Event studies identify widening CDS spreads prior to government interventions at other banks that are followed by narrowing CDS spreads around and after the events.60

Lower Funding Costs

Funding costs reflect investors’ assessment of risk levels. Risk is measured in terms of spreads above the risk-free rate, which is normally defined as the rate on bonds fully guaranteed by the government, such as government bonds. This is why spreads are generally aligned with credit ratings and CDS spreads. However, systemic market factors and issue-specific factors (such as liquidity) also affect bond prices and yields.61

When investors perceive a bank as TBTF, the risk is primarily in the probability that the government will unexpectedly not rescue the bank, weighted by the likelihood of a threatening default. The funding-cost advantage is calculated by translating the rating62 or CDS63 uplift associated with TBTF into the yields paid on banks’ liabilities. Alternatively, some authors apply econometric models64 and control for factors other than TBTF. Other event studies observe sudden credit-spread changes, such as after merger-related events65 or government interventions66. Regardless of the research methodology, the observed yield difference—also called the spread—is an estimate of the monetary measure of IGGs. It is denoted in relative terms as a credit spread or in absolute terms as a monetary amount67 and it represents the reduction of funding costs. This funding-cost advantage comprises both the structural strength of the IGG and the time-varying market valuation of the IGG.68 A wide range of studies illustrate robust and very large funding benefits for banks considered TBTF of up to 600 basis points or several-hundred billion US$ per year per bank.69 The relative and absolute funding advantages change materially over time and across banks and jurisdictions.70 Only explicit guarantees to (partially) government-owned banks are stronger than IGGs.71 In other words, empirical studies, as a whole, suggest that even the uninsured liabilities of G-SIBs exhibit little sensitivity to banks’ risk-taking.72 It is noteworthy that G-SIBs are also more flexible in their funding strategies and more readily change their funding mix compared to non-G-SIBs.73 This is why the full funding advantage extends beyond a simple comparison of the yields of the same debt instruments.
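[Analyst note: a back-of-the-envelope sketch of the translation described above, converting a spread uplift into an absolute annual funding-cost advantage. The liability size and the 25-basis-point advantage are hypothetical round numbers, not figures from the cited studies.]

```python
# Hypothetical illustration: translating a TBTF spread advantage into an annual
# monetary saving, in the spirit of the rating/CDS-uplift translation described above.
def annual_funding_advantage(liabilities_usd: float, spread_advantage_bps: float) -> float:
    """Yearly interest saved when liabilities are funded at a lower credit spread."""
    return liabilities_usd * spread_advantage_bps / 10_000

# e.g. US$ 1.5 trillion of rate-sensitive liabilities and a 25 bp advantage
# inferred from a rating uplift or narrower CDS spread (made-up inputs).
saving = annual_funding_advantage(liabilities_usd=1.5e12, spread_advantage_bps=25)
print(f"implied annual funding-cost advantage: ${saving / 1e9:.2f} billion")  # $3.75 billion
```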

4.2.4 Bank Moral Hazard

The increased creditor moral hazard caused by the extension of guarantees of the retail depositors (see Sect. 4.2.3) to quasi all creditors of G-SIBs—even if only implicit—also exacerbates bank moral hazard.74 Banks exploit IGGs in terms of (i) increased risk-taking and (ii) increased growth.75

Increased Risk-Taking

There are two reasons why increased risk-taking is caused by creditor moral hazard stemming from the TBTF doctrine:

• Less monitoring: The well-informed and fast-moving institutional market participants are typically the driving forces behind a bank’s market discipline. Without sufficient monitoring, engagement and signalling from creditors, bank management increasingly works to benefit shareholders and has the ability to increase profit by increasing risk, according to the risk-return principle.

• Lower funding costs: G-SIBs pay lower funding costs for a given level of risk and capital. This makes investment projects profitable at a lower return level: i.e., the relationship of risk and return worsens.
Concerning the increase in risk-taking, ample empirical studies exhibit several different forms of risk-taking:

• Higher leverage: Holding less equity in relation to total assets or liabilities incurs greater risk.76

• Higher asset risk: Engaging in high-risk investments with higher default rates and tail risk77 results in higher asset risk.78

• Higher liquidity risk: G-SIBs take on liquidity risk by pursuing a higher-risk funding strategy and holding less stable funding.79

• Higher operational risk: Poor management of all other operational risk categories results in operational risk.80

Higher riskiness of a bank’s overall activities leads to a higher variance in returns.81 This, in turn, results in higher potential losses and higher stress to the economy.82 This suggests that G-SIBs ‘may have a distinct, possibly more fragile, business model’.83 In addition, non-G-SIB competitor banks are also indirectly encouraged to keep the pace with regard to profitability and increased risk.84

Market Discipline and Charter Value

A bank is endogenously incentivised by its creditors and shareholders to exercise market discipline—i.e., to implement prudent risk management. Creditors want to ensure the repayment of their borrowings at par. Shareholders want to ensure that the bank maximises the profits after (re)paying the creditors, but without breaching regulatory requirements—i.e., without losing the banking license.85 This charter value is the shareholders’ value generated by the ownership of the banking license, which is foregone after a bank bankruptcy. Hence, the charter value has an importance for G-SIBs depending on the expected EGG. When creditor monitoring is weak, charter value is the intrinsic motivation to exercise market discipline that most reduces the moral hazard of risk-taking by G-SIBs. There is a trade-off between preserving a bank’s charter value, which decreases as bank risk increases, and maximising the put option value from the IGG, which increases as bank risk increases.86 This implies that the optimal risk management strategy is to increase risk either when the bank’s charter value declines87 or when the risk of losing it declines, and vice versa88. That means the cross-sectional distribution of bank risk-taking is non-uniform. Empirical studies confirm that higher capital levels are associated with higher charter value and lower risk, and vice-versa.89 However, it seems that charter value and risk only exhibit a strong, inverse relationship during economic expansion; the opposite holds during economic contractions.90 Furthermore, due to higher regulatory capital requirements, the disciplining effect of charter value diminishes.91 Findings92 suggest that charter value has been declining over time, contributing to the increase in risk-taking in the years before the GFC.93
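[Analyst note: a stylised sketch of the trade-off described above between charter value and the IGG's put-option value. The functional forms and numbers are invented for illustration and are not Lesche's model; they only show that the privately optimal risk level rises as charter value erodes.]

```python
# Hypothetical illustration of the trade-off: charter value falls with risk,
# the IGG's put-option value rises with risk; the bank picks the risk level
# that maximises the sum. Functional forms and numbers are invented.
def shareholder_value(risk: float, charter_base: float) -> float:
    charter_value = charter_base * (1.0 - risk ** 2)  # forgone in bankruptcy; erodes as risk rises
    igg_put_value = 40.0 * risk                       # implicit bailout option; grows with risk
    return charter_value + igg_put_value

def optimal_risk(charter_base: float) -> float:
    """Grid-search the risk level (0 = none, 1 = maximal) that maximises shareholder value."""
    grid = [i / 100 for i in range(101)]
    return max(grid, key=lambda r: shareholder_value(r, charter_base))

print(optimal_risk(charter_base=60.0))  # high charter value   -> lower chosen risk (about 0.33)
print(optimal_risk(charter_base=25.0))  # eroded charter value -> higher chosen risk (about 0.80)
```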

Increase of Size

There are basically two reasons why banks increase in size because of the TBTF doctrine:

• Increasing systemic importance: Several studies show that banks sometimes grow larger than the size providing the greatest scale and scope economies (social optimum), especially to achieve or extend TBTF status and thereby exploit IGGs. Deposit insurance only incentivises extending the magnitude of insured deposits to benefit from cheaper deposits, as deposit insurance is generally underpriced. The relatively stable retail deposits are, per se, beneficial to the stability of the financial system. The TBTF doctrine, however, not only incentivises increasing the ratio of liabilities to insured deposits but thereby also incentivises increasing the entire balance sheet to increase IGGs. There are also other categories of achieving systemic importance (Sect. 3.3), but size remains the most prominent. Also, regarding motivations for M&A activities (i.e., to grow inorganically), TBTF is among the most relevant.94

• Increase of risk-taking: Firm size and risk-taking among banks are highly positively correlated.95 Banks mostly increase risk through increased leverage, which means balance-sheet expansion. Moreover, more risky and more profitable banks are also able to grow faster.

4.2.5 Shareholder Moral Hazard

Banks have many stakeholders—such as management, employees, and creditors—that urge them to adhere to market discipline. The incentives of the direct decision-makers of a bank—i.e., the bank management and the bank shareholders—are generally best aligned, as the
shareholders legally own the bank and appoint its operational representatives (i.e., its managers). The more the creditor’s monitoring function is weakened by the moral hazard from TBTF, the more weight shareholder monitoring receives.

The origins of creditor and bank moral hazards are relatively simple and straightforward. Whether shareholder moral hazard96 exists from TBTF is less clear in both theory and practice. The theoretical incentives and disincentives for shareholder moral hazards are discussed below. Both the proprietary empirical study and related empirical studies on this matter are discussed in detail in Pt. II. In practice, and according to the equity valuation principles (see Sect. 2.2), shareholder moral hazard must lead to higher relative and absolute market-equity valuation, as investors prefer G-SIBs over non-G-SIBs.97 According to the risk-return trade-off, an increase in the (relative) equity valuations of G-SIBs depends on higher profits, while risk (volatility) does not rise on a proportional basis. Non-G-SIBs are assumed not to benefit from TBTF subsidies and remain unchanged in their valuation.98

These are the public subsidies or incentives that could lead to a shareholder moral hazard and could positively impact shareholder wealth:

1. Competitive advantage from EGGs: Shareholders can absorb the EGG either indirectly through capital injections during a bailout or directly through compensation payments. The outcome of EGGs for stockholders is, however, uncertain, and the event experiences have been mixed
(Sect. 4.1.2).

a. Affecting creditors: If the government supports liability holders (see Sect. 4.1.4), the shareholders also benefit from lower cost of debt and preserving the bank’s charter.

b. Affecting shareholders: Shareholders can benefit from government interventions (Sect. 4.1.4) either by receiving compensation payments above market levels (put option value) when the government takes over the bank, or by preserving the bank’s charter value upon government capital injections.

2. Competitive advantages from IGGs: The shareholders can absorb the IGG from creditor and bank moral hazard, as described above:

a. Lower funding costs: For a G-SIB, creditor moral hazard leads to lower funding costs, a stronger position as counterparty, and greater flexibility and readiness with respect to funding activities.99 All of these factors lower the overall cost of debt and subsequently improve overall bank profitability.

b. Increase of risk-taking: When a G-SIB’s funding costs are no longer tied to the riskiness of its operations, shareholders have the incentive to transfer wealth from the IGG by pushing the management to take on more risk,100 which ultimately leads to higher profitability.101

On the other side, the following factors discourage shareholders from investing in G-SIBs and can put a strain on their equity valuations:

1. Higher volatility: An increase in bank risk, or profitability, will generally make future earnings streams more uncertain or volatile. This means shareholders will discount future cash flows with their higher inherent cost of equity.

2. Less efficiency: Due to creditor and bank moral hazards, corporate governance mechanisms and market discipline are weakened. Both can lead to increased bank inefficiency, which results in reduced profitability.

3. TBTF regulation (before bailout): G-SIBs are increasingly exposed to regulations that selectively target them. These additional regulatory costs may reduce profitability in various ways (Chap. 7).

4. Burden-sharing (after bailout): Even if shareholders retain bank ownership after a direct bailout, the government can put several strains or conditions on the future profitability of a G-SIB.

In summary, while some of the competitive advantages gained through TBTF status should generally improve bank valuations, other outcomes, such as those derived from a risk increase, are much more uncertain. Overall, the net effect of TBTF on G-SIBs’ stocks remains an open question. Part II sheds light on this question.

Public Costs and Benefits of TBTF

This chapter analyses the welfare perspective of TBTF: i.e., it examines how EGGs and IGGs impact society at large. The previous chapter analysed the impact of EGGs and IGGs on private individuals or bank stakeholders. It identified the moral hazards that lead to monetary effects for G-SIB stakeholders. These public subsidies for G-SIB stakeholders, whether implicit or explicit, come at a cost to society. Moreover, when there are differences between private and social costs or private and social returns, the market economy does not deliver an outcome that maximises overall efficiency. This so-called market failure leads to deadweight losses. The social optimum—that is, the optimal promotion of the well-being of all economic agents—is achieved by maximising social returns and minimising social costs. This implies that all costs and benefits need to be internalised by households and firms making production decisions. G-SIBs that do not internalise all the costs of IGGs and EGGs are sources of inefficiency in this context. Their actions make other members of society worse off, which makes G-SIBs a negative externality of systemic risk. By definition, this leads to overproduction:1 i.e., to a market outcome that is less than optimal in terms of a society’s overall condition. It can be called ‘banking pollution’2 in the logic of environmental economics.

The intention here is to quantify (or at least name) the sources of this imbalance. A cost-benefit analysis is particularly relevant to estimating how a balance—the social optimum—can be re-achieved, and it comes before discussing the regulatory efforts on TBTF in subsequent chapters. However, a cost-benefit analysis of the TBTF doctrine is widely acknowledged to be very difficult. One main reason is that the emergence of a financial crisis cannot be attributed solely to G-SIBs, nor can the bailout of a G-SIB be attributed directly to the avoidance of a financial crisis. Hence, the costs and benefits of TBTF are highly subjective and may also vary across countries and over time.3

The first section, Sect. 5.1, sheds light on banks’ economies of scale and scope—in particular on their limitations and on who is benefitting from them. Section 5.2 compares the public costs of EGGs—i.e., the direct bailout costs from public finances—with the public benefits of EGGs: i.e., the avoidance of output losses from a financial crisis. Along the same lines, Sect. 5.3 compares the public costs of IGGs—which are the benefits G-SIB stakeholders derive from moral hazard taking (see Sect. 4.2)—with the public benefits of IGGs, which are minimal. Finally, Sect. 5.4 concludes that the TBTF doctrine does not have any value for society, but that it is hard to abolish because of the time-inconsistency of policymakers’ decision-making.

5.1 Economies of Large Banks (Incentives for Scale and Scope)

This section is a preparation for the analysis of the public costs and benefits of IGGs (see Sect. 5.3). There is a potentially important trade-off inherent in G-SIBs with respect to overall economic efficiency: systemic importance and economies of scale (and scope). Both depend on various criteria (see Sect. 3.3), but a bank’s size is believed to be one of the best measures for both—the correlation is close to one—as large players are responsible for a larger fraction of banking output (see Sect. 7.4). In this regard, size is normally measured as the sum of a bank’s total assets.

Intuitive questions arise in this context: First, what is the socially optimal bank size? In other words, up to which size are banks able to achieve economies of scale and scope (see Sect. 5.1)? This is equivalent to the size limit banks are incentivised to grow to while increasing public welfare. Second, are there super-scale economies (efficiency gains for G-SIBs) or any diseconomies (inefficiencies) beyond the socially optimal bank size? Answers to these questions are important for the later cost-benefit analysis: i.e., the question of whether being TBTF has any value to the public (after deducting the public subsidy due to TBTF). Answers are also crucial for regulatory considerations such as potential regulatory limits on bank size.

Economic Efficiency of Banks

In a competitive banking system, economic efficiency4 is based upon the competition, regulation and economies of the firm, with various trade-offs between them. The focus here is on the economies of the firm: i.e., on how a bank itself can increase efficiency. The efficiency function of a bank is potentially U-shaped. On the one hand, the sub-optimal size of a bank may be such that input or output quantities are chosen inefficiently. On the other hand, very large banks that compete only with a small number of banks could encourage X-inefficiency and diseconomies. Here we want to understand the socially optimal bank size. To achieve this goal, only studies excluding externalities such as the TBTF doctrine from their analyses were considered.5

Table 5.1 summarises and briefly explains the ways banks can increase their efficiency6 that have been identified in the literature. Besides X-efficiency, all listed sources imply some sort of growth—whether in scale, scope or any other dimension of expansion. In principle, efficiency gains increase the welfare of the public, the shareholders and the managers at the same time. They ‘are made by changing input or output quantities in ways that reduce costs, increase revenues, and/or reduce risks to increase value for a given set of prices’.7 However, agency conflicts between bank stakeholders can also lead to misalignment among incentives. Gains are then made by banks by exploiting other stakeholders. This is why the sources of efficiency are here divided into (i) economies of scale, (ii) economies of scope, (iii) other economies that increase public welfare (X-efficiency), and (iv) other economies that exploit public welfare. On this broader topic, there is a large body of literature on bank M&A and bank consolidation, most of which relies on data about U.S. banks. The primary research focus is on cost efficiencies on an organic scale—in particular, bank-wide cost functions. The focus is much less on scope, in part because of the difficult empirical issues involved.8

Economies of Scale

Economies of scale are marginal production efficiencies achieved by increasing bank size and resulting in cost savings, better brand recognition, or revenue expansion. The scale effects generally account for less than 5 percent of costs, while revenues increase slightly with bank size (by approximately 1 to 4 percent)—but only at smaller banks9 or banks with capital market activities. Most studies focus on the broader and traditional definition of cost-based economies of scale.10 Overall, ‘there is little agreement regarding economies of scale’ for banks11. This might equally be due to statistical complexities and to constantly evolving bank characteristics. It appears that advances in information and communication technologies have been the fundamental impact factor.12 This development favours larger banks because larger banks can spread increasing fixed technology costs among more customers.13 As a result, economies of scale have risen steadily over the last decades, even centuries.14 Ideal bank size has grown accordingly.15 As recent technological advancements are sometimes considered to have revolutionised the banking landscape, it has even ‘been argued that reliable estimates of scale economies for very large banks cannot be obtained in the current environment’16.

Relatively better agreement can be found with regard to smaller banks—a research focus of older studies. It is widely accepted that ‘very small banks (less than a few hundred million US$ in assets) are generally inefficient’. The minimum efficient scale has increased rapidly from the beginning of the millennium from US$ 0.5 billion to US$ 25 billion today—an amount that may still be rising17.

More recent studies tend to look at larger banks and so-called economies of superscale.18 The most recent literature uniformly concludes that economies of scale are achievable for banks with total assets of up to US$ 100 billion.19 This is much less than the median size of large international banks. Above this level there is mixed evidence concerning whether scale economies,20 constant average costs or slight diseconomies of scale prevail. The results depend to a great extent on the variables that have been controlled for, such as market power and diversification of risks.21 Even when explicitly controlling for TBTF factors—in particular for the funding cost advantage—evidence has been found for22 and against23 scale economies at very large international banks. Proponents of super-scale economies argue that they exist for banks that emphasise investment banking24 to service large and global non-financial businesses.25

Economies of Scope

Economies of scope are marginal production efficiencies achieved by extending bank activities and resulting in cost savings, revenue expansion, or better financial diversification. The scope effects generally account for less than 5 percent of costs when multiple products are produced jointly, while revenues appear to be less affected by product mix.26 Overall, empirical studies of these scope economies are even more inconclusive than those conducted into scale economies.27 Most of them tend towards insignificant scope economies based on costs or revenues. Scope economies from financial diversification, however, are of special and larger importance to banks, as, according to standard portfolio theory,28 ‘a portfolio of imperfectly correlated risks will reduce the overall volatility of profit’,29 which in turn increases shareholder value (see Sect. 2.2). There is evidence of financial diversification gains from multiple business lines or loan portfolios.30 The reduced risk and volatility, however, ‘may be more than counter-balanced by heightened exposures to volatile income-generating activities, such as trading’,31 and lower capital ratios of the larger banks.32

Other Economies

Economies other than those described above include economies that increase public welfare and economies that exploit public welfare. The former includes only

• X-efficiency: X-efficiency is the observed degree of efficiency maintained by a bank in practice under conditions of imperfect competition compared to efficient behaviour derived from economic theory. For example, banks may have 20 to 30 percent higher costs than the industry minimum for the same scale and product mix.33 The ability of management to control these costs is believed to be much more important than scale economies and scope economies.34 X-efficiencies are not limited to certain bank sizes but to the lack of competitive pressure; they thus concern rather large banks.35

Economies that exploit public welfare are either pursued to the benefit of shareholders or management, or both. Besides safety-net (or TBTF) economies of scale (see Sect. 4.2.4), other examples include:

• Market power: According to traditional economic theory,36 a lack of competition leads to unfair market power, monopolisation and collusion. This causes a less efficient allocation of resources. Antitrust regulation establishes effective frameworks and maintains balance in the system: i.e., facilitates the free entry of banks and restricts the market powers of banks.

• Expense-preference-based economies of scale: The argument is that higher profit driven by larger scale or market power can be captured by management in the form of higher salaries or perks.37

• ’Quiet life’-hypothesis-based economies of scale: The argument is that higher profit driven by larger scale or market power can be captured by management in the form of a reduction of risk and less need for innovation.38

These bank economies are not limited to certain sizes of banks and are believed to significantly influence bank behaviour—though the impact has not been sufficiently quantified.

Results

Research indicates that the maximum efficient scale of banks could be somewhere around US$ 100 billion in total assets. This is a size that is still beneficial for the public and for bank stakeholders. Beyond this point, further economies are assumed to be exhausted, while there is at least the possibility of X-inefficiency or diseconomies of scale and scope that lead to deadweight costs.39 This means that the incentives for banks to grow further at this point are driven by management or shareholder self-interest—they are not beneficial to the public welfare. One of the primary drivers is considered to be TBTF scale economies (see Sect. 4.2.4).

Coincidentally or not, research suggests that banks with total assets of more than US$ 100 billion can be considered TBTF (see Sect. 3.3). Although both quantitative thresholds might be just rough approximations, G-SIBs can be considered to operate at a size that is too large both from a micro-level perspective on bank efficiency and from a macro-level perspective on systemic risk.40

5.2 Public Costs and Benefits of EGGs

Costs of EGGs (Bailout Costs)


The costs of EGGs are defined here in the context of a narrow fiscal interpretation: that is, the wealth transfer from the public to the G-SIBs as a result of a bailout.41 They comprise the direct cash flow used in the various methods of bank bailouts (see Sect. 4.1.3), while direct administrative expenses also accrue. The costs of the intervention generally roughly equal the subsequent increase in value of debt and/or equity issued by the respective banks.42 In addition, there may be indirect opportunity costs from EGGs, either because the bailout funds cannot be used for other purposes or because a government needs to borrow, raise taxes43, or print money to finance the bailout. While an increase of debt relative to GDP negatively affects a country’s sovereign debt rating, an expansionary monetary policy can cause inflation and depreciation of a country’s currency.44 There is even more uncertainty about the eventual loss the public may eventually face from EGGs: First, government investment into G-SIBs or their associated bad banks might lead them to swiftly increase in value or make profits after the banking crisis is resolved. Second, as EGGs are measurable and visible to the wider public, G-SIBs are often obliged to repay some or all of the direct bailout costs.

Studies show that generous rescue measures, with blanket guarantees, open-ended liquidity support, and repeated recapitalisations, can lead to budgetary outlays, whether immediate or deferred, that exceed a country’s GDP or even result in public bankruptcy (see Sect. 3.4.5).45 In the GFC, the direct costs for EGGs were on average approximately one46 to five47 percent of a country’s GDP48. This compares to ten percent in previous crises on the back of less swift direct policy actions and indirect expansionary monetary and fiscal support.49

Benefits of EGGs (Avoidance of Bankruptcy Costs and Economic Output Losses)

A banking crisis can be triggered by the failure of a G-SIB, as a relatively small number of G-SIBs hold a central or strengthened position in finance (see Sect. 2.6). The benefit of an EGG is basically the avoidance of the economic costs of bankruptcy (see Sect. 2.3) of a G-SIB. It is difficult to study the benefits of bailouts of G-SIBs, as the failures of G-SIBs are low-probability events and data is scarce.50 Studies investigating this context usually look at the (potential) foregone output losses (value of goods and services not produced) that (would) have been created (without a bailout) (see Sect. 2.6).51 This can be quite costly, as it includes large and long-lasting ‘declines in employment, household wealth, and other economic indicators’.52 Moreover, governments face ‘greater fiscal challenges, in part because of reduced tax revenues from lower economic activity and increased spending to mitigate the impact of the recession’,53 and increased public debt.

That creates a global, unsustainable debt bubble --- guarantees financial crises
Wilmarth 20 (Arthur E. Wilmarth – JD from Harvard, Executive Director of the Center for
Law, Economics & Finance at George Washington University. <KEN> “Unfinished
Business: Post-Crisis Reforms Have Not Removed the Systemic Dangers Posed by
Universal Banks and Shadow Banks,” in “Taming the Mega-Banks,” Oxford University
Press. ISBN: 9780190260729)
In addition to bailing out megabanks during the financial crisis, governments allowed some megabanks to become significantly larger by acquiring troubled peers. In 2008 and 2009, the U.S. government provided extensive
financial support that enabled (1) BofA to absorb Countrywide and Merrill Lynch, (2) JPMC to
acquire Bear Stearns and Washington Mutual, and (3) Wells Fargo to acquire Wachovia. As a result of those
transactions, BofA, JPMC and Wells Fargo expanded their combined share of U.S. deposits from 20% to 32% between 2007 and 2017. The same three banks and

Citigroup increased their combined share of U.S. banking industry assets from 32% to 44%
between 2005 and 2018.109 Similarly, the U.K. government supported Lloyds TSB’s takeover of HBOS and also approved Santander’s acquisitions of Alliance and
Leicester.110

Government support helped JPMC, BofA, Citigroup, Goldman Sachs, and Morgan Stanley
to achieve unquestioned dominance over the world’s capital markets. Those five
megabanks have been the five leading global investment banks since 2015, and they
captured a third of all global investment banking revenues in 2017 and 2018. They have established a clear superiority in the world’s
capital markets over their U.K. and European competitors.111 In addition, Wells Fargo significantly expanded its investment banking operations after acquiring Wachovia in 2008, and it is now
a top-tier global universal bank.112

U.K. and European universal banks have produced disappointing results since 2010, due in large part to the EU’s sovereign debt crises and weak recoveries in their domestic economies. Even so, seven big U.K. and European banks—Barclays, BNP Paribas, Credit Suisse, Deutsche Bank, HSBC, Société Générale, and UBS—continue to play significant roles in the world’s capital markets. Those seven banks consistently ranked among the top dozen global investment banks between 2009 and 2018, despite losing ground to their U.S. competitors.113
Thus, the Big Seventeen group of financial conglomerates, which dominated the world’s financial markets in 2007, has become a slightly smaller but equally dominant group of thirteen global
universal banks. Bear Stearns, Lehman Brothers, and Merrill Lynch disappeared in 2008, and their investment banking operations were absorbed by larger universal banks (JPMC, Barclays, and
BofA) with government help. RBS divested more than 60% of its assets and gave up its global investment banking ambitions at the insistence of its majority owner, the U.K. government.

The remaining Big Thirteen universal banks—six from the U.S. and seven from the U.K. and Europe—remain the
leaders in global capital markets activities.114
JPMC, BofA, Citigroup, and Goldman Sachs are the four largest global dealers in derivatives,
followed by the rest of the Big Thirteen. The Big Thirteen are highly influential members of the three main clearinghouses for derivatives—LCH
SwapClear, ICE Clear Credit, and ICE Clear Europe. LCH SwapClear controls over 95% of the worldwide market for centrally cleared interest-rate derivatives, while the two ICE clearinghouses
jointly control over 95% of the global market for centrally cleared credit default swaps. In 2019, about four-fifths of interest-rate derivatives and half of credit derivatives were cleared in global
markets.115

Representatives of the Big Thirteen control the membership rules and dominate the risk committees of the three principal derivatives clearinghouses. The biggest derivatives dealers have
allegedly abused their controlling influence by (1) preventing smaller derivatives dealers from becoming members of the top clearinghouses and (2) blocking the growth of alternative trading
venues.116 In 2016, eleven members of the Big Thirteen and RBS paid $1.9 billion to settle claims of antitrust violations for “suppress[ing] price transparency and competition in the trading
market for CDS,” and for obstructing the creation of competing CDS trading facilities.117

The Big Thirteen also dominate the International Swaps and Derivatives Association, the trade association and standard-setting body for the derivatives industry. ISDA appoints representatives
of the biggest dealers to serve as members of “determinations committees,” which adjudicate CDS-related disputes. Those disputes include disagreements over the occurrence of designated
“credit events” that require payouts by protection sellers to protection buyers under CDS.118 Critics allege that ISDA’s determinations committees have frequently issued decisions that are
biased in favor of the trading positions of the largest banks.119

Thus, post-crisis reforms have not reduced the dominance of the largest universal banks as dealers in derivatives markets. The leading global derivatives dealers have extensive connections with the most important clearinghouses as members, custodians, brokers, and providers of credit, investment, and settlement services. A recent study showed that (1) five big U.S. banks held more than half of all customer funds connected to centrally cleared derivatives in U.S. markets, and (2) ten big banks—six from the U.S. and four from the U.K. and Europe—held more than three-quarters of those customer funds. Similarly, twenty large global banks maintained more than three-quarters of the total financial reserves for the world’s three hundred top clearing facilities. Consequently, the failure of a top derivatives dealer could potentially threaten the solvency of major clearinghouses, thereby creating the potential for a systemic crisis in derivatives markets.120
Title VIII of Dodd-Frank empowers the Fed to regulate and provide emergency loans to systemically important clearing facilities. However, Dodd-Frank does not give the Fed (or any other
agency) explicit authority to recapitalize and resolve a failed clearinghouse. Most other developed nations have similarly failed to establish explicit resolution regimes for large clearing
facilities. In the absence of a viable resolution process, governments would probably be forced to arrange ad hoc bailouts of large troubled clearinghouses to prevent serious financial market
disruptions.121

As shown by their central roles in derivatives markets, universal banks create extensive and complex
networks with shadow banks through their capital markets activities. Universal banks work “in concert” with nonbank financial firms in the shadow banking
system, and the two sectors are “strongly interrelated.”122

Universal banks obtain a significant portion of their funding from the shadow banking system. In 2018, large U.S. and foreign
banks sold more than $600 billion of commercial paper to wholesale investors, and their broker-dealer subsidiaries
received over $2 trillion of additional funding from securities repurchase agreements (repos) and other securities lending arrangements. Money market mutual funds buy much of the
commercial paper issued by the largest banks, and those funds also provide repo loans to the broker-dealer subsidiaries of the same banks.123

Megabanks are closely connected to shadow banks through a wide range of transactions. The largest banks act as prime brokers for hedge funds and provide credit, trading, and clearing services to those funds.124 Big banks also make loans to many other shadow banks. Cross-border loans from large banks to shadow banks have risen at an annual rate of 7% since 2014 and totaled $7 trillion in mid-2019.125

Megabanks work in partnership with private equity firms to finance highly leveraged corporate buyouts and reorganizations by
arranging syndicated leveraged loans and by underwriting high-yield, below-investment-grade (junk) bonds. Megabanks sell leveraged loans to institutional
investors, including sponsors of collateralized loan obligations. CLOs securitize leveraged loans for sale to other investors. Private equity firms have established their own broker-dealer

subsidiaries, and they finance highly leveraged corporate transactions alongside banks—and increasingly in competition with banks. In 2017, banks held $103 trillion of credit assets (loans and debt securities) on a worldwide basis. Insurance companies held $17 trillion of global credit assets, while other financial institutions (including private equity firms and hedge funds) held $41 trillion of global credit assets.126
Abundant credit provided by universal banks and shadow banks has encouraged explosive growth in private
sector debts on a national (U.S.) and global basis since 2000. Public sector debts have also increased rapidly, especially after 2007 as governments spent huge sums to mitigate
the social and economic impacts of the financial crisis. By 2019, business firms, households, and sovereigns confronted record-high

debt burdens in the U.S. and in many other countries.127


Total debts owed by U.S. nonfinancial businesses climbed to an all-time high of $16.1 trillion in 2019. At least a third of those
debts consisted of higher-risk obligations, including junk bonds, leveraged loans, and corporate bonds with the lowest
investment-grade rating (BBB).128 Similarly, U.S. household debts reached a record level of $14.1 trillion in 2019, due to significant growth in

home mortgages, auto loans, credit card loans, and student loans.129 Total U.S. private sector debts (including the obligations of nonprofits and financial firms)

rose to $48.9 trillion in 2019, up from $22.5 trillion in 2000 and $41.6 trillion in 2007. Total U.S. public sector debts (federal, state, and local) equaled $26.3 trillion in 2019, a
steep rise from $6.9 trillion in 2000 and $12.1 trillion in 2007.130

Global debt levels have followed the same trajectory of relentless growth. Worldwide private sector and public
sector debts rose from $84 trillion in 2000 to $167 trillion in 2007 and $253 trillion in 2019. As reflected in Figure 12.1, the ratio of worldwide debts to

global GDP increased from 225% in 2000 to 275% in 2007 and reached an all-time record of 322% in 2019.131 Worldwide
government debts in 2019 reached their highest level as a percentage of global GDP since World War II, “raising profound questions about the

sustainability of the global debt pile.”132
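[Analyst note: a quick arithmetic check using only the debt totals and debt-to-GDP ratios Wilmarth quotes; the implied global GDP figures are back-calculated and are not stated in the source.]

```python
# Arithmetic check on the quoted figures: worldwide debt and debt-to-GDP ratios
# imply these approximate global GDP levels (back-calculated, not in the source).
debt_usd = {"2000": 84e12, "2007": 167e12, "2019": 253e12}  # total worldwide debt
debt_to_gdp = {"2000": 2.25, "2007": 2.75, "2019": 3.22}    # 225%, 275%, 322%

for year in debt_usd:
    implied_gdp = debt_usd[year] / debt_to_gdp[year]
    print(f"{year}: implied global GDP ~ ${implied_gdp / 1e12:.0f} trillion")
# roughly $37tn (2000), $61tn (2007), $79tn (2019): debt has grown far faster than output.
```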


Central banks have supported the rapid expansion of global debts since 2008 by purchasing massive volumes of government bonds, mortgage-backed securities, and other private sector
financial instruments under quantitative easing (QE) policies. The total balance sheets of central banks in the U.S., U.K., EU, and Japan grew from $4 trillion to $15 trillion between 2008 and
2018. The collective expansion of their balance sheets absorbed more than 10% of the total increase in global debts during the same period. The balance sheets of the Fed and the Bank of
England each grew from 6% to 25% of national GDP between 2006 and 2014. The balance sheet of the European Central Bank reached 34% of Eurozone GDP at the end of 2016, while the Bank
of Japan’s balance sheet equaled 88% of national GDP.133

As explained in Chapter 11, central banks adopted QE policies to push down longterm interest rates, thereby reducing debt service costs for heavily indebted households, business firms, and
governments, while also supporting housing markets, stabilizing financial markets, and encouraging economic growth. By 2017, central banks around the world held on their balance sheets
$30 trillion of financial assets (representing 8% of global financial assets), compared with $150 trillion of worldwide financial assets held by banks and an equivalent amount held by insurance
companies and other nonbank financial institutions.134

Beginning in 2017, central banks tried to change their general policy approach from quantitative easing to quantitative tightening (QT). The Fed led that policy shift by raising its short-term
interest rate target seven times in 2017 and 2018, and by shrinking its balance sheet from $4.5 trillion to less than $4 trillion during that period. The BoE raised its short-term interest rate
target in August 2018. The ECB stopped buying Eurozone government bonds in December 2018.135

The coordinated moves by central banks toward a QT policy provoked widespread alarm in global financial markets. Many investors feared that central banks would stop supporting financial
markets with fresh infusions of liquidity. Investors worried about the sustainability of economic growth and asset price levels without continued support from central banks. During the fourth
quarter of 2018, rising pessimism among investors produced a sharp sell-off of risky assets in global markets and a strong move toward “safer” assets (such as U.S. Treasury bonds). The
magnitude of that sell-off demonstrated that investor sentiment in the world’s financial markets was highly dependent on investor confidence that central banks would continue to provide
liquidity to those markets.136

The market turbulence in late 2018 jolted the Fed and other central banks. In January 2019, the Fed made “one of its sharpest U-turns in recent memory.”137 The Federal Open Market
Committee announced that it would be “patient” before making any further increases in short-term interest rates. The FOMC also said that it would “stop reducing the Federal Reserve’s asset
holdings” and would be willing to expand the Fed’s balance sheet “if future economic conditions warrant a more accommodative monetary policy.”138

The FOMC strengthened its policy U-turn at its subsequent meetings in 2019. In March, the FOMC confirmed that it did not expect to raise short-term interest rates during the rest of the year.
The FOMC then made a series of three interest rate cuts (each in the amount of 0.25%) in July, September, and October.139

In October, the Fed announced that it would again expand its balance sheet by purchasing short-term Treasury bills to increase bank reserves and improve liquidity in financial markets. The
Fed’s announcement followed a sudden and unexpected liquidity squeeze in the repo market in late September. The Fed injected almost $500 billion of fresh liquidity into financial markets
between September and December 2019 by providing repo loans and purchasing short-term Treasury bills.140

Other central banks joined the Fed in easing their monetary policies. In March 2019, the ECB made its own U-turn by stating that it would not increase interest rates during the rest of the year
and would maintain the existing size of its balance sheet. The ECB also announced a new program offering long-term loans to banks on favorable terms, thereby allowing banks (especially
Italian and Spanish banks) to refinance their existing long-term loans from the ECB.141 In September the ECB cut its short-term interest rate target to a record low of −0.5%. The ECB also
restored its QE program and promised to purchase €20 billion of Eurozone bonds each month for the indefinite future.142

The dramatic U-turns by the Fed and other central banks in 2019 demonstrated their willingness “to extend a decade-long stance of easy money” in order to prevent “market turmoil [from]
infecting the broader economy.”143 Those policy reversals “gave traders and investors everything they could have hoped for.”144 Global financial markets staged a strong rally in 2019 in
response to the renewed support provided by central banks.145

The Powell Fed’s U-turn in 2019 resembled (1) the Bernanke Fed’s decision to maintain its QE3 program in 2013, after a “taper tantrum” broke out in emerging markets following rumors that
the Fed might terminate that program, and (2) the Yellen Fed’s postponement of further interest rate hikes after a similar market disruption occurred in 2016.146 The Powell Fed’s U-turn in
2019 represents the most recent example of the “Fed put”—namely, the Fed’s willingness to ease monetary policy to calm distressed financial markets. The Powell Fed’s actions in 2019
followed a pattern established by the Greenspan Fed’s interventions in response to serious market disruptions in 1987, 1998, and the early 2000s, and by the Bernanke Fed’s extraordinary
measures to stabilize markets during the financial crisis and its aftermath.147

As Mohamed El-Erian pointed out, “The Powell-led Fed has now learned what its two predecessors did: that a highly levered economy means that, when push comes to shove, markets end up
leading central banks rather than the other way around.” Thus, central banks were “forced to abandon hopes of normalizing monetary policies” in 2019, and they resumed “unconventional
monetary policies in a world awash in debt.” El-Erian described the collective U-turn by central banks in 2019 as “the most accommodative global central bank policy stance since the global
financial crisis.”148

The coordinated easing of monetary policy by central banks in 2019 confirms that policymakers have not resolved the systemic problems in financial markets that led to the financial crisis of 2007–09. The same interlocking system of universal banks and shadow banks remains in place, and that system continues to inflate a global debt bubble comparable to the one that burst in 2007. As shown previously, universal banks and shadow banks are financing record volumes of private and public debt that pose grave threats to the stability of households, businesses, and governments. Market participants continue to believe—with considerable justification—that governments and central banks will intervene to support financial markets whenever they encounter serious problems.149


Central banks have severely distorted market signals by stabilizing financial markets whenever
there are threats of significant disruptions. As one hedge fund manager observed, “capitalism is fast disappearing” as central banks keep
interest rates artificially low and undermine the effectiveness of market discipline. Another hedge fund manager commented, “Modern financial systems have grown dependent on huge central bank balance sheets.”150 A Bloomberg analyst said that “the Fed has put a
ceiling on bond yields and a floor under the S&P 500.” William White, a leading international economist, warned that “markets were unable to allocate resources properly, due to the actions of
central banks.”151

As explained in Chapter 11, central banks pushed interest rates to record lows after 2008 to boost the values of mortgage-related assets held by banks, stabilize housing markets, and reduce
debt service burdens for heavily indebted governments, businesses, and households. In spite of those benefits, the prolonged maintenance of ultra-low interest rates over the past decade has
produced very disturbing side effects. Ultra-low interest rates reduce the profits earned by safer, lower-yielding loans and investments. Ultra-low rates therefore push investors and creditors
(including banks) to pursue riskier, higher-yielding assets, thereby boosting the market values of those assets. The resulting “search for yield” has caused speculative loans, securities, and
commercial real estate investments to reach dangerous levels in recent years.

In addition, ultra-low rates aggravate social inequality by widening the gap between the larger returns received by wealthy investors on riskier, higher-yielding assets and the very low returns
received by risk-averse retail investors on deposits and government bonds. Investment gains for wealthy families have risen at a much faster rate than the savings and wages of middle-class
families. Thus, ultra-low rates have widened the disparities in household income and wealth that existed before the financial crisis.152

Mike Mayo has argued that the Fed’s post-crisis monetary policies produced a large “wealth transfer from prudent savers to the borrowers and risk takers.” He calculated that American savers
have lost $500 billion to $600 billion since 2008, due to abnormally low yields on their bank accounts and money market mutual funds.153 The highly skewed wealth and income effects of
unconventional central bank policies have provoked extensive debates about the wisdom and legitimacy of those policies.154 Some commentators contend that ultra-low interest rates also
undermine economic growth and create “secular stagnation” by enabling inefficient “zombie firms” to roll over their debts and remain in operation, thereby crowding out entry by more
productive and profitable firms.155

In sum, post-crisis regulatory and monetary policies have produced a fragile and volatile global financial system, which depends on continuous infusions of central bank liquidity to support universal banks, large shadow banks, and the capital markets. Post-crisis policies have created what I call a "global doom loop," in which governments, central banks, universal banks, shadow banks, and capital markets are locked together in a dangerous web of mutual dependence. Governments and central banks must
prevent disorderly failures of universal banks and large shadow banks to preserve the stability of capital markets.
Governments and central banks must also prevent serious disruptions in capital markets to ensure the survival of systemically important financial institutions. Central banks must maintain
unconventional monetary policies to keep interest rates low, boost asset prices, and support the continued growth of sovereign and private sector debts. Universal banks and shadow banks
are more than happy to finance the continued expansion of those debts. William White has warned that “central-bank meddling” will be “a permanent fixture of 21st century financial
markets” until a severe “growth shock” occurs.156

The perverse dynamics of the global doom loop were vividly illustrated in September 2019, when the Fed dramatically intervened in the U.S. repo market. The repo market experienced a
sudden shortage of funding on September 17, and interest rates for overnight repo loans spiked. The primary reasons for that funding shortage were (1) increased demands by hedge funds for
repo loans to finance highly leveraged arbitrage trades, and (2) refusals by the four largest U.S. banks—the dominant repo lenders—to satisfy those demands. The Fed intervened to stabilize
the repo market by pouring almost $500 billion of additional liquidity into the financial markets (through repo loans and purchases of short-term Treasury bills) between September and
December 2019.157

Thus, a collective breakdown in repo funding between megabanks and shadow banks forced the Fed to rescue the repo market and intensify its easing of monetary policy. Zoltan Pozsar
described the Fed’s intervention as a “repo bazooka.”158 Frances Coppola warned that “the repo market is becoming the principal market through which monetary policy is transmitted. But it
is poorly understood and extremely concentrated. Do we really want the transmission of monetary policy to depend entirely upon four large banks?”159

The Fed’s dramatic intervention in the repo market confirmed once again that universal banks are not superior risk-bearers, despite their claims to the contrary. The largest U.S. universal
banks were unwilling to satisfy the repo market’s liquidity needs in September 2019, even though a crisis did not exist in the financial markets. They refused to act and forced the Fed to step
in.

The Fed’s rescue of the repo market revealed that its monetary policy measures are inextricably tied to—and effectively held hostage by—universal banks, large shadow banks, and wholesale
funding markets. As Gennadiy Goldberg pointed out in December 2019, “The Fed will not want to exit repo operations until they are absolutely certain the market can stand on its own two
feet.”160 In January 2020, former New York Fed President Bill Dudley called on the Fed to establish a “standing repo facility,” which would “address the potential problem of the Fed providing
liquidity to primary dealers but primary dealers not lending the funds to other market participants that might need short-term repo financing.”161 Dudley’s proposal, which other former Fed
officials supported,162 would permit big universal banks to shirk their roles as liquidity providers while committing the Fed to step in as the liquidity provider of last resort for all market
participants.
As has been shown, the global doom loop connecting governments, central banks, universal banks, shadow banks, and the capital markets has supported a steady rise in global debt levels. A study by Oscar Jordà, Moritz Schularick, Alan Taylor, and Felix Ward concluded that the continuous growth in worldwide debts over the past three decades has intensified the "global risk appetite" of investors, boosted asset prices, and produced a "synchronization" of boom-bust cycles across global financial markets. Their study found that bank loans, housing prices, and equity prices have displayed significantly stronger "comovements" (correlations) across national borders since the early 1990s, as central banks, global megabanks, and shadow banks financed massive increases in private and public debts. The relentless growth of private and public debts has produced much higher ratios of credit to GDP on both national and global levels. Those elevated credit-to-GDP ratios are leading risk indicators for future financial crises.163

Decades of bailouts have exhausted public support and fiscal space for future bailouts. Covid means a new banking crisis is Armageddon.
Wilmarth 8-12 (Arthur E. Wilmarth – JD from Harvard, Executive Director of the Center
for Law, Economics & Finance at George Washington University. <KEN> “The Pandemic
Crisis Shows that the World Remains Trapped in a 'Global Doom Loop' of Financial
Instability, Rising Debt Levels, and Escalating Bailouts,” GW Law Faculty Publications &
Other Works. Issue 1558. https://scholarship.law.gwu.edu/cgi/viewcontent.cgi?
article=2814&context=faculty_publications)
The pandemic crisis has not yet forced the U.S., U.K., and EU to recapitalize large financial
institutions. U.S. and international bank regulators have said that the lack of failures among global systemically important banks (G-SIBs)
shows that big banks are “more resilient” by virtue of the stronger capital and liquidity requirements established by G20 nations after 2009.
However, regulators have also acknowledged that megabanks "benefited from the extraordinary policy measures and other supervisory and regulatory relief" that government authorities provided.68 Governments on both sides of the Atlantic provided crucial support to large banks during 2020 when they rescued short-term wholesale credit markets, gave huge amounts of financial assistance to households and business firms, and backstopped the corporate and municipal bond markets.69


The G20’s post-crisis reforms required G-SIBs to maintain substantially higher levels of capital and liquidity reserves, compared with the
woefully inadequate amounts they held when the GFC began in 2007. Average capital and liquidity levels for G-SIBs increased steadily between
2012 and 2017. However, capital and liquidity levels stopped rising in 2017 and remained about the same through 2019. At the end of 2019, the
Basel III supplemental leverage capital ratio for the largest global banks – widely viewed as the most binding capital standard – averaged 6.4%
for U.S. G-SIBs, 4.9% for European and Canadian G-SIBs, and 6.9% for Asian G-SIBs.70 Those ratios were far below the 15% leverage capital ratio
that officials at the Federal Reserve Bank of Minneapolis and other experts have advocated as the minimum level needed to establish a truly
resilient banking system that does not require frequent government bailouts. 71

The pandemic crisis posed very severe challenges to the survival of universal banks and shadow banks until governments and central banks intervened. On March 16, 2020, the U.S. stock market's main indexes fell by 12% or more in the market's worst performance since the stock market crash on October 19, 1987. Bank stocks were "among the hardest hit," and the stock prices of the three largest U.S. banks dropped by 15% or more. Short-term wholesale credit markets froze, and investor runs began against money market funds. The pandemic crisis "brought the financial crisis to the brink," and "the
stresses to the financial system [on March 16] were broader than many had seen,” even
during the GFC.72
The federal government “unleashed a barrage of government programs [to pull] the system back from collapse.” The Fed provided over $50
billion of discount window loans to banks, and it quickly reactivated almost all of the emergency lending facilities it used during the GFC to
support large financial institutions and short-term wholesale financial markets. The Fed used its restored facilities to provide $35 billion of loans
to securities broker-dealers (many of which were affiliates of universal banks), $440 billion of repo loans, $66 billion of assistance to money
market funds and the commercial paper market, and over $460 billion of swap loans to foreign central banks (thereby indirectly providing
dollar funding needed by foreign banks).73

The Fed also conducted “a torrent of bond buying programs to stabilize markets.” The Fed purchased huge volumes of Treasury bonds and
federal agency securities and added $2.85 trillion to its balance sheet between March 11 and June 10, 2020. Congress, the Treasury and
the Fed created new lending and bond-buying programs that provided large-scale financial assistance to
households, small businesses, and large corporations. 74 Thus, large banks “escaped bailouts [during the pandemic] primarily because their
customers were bailed out instead.” 75

Universal banks and shadow banks would have suffered very large losses if federal agencies had not intervened. On March 15, 2020 – the day
before the stock market crashed – all eight U.S. G-SIBs issued a joint press release announcing that they were facing an “unprecedented
challenge” from the pandemic and were therefore suspending further stock buybacks.76 Universal banks and shadow banks benefited greatly
from the Fed’s quick actions to rescue money market funds, repos, and the commercial paper market because they relied significantly on
funding from all three sources.77

Even with the federal government’s massive support, stock prices for U.S. G-SIBs
performed “notably worse than the S&P 500 index” between March and June 2020. A
widely used stock price index for 24 large U.S. banks (including six of the eight U.S. G-SIBs) dropped by 40% between January and May 2020,
compared to a 13% decline for the broad Russell 3000 index. Risk spreads for credit default swaps issued by the six
largest U.S. G-SIBs rose significantly during the spring of 2020, although they did not reach the levels recorded in 2008.78
In June 2020, the Fed conducted a stress test of the 34 largest domestic and foreign banks operating in the U.S. The Fed estimated – based on alternative scenarios reflecting differing degrees of severity for the pandemic's impact – that the stress-tested banks could suffer total losses of $560 billion to $700 billion over the next nine calendar quarters. The Fed's stress test also determined that capital ratios for "several" banks would fall close to their "minimum capital requirements."79 A contemporaneous stress test performed by four Harvard economists (including former Fed Governor Jeremy Stein) estimated that the 21 largest U.S. banks could suffer losses of $390 billion to $550 billion during the same period, and four or five U.S. G-SIBs might fall below their minimum capital requirements.80 Thus, both stress tests indicated that major U.S. banks faced very serious potential threats in mid-2020, even with the extensive help they received from the federal government.
Large banks in other advanced economies also experienced severe financial stress during 2020, as indicated by widespread downgrades in their
credit ratings and significantly higher risk spreads for their bonds. In October 2020, the International Monetary Fund (IMF) performed a “global
stress test” of 350 large banks in 29 countries with advanced banking systems. The IMF estimated – based on alternative scenarios for the
pandemic’s potential impact – that total capital levels for the stress-tested banks would fall $110 billion to $220 billion below their combined
minimum capital requirements, even after accounting for the vast support they received from governments and central banks.81 All three of
the foregoing stress tests assumed that current Basel III capital standards are sufficient to assure the resilience of large banks – an assumption
that many experts have challenged, as indicated above.

Minneapolis Fed President Neel Kashkari recently stated that “[f]iscal authorities were right to be so forceful and proactive in supporting the
economy during the Covid downturn.” He emphasized that “this was also a banking bailout. Absent these fiscal interventions, losses in the
banking sector would have been much larger."82 A recent New York Fed staff study concluded that implicit government subsidies for systemically important banks around the world became significantly larger during the pandemic crisis as a result of "unprecedented government support."83 Thus, the enormous rescue programs established by governments and central banks during the pandemic entrenched the TBTF status of large universal banks.
Private equity firms also benefited greatly from those rescue programs. Private equity firms are among the most significant shadow banks, and
they managed $4 trillion of assets at the beginning of 2020. Several of the largest private equity firms became financial conglomerates after the
GFC by establishing broker-dealer subsidiaries and by acquiring insurance companies. Private equity firms have used their broader resources to
finance corporate buyouts by underwriting syndicated leveraged loans and high-yield bonds. Today’s leading private equity firms compete
directly with universal banks, and they strongly resemble the “Big Five” securities broker-dealers that were major players on Wall Street when
the GFC began in 2007. The largest private equity firms essentially replaced the “Big Five” broker-dealers after those institutions either failed,
were acquired by universal banks, or became universal banks themselves during 2008.84

Private equity firms arranged corporate buyouts valued at more than $3 trillion worldwide between 2010 and 2019. Most of those buyouts
were highly leveraged transactions that left acquired firms with heavy debt burdens. Consequently, many of the 35,000 U.S. companies
controlled by private equity firms in early 2020 faced a high risk of failure after the pandemic crisis began. 85

The four largest private equity firms – Apollo, Blackstone, Carlyle, and KKR – reported large losses during March and April 2020, and their
market values plummeted. 86 Credit ratings agencies downgraded almost $1 trillion of U.S. corporate debts during that period. Even after the
Fed’s early interventions in March, markets for leveraged loans and noninvestment-grade (junk) bonds remained virtually frozen. Many heavily
indebted companies could not pay or refinance their debts. Private equity firms appeared to be “facing a year of reckoning,” as their “often
highly-leveraged portfolio companies confronted the worst economic outlook since the Great Depression.”87

Private equity firms and their allies aggressively lobbied the federal government to help their endangered portfolio companies. On April 9,
2020, the Fed agreed to expand its programs for buying corporate bonds and bond ETFs to include bonds and leveraged loans issued by
companies whose credit ratings had been downgraded to noninvestment grade (junk) since the pandemic’s outbreak. The Fed’s expansion of its
corporate bond programs “provided a lifeline to corporate debt rated below investment grade” and ensured that private equity firms would
have “continued access to cheap credit for new deals.” Many observers viewed the Fed’s action as “an indirect bailout of the private equity
industry.”88

In addition, policymakers on both sides of the Atlantic allowed companies controlled by private equity firms to receive government-guaranteed
pandemic loans. Generous support from governments and central banks enabled the private equity industry to recover rapidly during the
second half of 2020. Private equity firms arranged worldwide buyouts valued at $560 billion in 2020 – the highest level since 2007 – and they
arranged another $500 billion of such deals during the first half of 2021.89 Rescue programs during the pandemic crisis thus affirmed the TBTF
status of large shadow banks as well as universal banks. 90

5. Bailouts during the pandemic crisis perpetuate the global doom loop, which creates great dangers for the financial system, general economy, and society.
The pandemic crisis confirms that governments, central banks, universal banks, shadow banks, and wealthy investors remain trapped in a global
doom loop of toxic mutual dependence. Whenever a serious economic or financial disruption occurs,
governments and central banks arrange huge bailouts to prevent disorderly failures of universal banks and systemically
important shadow banks. Central banks maintain unconventional monetary policies that keep interest

rates low, boost asset prices, and facilitate the continued growth of public and private debts. Universal banks and shadow banks eagerly

underwrite higher levels of private and public debts, given the lucrative fees they earn from such transactions.
Wealthy investors buy higher risk financial assets in a “search for yield,” based on their expectation
that governments and central banks will take all necessary steps to preserve economic and financial stability. 91

As shown above, the global doom loop has produced an infernal cycle of ever-increasing public and private debts and ever-larger bailouts. From December 2007 through March 2021 – a period that included immense government rescue programs during the GFC and the pandemic crisis – total U.S. public and private debts increased from $53.8 trillion to $83.9 trillion. The federal government's rapidly growing debt burden accounted for over 60% of that increase, rising from $9.2 trillion (63% of U.S. GDP) in December 2007 to $28.1 trillion (127% of U.S. GDP) in March 2021.92 Similarly, global public and private debts expanded from $167 trillion in December 2007 to $289 trillion in March 2021. Rising worldwide government debts accounted for 40% of that increase, growing from $34.8 trillion (60% of global GDP) in 2007 to $83.4 trillion (106.5% of global GDP) in 2021.93
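The "over 60%" and "40%" shares cited above follow directly from the dollar figures in the card; the sketch below (in Python, with an illustrative helper name not taken from the source) simply works through that arithmetic.

```python
# Minimal check of the debt-share arithmetic quoted above
# (all figures in trillions of dollars, taken from the card).

def government_share_of_increase(total_start, total_end, gov_start, gov_end):
    """Fraction of the rise in total debt attributable to government debt."""
    return (gov_end - gov_start) / (total_end - total_start)

# U.S.: total public and private debt $53.8T -> $83.9T; federal debt $9.2T -> $28.1T
us_share = government_share_of_increase(53.8, 83.9, 9.2, 28.1)

# Global: total debt $167T -> $289T; government debt $34.8T -> $83.4T
global_share = government_share_of_increase(167.0, 289.0, 34.8, 83.4)

print(f"Federal share of the U.S. debt increase:      {us_share:.0%}")      # ~63%, i.e. over 60%
print(f"Government share of the global debt increase: {global_share:.0%}")  # ~40%
```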

The global doom loop creates unsustainable risks and burdens for the financial system, the broader economy, and society. In July 2021, a U.K.
House of Lords committee issued a report criticizing the quantitative easing (QE) policy of the Bank of England (BoE). 94 QE policies are a central
component of the global doom loop, as they enable central banks to buy huge volumes of government debt securities and other financial
assets, thereby supporting the growth and reducing the debt service costs of public and private borrowings. The House of Lords committee
identified five very troubling features of QE policies.

First, the committee pointed out that “[n]o central bank has managed successfully to reverse quantitative easing over the medium to long
term.” The Bank of Japan (BoJ) was the first central bank to adopt a QE policy in 2001, and the BoJ has never exited that policy after buying
almost $7 trillion of government bonds and other financial assets. The Fed tried to unwind its QE policy in 2013 and again in 2017-18, but the
Fed abandoned both attempts after they triggered disruptive selloffs by frightened investors in global financial markets. The House of Lords
committee warned that “central banks are facing a ‘no-exit paradigm’ from quantitative easing. . . . [T]he scale of quantitative easing has been
increased repeatedly. . . . This has only served to exacerbate the challenges involved in unwinding the policy.” 95 Michael Forsyth, the
committee’s chair, expressed his concern that “[t]he Bank of England has become addicted to quantitative easing.”96

Second, the committee stated that the BoE’s QE program may have violated the BoE’s mandate by “effectively monetizing the government
deficit.”97 During the pandemic crisis, the BoE doubled the size of its balance sheet by authorizing the purchases of £450 billion of U.K.
government bonds. The BoE’s purchases of government bonds “aligned closely” with the volume and timing of government bond issues made
by the U.K. Treasury during the crisis. Most of the 18 largest investors in U.K. government bonds and several analysts concluded that “the Bank
of England had bought gilts to keep the Government’s borrowing costs down.” The House of Lords committee determined that there was a
“widespread perception . . . that financing the Government’s deficit spending was a significant reason for quantitative easing during the COVID-
19 pandemic.” 98

Third, the committee was alarmed by QE’s potential to undermine the BoE’s political independence as well as the credibility of the BoE’s
mandate to control inflation and maintain stable prices. The committee stated that QE has “made Bank of England and HM Treasury
policymaking more interdependent, blurring monetary and fiscal policy, and this has started to erode the perception that the Bank has acted
wholly independently of political considerations.” Some experts stated that the BoE faced strong political pressure to keep interest rates low for
an extended period to suppress government borrowing costs, thereby weakening the BoE’s ability to respond to significant increases in
inflation. The committee warned that “if inflation rises, the Bank may come under political pressure to not raise interest rates to control
inflation because the risk to the public finances and debt sustainability would have increased significantly.” 99

Fourth, the committee found that “quantitative easing has distributional outcomes that exacerbate wealth inequalities” because the BoE’s
ultra-low interest rates and large infusions of liquidity boosted market prices for housing and higher-risk financial investments, which are
primarily owned by the richest households. In the committee’s view, QE “benefited wealthy asset holders disproportionately by artificially
inflating asset prices. On balance, we conclude that the evidence shows that quantitative easing has exacerbated wealth inequalities.”100

Finally, the House of Lords committee expressed its concern that QE could “compromise financial stability” by encouraging “excessive and
potentially destabilising risk-taking in markets.” Mohamed El-Erian told the committee that “consistent central bank intervention through
quantitative easing” encouraged market participants to “take excessive risks in the knowledge that central banks will provide support if
financial stability is threatened." Lee Buchheit stated that "the normal risk aversion of private sector lenders has been anaesthetised by the fact
that they are stuffed [by central banks] with liquidity that they must re-deploy.”101 The committee’s chair, Lord Forsyth, concluded that QE
presents “a serious danger to the long-term health of the public finances.”102

The disturbing problems identified by the House of Lords committee with regard to the BoE’s QE policy apply equally to the unconventional
monetary policies of the Fed and other leading central banks. As the committee pointed out, no major central bank has successfully exited from
QE. Due to the close coordination between government deficit spending on stimulus programs and central bank purchases of government
bonds, some analysts believe that QE policies have effectively monetized the growth of government debt.103 The BoE’s purchases of £450
billion of U.K. government bonds since March 2020 have nearly matched the £486 billion of bonds issued by the U.K. government to finance its
response to the pandemic.104

The Fed bought $2.44 trillion of U.S. Treasury securities between March 2020 and March 2021, equal to half of the $4.91 trillion increase in federal
debt during that period. The Fed’s purchases nearly doubled its percentage ownership of outstanding federal debt from 9.3% to 17.6%, making
it “the biggest player in the US bond market.”105 Some observers have concluded that QE programs are a form of “financial repression,” which
is designed to suppress interest rates on government bonds and thereby reduce government borrowing and debt service costs. 106

Considerable evidence also supports the House of Lords committee’s view that unconventional monetary policies since 2008 have increased
wealth inequality and encouraged excessive risk-taking by financial institutions and investors. The ultra-low interest rate policies and QE
programs of central banks have (i) greatly reduced the returns to ordinary savers from bank deposits and other low-risk investments, (ii)
encouraged investors to buy higher-risk, higher-yielding investments, and (iii) increased the market values of housing and other higher-risk
assets, resulting in disproportionate wealth gains for the richest households, which own the largest share of those assets. Government rescues
of financial institutions, financial markets, and wealthy investors during the GFC and the pandemic have further increased the incentives and
payoffs for high-risk, high-reward investment strategies. Statistical indicators of wealth inequality have risen substantially since 2008 and
accelerated during the pandemic.107

central banks “pick[ed] winners and losers” among private-sector


In addition to helping wealthy investors,

companies when they selected the recipients of their corporate bondbuying and corporate lending programs
forms of favoritism could turn public opinion against central banks. The Occupy
during 2020. Both

Wall Street and Tea Party movements reflected a widely-shared view that the Fed bailed out

Wall Street banks and influential investors during the GFC. The rise of a similar popular consensus that the
Fed rescued powerful financial institutions, big corporations, and wealthy investors during the pandemic
could further erode public support for the Fed. 108

An additional threat to the political independence of central banks arises out of the close coordination between central bank bond-buying
programs and government fiscal stimulus efforts during the pandemic. The Fed, the BoE, and the ECB have maintained ultra-low interest rates
and QE policies and have expressed their willingness to allow inflation rates to exceed 2% for extended periods of time.109 The heads of
Belgium’s and Germany’s central banks recently criticized the ECB for continuing to buy large amounts of EU government bonds to “cap
borrowing costs” for those governments. The Belgian central bank chief warned that the ECB was losing its political independence as it became
subject to the “fiscal dominance” of EU governments.110

The Fed announced in July 2021 that it would maintain near-zero interest rates and continue to buy $120 billion of Treasury securities and
federal agency mortgage-backed securities each month until the U.S. economy achieved “substantial further progress . . . toward its maximum
employment and price stability goals.” Fed Chair Jerome Powell stated that the Fed would consider whether to “taper” its bond purchases at
future meetings, but he “offered few specifics.” He also indicated that the Fed was “nowhere near considering plans to raise interest rates,” and
he reiterated “his longstanding view that recent surges in inflation are likely to fade over time.”111 Powell’s positions on monetary policy
during the pandemic have been “in lockstep with the White House.” It seems highly unlikely that he would be appointed or confirmed for
another term as Chair in 2022 if he advocated a significant change in the Fed’s “highly accommodative, dovish response to the pandemic.”112

As the House of Lords committee warned, the apparent erosion of political independence for central banks
could severely weaken their ability to control inflation. The Fed, the BoE, and other central banks lost much of their
independence and credibility during the 1960s and 1970s, when they adopted easy-money policies that supported extensive deficit spending by
their governments and failed to prevent high inflation rates.113 Today there are growing concerns that (1) huge government deficits and extraordinary monetary stimulus by central banks could produce significant increases in inflation, and
(2) political pressures on central banks will prevent them from responding effectively to rising
inflation rates. 114 The potential threat of high inflation should not be disregarded, as past inflationary episodes have frequently led to deep
recessions with heavy losses for societies. Periods of high inflation and severe recessions are very likely to increase social inequality, as they
inflict the greatest harm on wage earners, recipients of fixed pensions, and lower- and middle-class households.115

An even greater potential danger is that escalating private and public debts could cause another systemic debt crisis
comparable to 2008 and 2020, but with even worse results. During severe financial crises, as
shown by Europe’s experiences during the Great Depression and Great Recession, heavily indebted
governments often lack sufficient credibility to borrow the funds needed to stabilize their
financial systems. In that event, private-sector financial crises rapidly become sovereign debt
crises, and governments have to choose between defaulting on their debts explicitly (through debt
repudiations, moratoria, or restructuring) or implicitly (through currency devaluations or rapid inflation). The Eurozone

barely avoided such a disastrous outcome during the Great Recession.116 In view of the colossal
debt burdens that governments and central banks now carry, it is far from clear
whether the next major debt crisis would have an equally benign conclusion. 117

Without bailouts, popped asset bubbles cause great power war


Qian 18 (Qian Liu – PhD in economics from Uppsala University. <KEN> “From Economic
Crisis to World War III,” Project Syndicate. November 2018. https://www.project-
syndicate.org/commentary/economic-crisis-military-conflict-or-structural-reform-by-
qian-liu-2018-11)
The response to the 2008 economic crisis has relied far too much on monetary stimulus,
in the form of quantitative easing and near-zero (or even negative) interest rates, and included far too little structural
reform. This means that the next crisis could come soon – and pave the way for a large-scale military

conflict.
BEIJING – The next economic crisis is closer than you think. But what you should really worry about is what comes after: in the current social,
political, and technological landscape, a prolonged economic crisis, combined with rising income inequality, could well escalate into a major
global military conflict.

The 2008-09 global financial crisis almost bankrupted governments and caused systemic collapse.
Policymakers managed to pull the global economy back from the brink, using massive
monetary stimulus, including quantitative easing and near-zero (or even negative) interest rates.
But monetary stimulus is like an adrenaline shot to jump-start an arrested heart; it can revive the patient, but it does nothing to cure the disease. Treating a sick economy requires structural reforms, which can
cover everything from financial and labor markets to tax systems, fertility patterns, and education policies.1

Policymakers have utterly failed to pursue such reforms, despite promising to do so. Instead, they have remained preoccupied with politics.
From Italy to Germany, forming and sustaining governments now seems to take more time than actual governing. And Greece, for example, has
relied on money from international creditors to keep its head (barely) above water, rather than genuinely reforming its pension system or
improving its business environment.

The lack of structural reform has meant that the unprecedented excess liquidity that central banks injected into their economies was not allocated to its most efficient uses. Instead, it raised global asset prices to levels even higher than those prevailing before 2008.
In the United States, housing prices are now 8% higher than they were at the peak of the property
bubble in 2006, according to the property website Zillow. The cyclically adjusted price-to-earnings (CAPE) ratio, which measures whether stock-market prices are within a reasonable range, is now higher than it was both in 2008 and at the start of the Great Depression in 1929.

As monetary tightening reveals the vulnerabilities in the real economy, the collapse of asset-price bubbles will trigger another economic crisis – one that could be even more severe than the last, because we have built up a tolerance to our strongest macroeconomic medications. A decade of regular adrenaline shots, in the form of ultra-low interest rates and unconventional monetary policies, has severely depleted their power to stabilize and stimulate the economy.


If history is any guide, the consequences of this mistake could extend far beyond the economy. According to Harvard’s Benjamin Friedman,
prolonged periods of economic distress have been characterized also by public antipathy
toward minority groups or foreign countries – attitudes that can help to fuel unrest, terrorism, or even
war.
Thus, the plan: the United States Federal Government should substantially increase
prohibitions on the anticompetitive business practice of domestic, private sector
financial institutions amassing liabilities greater than five percent of the Federal
Deposit Insurance Corporation’s Deposit Insurance Fund by at least expanding the
scope of its core antitrust laws.

Plan solves moral hazard --- credible commitment, risk diversification, 5 percent cap,
targeting of “financial institutions,” and use of antitrust law are each key.
Macey & Holdcroft 11 (Jonathan R. Macey is Sam Harris Professor of Corporate Law,
Corporate Finance, and Securities Law, Yale Law School. James P. Holdcroft, Jr. is Chief
Legal Officer, CLS Group. <KEN> “Failure Is an Option: An Ersatz-Antitrust Approach to
Financial Regulation,” The Yale Law Journal. Volume 120, No. 6. April 2011.
https://www.yalelawjournal.org/feature/failure-is-an-option-an-ersatz-antitrust-
approach-to-financial-regulation)
In this Feature, we analyze massive government bailouts of financial institutions as an example of a classic precommitment problem. In times of economic stability, governments understand that future bailouts of massive financial institutions will be expensive and inefficient; they will lead to significant moral hazard on the part of the financial institutions that are eligible for such bailouts. Policymakers cannot, however, credibly commit to refrain from supporting large, important financial institutions. The government's inability to precommit to refrain from engaging in massive bailouts creates an implicit government guarantee: those institutions in this "Too Big To Fail" category will be bailed out despite the government's prior, inevitable pledges (usually made immediately after prior bailouts) to refrain from orchestrating such bailouts in the future. These implicit guarantees would be considered bad policy if articulated as explicit guarantees. Some sort of precommitment device is needed to bring to an end the vicious circle of bailouts in which the United States appears to be trapped. In our view, the only precommitment device that enables the government to make a credible promise to refrain from future massive bailouts is to act preemptively to prevent financial institutions from growing so large that they become Too Big To Fail.

Our precommitment device takes the form of a bright-line rule that operationalizes the adage—once popular among regulators but never implemented—that “any financial institution that is too big to fail is too big to survive.” What this means, as a practical matter, seems obvious:

we must determine how big is Too Big To Fail and dismantle institutions larger than that size. These institutions should be divided into smaller sizes such that they can be wound up without government intervention in a dissolution process if they become insolvent.

Under this rule, no financial institution could amass aggregate liabilities in an amount greater than 5% of the then-current targeted value of the FDIC Deposit Insurance Fund (DIF) for the current year.2 We have selected the targeted value of the DIF for several reasons. First, it is a standard that is reasonably objective and readily identifiable; the FDIC publishes this target in the Federal Register.3 Second, the standard is expressed as a percentage of FDIC-insured deposits. Third, the FDIC's target for the DIF is flexible but reasonably protected from political influence; the FDIC is empowered by the Federal Deposit Insurance Act to use its judgment to set the target value of the DIF, taking into account any economic factors that it deems appropriate. It must, however, select a target value of not less than 1.15% but no greater than 1.50% of aggregate insured deposits.4 This provides a practical standard as well as protection against arbitrary easing. If the target value is increased to allow bigger banks, then all banks will have to pay higher assessments into the DIF. This has two consequences: greater available resources for future possible resolutions and higher costs for banks, the latter of which will temper banks' fervor for growth. This leads to our fourth reason for selecting this metric: it is linked to our ability to absorb the failure of a financial institution without jeopardizing the stability of the rest of the banking system. By way of illustration, the current targeted value for the DIF is equal to 1.15% of total insured deposits, so the bright-line limitation we propose would not allow any bank to have total liabilities in excess of 0.0575% of total deposits, or approximately $3.096 billion.5
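To make the arithmetic behind this illustration concrete, here is a minimal sketch of how the per-institution liability cap follows from the targeted DIF ratio and the proposed 5% limit. The constant names and the roughly $5.4 trillion insured-deposit figure are illustrative assumptions, not figures from the article.

```python
# Minimal sketch of the bright-line cap described above. Assumed inputs:
# the statutory floor for the targeted DIF reserve ratio (1.15% of aggregate
# insured deposits) and an illustrative ~$5.4 trillion of insured deposits.

TARGETED_DIF_RATIO = 0.0115     # FDIC's targeted reserve ratio (statutory minimum)
CAP_SHARE_OF_DIF = 0.05         # proposed limit: 5% of the targeted DIF value
insured_deposits = 5.385e12     # assumed aggregate insured deposits, in dollars

targeted_dif = TARGETED_DIF_RATIO * insured_deposits   # targeted DIF value
liability_cap = CAP_SHARE_OF_DIF * targeted_dif        # max liabilities per institution

# The cap equals 5% x 1.15% = 0.0575% of insured deposits, roughly $3.1 billion
# with these inputs, matching the figure quoted in the text.
print(f"Targeted DIF value:  ${targeted_dif / 1e9:,.1f} billion")
print(f"Per-institution cap: ${liability_cap / 1e9:,.3f} billion")
```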

We understand, of course, that the government does not always achieve its targets. For example, the current actual DIF reserve ratio is well below 1.15% and is not projected to reach the targeted level until 2018.6 Under our proposed rule, however, the actual DIF reserve ratio is
irrelevant; our analysis focuses solely on the targeted ratio, which, by statute, cannot fall below 1.15%. If our approach were adopted, Congress would have a strong incentive never to lower the minimum targeted ratio: if the ratio ever were reduced, then our rule would result in an even
larger number of banks being deemed “too big” than it would under the current 1.15% target figure. And though Congress might have an incentive to raise the minimum targeted ratio in response to political pressure to allow large banks to retain their current size or to grow, any
increased risks of bailouts associated with such larger banks would be offset by the larger deposit insurance premiums paid by all banks, since such premiums are tied to the targeted, not the actual, reserve ratio.

Finally, by tying the metric to the target value of the DIF and not the actual balance of the DIF, our bright-line rule does not compound a problem in times of financial crisis (that is, an unintended negative feedback loop) and avoids arguments over market accounting of DIF assets and
questions of liquidity versus capital in the fund.

The bright-line rule that we are proposing would require the largest financial institutions to choose between downsizing themselves in order to comply with the size rule or acquiescing to a government-mandated breakup plan.7 We estimate that only a small percentage of financial
institutions would be affected by our rule.

The bright-line rule is simple by design. It is simple to understand and administer. It is simple to monitor. It is simple to enforce. It does not require large groups of lawyers, accountants, or financial engineers for implementation or compliance. It also works prospectively and does not rely on the hope that the government will, in the future, permit large institutions to fail, notwithstanding the fact that the government has never permitted
such institutions to fail in the past. Importantly, it provides for corrective action before there is a crisis and not during or after a crisis, when political forces are at their strongest.

We are limited in our choice of contingency plans for two reasons. First, regulation, even massive regulation, has been tried and has failed. Elaborate ex ante commitments to protect some creditors—including federally sponsored deposit insurance, minimum capital requirements, activities restrictions, and government inspections—have not enabled the government to make a credible commitment to refrain from bailing out all the rest during a crisis.


Second, history is relevant. Because we have bailed out banks in the past, people rationally have come to expect that we will bail them out in the future. Despite serious prior efforts to refrain from using taxpayer funds to bail out companies like AIG, Citigroup, and Goldman Sachs, the political fallout from the failures of these or other financial behemoths was deemed too great for bailouts to be avoided in time of crisis. Put another way, our country's established record of actual bailouts makes it far more difficult for the government to make a remotely credible commitment to stop future bailouts.


Thus, because traditional regulation does not work and because people have come to expect bailouts, the only solution to the Too Big To Fail problem is to break up the largest financial institutions to a size that is sufficiently small. This should be done so that (1) bankers, customers, and taxpayers do not expect these institutions to be bailed out; (2) voters do not want their political leaders to bail banks out, if and when they do become insolvent; and (3) banks do not have sufficient political influence to "capture" regulators or government leaders and perpetuate a false sense of economic importance. In this Feature, we articulate the guidelines that we believe should be used to break up the largest financial institutions in the economy.

The rule that we propose would limit the economic risk of future financial institution failures by severely limiting the size of banks and other financial institutions that benefit not only from explicit FDIC insurance but also from the broader set of implicit government guarantees.

In Part I of this Feature we discuss the current understandings of the concept of Too Big To Fail. Specifically, we treat the twin problems that we identify with “Big Banks.” First, the government cannot credibly commit to refrain from bailing them out when they get into financial distress,
as they inevitably do. Second, their size, and the certainty that they will be bailed out, creates a “follow-the-leader” mentality that magnifies the costs and the consequences of errors in judgment and analysis on the part of these institutions’ managers.

Part II of this Feature analyzes two facets of the current legal regime that governs large financial institutions. First, we argue that the Too Big To Fail doctrine, which is typically analyzed as a policy issue dealing with “interconnectedness” (the term for the complex web of transactions and
dealings that appears to bind financial institutions together), is actually a political issue. We posit that it is irrelevant whether bailouts are good public policy or bad public policy: as long as bailouts are a political necessity for elected officials and top bureaucrats, they will continue.
Consequently, rather than continue a meaningless debate about whether Too Big To Fail is good public policy or bad public policy, we must accept the fact that bailouts are inevitable as a practical matter as long as behemoth financial institutions exist.

In the second Section of Part II, we consider the role of antitrust policy in our analysis. Ironically, antitrust law has not just tolerated big banks; U.S. antitrust policy has actually created exceptions and loopholes for banks that have exacerbated the problem of excessive size. These antitrust
laws’ exceptions are misguided. The policy and practice of coddling and protecting the biggest financial institutions should not just be ended; it should be reversed. Regulators should move aggressively to dismantle banks that are too big to fail.

We recognize that our idea of breaking up the banks fits uneasily into the current paradigm of antitrust law, which posits that the only legitimate concern of antitrust law is fostering price competition in the markets for capital, products, and services.8 As we point out below, however, the
current Too Big To Fail policy actually does convey an inappropriate and inefficient competitive advantage to big banks; it provides them with artificially cheap funding because, ceteris paribus, creditors inevitably prefer financial institutions that enjoy implicit or explicit government
guarantees rather than risk their funds with smaller banks that might actually be allowed to fail.

Finally in Part III, we discuss what we call the “original” Volcker Rule, which would have put strict curbs on bank size. Legislators considered this rule when they were drafting the Dodd-Frank Act, but they ultimately discarded it.9 This is the main rival to our approach. Then, in the final
Section of Part III, we present our proposed rule, which we characterize as the “bright-line” rule. It is the only simple, objective rule that has been proposed to deal with the Too Big To Fail problem. A conclusion follows.

I. The Intractable (Political) Problem of Bigness: History and Context


Bank bailouts may be bad policy, but politicians who face a choice between reelection and good public policy will invariably choose reelection.10 Democracy creates an environment of "survival of the fittest" among politicians. Those unwilling or unable to satisfy the voters will inevitably be replaced. History may reward the statesman who takes unpopular views on the important issues of the day, but politics does not. If an incumbent politician fails to pursue the politically expedient path, then he or she will be replaced by a politician who is willing to make the popular choices, irrespective of whether that path is bad policy.

A. The Problems: Credible Commitments and Public Expectations

Bailouts of large, Systemically Important Financial Institutions are inevitable not because economic policy requires them but because political survival does. As long as large financial institutions exist, governments will continue to bail them out. And elected officials and regulators, all of whom can be replaced (either by voters or by politicians), cannot make a credible commitment to refrain from bailing out large institutions. Those who will not orchestrate bailouts in times of crisis will inevitably be replaced by others who will. When we focus on issues of interconnectedness, proprietary trading, uses of derivatives, or subprime lending, we become distracted with the details of what may have contributed to the problem and, importantly, miss what grabs the
attention of the political class that will craft the solution.

In 2008, world financial markets faced the worst financial crisis since the Great Depression. Equity markets tumbled, debt markets froze, and banks stopped lending. Brand-name financial firms like Merrill Lynch, Lehman Brothers, Bear Stearns, Citibank, Bank of America, AIG, Fannie Mae,
and Freddie Mac—all once highly regarded—either failed or required extraordinary assistance to stay afloat. Even the staunchest of free market advocates strongly advocated government investment in financial institutions in order to avoid a potential depression or, indeed, the complete
collapse of the financial system. Pundits and market participants alike thought that we were on the edge of the abyss.11

The Great Recession will also be remembered for making the phrase “Too Big To Fail” part of everyday discourse and not just an obscure term used in policy discussions among regulators, lawyers, and bankers. While concerns about bank failures have long been a part of American
financial history, the idea that a particular bank would be saved because it was considered to be too big to fail became a viable policy alternative in connection with the collapse of Continental Illinois National Bank and Trust Company (Continental Illinois) in 1984.12 At the time of its
failure, Continental Illinois was the seventh-largest banking institution in the United States and represented the largest bank failure in modern times by a wide margin.13 The resolution of Continental Illinois presented problems for regulators due to the bank’s size and complexity. In
resolving Continental Illinois, the FDIC departed from its existing policy of paying uninsured depositors only a portion of their claim at the time of the bank’s closing, with the remainder paid only if net resolution proceeds were available. Uninsured depositors were paid in full along with
insured depositors. The distinction between being insured and not being insured became meaningless. This came to be referred to as Too Big To Fail in likely reference to Continental Illinois’s significant size and general prominence compared to the other banks that were resolved under
the FDIC’s modified payoff policy. There was genuine concern at the time that a run on Continental Illinois could trigger a system-wide run with devastating consequences for all financial markets and the U.S. economy in general.

“Too Big To Fail” was essentially shorthand for saying that in extraordinary circumstances the normal rules would not apply and, in turn, that depositors and creditors of banks that were big or important would be paid more than would be the norm. In retrospect, it should have been
apparent that “extraordinary” times are the norm for when most banks fail and that Too Big To Fail was the new normal. Not surprisingly, this policy was less than popular with small banks and those who saw this as an expansion of the moral hazard that already existed in bank insurance
programs.

The Too Big To Fail policy continued in this indeterminate form until the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA)14 attempted to limit Too Big To Fail as a policy alternative in all but the clearest of cases. Following a flood of savings-and-loan failures and
the failure of a few large banks (for example, Bank of New England and MCorp Bank) in the late 1980s and early 1990s, the resolution process became a focal point for Congress as insurance resources became strained. FDICIA was intended to strengthen the Bank Insurance Fund15 by
providing the FDIC with access to the U.S. Treasury and requiring it to pursue the “least cost” resolution of a failed institution regardless of size.16 In other words, uninsured depositors and other creditors were not to be treated like insured depositors.

Ironically, this statute was supposed to make certain that the Too Big To Fail policy was no longer an alternative open to regulators—or so it was generally thought. FDICIA provided that the FDIC did not have to use the “least cost” resolution if the FDIC (by two-thirds majority vote of its
board), the Board of Governors of the Federal Reserve (by two-thirds majority vote), and the Secretary of the Treasury (after consultation with the President) determined that the failure of a particular bank would present "systemic risk."17 Until the Great Recession, it was generally
considered that this exception would never be used. Not only has it now been used, but also the law has been twisted so as to permit government bailouts of uninsured financial institutions—such as investment banks and insurance companies.

Consider the BP oil spill in the Gulf of Mexico. Much of the discourse surrounding that event involved speculation as to whether it was “right to say that the BP oil spill is something like Obama’s Katrina.”18 Hurricane Katrina was considered a defining moment in the presidency of George
W. Bush, and the federal response was pointed to as “a symbol of then-President George W. Bush’s inattention to the hard work of managing the nation’s domestic business.”19 The seemingly uncontrollable flow of oil from BP’s well, which became the largest spill in U.S. history, also
turned into “a public test of [President Barack Obama’s] competence at handling an unanticipated crisis.”20

For a time at least,

Obama’s authority and credibility were leaking away with the gulf’s deep water oil.

As with Katrina, the White House responded to an unexpected problem with hesitation and missteps. Obama’s aides were slow to assert federal responsibility; they initially described the problem as BP’s to solve, not theirs. After that wore thin, Interior Secretary Ken
Salazar abruptly suggested that the federal government might seize control of the well—only to be publicly contradicted by his crisis manager, Coast Guard Adm. Thad Allen, who said such a move would be foolish.21

Tellingly, in the midst of the crisis, President Obama claimed that history would absolve him of blame for the oil spill. At a news conference in late May 2010, Obama predicted that “[w]hen the problem is solved . . . I’m confident that people are going to look back and say that this
administration was on top of what was an unprecedented crisis.”22 Ratcheting up the political rhetoric, political commentator George Will said that the oil spill was like the Iranian hostage crisis that ruined President Jimmy Carter’s chances of winning a second term as President.23

While a general consideration of when and how the government becomes liable for crises is outside the scope of this Feature, some preliminary insights can be distilled, even at this early stage of research on the topic. There is a clear consensus that government is ultimately responsible
for resolving certain problems, notwithstanding the fact that the government was not responsible for, and may have had nothing to do with, the problems that created the crisis. Under certain conditions, there exist implicit (as well as explicit) government guarantees to solve certain
social problems, notwithstanding that public policy might indicate that the best way to address the problem would be to leave the government out.24

Moreover, while we have yet to develop a theory of what causes an issue to move from the sphere of private responsibility into the sphere of public responsibility, some observations can be made. Intriguingly, for example, the issue is not purely ideological, as one might at first expect. In
other words, one would think that “liberals” and others who advocate active government involvement in the economic sphere would favor government responsibility, while “conservatives,” who favor a laissez-faire approach toward the economy, would argue that the government should
decline to take responsibility for problems that are outside of the government’s sphere of expertise or interest. But this does not appear to be the case. For example, Louisiana’s Republican governor, Bobby Jindal, has made it clear that he thinks that the federal government has a large
role to play in dealing with the BP oil spill off of the Louisiana coastline.25 Interestingly, when asked how he could reconcile his belief in limited government with his demands for more federal assistance and support of the BP disaster, Jindal observed that “[w]hen government grows too
big, it doesn’t do its core functions properly. . . . Absolutely, I believe in a limited government that is effective and competent in what it does. We need . . . our federal government exactly for this kind of crisis.”26

Of course, the public expects the government to solve crises—like the Iranian hostage crisis—that involve issues that are clearly within the government’s purview, such as national defense and foreign policy. We believe, however, that the government can also expand or contract the
issues for which it is held responsible. The recent debate over health care, for example, can be viewed as a debate about whether the government or the private sector is responsible for providing health care. Still more recently, the creation of a new Bureau of Consumer Financial
Protection, as part of the financial reform package, likely will alter expectations about the federal government’s responsibility for protecting consumers involved in commercial transactions.

Since the passage of the Depression-era financial regulations,27 banking and investment banking have been among the most heavily regulated industries in the United States. The existence of deposit insurance and the responsibility that the Board of Governors of the Federal Reserve
System has assumed for the stability of the financial system, as well as the SEC’s responsibility for the stability of the securities markets, are also likely accountable for the assumption that the government is responsible not only for managing financial crises but also for preventing financial
crises from occurring in the first place.

Thus, generally speaking, as the government has grown, so too have public expectations about its responsibilities for systemic mishaps. Those expectations exist regardless of whether such mishaps occur naturally and regardless of whether solving them is something at which the
government has any competence, much less expertise or experience.

As such, this Feature is written under the premise that, regardless of the provenance for the assumption, the ineluctable reality is that when financial crises occur, the government (at least in the United States) naturally and inevitably assumes responsibility for resolving the immediate crisis and for “making sure” that such a crisis will not happen again. Government officials recognize this. They no longer purport that regulation can prevent future financial crises, but they do promise to refrain from future bailouts. President Barack Obama defended financial reform by saying, “I am absolutely confident that the bill that emerges is going to be a bill that prevents bailouts. That’s the goal.”28 Senator Christopher Dodd made the same point, emphasizing that his proposed statute “ends bailouts. Nothing could be more clear.”29 In fact, the only thing that could not be “more clear” is that politicians have tried for decades without success to solve the Too Big To Fail problem by instructing regulators to refrain from bailouts.30

This phenomenon is not unique to the United States. There have been large or systemic banking failures in a large and diverse group of industrialized democracies, including (but not limited to) Austria,31 Denmark,32 Sweden,33 Ireland,34 the United Kingdom,35 Russia,36 Germany,37 Indonesia,38 Japan,39 Belgium, Canada, Italy, the Netherlands, and France.40 In every one of these countries, bank failures have inevitably led to bailouts. Why? Because these countries are democracies, and in democracies, politicians who decline to pursue a dramatic response to crises are unlikely to survive.41 It is for this reason that Too Big To Fail is so often an attractive policy alternative.

In our view, then, the Dodd-Frank Act, like other schemes to deal with the Too Big To Fail problem, is likely to be ineffective because it treats the public policy problem associated with banks’ failure as a technical problem in banking regulation when it should be treated as a political
problem. Our plan to break up the banks addresses this political problem in three ways. First, because smaller banks exert less political pressure than behemoth banks, politicians are less likely to be captured by smaller banks than by large financial institutions. To be sure, community
banks have better access and more influence with members of the House of Representatives, but large national banks dominate in the Senate and with national political parties.42

Second, because politicians and regulators have an established track record of bailing out big banks and allowing smaller institutions to fail, people do not expect that smaller institutions will be bailed out when they find themselves in financial distress. In contrast, based on past experience, the public expects that large financial institutions will be bailed out, just as surely as the police and rescue teams come to the aid of reckless motorists who hit a wall. In other words, over time, bailouts become a self-fulfilling prophecy: bailouts inevitably occur because people expect them to occur. And because people expect them to occur, they plan as if they will occur. These expectations, and the concomitant lack of planning by bank managers, make it practically impossible for politicians to decline to respond to crises with bailouts.

Third and most importantly, the largest financial institutions should be broken up because they tend to make bad bets and to follow each other to doom by consistently making the same bad bets. In other words, the big banks act like lemmings. As former Citigroup CEO Charles Prince admitted in a famous interview with the Financial Times, “When the music stops, in terms of liquidity, things will be complicated. But as long as the music is playing, you’ve got to get up and dance. We’re still dancing.”43 It is hard to imagine the CEO of any large bank advocating a strategy of becoming smaller, serving fewer clients, and not boldly moving forward, particularly when size of bank and size of CEO paycheck are strongly correlated.

B. The Lemmings Problem

In economic terms, the big banks are caught in an “information cascade.” An information cascade occurs when a market participant can easily observe the behavior of those around [him] and follows the behavior of the other market participants without regard to his or her own information, beliefs, or views of the market.44 Breaking up the banks in the way that we suggest in Part III will reduce the proclivity of banks to play “follow the leader” in conforming to the moves of the dominant firms in the industry.

Information cascades occur because, under certain conditions, “individuals rapidly converge on one action on the basis of some but very little information.”45 In its current form, the financial services industry has a very few dominant firms such as AIG in insurance; Goldman Sachs and Morgan Stanley in traditional investment banking; and Bank of America, Citigroup, and JPMorgan Chase in traditional commercial banking. These firms are closely linked, and their actions are highly visible to one another. This market structure leads to copycat behavior.

If the market were broken up in the way that we suggest, such behavior would still be possible, but it would be far less likely for several reasons. First, increasing the number of participants would make copycat behavior more costly because it is more difficult and time-consuming to observe the behavior of numerous actors than to observe the behavior of a small number of large financial institutions. Second, the process of breaking up the banks that we propose would, by definition, result in the largest, most copied institutions being broken up. Consequently, after this breakup occurred, the institutions on which people tend to focus would no longer exist in their current form. There would be several versions of each one, and thus there would be no obvious leader for the other market participants to follow. In addition, because it is the case that “the same information may be capable of sustaining different, even highly different, objective belief patterns,”46 increasing the number of market participants would increase the chances of multiple responses to any particular instance of observed behavior. Finally, and perhaps most importantly, increasing the size of the market would reduce the returns to lemming behavior because the very act of increasing the number of competitors in the financial markets would lower the probability of, and therefore the expected returns associated with, following the industry leader.
As noted above, when a large financial institution fails, people expect a government response. This expectation is the sort of event that creates an information cascade: because everybody rationally expects a government bailout of certain firms, it becomes rational to bail out those firms.
Breaking up the banks can stop the current herd behavior, because such a breakup would send a strong signal that the previously observed herd behavior is no longer rational. As Bikhchandani, Hirshleifer, and Welch observe, “Cascades can be sensitive to public information releases.”47
Those information releases, however, must be credible. Empty rhetoric about “ending bailouts” is no substitute for a credible signal such as actually breaking up the banks.

Moreover, if the largest banks are broken up, then cascade behavior will be far less likely because there will be no “leader of the pack” to follow. The cascade literature assumes that there is a leader, or at least a “first mover,” who may or may not have qualities that one normally associates with leadership.48 Simply put, if the big banks are broken up, there no longer will be first movers for the rest of the industry to follow. This, in and of itself, will make the banking system more resilient; the diminution in the current lemming-like behavior and increase in diversity of decisionmaking will translate into a diversification of strategies within the banking industry. This will lead to a significant reduction in systemic risk.

Thus, as long as we have big banks, we will have implicit insurance of large financial institutions and the culture of bailouts that such an insurance scheme brings with it. Our approach is to address the root cause of the problem, which is that the size of the largest institutions makes
bailouts inevitable.

A. Interconnectivity and the Uncertainty of Too Big To Fail

Bear Stearns was the fifth-largest brokerage house in the United States before it was acquired in an assisted transaction by JPMorgan Chase, itself a very large bank. But as we would learn from the failure of the fourth-largest investment bank in the United States, Lehman Brothers, size
alone does not translate into systemic risk. Bear Stearns was a leading underwriter of mortgage securities, but Lehman Brothers was bigger. Bear Stearns was a large underwriter of equity securities and dealer of commercial paper, but, again, Lehman Brothers was bigger on both counts.
Bear Stearns was reportedly a big participant in the credit default swap market, but so was Lehman Brothers, and because this is a private and highly opaque market we have no way of knowing who was in fact bigger in this field. And while at the time of its failure Bear Stearns was not a
bank or a bank holding company, within six months the only two large investment banks remaining in the United States would be. All of this makes the question of what constitutes “too interconnected” in the context of a securities business relevant. As former Treasury Secretary Henry
Paulson wrote in his account of the Great Recession, On the Brink,

They [critics of the decision to provide assistance] thought we should have let Bear fail. . . . To be fair, I could see my critics’ arguments. In principle, I was no more inclined than they were to put taxpayer money at risk to rescue a bank that had gotten itself in a jam.
But my market experience had led me to conclude—and rightly so, I continue to believe—that the risks to the system were too great.49

In September 2008, insurance giant AIG was determined to be too important to fail. In the words of then-Secretary Henry Paulson, “If any company defined systemic risk, it was AIG, with its $1 trillion balance sheet and massive derivatives business connecting it to hundreds of financial
institutions, governments, and companies around the world.”50

Just a few weeks later, in working to find a way to inject capital into several large banks, then-Secretary Paulson offered his own interpretation of how systemic risk should be determined. In his view, systemic risk existed if “an institution’s failure would seriously hurt the economy or
financial stability.”51

We highlight these differing interpretations of systemic risk, not to be critical of former Secretary Paulson or his actions, but rather to illustrate just how nebulous the concept of Too Big To Fail has become. Even to someone as financially experienced and sophisticated as Henry Paulson, a
former Goldman Sachs CEO, systemic risk can mean, variously: too interconnected, too important, causing serious hurt to the economy, or causing financial instability. These are not insignificant differences, particularly when hundreds of billions of dollars are at stake.

In October 2008, the Federal Reserve Board of Governors, the FDIC Board of Directors, and the Secretary of the Treasury determined that several of the largest financial institutions in the United States were systemically critical to the economy and should not be allowed to fail. These
banks were at the time: Bank of America Corp., Bank of New York Mellon Corp., Citigroup, Inc., Goldman Sachs Group, Inc., JP Morgan Chase & Co., Merrill Lynch & Co. (which was soon merged into Bank of America Corp.), Morgan Stanley, State Street Corp., and Wells Fargo & Co.52 Just
four of these institutions— Citigroup, JPMorgan Chase, Bank of America, and Wells Fargo—held 39% of all of the deposits in FDIC-insured financial institutions.53 Seventy-seven percent of the $13.3 trillion in assets owned by the 8204 FDIC-insured banks are owned by the largest 116
banks, which means that 77% of banking assets are held by 1.4% of banks.54
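
The concentration figures above reduce to simple share arithmetic; a minimal Python sketch, using only the totals quoted in the preceding paragraph and assuming nothing else, reproduces them:

```python
# Back-of-the-envelope check of the concentration figures quoted above.
total_banks = 8204                # FDIC-insured banks
largest_banks = 116               # the largest institutions
total_assets_trillions = 13.3     # aggregate assets of all FDIC-insured banks
share_of_assets = 0.77            # share reported to be held by the largest banks

share_of_banks = largest_banks / total_banks
assets_held = share_of_assets * total_assets_trillions

print(f"Largest banks as a share of all banks: {share_of_banks:.1%}")   # ~1.4%
print(f"Assets held by those banks: ${assets_held:.1f} trillion of ${total_assets_trillions} trillion")
```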

To ensure their future security, the Treasury invested an aggregate of $205 billion in capital in these banks and other financial institutions through the Capital Purchase Program.55

One of the most dramatic moves during the crisis was the Treasury’s decision to bail out all investors’ losses in money-market mutual funds. A money-market mutual fund is a type of mutual fund that must, by law, invest in highly liquid, low-risk securities.56 The fundamental economic
principle underlying money-market mutual funds is that such funds are substitutes for, and compete with, the traditional transaction (checking) accounts offered by banks. The trade-off is that money-market mutual funds offer slightly higher rates of return but, because they are not
insured by the federal government, also pose somewhat greater risks than bank deposits do.

Without even attempting to explain the tortured logic that would allow the Exchange Stabilization Fund (established by the Gold Reserve Act of 1934 in order to stabilize currency markets) to be used to backstop money-market mutual funds, on September 29, 2008, then-Secretary Paulson announced that the share prices of all such funds would be protected.57 Money-market mutual fund assets were valued at approximately $3.45 trillion on the date of this announcement.58

Bailing out money-market mutual funds was perverse public policy because it gave the relatively affluent money-market mutual fund investors a “free lunch” in the form of government insurance of their assets that they did not pay for—or even reasonably anticipate. Unlike money-
market mutual funds, banks are required to pay insurance premiums to the FDIC for insuring their deposit accounts. Money-market mutual funds—and their investors—received the benefits of such insurance without having to pay for it.

So it is that in twenty-five years Too Big To Fail was transformed from a somewhat vague notion that, in certain cases, uninsured depositors and creditors of very large banks might be treated like insured depositors into a multifaceted rationale for investing in banks engaged in a wide
range of financial services businesses. In its original formulation, Too Big To Fail protected debtholders. In its latest version, Too Big To Fail protects all stakeholders, including common shareholders and employees with incentive plans. It protects banks, investment banks, insurance
companies, and money-market mutual funds. This expansion of the policy occurred despite the efforts of Congress to make the use of even the more limited, depositor-focused Too Big To Fail policy extremely difficult.59

On July 15, 2010—after several months of negotiating, drafting, compromising, lobbying, and political dealmaking—the House of Representatives and the Senate passed a bill nominally intended to reform financial services and prevent the next crisis.60 Generally referred to as the Dodd-
Frank Act, it addresses Too Big To Fail through a combination of studies, the expansion of discretionary regulatory powers, and a limitation on the ability of banks to engage or invest in proprietary trading and other alternative asset investments. The Act was signed into law by President
Obama on July 21, 2010. Accompanying the announcement of the passage of the roughly 2300-page legislation, the House Committee on Financial Services released a summary of the key provisions of the Dodd-Frank Act entitled Brief Summary of the Dodd-Frank Wall Street Reform and
Consumer Protection Act: Create a Sound Economic Foundation To Grow Jobs, Protect Consumers, Rein in Wall Street and Big Bonuses, End Bailouts and Too Big To Fail, Prevent Another Financial Crisis. 61 To be sure, this was an act born of high expectations.

In fact, there is reason to believe that the Dodd-Frank Act actually will increase the probability that financial institutions in general, and insurance companies in particular, will be bailed out in the future.62 While the Act gives regulators new resolution authority over large financial firms and encourages regulators to take prompt corrective action against insolvent firms, regulators have received similar powers before and opted to continue bailouts rather than impose resolution strategies that shut down insolvent firms. Dodd-Frank does not address the fundamental issue of providing a clear end to Too Big To Fail as a policy option.

What the Dodd-Frank Act does instead is increase regulators’ discretion and power.63 The notion that this discretion and power will necessarily be used to avoid bailouts of big financial firms is unrealistic from a political point of view. The Act also requires banks, investment banks, and insurance companies viewed as posing a systemic risk to submit periodic “funeral plans” laying out how they could be wound up in an orderly way if they become insolvent. The idea here is that agencies will not have to bail out insolvent institutions because they can just follow the funeral plans. Of course, there is no requirement that regulators must follow these plans. And there is no reason to believe that they will.

The Act also creates a new bureaucracy, or “council of regulators,” with the authority to identify and resolve problems of systemic risk. Financial institutions like insurance companies, hedge funds, and venture capitalists that now operate to some extent without interference from federal
regulations could then be brought into the regulatory fold. But the bureaucrats running this new council could face peculiar incentives. Nobody will ever know if they have intervened too much or too early—and if in doing so they destroyed assets that were legitimate.

But if, hypothetically, a financial bubble were ever allowed to burst, the bureaucrats in the council of regulators would face intense criticism for having failed in their basic mission. Thus, this council will consistently err on the side of overintervention. When regulators fear an institution is about to become insolvent or is operating while insolvent, they will bail it out to prevent the systemic risk ogre from running amok through the economy. This is precisely what happened during the Great Recession—first for Bear Stearns and AIG, then for the hundreds of financial institutions that collected TARP money, and then for the thousands of banks and mutual funds that got the benefit of a vastly expanded federal safety net.

There is, undeniably, a great demand for regulation of financial institutions.64 What is less well understood is that the massive regulation of financial institutions generates expectations. Specifically, the existence of a massive regulatory scheme creates the expectation on the part of
voters that the government will confront and remediate the failure of all financial institutions, and not just commercial banks.

B. Antitrust: Change in Focus Needed


The overwhelming trend in the financial services industry over the past thirty years has been toward consolidation. There were 10,787 banking mergers in the United States between 1980 and 2009, and, during this time, no regulator challenged a prospective merger involving an institution with more than $1 million in assets on antitrust grounds.65 Today, the five largest banks in the United States hold an astonishing forty-two percent of all deposits, up from twelve percent in 1980.66 This is remarkable on its own, but it is even more incredible when one factors in the amazing growth of commercial banking and financial services generally during this period.67 In fact, to date, no federal banking agency or the Department of Justice has ever considered the competitive effect of a merger on the stability of the financial system. Despite the fact that competition policy in banking has struggled to balance antitrust law and stability concerns since the development of federal statutory antitrust law in the late nineteenth century, antitrust law, as applied to banking, . . . has never accommodated stability concerns such as the ones discussed in this Feature.68

As noted in the Introduction, our point is not that the recent consolidation of the financial services sector permits collusion or price-fixing among financial institutions, although we do observe significant evidence of such illegal activities in some parts of the sector, particularly in the credit card industry69 and in the brokerage industry.70 Rather, our view is that breaking up the banks is fully justified on the grounds that such a breakup would make the economy safer and more stable by eliminating or limiting the proclivity of regulators and elected officials to engineer massive bailouts of the largest financial institutions whenever a financial crisis appears.

We recognize, however, that our plan involves a sea change in the current U.S. approach to antitrust policy, which generally embraces the idea that the only appropriate concern of antitrust law is to promote and protect competition so that the prices paid by consumers will be as low as
possible. And, clearly, antitrust law is designed to protect competition from price-fixing and other anticompetitive behavior.71

Those who subscribe to this approach to the antitrust laws might view it as inappropriate to burden antitrust with the competing goal of promoting the economic stability of financial institutions. After all, the antitrust laws were not written with stability in mind, and scholars have not
tried to incorporate stability into their analyses.

We have three responses to this criticism. First, to the extent that one considers it important that the antitrust laws remain pure in their single-minded focus on competition, we note that we are suggesting a new statute. We are not suggesting that any current antitrust laws or
regulations or judicial outcomes be revised or reinterpreted.

Second, because our proposed approach applies only to financial institutions, we do not view it as a new antitrust law so much as we view it as a new law for financial institutions. And there can be no disagreement with the point that financial stability is a central focus, if not the central focus, of the law of financial institutions. Therefore, while we think that it might be preferable to have the Antitrust Division of the Department of Justice or the Federal Trade Commission enforce the regulatory regime that we advocate, this regime could also be implemented by financial institutions’ regulators (subject, of course, to the not-insignificant problem of regulatory capture by the financial institutions of their various regulators72).


Finally, we note that while the policy of protecting price competition has much to recommend it, this is by no means the only approach that one might take to antitrust policy, as students of Louis D. Brandeis73 and William O. Douglas74 are well aware. Moreover, when it comes to
banking, antitrust rules have always made exceptions. Generally, however, such exceptions have favored permitting anticompetitive banking mergers that would not be permitted for other sorts of firms. Specifically, under the Bank Merger Act, even if a merger is anticompetitive, it may
be allowed if bank regulators find that “the anticompetitive effects of the proposed transaction are clearly outweighed in the public interest by the probable effect of the transaction in meeting the convenience and needs of the community to be served.”75

Nor is it anomalous to place a heavy emphasis on stability when promulgating bank regulation. Financial stability has long been a factor in banking regulation, just as financial stability has historically been an important principle in antitrust policy.76 Ironically, in the past, it was generally
thought—erroneously, in our view—that greater consolidation in the banking sector would lead to greater stability. Even when the government was vigorously enforcing the antitrust laws, the banking sector was left untouched because antitrust policy was seen as “subordinate to
stability concerns.”77

As Bernard Shull observed, it makes sense that the stability concerns about banks and issues related to bank supervision should require that a different antitrust policy be directed toward banks. Congressional action, such as the refusal to include banks in the 1950 Celler-Kefauver
Amendment to the Clayton Act,78 reflected a “determination to deal with banking differently. The extent of the differential treatment for mergers and acquisitions in banking was the subject of congressional debates . . . undertaken within the context of a widespread belief that the
antitrust laws were largely inapplicable or impractical and, to an important degree, inappropriate for banking.”79

In fact, we believe that the opposite is true. Consolidation has led to lemming-like behavior and excessive risktaking in institutions that have been allowed to become so big that politicians and bank regulators could not survive if they were to permit those institutions to fail.

There have been occasional attempts by regulators to limit bank size, but Congress has allocated most authority to deal with bank size to friendly bank regulators rather than to antitrust regulators in the Justice Department or the Federal Trade Commission. For example, the Bank Holding Company Act of 1956 empowered the Federal Reserve to review mergers and acquisitions by bank holding companies to determine “whether or not the effect of [the proposal] would be to expand the size or extent of the bank holding company system involved beyond limits consistent with . . . the preservation of competition.”80
It remains the case, however, that unlike other mergers, bank mergers will be permitted, even if they are anticompetitive, as long as they promote the public’s interest in stability. The Bank Merger Act exempted existing bank mergers, including those in pending government suits, from
section 1 of the Sherman Act and section 7 of the Clayton Act.81 In the 1966 amendments to the Bank Merger Act, banking agencies were prohibited from approving mergers “whose effect . . . may be substantially to lessen competition,” or that would result “in restraint of trade.”82 And,
as noted above, even when a merger is anticompetitive, regulators may nonetheless approve it if they find that “the anticompetitive effects of the proposed transaction are clearly outweighed in the public interest by the probable effect of the transaction in meeting the convenience and
needs of the community to be served.”83
Our point here is not to quarrel with the current state of antitrust law as it relates to banking. We agree with the general notion that regulators and policymakers should take financial stability concerns into account when formulating policy in general, particularly when formulating
antitrust policy. However, we believe that longstanding antitrust policy has gotten the issue precisely backwards, because concerns over financial stability should make bank regulators and policymakers more inclined to break up banks and to deny merger applications, not less so. By not
factoring in the enormous costs of bailouts, traditional antitrust analysis leads to a flawed conclusion.

III. Two Proposed Solutions and the Final Legislative Outcome

We readily acknowledge that one cannot implement a policy of breaking up banks that are Too Big To Fail without clear guidelines that permit regulators and market participants to delineate the parameters of the policy. Banks must have a clear rule that enables them to know precisely
the limits to their growth. Further, we believe that the specific contours of the guidelines on bank size should not appear random. Rather, they should be grounded in some rational metric.

In this Part, we discuss the “original” Volcker Rule, which was considered in early stages of the development of the Dodd-Frank bill but ultimately discarded. This rule also would have broken up banks and is the primary rival to our proposed rule. We argue that the transformation of the
Volcker Rule from its “original” version to its “as-enacted” version ironically reflects the political process that transformed the original Too Big To Fail policy into the ambiguous “too important, too interconnected, too systemically significant, Too Big To Fail” policy of the Great Recession.
We then present our proposed rule. We argue that our rule provides clear and easy-to-implement guidelines that are based on a rational metric. Our rule prevents financial institutions’ liability from growing larger than the size of the government’s deposit insurance fund. This would
prevent financial institutions from growing so large that their size outstrips the ability of the federal government to unwind their activities without bailing them out.

A. Paul Volcker’s Original Too Big To Fail Rule

In the earliest days of the financial crisis, Treasury Secretary Paulson issued a series of proposals to restructure the financial regulatory system.84 These proposals were based on the findings and recommendations of a committee of former and current regulators and industry executives
that the Secretary had asked to rethink regulation with a view to creating a market that was at once more efficient and more competitive with foreign markets. 85

More recently, President Obama modified these proposals to include a couple of ideas aimed to limit institutions from becoming Too Big To Fail. These ideas were championed by Paul Volcker, a former Federal Reserve Board chairman and the current chairman of the President’s
Economic Recovery Advisory Board. President Obama called these ideas the “Volcker Rule” and the name has stuck, even though the “rule” as variously articulated is little more than a set of objectives.86

In its original formulation, the Volcker Rule had two parts. First, Chairman Volcker proposed an absolute size limitation for banks. This rule would prohibit banks from gaining more than a ten percent market share in loans or deposits. The second part of the rule was a ban on banks’
proprietary trading, trading for their own accounts, or investing in or owning hedge funds, private equity funds, or proprietary trading operations for their own profit.87 We refer to the first part of the Volcker Rule as the “original” Volcker Rule. In its original incarnation, the second part of
the Volcker rule barring proprietary trading was tantamount to a reinstatement of the Glass-Steagall Act, at least in part.88 Underwriting, investing, hedging, and trading were allowed if they were for clients of the bank, so Volcker was not proposing a complete return to Glass-Steagall.

Paul Volcker offered his own interpretation of the Volcker Rule in an opinion piece published in the New York Times. Volcker began by noting that “President Obama 10 days ago set out one important element in the needed structural reform.”89 Then, after highlighting that Too Big To
Fail had come to mean that “really large, complex and highly interconnected financial institutions can count on public support at critical times,”90 Volcker went on to argue that “limit[ing]” ownership or sponsorship of hedge funds, private equity funds, and other proprietary trading
operations would complement existing capital and regulatory efforts to limit taxpayer exposure. Ironically, Volcker rejects Adam Smith’s advice that banks should be small to limit risk, only to justify the adoption of his rule on the basis that the risky activities that he wants to ban “are
actively engaged in by only a handful of American megacommercial banks, perhaps four or five.”91

When Volcker later considers the risks of pure capital markets firms, his underlying concerns about Too Big To Fail once again rise to the surface:

What we do need is protection against the outliers. There are a limited number of investment banks (or perhaps insurance companies or other firms) the failure of which would be so disturbing as to raise concern about a broader market disruption. In such cases,
authority by a relevant supervisory agency to limit their capital and leverage would be important, as the president has proposed.

....

To put it simply, in no sense would these capital market institutions be deemed “too big to fail.”92

At bottom, the Volcker Rule is an attempt to rein in a subset of financial companies that are Too Big To Fail. Without questioning why regulators of capital markets firms can set adequate capital requirements and manage the potential liquidation of these businesses and bank regulators
cannot, the Volcker Rule was intended to limit the risk that “four or five megabanks” will get into trouble through investing in alternative assets. The bothersome presumption underlying the Volcker Rule remains, however, that we are worried about these four or five megabanks because
they are Too Big To Fail.

On March 3, 2010, the Treasury Department provided language to the Senate Finance Committee to define the limitation on banks’ market share— the core of the “original” Volcker rule—more precisely.93 This limitation survived various attempts to excise it from the statute and is now
reflected in section 622 of the Dodd-Frank Act.94 In that version of the Volcker Rule, “too big” was defined as having more than ten percent of the aggregate risk-adjusted liabilities of all financial institutions. Conceptually, the idea is not wholly without merit.

It might appear that this definition provides a limit on bank size that is easy to ascertain and monitor. And this limit might appear to be a financially intelligent standard because it focuses on risk-adjusted liabilities rather than simply on liabilities. But the rule is neither easy to implement
nor financially sensible. Rather, on closer examination, it is clear that this original version of the Volcker Rule still failed to do much of what was expected. First, there is no easy way or standard procedure used to measure aggregate risk-adjusted liabilities. Banks report risk-based assets
and risk-adjusted capital in accordance with the Basel guidelines. However, determining risk-adjusted liabilities requires calculating, for each of the roughly 8000 banks in the United States,95 total risk capital (Tier 1 capital plus qualifying Tier 2 capital)96 and then subtracting that from
total risk-adjusted assets. 97 The risk-adjusted liabilities of other nonbanking financial institutions, like insurance companies and specialty lenders (as determined by the Financial Stability Oversight Council created by the Dodd-Frank Act98), would also have to be determined and added
into the aggregate number. The risk adjustments required for these institutions are to be determined by their various regulators, which are yet to be determined and (in the case of insurance companies) may vary state by state. Compounding this problem is the fact that there is currently
no single source for the data of these nonbank financial institutions as there is for the FDIC-insured depository institutions.99
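
To make the mechanics concrete, the aggregation the original Volcker Rule would require can be sketched as follows. The field names and figures are hypothetical illustrations, not data from any filing; the formula itself (risk-adjusted liabilities equal risk-adjusted assets minus total risk capital, summed across every covered institution, with a ten percent cap on any single firm's share) simply restates the description above:

```python
# Hypothetical sketch of the section 622 test described above (illustrative figures only).
def risk_adjusted_liabilities(firm):
    # Risk-adjusted liabilities = risk-adjusted assets minus total risk capital
    # (Tier 1 plus qualifying Tier 2), per the description in the text.
    return firm["risk_adjusted_assets"] - (firm["tier1"] + firm["qualifying_tier2"])

def firms_over_limit(firms, threshold=0.10):
    liabilities = {f["name"]: risk_adjusted_liabilities(f) for f in firms}
    aggregate = sum(liabilities.values())
    return [name for name, ral in liabilities.items() if ral / aggregate > threshold]

# Made-up figures, in billions of dollars; a real calculation would need all ~8,000
# banks plus the nonbank financial institutions designated by the FSOC.
firms = [
    {"name": "Bank A", "risk_adjusted_assets": 1500, "tier1": 120, "qualifying_tier2": 40},
    {"name": "Bank B", "risk_adjusted_assets": 900,  "tier1": 80,  "qualifying_tier2": 30},
    {"name": "Bank C", "risk_adjusted_assets": 300,  "tier1": 30,  "qualifying_tier2": 10},
]
print(firms_over_limit(firms))  # names of firms whose share of the aggregate exceeds 10%
```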

Of course, the lack of data for these other financial institutions may be moot. Using the data for banks as of December 31, 2009,100 only two banks, JPMorgan Chase and Bank of America, are over the ten percent risk-based limit—and just barely at that—when the denominator is based
solely on banks. If one were to add into the aggregate just ten other nonbank financial companies—AIG, MetLife, Prudential, TIAA-CREF, Berkshire Hathaway, New York Life, Lincoln National, MassMutual, Northwestern Mutual, and State Farm—then no company would be Too Big To Fail
under the Dodd-Frank Act’s new rule.101 In other words, upon closer look, this rule is neither easy to use nor effective in its application.

While the inefficacy of the Volcker Rule could, in theory, be remedied by adjusting the risk-based threshold to a more reasonable number (say eight percent instead of ten percent), making the Volcker Rule less challenging to apply will not be done easily. Here, the problem is that the
complexity of the rule requires numerous subjective decisions and interpretations by regulators.102 When compared with the simple, nondiscretionary DIF-based threshold advocated below, the Volcker Rule’s risk-adjusted liability threshold appears highly malleable. In particular, the
many discrete decisions required to calculate precisely how much to adjust various liabilities, and how those adjustments should change over time, require financial institutions and their regulators to make very difficult judgments precisely at moments when their incentives to make such
judgments are at their most perverse. It does not make sense to require that subjective judgments that likely will result in the breakup of a major financial institution be made precisely when it has been determined that the institution’s liabilities may be riskier than previously thought.
And, importantly, it should not be lost in the discussion of risk-based liabilities that, in accordance with Basel II guidelines currently in effect for U.S. banks, the initial and presumptively accurate determination of risk is made by the management of each institution.

As the Dodd-Frank Act made its way through the legislative process and initial ideas and language gave way to compromise and modification, Paul Volcker reflected on the Volcker Rule as it evolved, saying that it “went from what is best to what could be passed.”103 According to the
New York Times, Chairman Volcker was not alone in his assessment of the political process and his notion of how to prevent the next financial crisis: “Representative Barney Frank, the Massachusetts Democrat who is chairman of the House Financial Services Committee, subscribes to that
view. He says that there are stronger measures he would have preferred to see in the bill, including the original version of the Volcker rule, but that political reality dictated otherwise.”104

With the benefit of several decades of study and experience, Paul Volcker summed up his assessment of the Dodd-Frank Act and the Volcker Rule as finally incorporated with an observation that we wholly endorse. “‘There is a certain circularity in all of this business,’ he concedes. ‘You
have a crisis, followed by some kind of reform, for better or worse, and things go well for a while, and then you have another crisis.’”105 We agree with Mr. Volcker’s assessment as far as it goes but think that a more accurate statement of the circularity would be: you have a crisis,
followed by a bailout and some kind of reform, for better or worse, and things go well for a while, and then you have another crisis—and another bailout.

In its original formulation, the Volcker Rule addressed Too Big To Fail both directly, with a size limitation, and indirectly, through the logic that only a “handful of megabanks” had significant proprietary operations in derivatives and alternative investments. To be sure, this second aspect of
the original Volcker Rule was also an attempt to address the more current “too interconnected” and “too significant” extensions of Too Big To Fail. Ironically, by trying to fine-tune the Volcker Rule to address the latest interpretations and extensions of Too Big To Fail, the Volcker Rule
became more vulnerable to the political process. In concept, a simple prohibition on proprietary trading in asset classes that were seen as both risky and at the center of the financial crisis would seem to have merit. However, for the same reasons that Paul Volcker saw this rule as tied to
Too Big To Fail (that is, only a handful of megabanks were involved), it was clear from the beginning that this indirect attempt to limit big banks was not going to survive the political process. In the end, the Dodd-Frank Act limitation that no bank could invest more than three percent of its
Tier 1 capital in proprietary trading in derivatives or be invested in hedge funds and other alternative investments (without limiting management and incentive fees) poses little meaningful limitation on the riskiness of big banks or their interconnectedness or systemic importance. More
importantly, from our perspective, is that the focus on this “hot” issue distracted and confused the discussion of the core issue of Too Big To Fail so much that many have forgotten that there was an original Volcker Rule at all. And with that loss of focus, as we have argued, the real Too
Big To Fail limitation of the Dodd-Frank Act (in section 622) ended up failing to place limits on becoming too big.

B. Our Proposal: The Bright-Line Limit

As an alternative to the original Volcker Rule, one of us has articulated a different way of avoiding the problem of Too Big To Fail.106 The goal of the rule is to provide a more credible approach that is a simple-to-understand, simple-to-implement, and simple-to-monitor method to limit
bailouts.

The bright-line rule would limit the total liabilities of any bank, bank holding company, or other financial institution 107

[Insert Footnote 107]


107. Of course, by including all financial institutions as well as banks and bank holding companies within our plan, we recognize that the term “financial institution” must be defined. We would embrace a variation of the definition of the term “financial institution” contained in the Dodd-Frank Act. That statute defines a financial institution to mean any broker or dealer, depository institution, futures commission merchant, bridge financial company, or any other institution determined by the FDIC to be a financial institution. Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203, § 210(c)(9)(D)(i), 124 Stat. 1376, 1491 (2010). We would add insurance companies, hedge funds, and private equity firms to this definition, keeping in mind that our proposed rule does not apply to any company of any kind, financial or otherwise, unless the institution’s total liabilities exceed 0.0575% of the targeted value of the Deposit Insurance Fund (currently set at 1.15% of total insured deposits; that is, approximately $3.096 billion). We see no problem with the fact that many, perhaps most, of the firms subject to the bright-line rule that we propose are not FDIC-insured banks. The critical point is that all of these financial institutions enjoy implicit government protection, even if they do not have explicit insurance from the FDIC. The targeted value of the DIF is simply used as a metric, and it is just as useful a metric for non-FDIC-insured financial institutions as it is for FDIC-insured financial institutions.

[Exit Footnote 107]


to 5% of the targeted level of the FDIC’s Deposit Insurance Fund for the current year as reported by the FDIC.108 For 2010, the targeted level of the DIF is 1.15% of total deposits insured by the FDIC. Accordingly, under the test proposed here, the limit on total liabilities would be set at
0.0575% of total insured deposits. As of December 31, 2009, the most recent date for which detailed deposit information is available, total deposits equaled $9.23 trillion and estimated total insured deposits equaled $5.38 trillion. Thus, under our approach, maximum total liabilities for a
financial institution in 2010 would be $3.096 billion.
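
The arithmetic behind the roughly $3.096 billion cap is straightforward; a minimal check using only the figures quoted above (the small discrepancy against $3.096 billion is rounding in the deposit total):

```python
# Reproducing the bright-line liability cap from the figures quoted above.
insured_deposits = 5.38e12      # estimated total insured deposits, Dec. 31, 2009
dif_target_ratio = 0.0115       # DIF target: 1.15% of total insured deposits
cap_share_of_dif = 0.05         # proposed cap: 5% of the targeted DIF level

dif_target = dif_target_ratio * insured_deposits      # roughly $61.9 billion
liability_cap = cap_share_of_dif * dif_target         # roughly $3.09 billion

print(f"Targeted DIF level: ${dif_target / 1e9:.1f} billion")
print(f"Maximum total liabilities per institution: ${liability_cap / 1e9:.2f} billion")
# Equivalently, the cap is 0.05 * 1.15% = 0.0575% of total insured deposits.
```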

Independently, European banks know their size will force governments to save them
with austerity --- collapses the EU
Blyth 13 (Mark Blyth – Professor of International Economics at Brown University, PhD in
political science from Columbia. <KEN> “Chapter 3: Europe—Too Big to Bail?: The
Politics of Permanent Austerity,” in “Austerity: the History of a Dangerous Idea,” Oxford
University Press. ISBN 978–0–19–982830–2)
We really should know better. Carmen Reinhart and Kenneth Rogoff, no friends of Keynesian policy, note that a banking crisis is followed by a sovereign debt crisis 80 percent of the time.42 Reinhart and Rogoff stop short of using the word “cause.” However, as Moritz Schularick and Alan Taylor have shown, sovereign debt crises are almost always “credit booms gone bust.”43 They develop in the private sector and end up in the public sector. The causation is clear. Banking bubbles and busts cause sovereign debt crises. Period. To reverse causation and blame the sovereign for the bond market crisis, as policy makers in Europe have repeatedly done to enable a policy of austerity that isn’t working, begs the question, why keep doing it?

While it is tempting to say that neither German politicians nor ECB bankers understand fallacies of composition and both are allergic to inflation, and leave it at that, there is a more satisfying answer, and it’s the same one we saw in the US case: what starts with the banks ends with the banks. To understand why Europe really has been slashing itself to insolvency, we need to embed these very real ideological and political factors within an account of how the euro as a currency enabled the development of a system of banks that is too big to bail. If the US had banks that were too big to fail, Europe has a system of banks that are collectively too big to bail. That is, no sovereign can cover the risks generated by its own banks because the banks are too big and the sovereign doesn’t have a printing press. In this world there can be no bailout big enough to save the system if it starts to fail. Consequently, the system cannot be allowed to fail, which is the real reason we all must be austere. In the United States we were afraid of the consequences of the banks failing. In Europe they are terrified of the same thing, and as we shall see, they are terrified for good reason.

To get there we encounter some very familiar themes: subprime mortgage special purpose vehicles (SPVs), repo market collateral problems, and banks chasing yield in a low-interest-rate environment, as well as some unfamiliar ones such as bank resolution regimes (who has the
responsibility to bail or fail banks), moral hazard trades (too big to fail as a business model), and why national debt issued in a common currency is a really bad idea. Taken together they explain why we really all need to be austere: because once again we need to save the banks, from
themselves. But this time around no politician, especially in Europe, is going to admit that is exactly what is being done, which is why the bait and switch is needed.

The EU and the Euro: A Bridge Too Far?

The European Union, as a political project, has been an astonishing success. Built quite literally upon the ashes of a continent destroyed twice by war in a little over thirty years, it has both kept the peace in Europe and spread prosperity throughout the continent. It took in the former
dictatorships of Portugal, Spain, and Greece and turned them into stable democracies. Far from being a creature of the Cold War, its ambitions spread following the collapse of the Soviet Union. The tragedy of the Balkans in the 1990s aside, it has incorporated peoples from the Baltics to
Romania into the European project while increasing trade, expanding the rule of law, and pushing the project of “an ever closer Union” further along. If only they hadn’t tried to do this with money. While the European political project has been a resounding success, its monetary cousin,
the euro, has been a bit of a disaster for everyone, except possibly the Germans. 44

The project of bringing Europe even closer together through a common currency was supposed to work on two levels. First, economies that were not well integrated and that had different business cycles and little specialization according to their relative economic strengths would
converge, becoming more similar and more efficient simply by using the same unit of account. That, at least, was the idea. Second, having different currencies meant different exchange rates, which had different consequences for states, people, and firms. 45 For people and firms, it was a
pain to have to change currencies to travel or trade, and having to do so reduced both. At the state level the argument was that all the different exchange rates moving together generated currency volatility that was hard to hedge against, and it created incentives for weaker currencies
to seek respite by devaluing against their stronger trading partners to improve their own competitiveness, which many European states did, repeatedly. The problem with devaluation as an adjustment policy is not only that it beggars thy neighbor; it also leads to import inflation in the
countries that devalue. Italy became the poster child for these problems, having devalued the lira every year between 1980 and 1987, save 1984, thereby suffering much higher-than-average inflation than the rest of Europe while effectively reducing the average Italian’s real wage
through the inflationary back door.

Keeping up with the Germans

European leaders struggled with these inflation/devaluation/volatility problems throughout the past few decades, building successively more elaborate exchange-rate mechanisms to keep European currencies together. Currency arrangements called “snakes” were replaced by “snakes in
tunnels” and then by formal “exchange rate mechanisms” that were all variants of keeping up with the Germans. The “German problem” in Europe used to be the problem of how to constrain the Germans to keep the peace. The “German problem” after 1970 became how to keep up
with the Germans in terms of efficiency and productivity. One way, as above, was to serially devalue, but that was beginning to hurt. The other way was to tie your currency to the deutsche mark and thereby make your price and inflation rate the same as the Germans, which it turned out
would also hurt, but in a different way.

The problem with keeping up with the Germans is that German industrial exports have the lowest price elasticities in the world. 46 In plain English, Germany makes really great stuff that everyone wants and will pay more for in comparison to all the alternatives. So when you tie your
currency to the deutsche mark, you are making a one-way bet that your industry can be as competitive as the Germans in terms of quality and price. That would be difficult enough if the deutsche mark hadn’t been undervalued for most of the postwar period and both German labor costs
and inflation rates were lower than average, but unfortunately for everyone else, they were. That gave the German economy the advantage in producing less-than-great stuff too, thereby undercutting competitors in products lower down, as well as higher up the value-added chain. 47
Add to this contemporary German wages, which have seen real declines over the 2000s, and you have an economy that is extremely hard to keep up with. On the other side of this one-way bet were the financial markets. They looked at less dynamic economies, such as the United
Kingdom and Italy, that were tying themselves to the deutsche mark and saw a way to make money.
The only way to maintain a currency peg is to either defend it with foreign exchange reserves or deflate your wages and prices to accommodate it. To defend a peg you need lots of foreign currency so that when your currency loses value (as it will if you are trying to keep up with the
Germans), you can sell your foreign currency reserves and buy back your own currency to maintain the desired rate. But if the markets can figure out how much foreign currency you have in reserve, they can bet against you, force a devaluation of your currency, and pocket the difference
between the peg and the new market value in a short sale.
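
The bet described in the preceding paragraph can be illustrated with a stylized example; the figures below are invented purely for illustration, but the logic is the short sale just described:

```python
# Stylized short against a currency peg (all figures invented for illustration).
# Step 1: borrow the pegged currency and sell it at the pegged exchange rate.
# Step 2: if the central bank runs out of reserves and devalues, buy the currency
#         back at the new, lower rate to repay the loan and keep the difference.

position = 1_000_000_000          # units of the pegged currency sold short
peg_rate = 2.00                   # dollars per unit while the peg holds
post_devaluation_rate = 1.70      # dollars per unit after a 15% devaluation

proceeds = position * peg_rate                      # dollars received from the short sale
cost_to_cover = position * post_devaluation_rate    # dollars needed to buy the currency back
profit = proceeds - cost_to_cover
print(f"Profit on the short: ${profit:,.0f}")       # $300,000,000 in this stylized case
```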

George Soros (and a lot of other hedge funds) famously did this to the European Exchange Rate Mechanism in 1992, blowing the United Kingdom and Italy out of the system. Soros could do this because he knew that there was no way the United Kingdom or Italy could be as competitive
as Germany without serious price deflation to increase cost competitiveness, and that there would be only so much deflation and unemployment these countries could take before they either ran out of foreign exchange reserves or lost the next election. Indeed, the European Exchange
Rate Mechanism was sometimes referred to as the European “Eternal Recession Mechanism, ” such was its deflationary impact. In short, attempts to maintain an anti-inflationary currency peg fail because they are not credible on the following point: you cannot run a gold standard
(where the only way to adjust is through internal deflation) in a democracy. 48

Well, you can try, and the Europeans building the EU are nothing if not triers. Following the Exchange Rate Mechanism debacle, in a scene reminiscent of one in Monty Python’s movie The Holy Grail in which the king tells his son that “they said you couldn’t build a castle on a swamp, so I
did it anyway, and it fell down, so I did it again, and it fell down, so I did it again, and it fell down, ” the Europeans decided to go one step further than pegging to the deutsche mark—they would all become German by sharing the same currency and the same monetary policy.

The euro, the successor to the Exchange Rate Mechanism, would become a one-time internal fix of all the different European currencies in exchange for a single external floating currency, with one important difference. 49 Rather than pegging and retaining national currencies and
printing presses, after the fix the national currencies would be abolished and the printing presses would be handed over to the Germans to make sure that neither inflation nor devaluation of the currency would ever again be options. Instead, armed with a new independent central bank
that had only one goal, to keep inflation around 2 percent, regardless of the output and employment costs, via control of interest rates, prices and wages would automatically adjust to the external balance. In other words, they built a gold standard in a democracy, again. Einstein is
credited with the observation that doing the same thing over and over while expecting different results is the definition of madness. The European monetary project was a bit mad from the get-go. It has only recently revealed itself to be an exercise in insanity.

Why the Euro Became a Monetary Doomsday Device

At the time of its launch, many economists predicted that the euro would fail. Martin Feldstein noted that the countries adopting the euro did not constitute an “optimal currency area, ” where business cycles and the like would be strongly integrated such that efficiency gains could be
realized. 50 Paul Krugman saw trouble in the decade of recession and unemployment necessitated by the convergence criteria of the Maastricht Treaty of 1992, the precondition for adoption of the euro, where budget deficits, debts, and inflation rates all had to be cut at the same time.
51 Both were correct, but what really caused problems was that instead of creating convergence, the introduction of the euro created a great divergence between European economies (see figure 3.1) in almost everything except their bond spreads and balance of payments.

Figure 3.1 Eurozone Current Account Imbalances

Notice that before the introduction of the euro, France was the only country with a current account surplus. After its introduction, France held on until 2005 before moving into deficit. Germany moved into surplus in 2001, and the rest of the Eurozone moved further into deficit. There
was a convergence of sorts. Everyone except Germany started to run deficits. To see why this happened, we need to turn to how such deficits were financed, which takes us into the realm of sovereign debt markets, and what the introduction of the euro did to the incentives of European
banks (figure 3.2).

If a picture paints a thousand words, figure 3.2 paints a million. On the left-hand side, we see what the markets used to think of sovereign bonds before the euro was introduced. Greek ten-year bond yields started out at 25 percent, fell to 11 percent, and then came within fifty basis
points (half a percent) of German bonds by 2001. Similarly, Italian bonds fell from a high of 13 percent in 1994 to becoming “almost German” in 2001 in terms of yields. Yet it is manifestly obvious that neither Greece nor Italy, nor Ireland, nor anyone else, actually became Germany, so
why then did we see this convergence in yields? The popular answer is that the introduction of the ECB and its unending quest for anti-inflationary credibility signaled to bond buyers that both foreign exchange risk and inflation risk were now things of the past. The euro was basically an
expanded deutsche mark, and everyone was now German.

Despite the fact that national bonds were still issued by the same national governments, banks and other financial players loaded up with them, assuming that the risks we saw on the left-hand side of figure 3.2 had all been magically sponged away by adoption of the euro. This flooded
the periphery states with cheap money, completely swamping local wholesale funding markets, thereby making them vulnerable to the capital flight that was to render them illiquid in 2011, while pumping up, in the case of Spain in particular, private-sector indebtedness. While the
Northern lenders lent to local banks, property developers, and the like, periphery consumers used this tsunami of cheap cash to buy German products, hence the current account imbalances noted earlier.

But why did these bond buyers believe that this new and untested institution, the ECB, would in fact guard the value of their bonds, that national governments didn’t matter any more, and that Greece was now Germany? The answer was, they didn’t need to believe anything of the sort
because arguably what they were doing was the mother of all moral hazard trades.

Figure 3.2 Eurozone Ten-Year Government Bond Yields

Source: European Central Bank Statistical Data Warehouse

The Mother of All Moral Hazard Trades


If you were a European bank back in the late 1990s seeing sovereign bond yields falling, it might have bothered you since a source of risky profits was disappearing. On the other hand, if this new ECB gizmo really did get rid of exchange-rate risk for the sovereigns issuing the debt and take inflation off the table by housing in Frankfurt the only money press in Europe, then it really was a banker's dream—a free option—safe assets with a positive upside, just like those CDOs we saw in the United States. So you would be a fool not to load up on them, and European banks did exactly that. But as yields converged, you would have to buy more and more bonds to make any money. There was, however, a small but significant difference in yield between the bonds of Northern European sovereigns and those of the periphery after the yields converged. So, if you swapped out your low-yield German and Dutch debt and replaced it with as much PIIGS debt as you could find, and then turbocharged that by running operational leverage ratios as high as 40 to 1—higher than your US counterparts—you would have one heck of an institutionally guaranteed money machine. What makes this a moral hazard trade?

Imagine that you knew Greece was still Greece and Italy was still Italy and that the prices quoted in the markets represented the bond-buying activities of banks pushing down yields rather than an estimate of the risk of the bond itself. Why would you buy such securities if the yield did not reflect the risk? You might realize that if you bought enough of them—if you became really big—and those assets lost value, you would become a danger to your national banking system and would have to be bailed out by your sovereign. If you were not bailed out, given your exposures, cross-border linkages to other banks, and high leverage, you would pose a systemic risk to the whole European financial sector. As such, the more risk that you took onto your books, especially in the form of periphery sovereign debt, the more likely it was that your risk would be covered by the ECB, your national government, or both. This would be a moral hazard trade on a continental scale. The euro may have been a political project that provided the economic incentive for this kind of trade to take place. But it was private-sector actors who quite deliberately and voluntarily jumped at the opportunity.

Now, either because they really believed that the untested ECB had magically removed all risk from the system, or saw the possibilities of a major moral hazard trade, or both, European banks took on as much periphery sovereign debt (and other periphery assets) as they could. Indeed, as we shall see below, these banks were incentivized by the European Commission to get their hands on as many periphery bonds as they could and use them as collateral in repo transactions, thereby upping the demand for them still further. 52 There was, however, one slight flaw in the plan.

While bank lending and borrowing may be cross-border in the Eurozone, bank resolution and bailout responsibilities (notwithstanding the 2012 proposal for an EU banking union, which does little to fundamentally address these problems) are still national. 53 So, while any individual bank could play this moral hazard trade, if they all did it, all at once, then what was individually too big to fail became too big to bail very quickly as a whole. Once again, the dynamics of the system were different from those of the sum of the parts.

Dwarfing the King


To get an idea of the risks involved in this trade for the sovereigns, recall that if you take the combined assets of the top six US banks in the third quarter of 2008 and add them together, it comes to just over 61 percent of US GDP. Any one of these banks, on average, could then claim to
impact about 10 percent of US GDP if it failed. Add the risk of contagion discussed earlier, and you have what the US authorities saw as a too big to fail problem. Now, do the same with European banks in the fourth quarter of 2008, which you must do on a national basis (the ratio of bank
assets to national GDP) since there is, at the time of writing, no EU-wide deposit-guarantee scheme and no EU-wide bailout mechanism for banks: it all falls on the national sovereign—and you get some seriously scary results. 54

In 2008, the top three French banks had a combined asset footprint of 316 percent of France's GDP. The top two German banks had assets equal to 114 percent of German GDP. In 2011, these figures were 245 percent and 117 percent, respectively. Deutsche Bank alone had an asset footprint of over 80 percent of German GDP and runs an operational leverage of around 40 to 1.55 This means a mere 3 percent turn against its assets impairs its whole balance sheet and potentially imperils the German sovereign. One bank, ING in Holland, has an asset footprint that is 211 percent of its sovereign's GDP. The top four UK banks have a combined asset footprint of 394 percent of UK GDP. The top three Italian banks constitute a mere 115 percent of GDP, and yet Britain seems to get a free pass by the bond markets in comparison to Italy. The respective sovereign debts of these countries pale into insignificance. 56

In the periphery states the situation is no better. Local banks weren’t going to miss out on the same trade, so they bought their own sovereign debt by the truckload. According to a sample of Eurozone banks that underwent stress tests in July 2011, Greek banks hold 25 percent of Greek
GDP in domestic bonds, and Spanish banks hold about 20 percent, and those bonds became increasingly national in terms of ownership through 2012. 57 Remember, these assets don’t all have to go to zero to create a problem. You just have to impair enough to wipe out the bank’s tier-
one capital, which can be as little as 2 percent of its assets, especially when cross-border liabilities and contagion risks are factored in. 58
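
The leverage arithmetic behind the "3 percent turn" claim is worth making explicit: at 40-to-1 leverage, equity is only 2.5 percent of assets, so a 3 percent fall in asset values more than wipes it out. Below is a minimal illustrative sketch in Python; the 40:1 ratio and the roughly 80-percent-of-GDP footprint come from the passage, while the absolute euro figures and the helper functions are hypothetical placeholders, not data.

# Illustrative arithmetic only: how thin an equity buffer a 40-to-1 leverage ratio implies,
# and why a small fall in asset values can wipe it out. All figures are placeholders.

def equity_buffer(assets: float, leverage: float) -> float:
    """Equity implied by a simple assets-to-equity leverage ratio."""
    return assets / leverage

def loss_from_decline(assets: float, decline_pct: float) -> float:
    """Loss in asset value from a given percentage decline."""
    return assets * decline_pct / 100.0

if __name__ == "__main__":
    assets = 2_000.0   # hypothetical balance sheet, billions of euros
    gdp = 2_500.0      # hypothetical sovereign GDP, billions of euros
    leverage = 40.0    # assets-to-equity ratio cited in the passage

    equity = equity_buffer(assets, leverage)   # 50bn, i.e. 2.5% of assets
    loss = loss_from_decline(assets, 3.0)      # the "3 percent turn"

    print(f"Asset footprint: {assets / gdp:.0%} of GDP")
    print(f"Equity at {leverage:.0f}:1 leverage: {equity:.0f}bn ({equity / assets:.1%} of assets)")
    print(f"Loss from a 3% asset decline: {loss:.0f}bn -> bank insolvent: {loss > equity}")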

In sum, in each country, and across the Eurozone as a whole, European banks have become too big to bail. No sovereign, even with its own printing press, can bail out a bank with exposures of this magnitude. If you have signed up to a currency arrangement whereby you gave yours away, you really are in trouble. As Simon Tilford and Philip Whyte put it bluntly, the Eurozone crisis is "a tale of excess bank leverage and poor risk management in the core … [and] the epic misallocation of capital by excessively leveraged banks." 59

From the start the euro was a banking crisis waiting to happen. One trigger for the crisis was Greece and the discovery that the PIIGS were pushing up yields, as detailed earlier.
The other trigger was a series of events that happened, just as we saw in the United States in 2008, deep within the banking system itself and that centered upon the use of government bonds as repo collateral for funding banks. Once again, what is portrayed as a public-sector crisis is, at
its core, an almost entirely private-sector (banking) problem.

Collateral Damage, European Style

So let’s imagine that you are a big universal (retail and investment together) European bank and you have executed a giant moral hazard trade against EU sovereigns; or, you just really believe in the ECB’s powers. To profit from this, you need to run very high levels of leverage. Where do
you get the money to run such an operation? Generally speaking, banks can fund their activities in two ways, by increasing deposits and issuing equity, on the one hand, and by increasing debt, on the other. If equity is issued, the value of each share falls, so there is a limit at which equity
issuance becomes self-defeating. Raising deposits, especially in an economy in which savings rates are falling, also has limits. Debt has no such limit.

So where could European banks find huge amounts of cheap debt to fund themselves? The repo markets we encountered in chapter 2 were one place, but this time they were located in London rather than New York. 60 US money-market funds that were looking for positive returns in a low-interest-rate world after 2008 were the other. After all, those conservative European banks were nowhere near as risky as those US banks, so why not buy lots of their short-term debt? The ECB will never let them fail, right?


As the 2000s progressed, those supposedly conservative European banks increasingly switched out of safe, local, deposit funding and loaded up on as much short-term internationally sourced debt as they could find. After all, it was much cheaper than getting your hands on granny’s
savings and paying her relatively high interest for the privilege. So much so that according to one study, by “September 2009, the United States hosted the branches of 161 foreign banks who collectively raised over $1 trillion dollars’ worth of wholesale bank funding, of which $645 billion
was channeled for use by their headquarters.” 61 US banks at this time sourced about 50 percent of their funding from deposits, whereas for French and British banks the comparable figure was less than 25 percent. 62 By June 2011, $755 billion of the $1.66 trillion dollars in US money-
market funds was held in the form of short-term European bank debt, with over $200 billion issued by French banks alone. 63 Just as in 2008, these banks were borrowing overnight to fund loans over much longer periods.

Besides being funded via short-term borrowing on US markets, it turned out that those conservative, risk-averse European banks hadn’t missed the US mortgage crisis after all. In fact, over 70 percent of the SPVs set up to deal in US “asset backed commercial paper” (mortgages) we
encountered in chapter 2 were set up by European banks. 64 The year 2008 may have been a crisis in the US mortgage markets, but it had European funders and channels, and most of those devalued assets remain stuck on the balance sheets of European banks domiciled in states with
no printing presses. By 2010 then, just as the sovereign debt yields on the right-hand side of figure 3.2 began to really move apart, the ability of European banks to fund themselves through short-term US borrowing collapsed in a manner that was an almost perfect rerun of the United
States in 2008.

Recall that in the United States in 2008, the collateral being posted for repo borrowing began to lose value. As such, the firms involved had to post more collateral to borrow the same amount of money, or they ran out of liquidity real fast, which is what happened to the US banking
system. The same thing began to happen in Europe. While mortgage-backed securities, the collateral of choice for US borrowers in the US repo markets, were AAA-rated, for European borrowers in London the collateral of choice was AAA-rated European sovereign debt. Just as US
borrowers needed a substitute for T-bills and turned to AAA mortgage bonds, so European borrowers had too-few nice, safe German bonds to pledge as collateral since the core banks were busy dumping them for periphery debt. So they began to pledge the periphery debt they had
purchased en masse, which was, after all, rated almost the same, a policy that was turbocharged by an EC directive that "established that the bonds of Eurozone sovereigns would be treated equally in repo transactions" in order to build more liquid European markets. By 2008, PIIGS debt
was collateralizing 25 percent of all European repo transactions. 65 You can begin to see the problem.

As investors fretted about European sovereigns, credit-ratings agencies started to downgrade those sovereigns, and their bonds went from AAA to BBB and worse. As such, you needed to pledge more and more of these sovereign bonds to get the same amount of cash in a repo.

Unfortunately, with around 80 percent of all such repo agreements using European sovereign debt as collateral, when those bonds fell in value, the ability of European banks to fund themselves and keep their highly levered structures going began to evaporate. 66
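
The repo mechanics driving that funding squeeze can be sketched in a few lines: the cash a bank raises equals the collateral's market value minus a haircut, so when pledged bonds are downgraded, their price falls and the haircut rises, and the same portfolio funds less and less. A minimal illustration follows; the prices and haircuts are invented for illustration, not market data.

# Minimal repo-funding sketch: cash raised = face value * market price * (1 - haircut).
# Prices and haircuts are hypothetical; the point is the direction of the effect.

def repo_funding(face_value: float, price: float, haircut: float) -> float:
    """Cash raised by pledging bonds at a given price (per unit of face value) and haircut."""
    return face_value * price * (1.0 - haircut)

if __name__ == "__main__":
    face_value = 100.0  # billions of euros of periphery sovereign bonds pledged

    before = repo_funding(face_value, price=1.00, haircut=0.02)  # AAA-style terms
    after = repo_funding(face_value, price=0.85, haircut=0.15)   # post-downgrade terms

    # Extra face value needed, at the new terms, to raise the same cash as before.
    extra = before / (0.85 * (1.0 - 0.15)) - face_value

    print(f"Funding before downgrade: {before:.1f}bn")
    print(f"Funding after downgrade:  {after:.1f}bn")
    print(f"Extra collateral needed to raise the same cash: {extra:.1f}bn of face value")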

Banks with healthy assets might have been able to withstand this sudden loss of funding, but as well as US mortgages cluttering up their books, European banks were once again stuffed full of other rapidly devaluing periphery assets. The exposures were astonishing. By early 2010, Eurozone banks had a collective exposure to
Spain of $727 billion, $402 billion to Ireland, and $206 billion to Greece. 67 French and German bank exposures to the PIIGS were estimated in 2010 to be nearly $1 trillion. French banks alone had some $493 billion in exposures to the PIIGS, which was equivalent to 20 percent of French
GDP. Standard & Poor’s estimated French exposures to be as high as 30 percent of GDP, all told.

Again, the vast majority of these exposures were private-sector exposures—property lending in Spain, and the like. The sovereign component of these figures was comparatively small. But what mattered was how levered these banks were and how important those sovereign bonds were
for funding these banks. Once these bonds lost value, European banks increasingly found themselves shut out of US wholesale funding markets at the same time that US money markets began dumping their short-term debt. What happened in the United States in 2008, a general
“liquidity crunch, ” gathered pace in Europe in 2010 and 2011. It was only averted by the LTROs of the ECB in late 2011 and early 2012. This unorthodox policy of quasi-quantitative easing offered only temporary respite. Paul De Grauwe called it “giving cheap money to trembling banks
with all the problems this entails.” 68 The results were that within two months of the first LTRO by the ECB, sovereign bond yields were rising again, and the banks those sovereigns were responsible for now had even more sovereign debt on their balance sheets—a fact not lost on
investors now worrying about Spain and Italy. Another continent, another banking crisis, and yet all we heard about was profligate sovereigns spending too much—why?

You Can Run a Gold Standard in a Democracy (for a While)

The short answer is that with Europe's banks levered up beyond anything the US banks had managed, with asset footprints that were multiples of their sovereigns' GDPs, and with balance sheets that are both seriously impaired and seriously opaque, the banks' problems become, once again, the states' problems. But unlike the US case (and the UK case), the states in question cannot even begin to solve those problems since they gave up their printing presses while letting their banks become too big to bail. Recognizing this, when France's AAA status was threatened in 2011, the bond markets were not worried about the ability of the French state to pay the pensions of retired teachers in Nancy. They were worried, quite reasonably, about its ability to deal with any of its big three banks (Societe Generale, BNP Paribas, and Credit Agricole) going bust, especially in an environment of unrelenting austerity. If states cannot inflate their way out of trouble (no printing press) or devalue to do the same (no sovereign currency), they can only default (which will blow up the banking system, so it's not an option), which leaves only internal deflation through prices and wages—austerity. This is the real reason we all have to be austere. Once again, it's all about saving the banks.

But You Cannot Tell the Truth about Why You Are Doing It

So, why, then, do European governments play the great bait and switch and then blame it all on sovereigns that have spent too much? Basically, it’s because in a democracy you can hardly come clean about what you are doing and expect to survive. Imagine a major European politician
trying to explain why a quarter of Spain needs to be unemployed, and why the whole of periphery Europe needs to sit in a permanent recession just to save a currency that has only existed for a decade. What would it sound like? I suspect that it would go something like this.

To: The Voting Public

From: Prime Minister of Eurozone Periphery X

My fellow citizens. We have been telling you for the past four years that the reason you are out of work and that the next decade will be miserable is that states have spent too much. So now we all need to be austere and return to something called “ sustainable public finances.” It is,
however, time to tell the truth. The explosion of sovereign debt is a symptom, not a cause, of the crisis we find ourselves in today.

What actually happened was that the biggest banks in the core countries of Europe bought lots of sovereign debt from their periphery neighbors, the PIIGS. This flooded the PIIGS with cheap money to buy core country products, hence the current account imbalances in the Eurozone that
we hear so much about and the consequent loss of competitiveness in these periphery economies. After all, why make a car to compete with BMW if the French will lend you the money to buy one? This was all going well until the markets panicked over Greece and figured out via our
“kick the can down the road” responses that the institutions we designed to run the EU couldn’t deal with any of this. The money greasing the wheels suddenly stopped, and our bond payments went through the roof.

The problem was that we had given up our money presses and independent exchange rates—our economic shock absorbers—to adopt the euro. Meanwhile, the European Central Bank, the institution that was supposed to stabilize the system, turned out to be a bit of a fake central bank.
It exercises no real lender-of-last-resort function. It exists to fight an inflation that died in 1923, regardless of actual economic conditions. Whereas the Fed and the Bank of England can accept whatever assets they want in exchange for however much cash they want to give out, the ECB is
both constitutionally and intellectually limited in what it can accept. It cannot monetize or mutualize debt, it cannot bail out countries, it cannot lend directly to banks in sufficient quantity. It’s really good at fighting inflation, but when there is a banking crisis, it’s kind of useless. It’s been
developing new powers bit-by-bit throughout the crisis to help us survive, but its capacities are still quite limited.

Now, add to this the fact that the European banking system as a whole is three times the size and nearly twice as levered up as the US banking system; accept that it is filled with crappy assets the ECB can’t take off its books, and you can see we have a problem. We have had over twenty
summits and countless more meetings, promised each other fiscal treaties and bailout mechanisms, and even replaced a democratically elected government or two to solve this crisis, and yet have not managed to do so. It’s time to come clean about why we have not succeeded. The
short answer is, we can’t fix it. All we can do is kick the can down the road, which takes the form of you suffering a lost decade of growth and employment.

You see, the banks we bailed in 2008 caused us to take on a whole load of new sovereign debt to pay for their losses and ensure their solvency. But the banks never really recovered, and in 2010 and 2011 they began to run out of money. So the ECB had to act against its instincts and flood
the banks with a trillion euros of very cheap money, the LTROs (the long-term refinancing operations), when European banks were no longer able to borrow money in the United States. The money that the ECB gave the banks was used to buy some short-term government debt (to get our
bond yields down a little), but most of it stayed at the ECB as catastrophe insurance rather than circulate into the real economy and help you get back to work. After all, we are in the middle of a recession that is being turbocharged by austerity policies. Who would borrow and invest in
the midst of that mess? The entire economy is in recession, people are paying back debts, and no one is borrowing. This causes prices to fall, thus making the banks ever more impaired and the economy ever more sclerotic. There is literally nothing we can do about this. We need to keep
the banks solvent or they collapse, and they are so big and interconnected that even one of them going down could blow up the whole system. As awful as austerity is, it’s nothing compared to a general collapse of the financial system, really.

So we can’t inflate and pass the cost on to savers, we can’t devalue and pass the cost on to foreigners, and we can’t default without killing ourselves, so we need to deflate, for as long as it takes to get the balance sheets of these banks into some kind of sustainable shape. This is why we
can’t let anyone out of the euro. If the Greeks, for example, left the euro we might be able to weather it, since most banks have managed to sell on their Greek assets. But you can’t sell on Italy. There’s too much of it. The contagion risk would destroy everyone’s banks. So the only policy
tool we have to stabilize the system is for everyone to deflate against Germany, which is a really hard thing to do even in the best of times. It’s horrible, but there it is. Your unemployment will save the banks, and in the process save the sovereigns who cannot save the banks themselves,
and thus save the euro. We, the political classes of Europe, would like to thank you for your sacrifice.

This is a speech that you will never hear because, if it were given, the politician making it would
be putting a resume up on Monster.com ten minutes later. But it is the real
reason we all need to be austere. When the banking system becomes too big to bail, the
moral hazard trade that started it all becomes systemic “immoral hazard”—an extortion racket aided and abetted by
the very politicians elected to serve our interests. When that trade takes place in a set of institutions that is incapable of resolving the crisis it
faces, the result is permanent austerity.
Conclusion: The Euro’s Hubris and Hayek’s Nightmare

Jay Shambaugh sees the euro as caught in three interlocking crises, each of which worsens the others. 69 He sees the Euro Area’s banking problem compounding the sovereign debt problem, which in turn (via austerity) hurts growth in the name of fostering competitiveness, which is
undermined by deflation. 70 That about sums it up. But to his diagnosis and this one, which stresses the role of the banks first and foremost, one can add many more layers of misery. The LTRO money that was supposed to give the banks time to restructure and restart lending was used
instead as catastrophe insurance. As periphery credit conditions worsened, capital flight from the periphery to the core (when Greek savers moved their accounts to German banks, for example), huge financial imbalances (like the trade imbalances in figure 3.1) appeared in the accounts
of the so-called Target Two payments system of the Eurozone that threaten the German central bank with billions in foreign obligations to periphery central banks. 71 With no EU-wide fiscal authority, only a monetary one, there are no shock absorbers in the system of the type you would
find in, for example, the United States. When a firm closes in Michigan and moves to Mississippi, capital flows out of one state and into another, but taxes raised in Connecticut smooth the adjustment through federal transfers. Labor is also much more mobile in the United States than in
the EU, and America also lets its cities die, thus speeding adjustment. None of this happens in Europe. It turns out that cross-border borrowing in euros is, when bond markets reflect true risk premiums, just like borrowing in a foreign currency, with the result that banks increasingly want
to match local loans with local assets. 72 Although there is no exchange-rate risk to cover, if your sovereign’s yields go up and your parent economy deflates, then your ability to pay back your loans declines as if you were making payments in a depreciating currency.

I could go on listing the way that the euro emerges every day to be an ever more creative financial doomsday weapon. But what makes the situation in Europe terrible at its core aren’t just these glaring holes in its institutional design or the immoral hazard posed by its banks. Rather, it’s
what might be termed the “epistemic hubris” behind the whole euro monetary project, which again comes down to the power of a set of economic ideas that blinds us to the effects of our institutional designs, just as it did in the US case.

No one knows the future, but we do know that it will have some shocks in store for us. We can imagine those shocks to be exogenous, and design mechanisms to compensate for them, such as well-conceived welfare states. 73 Or, we can view them as endogenous, always and
everywhere the result of our own bad policy choices. If we take the latter view, and we do not know what the future may have in store, as well as take policy tools away from democratically elected politicians, we may want to try to make the future conform to our preferences. So, how
would we do this?

Imagine the future as a space of unrealized possibilities. You can accept that uncertainty and roll with it, or you can try to make the future behave within certain specified parameters, narrowing the space of possible futures. The way you do that is with rules. So long as they are clearly
stated and everyone follows them, then according to this logic, the future will unfold, as you would like to see it, in accord with the rules. This is ordoliberalism gone mad, as well as the logic behind the euro. From the Maastricht convergence criteria to the Stability and Growth Pact to the
promised new fiscal treaty that will solve all the euro’s problems once and for all (except that it will not), it’s all about the rules. But those rules only ever apply to sovereigns. There was of course a worry that some states may not follow the rules, so more rules were put in place. But there
was never much attention paid to the possibility that private actors, such as banks, would behave badly. Yet this is exactly what happened, and the EU is still blaming sovereigns, tying them down with new rules, and insisting that this will solve the problem. We can but think again about
the old adage that drunks only look for their keys under the lamppost because that’s where the light is.

Friedrich Hayek is often seen as the father of neoliberal economics. 74 It’s not an unfair reading, and he was certainly no fan of the state. But what he really railed against was the epistemic arrogance of the planner who assumes that he can anticipate the future better than a local actor
whose knowledge is much more fine grained. Although the Hayekian critique is usually applied to postwar Keynesian planners, today it is more germane to EU planners who think that by setting up rules they can make the future conform to the probability space that they want to see. As
Paul De Grauwe put it beautifully, “This is like saying that if people follow the fire code regulations scrupulously there is no need for a fire brigade.” 75
By looking only at inflation rates, budget deficits, and state debts, EU planners failed to see the growth of a banking system that is too big to bail. The price of their hubris is the belief among European elites that only a decade or more of unremitting austerity will suffice to prop them up, perhaps at the ultimate cost of undermining the European political project. This may be the true price of saving the banks. Not just the end of the euro, but perhaps the end of the European political project itself, which would be the ultimate tragedy for Europe.

Extinction
Barfod 19 (Mikael Barfod – Visiting Professor at the University of Huddersfield, MA in
Government from Essex University. “Can the European Union Save Multilateralism?”
American Diplomacy. May 2019. http://americandiplomacy.web.unc.edu/2019/05/can-
the-european-union-save-multilateralism/) [language modified]
Science increasingly claims that we will hardly survive on this planet unless we can
agree on a set of common solutions to its main problems. Mankind [Humanity] has basically
got its back against the wall when it comes to climate, energy, water and other
resources, pandemics, pollution, regulation of technology, numerous socio-economic
challenges such as growing inequality and migration. Without multilateral
organizations seeking global solutions, it will be almost impossible for the countries of
the world to find common solutions to international or planet-wide problems, which no
country can handle on its own.
Donald Trump entered the international scene in 2017. His electoral promise of “America First” is now supported by a philosophy that “national
sovereignty rules”. He sees international relations not as sustained international cooperation for mutual benefit but rather as a zero-sum game.

Mr Trump has threatened Europe, China, and other countries with trade wars and has shown little concern for human rights abuses by
authoritarian regimes around the world. He has also shown contempt and disregard for the institutions and principles of both NATO and the
EU. He has directed US withdrawal from a host of multilateral institutions and programmes, including:

 The Paris climate deal


 The Trans-Pacific Partnership
 UN female reproduction programmes
 The Iran nuclear deal
 The UN Global Compact for Migration.
 The Universal Postal Union (UPU) (dating from 1874)
Trump has also cut back US aid to the UN High Commissioner for Refugees and support to the Relief and Works Agency for Palestine. There will
probably be more to come.

The UN founding fathers started during the chaos of World War II to rebuild multilateralism into the shape of the UN. But today, who can effectively replace a USA withdrawing from its multilateral commitments? There is in my opinion only one actor that can aspire to fill the vacuum currently left by the US, the European Union. There are several reasons why:


1. The EU is committed to effective multilateralism. Support for the UN remains a cornerstone of
European Union policy. The Union’s unwavering political support of the UN is an expression of its commitment to effective
multilateralism.
2. The EU is the only fully participant non-state actor in the UN.
3. The EU is the largest financial contributor to the UN. Collectively, the European Union and its
Member States remain by far the largest financial contributor to the UN, providing 30% of all contributions to the budget and 31% of
peace-keeping activities in addition to substantial contributions towards voluntary funding.
4. The EU supports the UN reform agenda. The European Union has actively supported UN reform with the
idea that the UN should be better equipped to face such modern threats as
irregular conflicts, global pandemics, climate etc. The reform debate, which is still ongoing, shows a
clear tendency towards regional/sub-regional representation to boost the legitimacy of the UN and provide broader input to the
organization. Some may object that the European Union has been hampered by the lack of a common position among EU Member
States on the future of the UN Security Council (UNSC), where two member-states, UK and France, currently have permanent seats
and one, Germany, is desperate to get one. There is an obvious solution: the European Union is the best choice for representing its
member states on the UNSC and the European region in accordance with well-defined coordination procedures.
For all its flaws, the UN remains a representative, legitimate, and global structure,
uniquely suited to serve as a forum for mitigating the world’s problems. The European Union
understands this and, due to its self-interest, is likely to continue exerting significant pressure on the UN to reform. The European Union could in turn be trusted to encourage the US to return to its traditional international role in the future. The EU might have its own internal squabbles at times. But which
other international actor could aspire to keep multilateralism on track when we need it
the most?

Turns everything
Fielder 18 (Lauren Fielder, Professor Fielder is Assistant Dean of Graduate and
International Programs and Director of the Institute for Transnational Law at the
University of Texas at Austin. “Is Nationalism the Most Serious Challenge to Human
Rights? Warnings from BREXIT and Lessons from History,” 53 Tex. Int'l L.J. 212)
The political might of the EU is essential for peace and stability in the world. Brexit "fractures the Western alliance and weakens NATO solidarity and resolve." 224 The politics of scale and multilateralism foster peace and human rights with regard to third countries. 225 This can be seen in the work that the EU is currently doing, albeit imperfectly, in trying to de-escalate the tension between Iran and Saudi Arabia, a source of the conflict brewing in Yemen. 226 The clearest example of these politics of scale is the essential role of the EU in aiding the peaceful transition of former Eastern Bloc states into largely democratic and open societies upon the end of the Cold War. 227 The entry requirements into the EU reflected this European identity, including democracy, the rule of law, human rights, and respect for minorities. 228 However, the transition to democracy is not finished: "It still could (with the enthusiastic support from Moscow) go into reverse." 229 Putin's Russia has a vital interest in the breakup of the EU, 230 and we see that the threat of nuclear war is not far behind us. 231
Further, current destabilization in parts of the Balkans is reminiscent of past patterns that preceded violence in the region. 232

The end of the European Union could return Europe to, as one writer describes, the "dark days of poisonous tribal hatreds" in which destructive forces could unleash the undoing of 70 years of statesmanship. 233 Indeed, for the last seven decades, the European Union has largely been a "place of peace, stability, prosperity, cooperation, democracy, and social harmony." 234 However, "[we would] be wrong to assume the permanence of European political and economic stability … . Across the grand sweep of European history, countries and empires disintegrating into smaller governing units or being violently subsumed into larger empires is the norm." 235

The EU is not just an international economic organization; it is an organization created from the destruction brought about by two World Wars and designed to promote peace and prevent conflict. 236 European integration is doubtless problematic but "the alternative is so much worse." 237 The history of Europe is fraught with violent conflict: "War, twice in the Twentieth Century and for ages previously, has plagued the European continent." 238 Conflict stretches back across the entire history of Europe. There has been almost an unbroken chain of war from the fifteenth century to World War II fought over family rivalries, religion, deep hatreds, and territorial expansion. In the fifteenth century, the War of the Roses was fought over a dispute over title to the English throne. 239 In the sixteenth century,
there were religious wars in Austria, Germany, France, and Spain over Catholicism and Protestantism. 240 The seventeenth century included the Thirty Years' War - a war that started over religion, but expanded to include
territorial acquisition - the English Civil War, France's Dutch wars that were fought over frontiers, and the War of the League of Augsburg, which was possibly the first war over the Alsace-Lorraine. 241 In the eighteenth century,
European countries fought to block the coalition of France and Spain in the War of Spanish Succession; and, also fought in the War of Austrian Succession, the Seven Years' War, and the French Revolution. 242 In the nineteenth
century, there were the Napoleonic Wars to build an Empire, the second and third French Revolutions, the Wars for Italian Unification, the Crimean War - which was the first modern war, with massive casualty rates, mechanized
warfare, and modern weapons - and the wars for German unification. 243 Finally, in the twentieth century, there was the Russian Revolution, the First and Second Balkan Wars, World War I, and World War II. 244

The plan is modeled by Europe --- antitrust law key


Gerber 10 (David J. Gerber – Distinguished Professor of Law Chicago-Kent College of
Law. <KEN> “U.S. Antitrust Law: Models and Benefits,” Chapter 5 in “Global
Competition: Law, Markets, and Globalization,” Oxford University Press. ISBN 978–0–19–
922822–5)
D. Looking at US Antitrust: US Antitrust as a Model

US law and US antitrust experience have played central roles in the development of competition
law virtually everywhere, and they are central to global competition law
development. The US system is referred to as a 'model,' and this model role often has shaped the dynamics of global competition law development. Many foreign officials and commentators assume that they should or must follow it.⁵⁰ Others have been sceptical that it is appropriate for their own circumstances.

I here use the term 'model' in a broad sense to refer to an identifiable set of legal principles and institutions to which others commonly refer. In this sense, US antitrust law is a model, because
it is commonly referred to as such. As we shall see, a model can have many functions, and can be used in a variety of ways. As we investigate the role played by US antitrust, it is important to
emphasize that its roles are typically based on perceptions and images rather than extensive knowledge of the US system. The term does not necessarily imply a positive assessment of the
identified characteristics.

1. Distinguishing among roles

The US model plays several roles and performs several functions. Distinctions among them are seldom clearly drawn, but failure to make them can distort analysis of the dynamics of global competition law today as well as assessment of future policies. At a basic level, the US model is important because it is a common point of reference for virtually all who participate in the global competition law arena. Some have studied US antitrust formally, but most have merely picked up pieces of information about it. All have at least some idea of some of its features. This dimension of the US role often goes unnoticed, but it frames assessments of the US system and anchors assumptions about the directions of global competition law. It

is important to identify such cognitive factors, because many are unaware of them, and thus their influence can easily be underestimated.
The US model's role as a common reference point is associated with its role as a heuristic—a cognitive device for thinking about complicated issues. Basic images of US antitrust law often orient discussions of competition law issues and supply a language for those discussions. Discussions of global competition law often contain comments such as 'we're moving toward a US system' or 'this is like the US model.' In this way, the US model simplifies and structures complex information and facilitates discussion of competition law issues among participants who may share few other points of reference.

Use of US antitrust as a shared point of reference easily blends into a related use in which it serves as a standard of comparison and a criterion for evaluating competition law systems.
Comments such as 'country X's system is still immature or undeveloped in comparison to the US antitrust system' are common. The assumption here is that the US system is not only a point
of reference, but it also represents a better or more mature system that others should emulate.

The US model also plays more specifically normative roles. It is often used as a source of authority
for claims about what competition law should be. In this use, a proponent of a particular viewpoint or
decision in a foreign system seeks to strengthen her argument by showing that she is
advocating a position from US law. US antitrust law here represents a form of normative ‘authority’ that can be used to support claims in other
antitrust systems. Similarity to the US system in and of itself supports such claims. No further argument is required. The low cost of arguments based on this type of authority makes them
particularly attractive for use by those with limited resources and those for whom lack of experience or other constraints make more sophisticated analysis difficult.

Finally, US experience also serves as a source of data. Here the focus is on the evolution of the US model rather than on the model itself. The long history of US antitrust law makes it a valuable
source of antitrust experience. There is an unparalleled depth of judicial opinions spanning more than a century, and many contain far more material about the practices involved than is
available in other systems. In addition, there is a rich body of scholarly writing about antitrust law, and it includes a wide variety of theoretical perspectives. Importantly, the material is
available in English, and it is thus far more accessible than are other rich sources of competition law experience such as German experience in the twentieth century.

2. Evolution of the model’s functions

These functions are intertwined, and their relative importance has changed over time, generally paralleling the changing role of the US in global economic and political affairs in the twentieth
century. As noted in chapter two, reviews of the US antitrust system prior to the Second World War tended to be negative, and they appear to have often been based on very little actual
knowledge of the system. Comments often focused on the then ‘radical’ practice of prohibiting certain conduct that was deemed anticompetitive. European economic thinking and political
realities made such a prohibition seem unwarranted and unrealistic. Moreover, the US prohibition system was portrayed as harmful, because it forced firms to merge rather than cooperate,
thus intensifying the concentration of industry, a spectre that haunted Europe during the early decades of the twentieth century.

In the aftermath of the Second World War, European views changed dramatically. The US was now in a dominant position in the market-oriented part of the world, and it promoted antitrust as a tool for fostering democracy and peace and for generating wealth. Many forgot that there had been a different model of competition law in Europe in the 1920s, and they came to identify competition law with its US variant. Over the next forty years, the US model was effectively imposed on transnational markets, because its courts and institutions applied or threatened to apply US antitrust law anywhere, and US hegemony generally blunted resistance to its imposition. This meant that scholars, lawyers and officials involved with competition law
throughout the world had little choice but to learn at least something about US antitrust law and to respect its potential impact.

The fall of the Soviet Union and the successes of the US economy in the 1990s opened another chapter in the evolution of this model role. The
return of global markets and their new prominence brought renewed attention to competition law, and
much of the attention underscored the model role of US antitrust law. US officials, lawyers and economists have taken leading roles in the internationalizing networks that have formed during this period. They have
promulgated US antitrust thinking, touting it as an important factor in building economic
progress and political stability in countries previously operating on non-market principles. Officials in the many new competition law systems have
needed technical assistance, and the US has been willing and able to provide it. All of this reinforces the image
of US antitrust as the ‘leader’ in the field.

3. Influences and incentives

Why have others sought to know, use and follow the US antitrust model? Isolating these factors allows us to assess their impact
on current dynamics as well as on future strategies. One factor is the status of US antitrust as the oldest and best

established antitrust system in the world. This 'father' image itself tends to confer status and authority on it. A decision maker
outside the US, particularly one with a little developed competition law, can often support a position or claim by identifying it as
a borrowing from the world’s oldest and most ‘mature’ system. The claim is thereby sanctioned by time and
experience. A more refined version of this claim is that the long history of US antitrust does not by itself justify its authority, but that US antitrust has undergone a long process of trial and
error learning that has revealed mistakes and produced a better system. US writers are fond of using this latter version of the claim, and often fervently believe that US experiences in the
1950s and 1960s show the follies of older and less economically-based versions of competition law.

US economic successes, particularly in the 1990s and early 2000s, created another set of incentives to follow the US
model. For many, the soaring US economy of the period appeared to confirm the superiority of US economic
policy. Antitrust is part of that economic policy package and thus derives status and authority from
its success. Ideological factors have sometimes enhanced this attractiveness and augmented the authority it provides. US antitrust is a symbol of ‘US-style capitalism’ with its
resistance to government interference with business, and thus those who support this view of the relationship between government and markets have tended to welcome and support the
introduction of US antitrust principles and practices into their own systems. For almost two decades prior to the financial crisis that began in 2008, governments virtually everywhere sought to
emulate at least portions of this policy package.

US antitrust law is often also seen as a surrogate for an international standard. Discussions of economic
globalization often seek international standards, and this has been particularly prominent in discussions of competition law. A competition law decision maker can expect support for a claim to the extent that it represents 'what the others
are doing,’ ie an international standard. Although there is no international standard, many assume that US power will require that US antitrust
law serve that function.

US economic and political power sometimes also directly supports the influence of US antitrust law. These issues are seldom discussed, but their influence can be extensive. One form of power is governmental. The US government has actively sought to influence the development of foreign systems. Sometimes this is overt and well-publicized, as, for example, during the early 1990s when the US government pressured the government of Japan to increase enforcement of its antitrust laws, thereby hoping to increase the access of US firms to the Japanese market.
More commonly, pressure is exerted in the context of aid and technical assistance programs, where a country can expect to gain US support and/or assistance by conforming its conduct to the
wishes of the US authorities.

Private power and influence play somewhat similar, less obvious, but potentially more pervasive roles. Here there is no direct use of governmental power. Instead, the power is
‘soft’—ie the capacity to induce others without coercion to make decisions that correspond to the interests of the private parties
involved.⁵¹ One forum for this exercise of soft power is the international competition law conferences

that have become increasingly common since the mid-1990s. These conferences provide fora where lawyers, economists and public officials present their views and experiences, make contacts, and often seek to influence each other. In these contexts, US officials and lawyers have played leading roles. They often host the most prestigious of these conferences, and they are often featured speakers.⁵² As a group, their prominence is based on many factors, including their experience in international competition law
matters, the richness of US scholarship, and the practical importance of US antitrust enforcement throughout the
world. US lawyers and economists also benefit from the weight and influence of the institutions with which they are associated. Especially since the 1990s, very large international law firms
have formed, primarily to provide services to large, internationally-structured business firms. These firms often commit significant resources to influencing foreign decision makers to favor the
interests of their clients. This creates incentives for lawyers, officials and economists from other countries to seek contacts with them for their own benefit, eg through the potential for client
referrals and so on. Large multinational corporations represent a potentially significant source of income for lawyers and consultants in the competition law field. These factors can also
influence the literature of antitrust.

The EU empirically models specific details of US antitrust in banking


Brandenburger & Matelis 14 (Rachel Brandenburger – recognised internationally as a leading antitrust and competition law
and policy advisor, Senior Advisor to Hogan Lovells, non-governmental advisor to the International Competition Network, a member of the
ABA's Section of Antitrust Law's International Task Force, and sits on the editorial boards of two international antitrust publications. Joseph
Matelis – served in several positions at the US Department of Justice's Antitrust Division, including Chief Counsel for Innovation and Counsel to
the Assistant Attorney General in charge of the Antitrust Division. At the Division, he developed and implemented competition policy on a wide
range of matters, including the 2010 Horizontal Merger Guidelines, the 2011 Policy Guide to Merger Remedies, and other investigation
procedures. <KEN> "From Philadelphia National Bank to Global Guidelines," Competition Law International. Vol. 10. No. 2. October 2014. https://heinonline.org/HOL/LandingPage?handle=hein.journals/cmpetion10&div=20&id=&page=)

Fifty years ago, the Supreme Court handed down its opinion in Philadelphia National Bank. The enduring effect of the opinion on US merger policy is well recognised. Less well recognised is the considerable, if indirect, influence that Philadelphia National Bank has also had on merger policy outside the United States. That international influence would have been inconceivable when the Supreme Court gave its opinion. Fifty years ago, barely a handful of jurisdictions aside from the United States had an antitrust or merger regime; today, there are over 100 such regimes around the world. This article briefly traces that important if indirect influence.

In 1963, when the Supreme Court decided Philadelphia National Bank, it announced what is now known as 'the structural presumption' under which courts generally presume that a merger significantly increasing a market's concentration is illegal:

'Specifically, we think that a merger which produces a firm controlling an undue percentage share of the relevant market, and results in a significant increase in the concentration of firms in that market, is so
inherently likely to lessen competition substantially that it must be enjoined in the absence of evidence clearly showing that the merger is not likely to have such anticompetitive effects.

Citing several leading authorities of the day, the Court explained that the presumption was 'fully consonant with economic theory'.2 With regard to the application of the presumption to the specific facts at issue, the Court stated that the merger 'will result in a single bank's controlling at least 30 percent of the commercial banking business in the four-county Philadelphia metropolitan area' and that '[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30 percent presents that threat.' The Court went on to state that 'whereas presently the two largest banks in the area (First Pennsylvania and PNB) control between them approximately 44 per cent of the area's commercial banking business, the two largest after the merger (PNB-Girard and First Pennsylvania) will control 59 per cent. Plainly, we think, this increase of more than 33 per cent in concentration must be regarded as significant.'4

The US Department of Justice issued its first merger guidelines five years later.' Consistent with the focus on market shares and the presumption of illegality set forth in Philadelphia National Bank, the 1968 Merger Guidelines
delineated market-share thresholds that, if exceeded, would prompt the Department to 'ordinarily challenge' a merger.' The specific shares that would likely prompt a Department challenge under the 1968 Merger Guidelines were
low; for instance, the Department stated in those guidelines that it would ordinarily challenge any merger whereby a firm with more than a 25 per cent share of a market merged with a firm with more than a one per cent share.'
Fourteen years later, in 1982, the Department issued substantially revised merger guidelines. Critically, the 1982 Merger Guidelines retained the presumption construct adopted in Philadelphia National Bank, albeit reworked around HHI (Herfindahl-Hirschman Index) calculations as opposed to market shares. In particular, the Department stated that it was 'likely to challenge mergers' that resulted in post-merger HHI levels above 1,800 points and an increase of 100 points or more. Those HHI thresholds roughly approximated those triggering the presumption of illegality in Philadelphia National Bank, which involved a post-merger HHI of about 2,000 points. The structural presumption was similarly retained in the revisions to the US merger guidelines issued in 1984, 1992, 1997 and 2010, the last of which increased the HHI threshold triggering a presumption to 2,500 points to better reflect actual agency practice.
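As a point of reference only, here is a minimal sketch (in Python, not drawn from the article) of the HHI arithmetic behind these thresholds. The bank market shares in the example are hypothetical illustrations; only the threshold values and the 44 and 59 per cent two-firm figures come from the material quoted above.

# Illustrative sketch only: HHI arithmetic for the structural-presumption screens above.
# Shares are percentages, so the HHI runs from near 0 (atomistic) to 10,000 (monopoly).

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares."""
    return sum(s ** 2 for s in shares)

def structural_presumption(shares, merging_pair, level=1800, delta=100):
    """1982-Guidelines-style screen: post-merger HHI above `level` with an
    increase of at least `delta` (the 2010 revision raised the level to 2,500)."""
    a, b = merging_pair
    post = [s for i, s in enumerate(shares) if i not in (a, b)]
    post.append(shares[a] + shares[b])
    return hhi(post) > level and hhi(post) - hhi(shares) >= delta

# Hypothetical shares loosely echoing the Philadelphia market described above:
# the merging banks hold 20% and 10%, so the merged bank would control 30%.
shares = [20, 24, 10, 15, 8, 8, 7, 5, 3]
print(structural_presumption(shares, (0, 2)))  # True: post-merger HHI of 1,912, increase of 400

# The Court's 'more than 33 per cent' figure is a relative rise in the two-firm
# concentration ratio: (59 - 44) / 44 is roughly 0.34.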

Many non-US jurisdictions have issued merger guidelines since the United States first adopted them. As we have discussed elsewhere,8 many of those non-US merger guidelines share significant similarities with the US merger guidelines, including a focus on market shares and inferences to be drawn from them. The international community, however, generally uses market shares to establish safe harbours or to determine whether the parties can make an abridged or simplified filing, as opposed to presumptions of illegality as in Philadelphia National Bank and the US merger guidelines. Mergers that fall below certain market-share thresholds are functionally presumed to be permissible absent special consideration under the merger guidelines of many non-US jurisdictions. For example, the EU Guidelines state that the Commission is 'unlikely to identify horizontal competition concerns' with transactions that result in an HHI below 1,000, an HHI between 1,000 and 2,000 where the change in HHI is less than 250, or an HHI greater than 2,000 where the change in HHI is less than 150. The Australian Guidelines rely on a single HHI threshold, stating that the agency will be 'less likely to identify horizontal competition concerns' where the post-merger HHI is less than 2,000, or greater than 2,000 where the change in the HHI is less than 100. Similarly, the UK Guidelines state that competitive concerns are 'less likely' when either: (i) the merger increases the HHI by less than 250 when the post-merger HHI is between 1,000 and 2,000; or (ii) the merger increases the HHI by less than 150 when the post-merger HHI exceeds 2,000. The Canadian Guidelines employ the four-firm concentration ratio as their relevant measure of market concentration in coordinated effects cases, indicating that competitive concerns are 'generally' absent when the post-merger share of the four largest firms is less than 65 per cent. In unilateral effects cases, the Canadian Guidelines provide that the Bureau will generally not challenge a merger where the post-merger market share of the merged entity is less than 35 per cent.
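A minimal sketch, in the same spirit, of how those safe-harbour screens can be expressed as simple predicates; the numerical thresholds come from the passage above, while the function names and structure are illustrative only (the Canadian four-firm ratio test is omitted for brevity).

# Illustrative sketch only: the EU, Australian and UK safe-harbour screens quoted above.
# Inputs are the post-merger HHI and the change ('delta') in the HHI caused by the merger.

def eu_safe_harbour(post_hhi, delta_hhi):
    """EU Guidelines: horizontal concerns 'unlikely' within these bands."""
    return (post_hhi < 1000
            or (1000 <= post_hhi <= 2000 and delta_hhi < 250)
            or (post_hhi > 2000 and delta_hhi < 150))

def australian_safe_harbour(post_hhi, delta_hhi):
    """Australian Guidelines: concerns 'less likely' within these bands."""
    return post_hhi < 2000 or (post_hhi >= 2000 and delta_hhi < 100)

def uk_safe_harbour(post_hhi, delta_hhi):
    """UK Guidelines: concerns 'less likely' in the two bands stated above
    (the passage does not describe a separate sub-1,000 band for the UK)."""
    return ((1000 <= post_hhi <= 2000 and delta_hhi < 250)
            or (post_hhi > 2000 and delta_hhi < 150))

# A post-merger HHI of 2,100 with an increase of 90 points falls inside all three
# harbours; raise the increase to 120 and the Australian screen (delta below 100
# above an HHI of 2,000) no longer applies, while the EU and UK screens still do.
print(eu_safe_harbour(2100, 90), australian_safe_harbour(2100, 90), uk_safe_harbour(2100, 90))
print(eu_safe_harbour(2100, 120), australian_safe_harbour(2100, 120), uk_safe_harbour(2100, 120))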

In December 2013, the Irish Competition Authority issued revised Guidelines for Merger Analysis,14 explaining that '(t)hese Guidelines are thoroughly updated based on the Authority's experience and in line with best practices internationally.' Among the changes made were increases to the post-merger HHIs and the change (or 'delta') pre- and post-merger, bringing the thresholds in line with those in the EU Guidelines. The revised Irish Guidelines note that '(m)arket concentration... is not determinative in itself. A high level of market concentration post-merger is not sufficient, in and of itself, to conclude that a merger is likely to lead to an SLC [substantial lessening of competition].' The Guidelines also state '... the HHI is a screening device for deciding whether the Authority should intensify its analysis of the competitive impact of a merger.'

Of particular interest is the Guidance on Substantive Merger Control issued by the German Federal Cartel Office (Bundeskartellamt) in March 2012. German merger control law contains a rebuttable presumption of dominance if certain market share thresholds are met. The Guidance explains that '... the fact that these thresholds have been reached or exceeded is not in itself sufficient proof of high market power or even dominance. The presumption only applies if, after a thorough investigation, neither the existence nor the absence of dominance can be proved (non liquet). The provision is without prejudice to the BKartA's obligation to investigate fully the competitive situation on the relevant market and to prove that all the requirements of dominance have been fulfilled.'

The role of the presumption in German merger law has been explained as resulting from and reflecting certain characteristics of German merger law.2 ' First, the presumption does not mean that the Bundeskartellamt is not
obliged, like other agencies, fully to investigate all the facts that are relevant to the substantive investigation of the merger. Rather, it determines the legal burden of proof, namely that it is the merging parties that bear the burden
of rebutting the presumption if it applies following the Bundeskartellamt's investigation of an individual case. Second, because of the lighter information burden that a German as opposed to, for example, an EU merger filing
imposes on the parties, and also the tighter review timetable, the parties are in a better position than the Bundeskartellamt to know what information is relevant for the investigation of a particular merger. Third, under German
merger law, the courts have full jurisdiction to review the findings of the Bundeskartellamt, whereas the EU courts have less extensive powers of review. The rebuttable presumption enables the German courts to focus their review
and assessment. In sum:

'... the rebuttable presumptions are not really tools for the substantive assessment of mergers, but primarily procedural rules that are necessary to obtain the relevant information from the merging parties in
order to ensure effective merger control. They do not reflect a bias against the legality of mergers, but should be understood as an instrument to ensure that errors are avoided and mergers can be decided on the
basis of all relevant facts (and not on the basis of the burden of proof). They are a functional equivalent to the information requirements under Form CO, mandatory pre-notification, and the standard of review in
EU competition law....'2

Market shares and HHIs are also playing a role in new rules for 'simplified' or 'simple' procedures recently amended or introduced by some agencies for mergers that are generally unlikely to raise competition concerns. For example, under EU rules, companies have been able since 2004 to use a shorter notification form for such mergers, and the Commission is permitted to approve such mergers without a market investigation. At the end of 2013,23 the scope of the EU's simplified procedure was expanded to include more mergers by increasing the relevant market share thresholds as follows (a brief screening sketch follows the list):

* For markets in which two merging companies compete, mergers below a 20 per cent combined market share now qualify for the simplified procedure (previously, the threshold was 15 per cent).

* For mergers where one of the companies sells an input to a market where the other company is active, mergers below a 30 per cent combined market share now qualify for the simplified procedure (previously, the threshold was 25 per cent).

* A new criterion was also introduced for mergers where the combined market share of the two merging companies is between 20 per cent and 50 per cent. If the increment in the HHI resulting from the merger is below 150, the
merger may, at the Commission's discretion, also be assessed under the simplified procedure.
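The screening sketch referred to above is a hedged illustration of how the expanded eligibility test might be expressed; the thresholds are those in the bullets, while the function and parameter names are ours and the discretionary element of the new 20-50 per cent criterion is reduced to a simple boolean.

# Illustrative sketch only: the post-2013 EU simplified-procedure thresholds listed above.
# 'relationship' is 'horizontal' (the parties compete) or 'vertical' (one supplies an
# input to a market where the other is active); shares are percentages.

def qualifies_for_simplified_procedure(relationship, combined_share, delta_hhi=None):
    if relationship == 'horizontal':
        if combined_share < 20:
            return True
        # New criterion: a 20-50% combined share may still qualify, at the
        # Commission's discretion, where the HHI increment is below 150.
        return 20 <= combined_share <= 50 and delta_hhi is not None and delta_hhi < 150
    if relationship == 'vertical':
        return combined_share < 30
    return False

print(qualifies_for_simplified_procedure('horizontal', 18))        # True: below the 20% threshold
print(qualifies_for_simplified_procedure('horizontal', 35, 120))   # True: 20-50% with delta below 150
print(qualifies_for_simplified_procedure('vertical', 28))          # True: below the 30% threshold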

In February 2014, China's Ministry of Commerce (MOFCOM), the antimonopoly authority responsible for merger control, published Interim Provisions on Thresholds for Simple Cases in Concentrations of Undertakings.24 These
Provisions specify circumstances in which a merger may be regarded as a 'simple' case subject to reduced filing requirements. MOFCOM issued implementing rules in April 2014.25 For the purposes of this article, it is interesting to
observe that MOFCOM has identified three circumstances where, on the basis of market share criteria, the merger will be regarded as 'simple':

* In the case of a horizontal merger, if the merging parties have a combined market share of less than 15 per cent of the relevant market.

* In the case of a vertical merger, if the parties' market share is less than 25 per cent in the relevant upstream or downstream market.

* Absent any horizontal or vertical relationship, the market share of the parties in their respective relevant market is less than 25 per cent.

In January 2014, MOFCOM included an analysis of HHIs in its conditional clearance decision in the Thermo Fisher and Life Technologies merger." This is the first time there has been such an analysis in a decision published by
MOFCOM. MOFCOM's analysis focused on market concentration levels and post-merger estimates of potential price increases. MOFCOM identified 13 product markets for in-depth review based on HHI and estimated post-merger
price increases. The combined HHI for the identified products exceeded 1,500 with a post-merger HHI increment of at least 100. The merger was also reviewed by the European Commission, the US Federal Trade Commission, and
several other antitrust agencies around the world.
A second recent MOFCOM decision is also noteworthy for its use of market shares and HHIs. On 17 June 2014, MOFCOM blocked the proposed P3 Network shipping alliance among AP Moeller-MAERSK A/S of Denmark, Mediterranean Shipping Company of Switzerland, and CMA CGM of France, three of the world's largest container-shipping lines.27 The MOFCOM decision focused on the Asia-Europe container liner shipping market. MOFCOM concluded that the three proposed alliance members' combined market share had reached as high as 46.7 per cent and went on to note, apparently attributing each alliance member's share to the proposed alliance itself, that the alliance would result in a post-transaction HHI of 2,240 with an increase of 1,350 from the pre-transaction level.
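A brief sketch of the HHI arithmetic implied by those figures, assuming (as the decision apparently did) that the three members' shares are attributed to the alliance as if it were a single firm; the equal split used below is hypothetical, and only the 46.7 per cent combined share and the reported HHI figures come from the decision.

# Illustrative sketch only: the HHI increase from treating separate firms as one.
# Delta-HHI = (sum of shares)^2 - sum of squared shares, i.e. twice the sum of
# the pairwise products of the combining firms' shares.

def hhi_increment(member_shares):
    total = sum(member_shares)
    return total ** 2 - sum(s ** 2 for s in member_shares)

equal_split = [46.7 / 3] * 3               # hypothetical equal division of the 46.7% share
print(round(hhi_increment(equal_split)))   # about 1,454 - the maximum for this combined share
# MOFCOM's reported increase of 1,350 sits just below that bound, which is what one
# would expect if the three members' actual shares were somewhat unequal.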

Whether expressed as a safe harbour, a condition for use of an abridged or simplified filing, or a presumption of illegality, what is clear is that market shares and concentration thresholds now permeate the international community's discourse about merger policy. That common theme flows from Philadelphia National Bank, which has thereby played an important part in forming merger policy around the world.
1AC – Credit Suisse
Advantage two is Credit Suisse.

Applying antitrust laws to banks overturns the Credit Suisse decision, which broadly
prevented application of antitrust to regulated industries.
Kobayashi & Wright 20 (Bruce Kobayashi – Professor of Law at George Mason
University, senior economist with the Federal Trade Commission, a senior research
associate with the U.S. Sentencing Commission, and an economist with the U.S.
Department of Justice. He recently served as the director of the FTC’s Bureau of
Economics. Joshua D. Wright – University Professor & Executive Director at the Global
Antitrust Institute in Scalia Law School. <KEN> "Antitrust and Ex-Ante Sector
Regulation," The Global Antitrust Institute Report on the Digital Economy 25. November
2020. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3733740) *language
modified
The Court’s Trinko decision also affected the relationship between regulation and antitrust by providing a model for the expansion of the
implied immunity doctrine, which governs the ability of antitrust law to reach conduct subject to federal regulation in the absence of an
antitrust savings clause.74 The Court’s most recent antitrust implied immunity holding came in Credit Suisse v. Billing, 75 decided three years
after Trinko. In Credit Suisse, the Court dismissed a variety of class action antitrust claims brought by investors
against investment banks that had formed underwriting syndicates used to sell securities in connection with an initial public offering
(IPO).76 The Supreme Court, in a 7-1 decision, held that the antitrust claims against the investment banks

were impliedly preempted under a “clear incompatibility” standard.77 The influence of the Court’s
approach in Trinko is apparent, as it adopted an approach to clear incompatibility based on a cost-benefit analysis that focused on the marginal
benefit of antitrust enforcement in the presence of sector regulation.78 The Court relied on a number of factors to find that the marginal
benefit of applying antitrust did not outweigh its costs in this setting, observing that intricate securities-related standards separate encouraged
from outlawed behaviors, that securities-related expertise is needed to properly decide such cases, that “reasonable but contradictory
inferences” may be reached from the same or overlapping evidence, and that there is a substantial risk of inconsistent court results. The Court
then concluded that “these factors suggest that antitrust courts are likely to make unusually serious mistakes.”79

Some have noted that the Court's marginal benefit analysis in Credit Suisse potentially expands the application of implied immunity beyond the bounds set in the Court's previous implied immunity decisions.80 The Court's prior
holdings imply that antitrust law is plainly repugnant if it can disallow conduct that the regulator could authorize under the statute.81 These
decisions were broad in the sense that they required only the potential for a conflict, and did not require that the antitrust laws conflict with
actual implementation of the regulatory statute.82 But the Court’s decisions before Credit Suisse also narrowed the applicability of the doctrine
and facilitated the use of the antitrust laws as a complement to regulation. Implied immunity would not apply to
complementary or overlapping antitrust actions challenging anticompetitive conduct that could not be authorized under the statute.83 Nor
would it apply to duplicative antitrust actions challenging anticompetitive conduct that is also illegal under the regulatory statute.84


Although the Court's Credit Suisse decision did not overrule its earlier precedents in this area, it expanded the implied immunity doctrine beyond its previous boundaries by applying it to a case of overlapping laws,
where challenged conduct (in this case, the tying claim) would be a potential violation of both the antitrust
and securities laws.85 Indeed, the case where the enforcement of antitrust law duplicates the enforcement of securities law would
present a clear case where the marginal benefit of applying antitrust is low. In such a case, any cognizable costs of erroneous antitrust decisions
could easily outweigh these low marginal benefits. In addition, even if overlapping enforcement does not produce conflicting outcomes on
liability, it can, without explicit coordination between the laws and agencies, result in over deterring remedies.86

Setting aside the merits of the Court's holdings, it is clear that Trinko and Credit Suisse advance a “broad regulatory displacement standard for federal antitrust claims in fields subject to regulation."87 A primary
implication of this broad approach to regulatory displacement of antitrust is that the ex-
ante decision to allocate control of competition to sector regulation will take on even

more importance, as such a choice can be a choice to disable [limit] antitrust and its potential function as
a complement to fill gaps left by regulation.88 Moreover, the Court’s marginal analyses in Trinko and Credit Suisse focus on low benefits
and error costs generated by enforcing the antitrust laws, but take the presence of sector regulation as given. But the ex-ante choice between antitrust and
regulation begins with a different perspective in which the potentially significant imperfections and error costs of both regulation and antitrust are taken into
account. In particular, such an approach would ensure that great care be exercised in ensuring that the use of ex-ante regulation is limited to those areas in which
such an approach possesses a comparative advantage.

That causes competition-stifling over-regulation and access-denying under-regulation – only antitrust is goldilocks.
Shelanski 11 (Howard Shelanski – Professor of Law, Georgetown University Law Center. Senior Articles Editor, California Law Review.
<KEN> “The Case for Rebalancing Antitrust and Regulation ,” Michigan Law Review. Vol. 109. Issue 5.
https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1160&context=mlr)

III. CONSEQUENCES FOR THE BALANCE BETWEEN ANTITRUST AND REGULATION


In many cases, the practical effect of Trinko and Credit Suisse will be to impose a reasonable limitation on conceptually weak antitrust claims where regulation specifically addresses the conduct at issue. There are important circumstances, however, in which the effects of those cases will not be so modest and will lead to unintended, harmful consequences. The potential harm will arise

because Trinko and Credit Suisse will also limit antitrust law's ability to complement regulation
where the latter has gaps in coverage or effectiveness (as in the AT&T divestiture case); those cases will limit antitrust in
substituting for regulation where antitrust would be a more targeted and less costly means of

competition enforcement.
Contrary to the Court's presumption,17 in many cases regulation will be more costly than either antitrust
enforcement or a combination of antitrust and regulation would be. In the words of Justice (then Judge) Breyer,

"[A]ntitrust is not another form of regulation. Antitrust is an alternative to regulation and, where feasible,

a better alternative."' Of course, if Congress requires an agency to regulate, policymakers cannot choose antitrust as an alternative.
But antitrust might still be a beneficial supplement even if it is not a full substitute; and in the far more usual case where agencies have some
discretion in the promulgation and enforcement of regulations, the comparative benefits of antitrust as a substitute become important. Even if regulators have authority to regulate, they may decide that forbearance from "gearing up the
cumbersome, highly imperfect bureaucratic apparatus of classical regulation" in favor of
antitrust enforcement will be the better policy choice.'"9 This will be a particularly important option as economic
conditions in the regulated industry change. The case-by-case approach of antitrust
enforcement, which targets specific instances of anticompetitive conduct as they arise, can

usually deal with unique or unexpected factual situations better than can regulatory

rulemaking, which depends more on specifying competitive obligations and prohibitions


prospectively, in advance of actual conduct. After Trinko and Credit Suisse, however, statutory
authority to regulate has become a greater potential barrier to antitrust law as a substitute for
regulation.
This Part begins by discussing the costs and benefits of regulation and then explains why the comparative costs and benefits of antitrust change as an industry moves from monopoly toward competition. It argues that the benefits of regulation diminish as markets become competitive while the costs of regulation remain and even increase as that
transition occurs. Regulatory costs that might result in a net benefit in the presence of monopoly become less likely to do so as a
market moves away from a concentrated structure. This change in the relative costs and benefits of regulation has implications for the socially
desirable balance between antitrust and regulation and, in turn, shows how Trinko and Credit Suisse may require administrative agencies to make inefficient choices between underregulation and overregulation.
A. The Costs and Benefits of Regulation

Economic regulation typically arises because there is some reason that competition is either undesirable or unattainable in a market. Natural
monopoly, where it costs less to have one firm serve the entire market than to have multiple firms competing to do so, is a prominent
example.6 In such cases, the regulator's main job is to ensure that the monopolist meets its service obligations without extracting monopoly
profits from consumers. In other settings, regulators might want to keep multiple firms in the market but allow them to cooperate to overcome
certain market failures. Thus, securities regulators might want to let underwriters cooperate in gathering broad information on the potential
retail market for securities a firm plans to issue in the future, but at the same time use regulatory oversight to mitigate the scope and harmful
effects of collusive pricing that might result from such cooperation in the concentrated securities underwriting market.6' As a final example,
regulators might oversee the development of competition in historically monopoly markets. In such cases, the job of the agency may involve
establishing conditions on which competitive entrants can gain access and interconnection to the incumbent monopolist's customers and
facilities. 62

Whatever the particular form economic regulation takes, its potential costs have numerous causes: information asymmetries, regulatory capture, incentive distortions, and a host of other ills have long been the subject of
substantial commentary and concern from policymakers, firms, and researchers. The kinds of regulation at issue in Trinko and Credit Suisse are
variants of common forms of economic regulation. The regulation at issue in Trinko involved access rules and decisions about what pieces of
incumbent networks competitors should be able to use, and at what price. At issue in Credit Suisse was the SEC's
oversight of the process by which syndicates of securities underwriters collectively work out the retail
price and quantity of securities that members of the syndicate would sell to investors. A more general discussion of pricing and access
regulation can help shed light on the cost-benefit assumptions underlying Trinko and Credit Suisse and on potential policy consequences of
those cases.

1. Price Regulation

In the presence of natural monopoly, distributional objectives, or other circumstances in which an unregulated market will not work well, the
government may decide to intervene to control prices. In principle, this price regulation can be socially beneficial. Limits on monopoly pricing
can protect consumers from the absence of competition; mandated differential pricing to commercial and residential consumers can ensure
cross-subsidies that enable poorer households to afford electricity or communications services; and common carriage rules can ensure that
firms of various kinds cannot discriminate by charging differing prices to different customers. In practice, however, achieving long-term benefits
of any kind through price regulation is hard and should be tried only when markets will likely fail to achieve society's objectives. Again, as put by
Justice Breyer, "Regulation is viewed as a substitute for competition, to be used only as a weapon of last resort-as a heroic cure reserved for a
serious disease."

Regulators have long recognized the difficulties of price regulation, difficulties that apply whether wholesale or retail prices are at issue.

One threshold problem with determining "reasonable" terms for sale of a product or service is that the information
necessary for the relevant calculations is in the hands of the very companies being
regulated. Moral hazard problems thus arise because a firm can affect a regulatory
agency's determination of allowable terms by manipulating underlying accounting data.'67 Even in
cases where regulators can resolve such information asymmetries and obtain accurate cost data and other relevant market information, retail
regulation raises several perplexing problems. Regulators must figure out which costs the seller may pass on to buyers with a mark-up that
allows the seller a positive return, which costs the seller may pass on to buyers without any markup, and which costs the seller may not pass on
at all. Typically, regulators allow firms to earn a return on things like expenditures on physical capital and the costs of financing the firms'
operations,169 but not, for example, on executive pay bonuses or investments that regulators deem "imprudent.”170
The last category of costs can be particularly contentious and involve protracted regulatory proceedings. It can also raise some of the very potential for investment disincentives that the Court ascribed to antitrust enforcement in Trinko. For example, the California Public Utilities
Commission excluded from the rate base nearly 80 percent of the costs Pacific Gas & Electric incurred in building the Diablo Canyon Nuclear Power Plant because it believed that "unreasonable management was to blame for a large part of [the] cost overrun."' 7 ' While that particular
decision and others like it protect consumers from bearing costs that dominant firms could not pass through if they faced competition, regulators must be careful about punishing a firm for decisions that were well-founded when made but later turned out badly. Mistaken or politically
motivated hindsight could have as strong a deterrent effect on innovation incentives as could unwarranted antitrust liability.

Putting aside the difficulties of assessing a firm's costs for purposes of determining a "rate base" on which to calculate a firm's allowable return from its regulated sales, regulators face the challenge of how to value that rate base. As a general matter, regulators try to meet constitutional
requirements by allowing a return on the "fair value" of a utility's assets.172 The principle of the fair value measure is to allow return on those investments that have resulted in productive facilities and to disallow return on investments that have failed to produce beneficial assets for the
firm.1' 7 Another way to frame the fair value approach is to ask what the current market value of relevant assets would be, were they hypothetically to be sold-a determination the Supreme Court has called a "laborious and baffling task." 7 4

2. Access Regulation

The Telecommunications Act of 1996 tries to foster competition in U.S. telecommunications markets by allowing new entrants to have access to the infrastructure of incumbent firms. In this respect, the statute was part of a broader evolution in regulatory policy away from conduct and
pricing rules designed to control monopoly power and toward rules designed to speed the growth of competition and eliminate or reduce the need for costly economic regulation in the future.

The 1996 act charges the FCC with identifying the parts of the incumbent networks to which new entrants should have such access and at what prices, the very regulation at issue in Trinko. The difficulty with the first step was in distinguishing network facilities that new entrants could not
economically provide for themselves from facilities entrants could obtain from sources other than the incumbent. Too lax an access rule would create disincentives for entrants or third parties to invest in building competing infrastructure and deprive consumers of the competition and
innovation such investment could bring. Too strict an access rule would prevent potential entrants from gaining a foothold in the telecommunications market and defeat the act's purpose.

The FCC's challenge in drawing up a list of incumbent facilities to which new firms would have access was fraught with many of the same difficulties found in price regulation: information asymmetries, moral hazard, the identification of relevant costs, and so on. But in the case of access
regulation the problem was to some degree magnified because the commission needed detailed information not just about the incumbent firms' costs but also about the costs and technologies of the competitive entrants and of potential third-party providers of telecommunications
facilities. This proved to be a tall order for the FCC, if the agency's record before the courts is an indicator of success; it took the FCC four rulemaking proceedings over nearly ten years to issue an unbundling order that finally held up in court, lending some irony to Trinko's skepticism
about the ability of antitrust enforcement to be adequately discriminating about refusal-to-deal liability for incumbent telephone companies. In 1999 the Supreme Court itself struck down the FCC's first attempt at unbundling rules as overbroad and devoid of a limiting principle.1

With respect to prices, the 1996 act prescribes rates for parts of the network to which the FCC grants entrants access-known as unbundled network elements ("UNEs")--that are based on cost.' To avoid building historical inefficiencies of the monopoly network into the rate base, the FCC
determined that the relevant costs for setting UNE rates should not be based on what the firm actually spent to build its network in the past. Instead, the cost base should be no higher than what it would cost to buy the firm's network technology in the current market." 9 The idea was
that new entrants should have to pay only what the technology is worth today, not the potentially higher amount it actually cost to build historically. Properly implemented, this approach requires calculating the forward-looking economic value of each element of a network. This
calculation resembles the fair-value approach already discussed, with all of its attendant difficulties.

After several years of experience trying to set UNE rates based on forward-looking cost and successfully defending the rate setting mechanism in court, the FCC declared the enterprise to be counterproductive. First, the commission found that the pricing rules "have proven to take a great
deal of time and effort to implement. . .. The drain on resources for the state commissions and interested parties can be tremendous."'8' The FCC further observed that "these complicated and time-consuming proceedings may work to divert scarce resources from carriers that otherwise
would use those resources to compete in local markets."' Second, the commission found the costly proceedings to produce inconsistent results:

[F]or any given carrier there may be significant differences in rates from state to state, and even from proceeding to proceeding within a state. We are concerned that such variable results may not reflect genuine cost differences but instead may be the product of
the complexity of the issues, the very general nature of our rules, and uncertainty about how to apply those rules.'

Finally, the FCC found that "[t]he lack of predictability in UNE rates is difficult to reconcile with our desire that UNE prices send correct economic signals."'84 As the commission's observation about incorrect economic signals indicates, the rate-setting function of monopoly regulation is
costly not only in its administrative burdens, but in its effects on economic incentives of market participants.

The FCC example shows that one cannot presume that regulatory processes are more accurate or efficient than antitrust. Just as mistaken antitrust enforcement can deter innovation or other beneficial conduct, regulatory errors can be costly to consumers and the regulated firm alike. If
regulators set rates too high, then price regulation is not protecting consumers very well, yet is still incurring administrative costs and distorting incentives. Consumers might be better off with competition that, although perhaps less efficient from a cost standpoint, does a better job of
disciplining pricing behavior. If, on the other hand, regulators set prices too low, then the regulated firm might have trouble attracting the financial investment necessary to maintain, develop, and deploy capital in the way that best benefits consumers in the long run.

Even if one assumes there is no industry capture and no political or economic distortion of individual regulators' incentives, regulation is unlikely to be error-free. Just like errors in antitrust enforcement, regulatory errors have potentially serious consequences for consumer welfare and
firms' incentives. There are likely to be substantial costs incurred through agency oversight and firms' compliance with regulation as well. There thus seems little basis to presume, as the Court appears implicitly to do in Trinko, that the costs of regulation are of lesser concern than are the
costs of antitrust enforcement. There are, however, reasons why the costs of regulation relative to those of antitrust are likely to rise as an industry moves from the concentrated structures that originally motivated regulation to competition.

B. Why Regulation Gets Harder as Competition Develops

As competition develops in a regulated industry, regulators face new challenges on top of the
difficulties discussed above. The conundrum for regulatory agencies and Congress is that regulation is likely to become more

difficult before the industry's evolution to competition is sufficiently developed for a


laissez-faire approach to serve consumer welfare. The problem is particularly complex
where, as in telecommunications, the growth of competition may for a time depend on the very
regulations that are becoming harder for the FCC to implement successfully.
To illustrate, suppose regulators want to protect buyers from a monopolist's exercise of its market power and allow the seller only a "fair" or competitive rate of return on its sales. Mistakes in setting the rates could either deliver
consumers too little benefit compared to monopoly pricing (if the regulated rate is too high) or deter efficient levels of

investment by the regulated firm (if the regulated rates are too low). In either case, it is still possible for the regulated rates
to improve both consumer welfare and total welfare. If the regulated prices are lower than those the monopolist would charge unconstrained,
then the buyers are still better off even if the regulator overshoots the fair-return benchmark. If the regulated prices are too low, then
consumers will at least temporarily gain through lower prices and the regulated firm can seek redress through a new rate tariff.

The emergence of competition in regulated markets increases both the likelihood of rate-setting
errors and their potential costs because the rate affects not just consumer surplus and incumbent carriers'
decisions, but the incentives of the new entrants as well. Competitive entry has important welfare consequences for
consumers who would benefit from the competition and innovation it could yield. If regulators set prices too low,

potential entrants will stay out of the market. This is particularly true when firms must make
large, fixed investments in infrastructure to provide service. In regulating the incumbent's rates, therefore, regulators
in a market undergoing transition to competition must also consider whether the rates provide the return competitors need to attract
investment and profitably enter the market; new entrants will not be able to attract customers if they set prices above the incumbent's rate. In
their efforts to restrain a dominant firm's perceived market power, regulators risk deterring the competitive entry that could improve long-run
consumer welfare and ultimately obviate the need for regulation at all.

Regulated prices that are too high can also cause harm, although differently and less predictably than undercompensatory
prices. There is evidence that under some conditions regulated prices that are above those that would have emerged from unregulated
competition among the incumbent and the new entrants can act as focal points around which market prices
cluster. That is, even if the regulated firm has downward pricing flexibility, prices may be higher than in an
unregulated setting if the incumbent must file tariffs that give advance notice of its intention to lower prices. As the Supreme Court
noted in Albrecht v. Herald, when competition emerges that would drive prices down toward cost, a scheme setting maximum

prices "tends to acquire all the attributes of an arrangement fixing minimum prices ."16
Empirical data suggest that rate regulation in the long-distance telephone market for several years
kept rates higher than they would have been in the absence of price regulation.17
Similar concerns arise with regulation of the rates competitive entrants pay for access to elements of incumbent facilities-e.g., rail tracks,
pipelines, or telephone lines-in a regulated market. If regulated access rates are too high then they do not facilitate efficient entry and are
therefore not worth their administrative costs. Access rates that are too low can deter an incumbent from investing in its network and deter
entrants from building their own networks by providing them with subsidized use of the incumbent's network.' Underpriced access for
competitors may in fact create disincentives for the most desirable entrants from coming into the market. An entrant with beneficial new
technology might not deploy its innovation if artificially low (i.e., below an appropriate measure of cost) access rates allow less efficient
entrants to use incumbent facilities to enter the market. Such inefficient entry can drive up the cost of capital for desirable (i.e., efficient and
innovative) entrants by increasing the latter's competitive risks and driving down the returns from their innovative investments. The result may
be less investment by incumbents and entrants alike, less innovation, and less price competition over time for consumers.

Regulators must therefore walk a fine line in markets in which competition is emerging: they must set rates at a level high enough to allow an
efficient firm to attract the investment necessary to compete in the marketplace, but not so high as to create a de facto, noncompetitive price
floor. Rates above the targeted level will make consumers worse off than they would be in an unregulated market; rates below that level could
deter competition that would naturally lower prices and obviate the need for administratively costly regulation. Given the difficulties that
regulators inevitably face in setting rates with such precision, one must be skeptical about the wisdom of importing rate-regulation schemes
from a monopoly setting into an emerging competitive environment.

The discussion above highlights several points for understanding the comparative purposes and advantages of antitrust and regulation. First,
regulation is neither costless nor necessarily beneficial and, more importantly, is not presumptively more efficient than antitrust enforcement.
Second, the comparative benefits of antitrust and regulation vary with market conditions. The benefits of price and access regulation depend
largely on the existence of some reason, like natural monopoly, that competition cannot or should not exist. In monopoly, rates that are too
low do not by definition distort competition, and rates that are too high relative to a competitive benchmark may still be better than what the
monopolist (or regulated syndicate, as in Credit Suisse) would charge unconstrained. Monopoly thus allows regulation to

be imprecise and still create consumer benefits. Under competition, especially emerging
competition, regulators have much less margin for error. The errors and administrative costs that may still
be compatible with net social gains under regulated monopoly become less so as competition develops. Rather than restraining the significant
harms of monopoly, regulation risks impeding the greater benefits of competition.
Antitrust protections are key to broadband deployment
Srago 21 (Josh Srago – EFF Legal Fellow, JD from Santa Clara University in 2021. <KEN>
“Why Can’t You Sue Your Broadband Monopoly?” Electronic Frontier Foundation. March
2021. https://www.eff.org/document/why-you-cant-sue-your-broadband-monopoly)
The Court’s Deference to the FCC

Where does this leave us in regards to the FCC, its oversight authority, and antitrust claims? Under the Chevron framework, unless Congress
expressly spoke to a given issue in a statute discussing a regulated industry, it will be left for the agency granted oversight authority to interpret
the statute. So long as they do so reasonably, the courts will defer to the agency’s interpretation and judgment. The 1996 Act provides for
specific rules for telecommunications service providers. Under Trinko, and its expansion in Credit Suisse, we find that "the

Supreme Court's decision prevents . . . courts from engaging in [an antitrust] inquiry at all
for claims that push the boundaries of antitrust in the context of a regulated industry.” 47
Telecommunications service providers must work with the FCC in order to offer the services in compliance with the 1996 Act and any other
rules or regulations laid down by the FCC. As a result of telecommunications being a regulated market with agency
oversight, including the ability to monitor for anticompetitive behavior and enforce penalties for such behavior, the courts will defer to

the FCC’s conclusions. Howard Shelanski, former Director of the Federal Trade Commission’s (FTC’s) Bureau of Economics put it most
succinctly:

By broadening the conditions under which regulation blocks antitrust enforcement,


those cases redrew the boundary between antitrust and regulation and would
likely have prevented the government from bringing, in previous decades, a number of important antitrust cases in regulated
industries. Most notably, Trinko and Credit Suisse would likely have blocked the suit by the U.S. Department of Justice
("DOJ") that in 1984 broke up AT&T's monopoly over telephone service, considered among the most important antitrust
enforcement actions in history.48

The Court’s creation of antitrust immunity for regulated industries extends the premise
that if an antitrust claim were to include conduct that has been approved by the regulating agency, any
such enforcement of antitrust laws could be contrary to the enforced regulatory regime. The FTC drew
upon this comparison in its amicus filing in Credit Suisse where it stated “the complaint’s allegations must give rise to a reasonably grounded
inference of an antitrust violation without relying on conduct that was authorized under the regulatory scheme or inextricably intertwined with
such immune conduct.” 49 And further that, “the complaint must make clear that the claims alleged do not rest on impermissible inferences
from protected conduct. A court should not permit discovery to go forward as a fishing expedition based on conclusory or ambiguous
allegations that focus on immune conduct.” 50 The Court agreed, stating that in order for the antitrust suit to be allowed, there must be, “a
plain repugnancy between . . . antitrust claims and the federal . . . law.”51 Therefore, if the FCC establishes regulations that dictate that 1996
Act’s competition policies are no longer applicable under its regulatory structure, the Court will be required to dismiss an antitrust claim as
being implicitly precluded under the telecommunications laws, as to do otherwise would violate the authorized regulatory regime.

This antitrust enforcement reasoning is in direct conflict with the reasoning of the FCC in the retraction of net neutrality rules when they
enacted RIFO. The FCC heavily leaned on the logic that the “antitrust and consumer protection laws would provide means for consumers to
take remedial action if an Internet Service Provider (ISP) engages in behavior inconsistent with an open Internet.”52 However, RIFO is an
express regulation dictating that broadband service providers must merely disclose their network management practices, performance, and
commercial terms of service. The FCC’s decision to determine which express regulation should be upheld would be subject to the Chevron
deference. So long as the statute was ambiguous and the FCC’s interpretation is reasonable, the Courts must defer to the FCC’s judgment.
Additionally, the FCC has jurisdiction over the matters defined in the 1996 Act, as was determined in Trinko, and under Credit Suisse the Court
must imply an antitrust preclusion when there is a plain repugnancy with the federal law.

As such, the weight the FCC gave to antitrust being the better mechanism for consumer protection under the RIFO53 is irrelevant, because the
FCC has expressly decided to not regulate. That would mean that all conduct that falls outside of the transparency requirements would be
protected conduct as part of the regulatory regime and prevent a claim under antitrust laws.
Collectively, this creates a significant barrier because a private actor, be it a person or municipality

acting on behalf of its residents, has lost the private right of action to file a lawsuit under antitrust
laws and seek legal recourse under the Sherman Act against a broadband service provider. They do have the option to file a complaint with the
redress using the agency’s procedures, however, any possible remedy would be available only through
FCC to seek

the FCC, pursuant to its granted authority and interpretation of the 1996 Act and any subsequent rulemaking it established. This includes
refraining from acting based on its reasonable interpretation of the 1996 Act.

A World Without Trinko and Credit Suisse: Real-World Access

Under these precedents, consumers may have little recourse when broadband providers disserve them. Truckee, California is a small mountain
town with a population of 16,377.54 A cursory search for broadband internet providers shows that there are six companies claiming to offer
services to the town.55 AT&T and Earthlink offer DSL connectivity with a download speed of up to 10 Mbps. This means that they don’t
technically qualify as a fixed broadband provider because they are below the FCC’s standard of 25 Mbps download and 3 Mbps upload.56
HughesNet and Viasat offer satellite services, advertising download speeds up to 25 Mbps, but whether satellite service is an equivalent to fixed
wireline broadband is very much up for debate.57 That leaves Oasis Broadband, a fixed wireless internet provider58 advertising up to 100
Mbps, 59 and Suddenlink providing 1000 Mbps (1-Gig) over cable. When we take into consideration the actual real-world needs of modern
broadband usage, the relevant market of choices is far more limited with only one fixed wireline choice for a broadband connection that
provides a broadband service that is sufficient. 60 Under these circumstances, the relevant market is defined as broadband service providers
offering a minimum connection of 100 Mbps download and 10 Mbps upload. Truckee is therefore subject to the monopoly of Suddenlink.

The FCC has classified broadband as a Title I service, which means the agency has jurisdictional authority over the service under the 1996 Act,
but under their own interpretation of the Act has elected to limit that authority to ensuring that the broadband services operate with
transparency. 62

Broadband is a complicated service to deploy. Under our hypothetical, we can assume that Suddenlink is in the process of upgrading its facilities
in the region, and while it is claiming that it can hit 1000 Mbps download speeds, that is not the case for all homes until the upgrades are done.
Consumers in the region begin signing up for the services, relying on the fact that this is the only advertised 1000 Mbps service in the region.
However, due to costs, delays, or other factors, such as a lack of willingness to invest, the broadband service provider is unable to fulfill the
promised network speeds and instead is only able to provide its customers 250 Mbps downloads. A customer can file a complaint with the FCC,
but as a Title I service, the FCC’s authority over the provider is limited to ensuring that Suddenlink is being transparent with its existing and
potential customers regarding its network management practices, performance and commercial terms of service.63

Further, the monopolist broadband provider, Suddenlink, has control of the local market and can charge a monopoly
price. As the Court declared in Trinko, merely taking advantage of the monopoly position to charge more is not enough to violate antitrust laws.
However, if Suddenlink were to use its position to restrict other parties from entering the market by

undercutting pricing to the point where it was not feasible for a competitor to enter,
the consumers would be suffering at the hands of a monopoly engaging in anticompetitive
behavior and have no legal redress. This is because the decisions in Trinko and Credit Suisse provide
that when a regulatory agency has oversight authority, the courts are to defer to the
agency’s interpretation because it has the broad enforcement powers, the specialized knowledge to know whether or not the practices in
question are reasonable under the circumstances, and the authority to pass national regulations to ensure that there will not be confusion
between jurisdictions, all under the statutory authority granted by Congress to make such determinations.

In the hypothetical, whether the consumer seeks to improve oversight of Suddenlink’s transparency or whether they seek redress for the
anticompetitive behavior of a monopoly, the consumers must turn to the FCC because it has deferential authority as the oversight agency of a
regulated market. It doesn’t matter whether it is regarding the practices of the service provider in a competitive market. Nor does it matter if
broadband were classified as Title II and the consumers were asking for oversight enforcement as to unjust or unreasonable in a given
circumstance. In either case, private actors have lost their access to seek legal remedies from the justice system.

This nuanced restriction calls for Congress to pass legislation that would overturn the decisions in Trinko
and Credit Suisse and return a private right of action to people and municipalities to file

claims against the broadband service providers for anticompetitive behavior. If Congress
wishes to see improved competition in services under the 1996 Act, which was its original intent, then restoring
the private right to enforce antitrust laws when broadband providers behave in an
anticompetitive fashion falls in line with that end.
This was also addressed in the October 2020 report from the United States House of Representatives Subcommittee on Antitrust, Commercial
and Administrative Law of the Committee on the Judiciary’s Investigation of Competition in Digital Markets.64 As a part of the subcommittee’s
recommendations, it suggested that “Congress should consider overriding judicial decisions that have treated unfavorably essential facilities65 -
and refusal to deal-based theories of harm,”66 specifically citing Trinko as well as Pacific Bell Telephone Co. v. linkLine Communications, Inc. 67

A Private Right of Action Is Not A Guarantee

The key action that can be taken to enact swift change and return a private right of action for people and municipalities to file claims against
their broadband service providers under antitrust laws would be for Congress to write a law that would overturn the decisions in Trinko and
Credit Suisse. This law could stand on its own and be a sweeping reform of regulatory authority, or it could be narrowly tailored and applicable
to each regulated industry under the relevant act. In 2004, Representative James Sensenbrenner (R-WI) introduced a bill68 that would narrowly
amend the Clayton Act in order to close the gap created by the Trinko decision. The determination as to whether it is better to pursue all-
encompassing or narrow legislation has far-reaching implications that need to be considered and would require examining a much broader body
of law than is examined here.

The existing savings clause of the 1996 Act69 states the protections should not “modify, impair, or supersede” the applicability of the antitrust
laws. An express provision granting a private right of action would retract the Trinko decision, as the Court based its decision not on the
availability of an antitrust claim, but rather on the fact that the actions taking place were also regulated by the 1996 Act. The Trinko decision
binds the judiciary to defer to the regulators.70 If Congress were to expressly grant the authority of a private right of action, the FCC and the
Courts would be required to enforce it.

Merely providing a private right of action does not actually mean that there would be sweeping change in broadband services. In fact, an
antitrust claim was brought post-Trinko to seek redress to supposed anticompetitive behavior in Bell Atlantic Corp. v. Twombly.71 In that case,
telephone and high-speed internet customers brought a claim alleging that Bell Atlantic conspired to restrain trade by inflating charges for the
services by engaging in parallel conduct to inhibit growth of CLECs attempting to compete in the market using unfair agreements, providing
inferior connections to the networks, overcharging, and billing in ways designed to sabotage the CLECs’ relationship with their customers.72

Twombly’s result established the requirements that must be met regarding the initial pleading in a case. “The pleading must contain something
more than a statement of facts that merely creates a suspicion of a legally cognizable right of action.”73 Specifically, in regards to a Sherman
Act Section 1 claim,74 there must be enough factual assertions, taken as true by the court, to show that an agreement was made, or, stated
otherwise, that there must be enough facts to give rise to a reasonable expectation that evidence of an illegal agreement will be found at the
point of discovery.75 Demonstrating enough facts to show such an agreement exists without the benefit of discovery can be exceedingly
difficult. The threshold to show a Sherman Act Section 2 claim may also be very difficult to meet, as it requires demonstrating that monopoly
power exists and that the monopolist took improper action to maintain that power. As stated in Trinko, merely charging monopoly prices while
having the advantage of monopoly power is not an unlawful act. The imbalance of access to information regarding possible anticompetitive
behavior is one place where Congress could take action in overhauling antitrust regulation. While overturning Twombly could increase the
number of frivolous lawsuits, there is room to expand access to broadband provider information under the 1996 Act’s transparency
requirements which would ultimately provide greater consumer protection.

Precedential Uncertainty and Regulatory Capture


Without the right to seek relief from the courts for anticompetitive behavior, people and
municipalities are subject to the FCC’s discretion. This is dangerous because the FCC’s
interpretations of law can change over time and provide no certainty as to how it might be
interpreted in a specific circumstance. The courts establish precedent and then will often follow it
under the doctrine of stare decisis, but the agencies have not been required to follow their prior
decisions.
This principle was clearly demonstrated in the recent fight over net neutrality and the classification of broadband
services. After the transition of power following the 2016 general election, the makeup of the FCC changed as

well. The new FCC decided to undo the rulemaking of the prior regime regarding the classification of

broadband services. This means that as the makeup of the commission changes, so too might the
determinations as to what is and is not anticompetitive behavior. Removing that
discretionary authority in order to provide consistency is in the public interest as it provides for more
certainty when a claim is brought that there are established rules and principles that will be followed.
That certainty exists in the courts and should be available to the public as they pursue redress for potential claims against broadband

service providers which will likely have significantly more experience in dealing with the
regulatory agency, not to mention greater resources at their disposal.
An additional factor to consider, particularly in regards to the FCC, is a history of
regulatory capture where the FCC’s actions have been to the benefit of the companies
that it was created to oversee. As stated in 2004 by former FCC Chairman Michael Powell, 76 “[W]hen something
happens that [the FCC] doesn’t understand, kill it. We tried to kill cable. We tried to kill long-
distance. When Bill McGowan started stringing out microwave towers that threatened
AT&T, the FCC tried to stop him. The FCC tried to kill cable because it was going to threaten
broadcasting.” 77 As EFF pointed out in 2008,
The FCC policy machine is led by unelected Commissioners, often able to ignore public opinion. The telephone and

cable companies have deep, long-standing relationships with the FCC and lobby
heavily to neuter, derail, or otherwise reverse the course of regulation that goes against their
agenda. Many FCC officials treat their time at the Commission as a way station, looking
forward to careers lobbying on behalf of the companies they are supposed to be

regulating.78
There are currently 200 people who either came to the FCC from a company it oversees or who have left the FCC in order to work for companies that it oversees.79 Given that the FCC employs a total of 1,454 employees,80 that figure demonstrates that there are close ties between telecommunications companies and the FCC.

Restoring Competition with Antitrust Laws

Restoring the ability to bring antitrust claims raises several possibilities. Municipalities, representing their citizens, would be able to bring a claim against a broadband services provider under Section 1 of the Sherman Act for collusion if they can show there was an agreement between providers to not compete in other local markets in exchange for other competitors electing to not compete in its local market. There is a threshold that must reach beyond mere allegations in the pleading for this claim to succeed, but restoring this cause of action for municipalities immediately creates a means to encourage competition in markets where there is currently only one service provider.81 While competition in markets has been historically sparse for broadband providers, there are some companies that might be showing signs of bucking the previous trends to expand their offerings into new areas.82
Restoring private antitrust claims would also allow for municipalities and consumers to bring a claim under Section 2 of the Sherman Act. Were the existing broadband service provider that already has monopoly power in a given region to act in a way that demonstrated it was attempting to maintain or strengthen its existing monopoly, the private right of action provides for judicial examination of the behavior. Having this tool at their disposal could provide municipalities a significant advantage, considering that there are at least 49.7 million Americans that are served by only one broadband provider.83 The antitrust claims could be used as a way to ensure that a local monopoly is not engaging in anticompetitive pricing behavior or other practices allowing it to hold on to its monopoly; they could also be used to increase competition through the creation of municipal broadband services. Rather than relying on an outside company to come in and build out the network infrastructure, the municipality itself could build the infrastructure and begin providing the services to its citizens.84


Municipalities that find themselves continually underserved or entirely unserved by broadband providers could seek traditional remedies modeled after the ILEC/CLEC structure of the 1996 Act. If the municipality were to provide its own service using the infrastructure already put in place by the incumbent broadband provider, the courts would have a framework to understand how that remedy would function under the 1996 Act's requirements for ILECs and the provisioning of UNEs at wholesale costs.

This circumstance could become increasingly relevant in the near future with AT&T electing to discontinue its offering of DSL services for new customers.85 While AT&T will retain ownership of the networks and provide DSL services to existing customers, those wishing to sign up for a new DSL service will be out of luck and forced to switch to a more expensive smartphone plan, also provided by AT&T, for access to network services.86 87 In such a case, the local municipalities would now have legal recourse under several different scenarios. First, the municipality can consider whether the forced transition is an exercise of monopoly power on the part of AT&T, harming competition in the region. Second, in the event that a company in the chain of service providers engages in any anticompetitive behavior, the municipality can take action. Examples of this could be AT&T licensing access to the existing infrastructure at such a high rate that it favors AT&T's wireless or fiber alternatives, or the next company to step in where AT&T has vacated taking advantage of its monopoly position as the only fixed wireline provider. Alternatively, if the municipality itself steps in to provide the services and fill that need, the access to an antitrust claim gives it protection and a path of redress in the event that the relationship with AT&T sours. This might occur after an initial investment on the part of the municipality to provide services to the region followed by an improper action by AT&T to undercut the municipality.

Conclusion

The conditions addressed in this paper have all focused on restoring the ability to bring an antitrust claim against a broadband service provider.
This demonstrates that the FCC rules that have been put in place, be they to classify broadband as Title II and enforce net neutrality rules as
adopted in the 2015 Open Internet Order or to classify broadband as Title I and retract enforcement authority as adopted in the 2017 Restoring
Internet Freedom Order, can remain in place and be treated as a separate issue. The FCC retains the authority to monitor how network
deployments are operating. It retains the jurisdictional authority to ensure that the practices of broadband services providers are just and
reasonable. All of that can coexist, regardless of the political stance of the commission, along with restoring private rights of action for
violations of antitrust laws. The narrow change that is being proposed is revoking the deference to regulatory agencies that the Supreme Court
put in place in its Trinko and Credit Suisse decisions.

By revoking that deference, the potential for significantly increased competition in isolated and underserved markets could be realized because of greater opportunities for legal enforcement against antitrust violations by broadband service providers. Combine that with the opportunity for municipalities to take matters into their own hands to force competition and protect their residents from the encroaching monopolization of broadband access. Then, add in the situations where the seemingly unspoken agreements between broadband service providers to avoid competing may be subject to greater scrutiny. All of these would open the doors to more service providers to participate in the market. With the increase in competition, a decrease in price of the services will follow. Ultimately, that is the goal: to drive down the cost of some of the highest prices for broadband in the world,88 making the services more affordable and more available in order to ensure equal access for everyone, and work to close the ever-widening digital divide.

Smart cities are inevitable, but locally-adapted broadband is key to make them
effective
Young 17 (Peter Young – Masters of Professional Studies in Urban & Regional Planning
received in 2017. <KEN> “Broadband Infrastructure to Enable Smart Cities: Emerging
Strategies and Partnership Models,”
https://repository.library.georgetown.edu/bitstream/handle/10822/1044668/Peter
%20Young%20Capstone%20Thesis%20Final.pdf?sequence=1&isAllowed=y)
The Criticality of Broadband Infrastructure
In the quest to implement the various connected solutions that will make up the smart city, "the first step for any policymaker is to foster the development of a rich environment of broadband networks that support digital applications, ensuring that these networks are available throughout the city and to all citizens."48 This imperative is critical because cities not only need to focus on building infrastructure to support connected technologies, but also need to ensure that all residents and businesses have access to high-speed broadband. If not, cities and their residents will be left behind as the world becomes increasingly reliant on the digital realm for everything from education to inventory management.
Relative to its global economic standing, the U.S. as a whole lags behind in terms of average Internet connection speeds, ranking 14th globally, according to Akamai's Q4 2016 State of the Internet Report.49 In fact, 10% of the U.S. population, representing 34 million people, does not have access to fixed broadband speeds, according to the U.S. Federal Communications Commission's (FCC) 2016 Broadband Progress Report.50 Of those that have access, more than half (51%) of Americans must rely on only one provider for broadband, regardless of whether or not they are satisfied with their service.51 The FCC's benchmark for broadband speeds is 25Mbps for downloads and 3Mbps for uploads.52 While some onlookers have claimed that the FCC's benchmark is too aggressive, the FCC contends that these "commenters…ignore the realities of today's broadband marketplace."53 In fact, the FCC states that the U.S. "has seen a rapid expansion in service offerings far exceeding the 25Mbps/3Mbps threshold…[and] consumers have increasingly flocked to these higher-speed services."54 This expansion has been concentrated in large urban areas, however (see figure 1.2).

Figure 1.2: Areas Without Access to Broadband Speeds (In Blue). Source: U.S. Federal Communications Commission, 2016

In fact, many believe that the FCC's threshold isn't forward-looking enough. While the FCC's definition of broadband may suffice today, much faster networks will be needed to support the capacity needs of the future. Reflecting on his experience at a consumer electronics show in 2013, former FCC commissioner Julius Genachowski wrote the following in a guest post for Forbes:

"All the Internet-connected, data hungry gadgets that are coming to market sent a strikingly clear message: we're going to need faster broadband networks. Making sure the U.S. has super-fast, high-capacity, ubiquitous broadband networks delivering speeds measured in gigabits, not megabits isn't just a matter of consumer convenience…It's essential to economic growth, job creation and U.S. competitiveness."55

Genachowski goes on to praise the success of some of the first U.S. communities to adopt gigabit speeds, which include Chattanooga, TN, Kansas City, MO/KS, and Lafayette, LA.56 He claims
that more “gigabit test beds” are needed and that a “critical mass of gigabit communities will spur innovation and investment.”57 Genachowski challenges large telecom providers that call
gigabit speeds unnecessary; “greater network speeds will certainly lead to unexpected new inventions,” he claims.58 “We need U.S. innovators to develop tomorrow’s technologies here,” he
says—citing the promise of being able to use gigabit networks to develop innovations in medicine, distance learning, and big data analytics.59

Even large telecoms that originally resisted calls to upgrade their networks are beginning to get the message. Today, Verizon offers 940Mbps download speeds in select cities, Comcast is in the process of offering gigabit connections in 15 cities, AT&T plans to expand its gigabit services, and Time Warner Cable is upgrading its top speed from 50Mbps to 300Mbps.60 In large part, this is because of Google Fiber's entry into the marketplace. When the company began offering fiber-enabled gigabit services in 2010, it began to put pressure on telecoms to upgrade their
networks in a much more powerful way than any FCC bureaucrat’s musings about American competitiveness could.61 “The day after Google made their announcement…I got phone calls from
both Time Warner people and AT&T saying they were going to do the same thing,” Mayor Lee Leffingwell of Austin, TX told Ars Technica. 62 Indeed, Google Fiber success stories coming out of
Kansas City, Austin, TX, and Provo, UT helped change the mindset around what gigabit speed broadband could enable on city streets, in living rooms, or at a tech startup.

To achieve improved speeds, most large incumbent telecoms are upgrading their networks by laying new high-capacity fiber-optic cables and/or making improvements to their existing coaxial
cables through a new standard called DOCSIS 3.1.63 Without diving too much into the details, it is important to understand that the former is a much more sustainable and effective way to
upgrade broadband infrastructure than the latter. Fiber-optic cables are the ideal technology to support high-speed, high-capacity broadband networks in our cities for two primary reasons.
First, fiber has lower latency than any other kind of broadband connection—which will enable new innovations in video communication, virtual reality, and augmented reality.64 Second, fiber
is more future-proof than any other technology on the market including satellites and updated cable.65 Fiber has virtually unlimited capacity and the infrastructure itself lasts a long time with
limited maintenance costs.66 While many telecoms and cities are striving to offer one-gigabit speeds today, connectivity requirements will likely demand 10 gigabits by 2025-2030—“fiber will
find it much easier to scale up to meet that demand than these other types of connectors will.”67

Unfortunately, many network upgrades advertised by large incumbent telecoms tend to rely on DOCSIS 3.1 to provide faster speeds over old copper wires.68 In order to reach gigabit speeds,
consumers need to purchase a special router, and because connections are made over aging infrastructure—it isn’t guaranteed that the speeds will even reach those that are advertised.69 “It
feels like a ploy for ISPs to squeeze every last bit of business from soon-to-be obsolete networks,” writes one Gizmodo reporter. It is understandable that profit-driven companies in a free
market economy would make such a choice as opposed to the much more expensive proposition of laying down fiber. This also isn’t to say that the network upgrades aren’t a welcome
development in the broadband space. However, over the long-term, fiber will be a necessary backbone for any city that wants to leverage ubiquitously connected IoT devices and additional
advances in ICT.70 Therefore, municipalities should focus on facilitating a transition toward fiber-optic broadband infrastructure, and shouldn’t rely solely on short-term fixes like those
exemplified by DOCSIS 3.1.

In addition to laying wired fiber-optic cables, broadband infrastructure planning for the smart city must also consider the facilitation of high-speed wireless networks.71 Both wired and
wireless broadband will increasingly support each other, as you cannot realistically run fiber cabling to all of the sensor and monitoring devices across a connected city.72 Therefore, fiber
serves the dual purpose of connecting homes, businesses, and government buildings directly, while also serving as a backhaul network for wirelessly connected devices.73 The U.S. Networking
and Information Technology Research and Development Program gets it right when stating how important it is to explore “new wireless devices, communication techniques, networks,
systems, and services to enhance high-speed, software-defined connectivity and leverage the emerging Internet of Things (IoT).”74

Another reason to ensure that a robust fiber broadband backbone exists is the movement toward the fifth generation (5G) of mobile wireless technology. What is 5G? In short, "it is expected to connect billions of machines—kitchen appliances, medical devices and automobiles, to name a few—to one another and the Web, creating the much-hyped Internet of Things."75 If the promise of 5G were applied to smart city solutions like traffic management and electrical grids, it is estimated that it could produce $160 billion in benefits and savings for local communities.76 However, 5G should be viewed as a supplement to, rather than a replacement for, a wired fiber-optic backbone. For instance, to handle the rapid growth in data traffic, "networks need to be ready for a 1,000-fold increase in data volumes across the first half of the 2020's," according to estimates cited in The Economist.77 Therefore, the "full realization of economic growth and cost savings from leveraging Smart City solutions built on 5G infrastructure will…depend on how robustly 5G networks are deployed locally."78

There are two key, interrelated components of this deployment locally that all planners and city officials should be aware of when devising broadband infrastructure plans. First, fiber is a key
enabler of 5G because it will need to serve as its backhaul network in much the same way it does for existing wireless technologies.79 The demand for fiber is expected to rise because of the
speeds required to support 5G’s specifications.80 Draft specifications dictate that a single 5G mobile device must be capable of 20Gbps download capacity and 10Gbps upload capacity with a
maximum latency of 4 milliseconds.81

Second, 5G networks will need to be supported by small cell sites that are integrated throughout the built environment as a supplement to macrocell towers that have traditionally supported
the majority of existing mobile traffic.82 Small cells are essentially mini-cell towers deployed closer to end users to help ensure continuous connectivity.83 In short, more highly concentrated
wireless networks supported by small cells closer to mobile users and connected devices will be necessary to make the high-performance promises of 5G a reality. What’s more, these small
cell deployments will need fiber “to provide backhaul from these sites.”84 Wireless densification efforts through the deployment of small cells will require collaboration between municipalities
and private actors to handle access to public rights- of-way and manage issues related to permitting and fee structure.85 To support 5G, there is estimated to be a need for 10 times as many
small cells installed across the built environment than there are today.86 “That’s hundreds of thousands, maybe even millions of new antennas. That’s hundreds of thousands, if not millions of
siting decisions,” former FCC Chairman Thomas Wheeler quipped during his keynote address at a Las Vegas conference in 2016.87
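To put rough numbers on the backhaul point made in the two paragraphs above, the short calculation below combines the draft 20Gbps per-device figure cited earlier with assumed deployment values; the cell density and utilisation share are illustrative guesses, not figures from the sources cited in this card.

# Rough, illustrative backhaul estimate for a dense small-cell deployment.
# The 20 Gbps peak figure is the draft 5G downlink spec cited above; the
# density and utilisation values below are assumptions for illustration only.
PEAK_PER_DEVICE_GBPS = 20        # draft 5G downlink capability (cited above)
SMALL_CELLS_PER_SQ_KM = 50       # assumed dense-urban small-cell count
AVG_UTILISATION = 0.05           # assumed average share of peak demand per cell

backhaul_gbps_per_sq_km = PEAK_PER_DEVICE_GBPS * SMALL_CELLS_PER_SQ_KM * AVG_UTILISATION
print(f"Approximate backhaul demand: {backhaul_gbps_per_sq_km:.0f} Gbps per square km")

Even under these conservative assumptions the aggregate runs to roughly 50 Gbps per square kilometre, an order of magnitude that copper-era links cannot carry comfortably, which is why fiber is treated as the default backhaul medium.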

The emergence of these new technologies coupled with increased network capacity requirements means that building the broadband infrastructure to support smart cities is necessary, complicated, and multi-dimensional. Not only will it require high-speed connections to homes and businesses, but it will also necessitate the creation of Wi-Fi networks and small cell installations supported by a high capacity wired backbone. There is no single technological answer in that a hybrid approach to deploying various wired and wireless solutions based on the needs of a particular location will be required. Luckily, all of these technologies can help support one another—and no single municipality will be responsible for enabling all of them on its own. However, cities able to act proactively to ensure these broadband infrastructure assets are in place will be most likely to take advantage of the long-term cost savings, environmental benefits, efficiencies, and access to data that a connected smart city will provide.
Smart cities key to make urban growth sustainable
Khan 15 (Zaheer Khan & Kamran Soomro – Faculty of Environment and Technology,
Department of Computer Science and Creative Technologies, University of the West of
England. Ashiq Anjum – Faculty of Business, Computing and Law, School of Computing
and Mathematics, University of Derby. Muhammad Atif Tahir – School of Computer
Science and Digital Technologies, University of Northumbria. <KEN> “Towards cloud
based big data analytics for smart future cities,” Journal of Cloud Computing Vol. 4, No.
2. February 2015.
https://journalofcloudcomputing.springeropen.com/articles/10.1186/s13677-015-0026-
8#Sec12)
A large amount of land-use, environment, socio-economic, energy and transport data is
generated in cities. An integrated perspective of managing and analysing such big data can answer a number of science, policy, planning,
governance and business questions and support decision making in enabling a smarter environment. This paper presents a theoretical and
experimental perspective on the smart cities focused big data management and analysis by proposing a cloud-based analytics service. A prototype has been
designed and developed to demonstrate the effectiveness of the analytics service for big data analysis. The prototype has been implemented using Hadoop and
Spark and the results are compared. The service analyses the Bristol Open data by identifying correlations between selected urban environment indicators.
Experiments are performed using Hadoop and Spark and results are presented in this paper. The data pertaining to quality of life mainly crime and safety &
economy and employment was analysed from the data catalogue to measure the indicators spread over years to assess positive and negative trends.

Introduction

ICT is becoming increasingly pervasive to urban environments and providing the necessary basis for sustainability and resilience of the smart
future cities. With the rapid increase in the presence of Internet of Things (IoT) and future internet [1,2] technologies in the smart cities context
[3-5], a large amount of data (a.k.a. big data) is generated, which needs to be properly managed and analysed for various applications using a
structured and integrated ICT approach. Often ICT tools for a smart city deal with different application domains such as land use, transport and
energy, and rarely provide an integrated information perspective to deal with sustainability and socioeconomic growth of the city. Smart cities
can benefit from such information using big, and often real-time, cross-thematic data collection, processing, integration and sharing through
inter-operable services deployed in a cloud environment. However, such information utilisation requires appropriate software tools, services
and technologies to collect, store, analyse and visualise large amounts of data from the city environment, citizens and various departments and
agencies at city scale to generate new knowledge and support decision making.

The real value of such data is gained by new knowledge acquired by performing data analytics using various data mining, machine learning or statistical methods.
However, the field of smart city based data analytics is quite broad, complex and is rapidly evolving. The complexity in the smart city data analytics manifests due to
a variety of issues: i) Requirements of cross-thematic applications e.g. energy, transport, water, urban etc, and ii) multiple sources of data providing unstructured,
semi-structured or structured data, and iii) trustworthiness of data [6,7]. In this regard, this paper provides a data oriented overview of smart cities and provides a
cloud based analytical service architecture and implementation for the analysis of selected case study data.

Smart cities provide a new application domain for big data analytics, and relatively little work has been reported in the literature. A review of the state of the art provides
very promising insights about applying cloud computing resources for large scale smart city data analytics. For instance, Lu et al. [8] focus on using computational resources for large scale data for climate having complex structure and format. Using a multi scale dataset for climate data, they demonstrated a cloud based large scale data integration and analytics approach where they made use of tools such as RapidMiner and Hadoop to process the data in a hybrid cloud. Among others, the COSMOS project [9] provides a distributed on-demand cloud infrastructure based on Hadoop for analysing Big Data from social media sources. The infrastructure has the capability to process millions of data-points that would take much longer on a desktop computer. It allows social scientists to integrate and analyse data from multiple non-interoperable sources in a transparent fashion. Such a Big Data analysis platform can also be useful for smart cities as it would allow decision-makers to collect and analyse data from many sources in a timely manner. Ahuja and Moore [10] provide a state of the art review of the technologies being used for big data storage, transfer and analysis. Qin et al. [11]
present challenges of Big data analytics and acknowledge the capabilities of MapReduce and RDBMS to solve these challenges. The main contribution of their work
is that they have provided a unified MapReduce and RDBMS based analytic ecosystem to avail complementary advantages from both systems. Recently some
studies have investigated the usefulness of data mining techniques to combine data from multiple sources such as by Moraru and Mladenic [12]. They applied
the Apriori technique, which is a rule-based data mining technique, to learn rules from data. Although they were able to extract some rules at a small scale, they were unable to learn much on large-scale data due to the high volume of the data and the limited memory on a single system.

We use a similar approach that is based on MapReduce. Our prototype implementation analyses the Bristol open dataset to identify
correlations between selected urban environment indicators such as Quality of Life. We have developed two implementations using Hadoop
and Spark to compare the suitability of such infrastructures for Bristol open data analysis.
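As a rough sketch of the correlation analysis described above, the snippet below uses PySpark to compute a Pearson correlation between two quality-of-life indicators and to summarise their year-over-year trends; the file path and column names are hypothetical stand-ins, not the actual Bristol Open Data schema.

# Minimal PySpark sketch, assuming a CSV export of yearly indicator values
# (hypothetical path and column names; the real Bristol Open Data schema differs).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bristol-indicator-correlation").getOrCreate()

# Load per-ward, per-year indicator values such as crime and employment rates.
df = (spark.read.option("header", True).option("inferSchema", True)
      .csv("bristol_quality_of_life.csv"))

# Pearson correlation between the two indicators across all wards and years.
corr = df.stat.corr("crime_rate", "employment_rate")
print(f"Correlation between crime and employment indicators: {corr:.3f}")

# Average each indicator by year to flag positive or negative trends over time.
trend = (df.groupBy("year")
           .agg(F.avg("crime_rate").alias("avg_crime"),
                F.avg("employment_rate").alias("avg_employment"))
           .orderBy("year"))
trend.show()
spark.stop()

The same pattern scales from a laptop to a cluster, which is the practical appeal of the Hadoop and Spark infrastructures the authors compare.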

The remainder of this paper is structured as follows: the next section provides background and rationale in the context of smart cities. Section
“An abstract architectural design of the cloud-based big data analysis” provides a data analytics service architecture and design for analytical
processing of big data for smart cities. After this, a simple use case based on Bristol open data by identifying needs of information processing
and knowledge generation for decision making is presented in Section “A use case: analytics using Bristol open data”. In Section “Prototype
implementation” we present the applicability of the proposed solution by implementing a MapReduce based prototype for Bristol open data
and discuss outcomes. Finally, we conclude our discussion and present future research directions in Section “Conclusions and future
directions”.

ICT and smart cities

Approximately 50% of the world's population lives in urban areas, a number which is expected to increase to nearly 60% by 2030 [13]. High levels of urbanisation are even more evident in Europe where today over 70% of Europeans live in urban areas, with projections that this will increase to nearly 80% by 2030 [13,14]. A continuous increase in urban population strains the limited resources of a city and affects its resilience to the increasing demands on resources, and urban governance faces ever increasing challenges. Furthermore, sustainable urban development, economic growth and management of natural resources such as energy and water require better planning and collaborative decision making at the local level. In this regard, the innovation in ICT can provide integrated information intelligence for better urban management and governance, sustainable socioeconomic growth and policy development using participatory processes [15].
Smart cities [4] use a variety of ICT solutions to deal with real life urban challenges. Some of these challenges include environmental sustainability, socioeconomic innovation, participatory governance, better public services, planning and collaborative decision-making. In addition to creating a sustainable futuristic smart infrastructure, overcoming these challenges can empower the citizens in terms of having a personal stake in the well-being and betterment of their civic life. Consequently, city administrations can get new information and knowledge that is hidden in large-scale data to provide better urban governance and management by applying these ICT solutions. Such ICT enabled solutions thus enable efficient transport planning, better water management, improved waste management, new energy efficiency strategies, new constructions and structural methods for health of buildings and effective environment and risk management policies for the citizens. Moreover, other important aspects of the urban life such as public security, air quality and pollution, public health, urban sprawl and bio-diversity loss
can also benefit from these ICT solutions. ICT as prime enabler for smart cities transforms application specific data into useful information and knowledge that can
help in city planning and decision-making. From the ICT perspective, the possibility of realisation of smart cities is being enabled by smarter hardware and software
e.g. IoTs i.e. RFIDs, smart phones, sensor nets, smart household appliances, and capacity to manage and process large scale data using cloud computing without
compromising data security and citizens privacy [16]. With the passage of time, the volume of data generated from these IoTs is bound to increase exponentially
and classified as Big data [17]. In addition, cities already possess land use, transport, census and environmental monitoring data which is collected from various
local, often not interconnected, sources and used by application specific systems but is rarely used as collective source of information (i.e. system of systems [18])
for urban governance and planning decisions. Many local governments are making such data available for public use as “open data” [19]. Managing such large
amount of data and analysing for various applications e.g. future city models, visualisation, simulations, provision of quality public services and information to
citizens and decision making becomes challenging without developing and applying appropriate tools and techniques.

Unsustainable urbanization causes extinction by 2050


Cribb 17 (Julian Cribb – author, journalist, editor and science communicator. He is
principal of Julian Cribb & Associates who provide specialist consultancy in the
communication of science, agriculture, food, mining, energy and the environment. His
career includes appointments as newspaper editor, scientific editor for The Pic credit: J.
Carl Ganter Australian newspaper, director of national awareness for Australia’s science
agency CSIRO, member of numerous scientific boards and advisory panels, and
president of national professional bodies for agricultural journalism and science
communication. His published work includes over 8000 articles, 3000 media releases
and eight books. He has received 32 awards for journalism. His internationally-acclaimed
book, The Coming Famine explores the question of whether we can feed humanity
through the mid-century peak in numbers and food demand. <KEN> “The Urbanite
(Homo Urbanus).” Surviving the 21st Century. Springer. doi:10.1007/978-3-319-41270-
2_8.)
By the mid-twenty-first century the world’s cities will be home to approaching eight billion
inhabitants and will carpet an area of the planet’s surface the size of China. Several megacities will have 20,
30, and even 40 million people. The largest city on Earth will be Guangzhou-Shenzhen, which already has an estimated 120 million citizens
crowded into in its greater metropolitan area (Vidal 2010 ).

By the 2050s these colossal conurbations will absorb 4.5 trillion tonnes of fresh water for domestic, urban and industrial purposes, and consume around 75 billion tonnes of metals, materials and resources every year. Their very existence will depend on the preservation of a precarious balance between the essential resources they need for survival and growth—and the capacity of the Earth to supply them. Furthermore, they will generate equally phenomenal volumes of waste, reaching an alpine 2.2 billion tonnes by 2025 (World Bank)—an average of six million tonnes a day—and probably doubling again by the 2050s, in line with economic demand for material goods and food. In the words of the Global Footprint Network, "The global effort for sustainability will be won, or lost, in the world's cities" (Global Footprint Network 2015).
As we have seen in the case of food (Chap. 7), these giant cities exist on a razor's edge, at risk of resource crises for which none of them are fully prepared. They are potential targets for weapons of mass destruction (Chap. 4). They are humicribs for emerging pandemic diseases, breeding grounds for crime and hatcheries for unregulated advances in biotechnology, nanoscience, chemistry and artificial intelligence.

AND, Credit Suisse causes widespread regulatory capture


Jablon 13 (Robert Jablon (L.L.B. Harvard Law School) is a partner, and Anjali Patel (J.D.
University of Michigan Law School) and Latif Nurani (J.D. Columbia University School of
Law) are associates, at the law firm of Spiegel & McDiarmid, LLP. <KEN> “Trinko and
Credit Suisse Revisited: The Need for Effective Administrative Agency Review and Shared
Antitrust Responsibility,” Energy Law Journal. Vol. 34. No. 627. https://www.eba-
net.org/assets/1/6/21-627-Jablon.pdf)
The Court seems to view antitrust courts and administrative agencies as performing much the same function. In fact, a major component of the
Credit Suisse implied immunity test is that agencies have the authority to regulate and actively do so.70 Therefore, the Credit Suisse
Court appears comfortable leaving substantial antitrust enforcement in regulated industries
to administrative agencies.71 To the extent that it exists, this comfort would be misplaced not only because courts are
required to apply the law, but also because courts and administrative agencies often act far differently both in procedural and substantive
decision-making. Deference would often mean antitrust abandonment.
A. Agencies May Have No Power to Order Important Antitrust Remedies

Agencies are not authorized to enforce the antitrust laws but are required to consider applicable antitrust policies.72 Although such policies must be fully taken into account, they must also be harmonized with agencies’ substantive statutes.73 Therefore, leaving antitrust enforcement in
regulated industries largely to agencies precludes strict antitrust enforcement.74

One very important difference between court enforcement of antitrust laws and agency enforcement of regulatory statutes is in allowed remedies. Although some agencies, such as the FERC and the Commodity Futures Trading Commission (CFTC), may levy very substantial fines for
specific types of regulated conduct, e.g. market manipulation,75 agencies generally face procedural and substantive limitations in the relief that they may order. In antitrust enforcement the availability of judicial remedies continues to be especially important where the prospect of treble
damages and potential liability for opposing party legal fees create important deterrents to illegal conduct.76 Although it may be robust, where applicable, the FERC and the CFTC penalty authority exists only over a small spectrum of potential industry antitrust violative conduct and, as to
the FERC, in less robust form for some other violations.77

The administrative record preceding the Trinko decision is an excellent example of how administrative agencies often have inadequate tools to deter anticompetitive conduct. In December, 1999, the FCC granted Verizon’s (then Bell Atlantic’s) application to enter the long distance market
in New York State based upon its “conclusion that Bell Atlantic ha[d] taken the statutorily required steps to open its local exchange and exchange access markets to competition.”78 But within several months Verizon was admitting that it was breaching its open access commitments for
which it paid a “voluntary contribution” of $3 million to the FCC, and $10 million to competitive local exchange carriers.79

The Trinko Court portrayed the FCC action against Verizon as showing that the regulatory structure was sufficient to remedy and deter anticompetitive conduct.80 But then FCC Chairman Powell drew a markedly different conclusion in a subsequent communication to Congress.81 He
explained that “given the ‘vast resources’ of many of” the nation’s incumbent local exchange carriers, the Commission’s maximum fine “is insufficient to punish and to deter violations in many instances.”82 He advised increasing the forfeiture limits “to enhance the deterrent effect of
Commission fines” and also to give the Commission the authority to award punitive damages, attorneys’ fees, and costs in formal complaint cases filed under section 208 of the Communications Act.83

Congress has not provided new remedies under the Communications Act.84 Of course, the kind of remedies that Chairman Powell was requesting is judicially available under the antitrust laws. But the result of Trinko was to prevent courts from using their powers to provide appropriate
deterrents.85 Such a model should not be applied elsewhere.86

Moreover, as we discuss in Section IV, which considers the advantages of complementary agency and court jurisdiction, defendant companies often trumpet the availability of agency relief when appearing in court, but when appearing before agencies, those opposing antitrust relief have
argued that the underlying agency statutes do not permit the relief sought and that agency remedial authority is otherwise limited.87

B. Agencies Often Make Major Decisions Based upon Less Stringent Due Process Considerations Than Courts

At the time of an early series of decisions allocating tasks between courts and agencies, the Administrative Procedure Act (APA)88 divided administrative tasks into two basic groupings: rulemakings and adjudications. 89 Under those categories it was generally understood that most
administrative agencies would have a decisional process for most, if not all, issues involving antitrust policy questions that tracked reasonably closely the decisional process that could be expected in courts.90 This process could involve reasonable discovery, including depositions, and a
trial or hearing including cross-examination, in which parties’ contentions could be tested in accordance with traditional legal principles.91

In some ways, the Trinko and Credit Suisse decisions seem animated by a view of the administrative process that conforms to the adjudicatory ideal that many of the 1970s reformers were trying to implement. Critiques of agency capture problems in the 1970s prompted movement to
increase the effectiveness of regulatory bodies and to make them function more in accord with legal norms.92 Thus, in a series of important decisions, federal courts required agencies to afford more due process and reasoned decision-making in their decisions.93 And public policy
advocates pushed, sometimes successfully, for more vigorous open hearing rules, stricter ex parte contact regulations, abolishment of “secret law” advisory opinions, and changes designed to increase citizen and public interest participation in hearings through proxy advocates and
intervener funding programs.94 But today “agencies do not act like courts over a broad range of regulated agency decision-making and enforcement.” 95 Increasingly, they

inform themselves and make decisions not through administrative hearings [advised by hearing records] but through more informal rulemakings, policy statements, and various forms of conferences, meetings and communications with interested parties of all
stripes, including those who are regulated, and with those who [benefit] or [are] hurt [from] regulation or nonregulation.96

And even proceedings that are treated as adjudicatory may be decided on a “paper” hearing basis without the allowance of discovery and the holding of traditional hearings.97

Agency policy is negotiated in both subtle and non-subtle ways. Traditional cases are increasingly avoided or relegated to the background. This process may be applauded, decried, or both, but the facts are that regulatory bodies are increasingly avoiding adjudicatory procedures, and the
general public, including their representatives, will generally have a lesser voice than they would have in more neutral courts. Some agency determinations may, of course, be best determined through use of less formal procedures. However, where agencies decide factual, contested
issues without parties having access to discovery and therefore to basic facts and without the defining of issues that depositions and cross-examination can bring, rigor is lost.

Notwithstanding the Court’s assumption that agencies can adequately police antitrust principles, agency procedural practices raise serious questions of agencies’ abilities to uncover antitrust violations. Courts are required to follow civil and criminal rules of procedure allowing parties to
conduct discovery, participate in hearings, and cross-examine witnesses. By contrast, evidentiary proceedings are no longer the norm in agency proceedings. The Administrative Procedure Act does not require live hearings, and courts have routinely granted deference to agency decisions
made without the opportunity for discovery or other “trial-like” procedures.98

Like courts, agencies may hear from experts. However, no less than in courts, there is often a “cacophony of experts” before agencies on either side of any substantive issue.99 The perceived superior expertise of industry-specific regulators cannot appropriately be used as subterfuge for
an abdication of judicial responsibility for antitrust enforcement.100 Moreover, the expertise of courts may be understated. A reading of the District Court decision in United States v. Microsoft Corp., 101 dealing with the highly technical computer software industry, demonstrates the
capability of courts to deal with complex issues through focused attention, expert and fact witnesses, evidence, and other vehicles. Court decision after court decision shows that courts can handle complex economic matters. In any event, it would be difficult to conclude that regulators’
greater focus on particular industry problems have led to wiser competition policy or that the policies reached have, in fact, flowed from true expert knowledge. Courts have additional tools available, including agency referrals under primary jurisdiction or similar doctrines and inviting
agency and amici briefs.102

The FERC is one of the agencies where this tendency to avoid court-like processes is significant. The FERC has embarked on a restructuring of the electric power industry, departing from a traditional cost of service regulatory model under which electric companies sell wholesale power
based upon their costs.103 In terms of its magnitude of change, this restructuring effort is comparable to the FPC’s natural gas producer rate regulation efforts. But, unlike in natural gas cases, there have been few electricity restructuring hearings to determine significant matters. The
major electric restructuring orders came about through rulemakings and paper hearings without probing discovery.104 Regulation has been implemented mainly by agency suasion and negotiations. Virtually all significant orders affecting regional transmission organizations resulted from
filings by those organizations and a comment and reply process.105

Similarly, other market changes have taken place through FERC-sponsored “conferences” attended by Commissioners, key staff, and agency-selected representatives from various industry groups. Examples of conference topics include new transmission construction and incentives to
promote such construction, market power and market rates, as well as a range of market power-related topics.106 Such conferences are common and include all segments of the industry, including those represented by the authors of this article.107

Although written comments are allowed and encouraged, the presentations and submissions are not under oath or subject to cross-examination. Thus, the FERC is not informing itself or making decisions based upon traditional due process trial type hearings, as was formerly more
frequent.

A recent paper describes how the FERC’s electric merger policy favors simple rules based upon market concentration screens, rather than a more openended inquiry to determine whether a merger is likely to result in anticompetitive effects.108 The FERC’s streamlined approach “reduces
the cost to the agency and others of assessing competitive effects . . . but increases the likelihood of incorrectly assessing a merger’s competitive effects.”109 In contrast, the paper argues, the Department of Justice engages “in a relatively complex inquiry into competitive effects that
considers many factors” depending upon the theory of harm and the particular characteristics of the industry and firms involved.110 If the FERC were to adopt a more rigorous policy to ensure mergers are not anticompetitive, it would no doubt have to allow for additional process and
discovery, but would have greater factual analysis and, likely, more accurate determinations of mergers’ competitive effects.

In addition to the conferences discussed above, representatives of virtually all interested parties may, and are often encouraged to, meet freely with the FERC commissioners and key staff to discuss issues that they deem important. Meetings with commissioners are permitted before
cases are filed even though the commissioners will have to rule on the filings and opponents’ objections, once they are made. Needless to say, opponents have less practical access to express their concerns. As a practical matter, they may not be alert to proposed actions before they are
filed for agency review. Additionally, the FERC broadly permits off-the-record-communications for notice and comment rulemakings; many investigations, technical, policy and other conferences; and many compliance matters.111 To take one example, discussed further below, before
Exelon and PSEG filed with the FERC for approval of their mega-merger, all four sitting FERC commissioners met privately with Exelon-PSEG executives to discuss parameters of the companies’ proposed merger application.112 A FERC spokeswoman would not comment directly on the
accusation of improper commissioner contacts because the case was pending but did say the agency has “a long-standing practice of being available to market participants and members of the general public for pre-filing meetings.”113 Although courts like agencies encourage alternative
dispute resolution, and judges nearly always encourage settlements, in adjudication the triers of fact are not expected to have participated in extensive ex parte discussions with private parties on factual matters that they will decide.114

Conferences may be useful in providing information for the FERC commissioners and key staff and allowing regulators to communicate agency needs to regulated entities. They may even be arguably necessary under an industry structure where deregulated sales of power amount to
billions of dollars per year. The FERC may be unable to regulate individual transactions directly and may have to rely on general rules and focus on securing competitive market structures.115 Regardless, agencies decide major antitrust issues with little process. Conferences and informal
procedures do not ensure antitrust enforcement. Even in deciding major merger cases, decisions normally result based upon pleadings and accompanying affidavits or testimony without any party discovery, interrogatories, depositions, requests for admissions or the like where real
motivation and the likely effects of merger approvals can be determined.116

This failure to follow rudimentary traditional procedures has not gone unnoticed by the appellate courts. In Electric Power Supply Ass’n v. FERC, 117 the D.C. Circuit overturned the FERC’s contention that market monitors could communicate directly with the Commission on contested
case matters.118

We use the FERC as an example of regulation by negotiation, but this phenomenon of agencies departing from hearings even where facts necessary to be decided are at issue is not limited to the FERC. A 1999 report from the General Counsel of the Nuclear Regulatory Commission (NRC)
on re-examining the Commission’s hearing process states:

We have also identified the trend in statute law and in much administrative practice to move away from formalized adjudication, with its winner-take-all courtroom model, toward alternative procedures, aimed at finding solutions that both satisfy legal requirements
and accommodate a variety of interests. In the last several years, moreover, the Chairman and other Commissioners have created a number of opportunities outside the agency’s [s]ection 189 hearing processes to conduct informal meetings with members of the
public and other stakeholders, both in Washington, and in communities close to nuclear power plants that were experiencing performance problems.119

In some cases, the courts have shown disquietude with this trend away from adjudicatory models of rulemaking and law enforcement. For example, starting in the early 1990’s, the D.C. Circuit became increasingly frustrated with the Environmental Protection Agency’s (EPA) reliance on
informal policy development, and chastised the EPA for failing to use notice and comment rulemaking procedures.120
But in other cases courts have blessed the avoidance of formal adjudicatory processes.121 It may be that to some extent the use of more limited procedures within the agencies and in courts are better suited for regulation of the modern economy (although one may have skepticism).
Mistrust of the traditional processes runs high. Trials can be costly, inefficient, and slow. However, the issue is whether agencies can substitute for courts in meaningfully enforcing antitrust principles. Where there is no assurance that agencies will give focused examinations of factual
situations in light of antitrust principles, free of undue industry influence, agencies can only substitute for courts at the risk of abandoning antitrust enforcement.122

C. The Political Nature of Agencies Compromise Their Role As Impartial Adjudicators

Further reasons for not unduly favoring agency over court enforcement of antitrust law are institutional. Courts and agencies are very different
decisionmaking bodies with different roles, strengths and weaknesses.

One of the pillars of the rule of law is expressed by Justice John Marshall’s statement that we live under “a government of laws, and not of
men.”123 Legislatures are primarily responsible for generally applicable laws that result from a balancing of interests within the political
process. Ideally, courts apply law in individual cases neutrally through a reasoning process that is at least theoretically divorced from political
influences.

The institutional structure and processes of courts, including lifetime appointments, strict ex parte
communications rules, and requirements that decisions be justified by factual records and
elaborations of neutral legal norms, are all designed to encourage reasoned and impartial judicial decision-

making.124 Agencies are structured very differently, perhaps due to the fact that they often perform both policy-making and adjudicatory
functions.

At the top tier of many regulatory agencies is a bipartisan commission of political appointees, who serve for set, limited terms. Agency heads and commissioners sometimes come from industries that they regulate or aligned industries. They often seek employment in those industries after their terms and in the many legal, financial, and lobbying firms that represent them.125 Some agencies are headed by a single political appointee; all appointees must obtain Senate confirmation and lack lifetime tenure to separate them from further political influence. Agency budgets and expenditures of money go through the executive review process and must be congressionally approved. Agencies are subject to congressional oversight and the possibility of new statutory enactments. In short, their actions are deeply affected by the political process.

The political structure of regulatory commissions makes them more susceptible than courts to the influence of their regulated industries as well as other interested parties. Thus, even at the genesis of many regulatory commissions, prominent commentators were predicting that "the older such a commission gets to be, the more inclined it will be found to take the business and railroad view of things."126 In 1960, James Landis, the late dean of Harvard Law School and a prominent advocate of administrative authority,127 reported to President-Elect Kennedy on the tendency of agency tribunals to reflect industry positions because of the "daily machine-gun-like impact" of industry lobbyists and lawyers in formal and informal agency processes.128 Others have attributed regulatory "capture" to the tendency of agencies to consider themselves responsible for the health of the industries that they regulate, leading them to sometimes favor industry demands over consumer concerns and interests.129
2AC vs Iowa BC
Case
2AC – Aff Solves
Big bank’s size causes them to run the government.
Ramirez 13 (Steven A. Ramirez is Professor of Law and Associate Dean at Loyola
University of Chicago, where he also serves as Director of the Business Law Center. This
is the second book he has authored relating to the subprime mortgage crisis and its
meaning in terms of the rule of law. He previously served as an Enforcement Attorney
for the Securities and Exchange Commission and a Senior Attorney for the Federal
Deposit Insurance Corporation. <KEN> “A Revolution in Economics (But Now in Law),” in
“Lawless Capitalism: the Subprime Crisis and the Case for an Economic Rule of Law,”
New York University Press. ISBN 978-0-8147-7650-6)
Inequality also threatens the rule of law. In The Injustice of Inequality, Professors Glaeser, Scheinkman, and Shleifer "argue that inequality is detrimental to growth, because it enables the rich to subvert the political, regulatory and legal institutions of society to their own benefit." Inequality empowers the established to impede the efforts of innovators with initially inferior resources. The excess wealth of the established determines the extent to which they may subvert law. Ultimately the "initially well situated . . . pursue socially harmful acts recognizing that the legal, political and regulatory systems will not hold them accountable." Glaeser, Scheinkman, and Shleifer test their argument against historic evidence from the Gilded Age in the U.S. and the transition economies of eastern Europe during the 1990s. They find abundant historical evidence that in both instances powerful elites subverted law and harmed macroeconomic performance. The authors next test the proposition through a constructed index of the rule of law to determine if nations with weak legal systems suffer impaired growth. They find that inequality is "bad for growth" in countries with a weak rule of law.108 Thus, the law must account for the subversion of legal regimes arising from high levels of inequality.

In 2004, on the occasion of the fiftieth anniversary of the Supreme Court’s landmark decision in Brown v. Board of Education that American apartheid must end (with “all deliberate speed”),109 I surveyed evidence related to inequality and growth and concluded that inequality arising
from the oppression of minorities continues to harm economic growth through the destruction of human capital. I argued in favor of mitigating racial inequality within the U.S. through enhanced funding for human capital formation. Because racial hierarchies draw zero support from
science, their existence necessarily evidences economic oppression.110 The modern U.S. is hardly exempt from the threat that economic inequality may reflect the operation of illegitimate economic hierarchies that lead to impaired human capital formation.

Far from finding any kind of immunity, economists recently demonstrated that in the past, high inequality compromised human capital formation in the U.S.111 Other researchers find high excess returns (of 11 to 15 percent) to state educational outlays, particularly in high-inequality
locales.112 The U.S. therefore appears prone to the same dynamic that economists identify in other nations: Governing elites will not fund appropriate human capital formation for the children of others under conditions of high inequality.113 Historically, land inequality operated to
impede human capital formation in the U.S. and abroad while land reforms and egalitarian land ownership gave rise to enhanced educational outlays.114

Mancur Olson articulated The Theory of Collective Action to explain the mechanism underlying elite subversion of law. Large, diffused groups face the temptation to free-ride, based upon the assumption that others will press government to vindicate their interests. Small groups with concentrated interests and resources do not fall prey to the same temptation and may coordinate their efforts without freeriding problems.115 Naturally, high inequality, as implied in more resources in fewer hands, means the dilution of their interests and leads to smaller groups with more concentrated interests. Olson's theory has been extended to explain the behavior of legislators in seeking out issues that attract the interests of concentrated groups so that lawmakers may exploit their lower costs of organization and attain higher levels of campaign contributions, or other largesse, such as future employment. Although this complex negotiation between lawmakers and supplicants for legal indulgences can be "well hidden," law must evolve in view of these powerful theoretical insights, backed with strong empirical evidence.116

This theory of collective action dovetails with evidence showing that inequality within the U.S. reached a historic peak immediately prior to the subprime debacle in 2007. Economic inequality operated to concentrate more income than ever before in a very small number of hands. The
accompanying graph shows the concentration of income held by the top 0.01 percent (1 in 10,000) of the population. The amount of income controlled by this small number of Americans is at an historic high, a high that has not been rivaled since just before the Great Depression.

The economists Thomas Piketty and Emmanuel Saez assiduously combed through U.S. tax return data from 1913 to 1998 to focus on the top share of income.117 Recently, Saez updated his and Piketty’s landmark work and found that in 2007 the top 0.01 percent of the income distribution in
the U.S. (consisting of 14,988 families making at least $11.5 million) controlled more than 6 percent of the nation’s output, more than in the prior peak year of 1928. Concentration of income in the U.S. peaked in 1928 (prior to the Great Depression) before being eclipsed in 2005 and
reaching an all-time peak in 2007 (prior to another historic financial catastrophe). According to Saez, this increased concentration of income arises from “an explosion of top wages and salaries” particularly among top corporate executives.118 These facts mean that more of our nation’s
resources are more concentrated in fewer hands than ever before.

Such a high level of income concentration encourages “investment” in rent-seeking and subversion of law. More concentrated wealth reduces collective action costs. It also allows concentrated elites access to more resources. At some point, the increase in access to resources leads to increased access to political leaders, with all the consequent implications regarding cognitive and cultural capture. Ultimately, economic and political elites homogenize as revolving doors in government and business lead to a high velocity of exchange between government and business leaders. As the real economy atrophies (when elites attend more to the indulgences of cronies than to facilitating growth), investment opportunities dwindle and rent-seeking opportunities from governmental subsidies and legal indulgences attract further investment. In the U.S., the damage to the rule of law is evident in its current ranking of fiftieth in the world in trust in politicians and the ability of the government to deal with the private sector at arm’s length.119
Professor Simon Johnson and James Kwak precisely document these mechanisms in the context of the subversion of financial regulation in the run-up to the subprime fiasco. First, the concentration of economic resources of the six largest commercial and investment banks within the financial sector surged from under 20 percent of GDP in 1995 to over 60 percent in 2009. Second, from 1990 to 2006 campaign contributions soared from $61 million to $260 million, while lobbying expenditures reached $3.4 billion. Third, the leaders of Wall Street firms and the policymakers in Washington, D.C., shuttled seamlessly between the two power centers until the mindset of the two melded into the same antiregulation, pro–financial sector mantra. Finally, the compensation paid in the financial sector soared. From 1980 through 2007 the financial sector successfully freed itself from a wide array of regulations and consolidated to the point where a handful of mega-banks exerted inordinate control over the economy.120 Chapter 3 reviews the parade of regulatory indulgences in the financial sector, but this pattern of elite subversion of regulation pervades this book and transcends the financial sector.

Control of corporate wealth—within the financial sector or otherwise—unlocks concentrated power. CEOs now dominate the public firm in the U.S.121 CEOs use the prodigious wealth within their corporations to sway law and regulation in a direction that serves CEO interests. In recent years, senior managers have amassed massive annual compensation and used their political muscle to restructure the American economy. Now, they have offloaded massive risks onto the taxpayer, saddled the American economy with unsustainable debt, relieved themselves of trillions in tax liabilities, and shredded the middle class. Indeed, corporate and financial elites now constitute about half of the top 0.01 percent of the income distribution, and virtually all the growth in the American economy over the past few decades fueled huge salary gains for this group even as most Americans endured extended economic stagnation.122 American governing elites, the majority of whom are financial and corporate senior managers, used their political power to rig markets and entrench themselves at the cost of general economic growth, as detailed in coming chapters. Thus, the U.S. faced a perfect storm of concentrated wealth on the eve of the crisis: gaping income inequality, a cancerous growth in finance concentrated among a handful of firms, and the consolidation of power at the apex of the American public firm, in the hands of CEOs.

That SOLVES dedev --- blocks bank influence, which is what’s preventing movements
from working.
Pope 20 (Chris Pope, assistant Professor at Kyoto Women's University, specializing in
East Asian politics. <KEN> “Constructing a New World Order: The Case for a Post-Crisis
International Settlement?” Tokyo and Kyoto: Science Council of Japan and Aoyama
Gakuin University. Pp.120-137. March 2020. DOI: 10.13140/RG.2.2.18095.28328)
The rise of finance is constraining global capacity to respond to climate change and the socioecological crisis. Indeed, the argument might be extended further to capitalism itself, as a system that demands the relentless
accumulation of capital to function (Wallerstein 2004; Harvey 2005). Therefore, so long as the conditions of social production are based on the
physical properties of the Biosphere, economic growth will impact on the earth’s capacity to regenerate its resources which, in the
Anthropocene, inevitably reshapes the Earth’s subsystems in ways that are deleterious to the sustainability of
civilization and potentially even life on Earth (Rockström et al. 2009; Barnosky et al. 2012; Guignard et al. 2017). While there is
optimism over a ‘decoupling’ of production from the physical properties of the planet, it is surely obvious to most people that this is a facile
argument that minimizes the importance of recalibrating the social metabolism that exists between human production and the biosphere upon
which it unavoidably relies (Fischer-Kowalski 1998; Foster 2000).
Nonetheless, financialization and the centrality of the US dollar itself has further impacted on humanity’s political capacity to respond. To start, governmental reliance on finance markets for short-term growth has allowed the interests of transnational capital to override the preferences
of domestic citizens everywhere (Blyth 2016: 175; Crouch 2004; Stockhammer 2012). For instance, multilateral agreements on trade, such as the Trans-Pacific Partnership, are equipped with frameworks by which transnational corporations can sue states for democratically-mandated
policies in international courts of arbitration if new policies prevent businesses from attaining profit under the conditions of existing trade agreements. This, in effect, makes the enactment of policies designed to prevent socioecological collapse much more unlikely given the obvious link
between production and the exploitation of natural resources (Mathews 22 October 2014)

Second, another reason is that the politics of finance has led to the destruction of collective bargaining capacities among the workforce. Supply-side economics and monetarism at the heart of the international political economy relies on keeping prices and wages low. For self-explanatory
reasons, perennially low wages are not something to which a given workforce is likely to subscribe and thus neoliberal policy as well as US domestic and foreign policy (along with other advanced nations) has been to reduce the bargaining capacities of workers whilst shrinking the earning
power of the state as well as its capacity to meaningfully intervene in markets. For many developing nations, economic models have been premised upon maintaining a low currency valuation to the US dollar and increasing productivity over exports by eradicating any existing social or
work-place benefits and protections for workers whilst relying on unfree labor or wage slavery to bring down the price of its exports in foreign markets (LeBaron 2018). A diminished political capacity for workers to collectively demand changes through demonstration, protest, and
bargaining, not only to improve their working conditions but to benefit their communities, is another cause of the inadequate levels of pressure on politicians to contravene the interests of transnational capital and to assure them that they can survive politically the consequences of doing
so.

Third, financialization and deregulation has allowed large corporations to trans-nationalize and in doing so exacerbated issues of tax evasion, money laundering and offshore finance. The result is that the tax revenue of the states has declined which has compelled states to make sweeping
cuts to social welfare and reduce labor standards. While this exacerbates the issue of the decline in collective bargaining power given that employees, with less social support and employment security, are less likely to protest, and has made national governments increasingly reliant on
financial institutions to attain economic growth (Stockhammer 2012), it has also impinged on the state’s ability to provide adequate levels of funding into research and design for renewable energies and other technologies that would help to mitigate the socioecological crisis.
Furthermore, the GFC, indeed a consequence of reckless lending and soaring levels of private debt, has left private companies with too many liabilities for there to be consent among stakeholders within the extant corporate structure, on the micro-level, for large-scale long-term
investments to be made for the sake of the environment (Koo 2019). At the same time, Quantitative Easing has both done little to alleviate these burdens among the private sector, and has caused government debts to skyrocket which has pressured politicians to implement entirely
unrealistic austerity measures which has exacerbated the crisis (Varoufakis 2011; 2016; Thompson 2018).

Fourth, the power of financial institutions and rising levels of inequality as a result of deregulation and financialization has empowered corporations to influence the political process. In the US, for example, a quantitative analysis of 1,779 policy issues found that economic elites and organized groups representing business interests have a significant degree of impact over US policy while average citizens do not (Gilens and Page 2014). Corporate influence over the political process has impacted society’s ability to respond to climate change and the socioecological crisis. For instance, climate change denialism in the US has been an institutional effort among “a large number of organizations, including conservative think tanks, advocacy groups, trade associations and conservative foundations, with strong links to sympathetic media outlets and conservative politicians” in which much funding is untraceable dark money (Brulle 2014). Similarly, powerful interests among an economic and political elite have been able to block pathways towards reform by placing pressure on environmental journalists through threats of and actual acts of violence, including murder (Garside and Watts 17 June 2019).
2AC – Sustainability
Growth is sustainable
Brook, et al, 15 (Barry, professor of environmental sustainability at the University of
Tasmania, John Asafu-Adjaye, University of Queensland, Linus Blomqvist, Breakthrough
Institute, Stewart Brand, Long Now Foundation, Ruth DeFries, Columbia University, Erle
Ellis, University of Maryland, Baltimore County, Christopher Foreman, University of
Maryland School of Public Policy, David Keith, Harvard University School of Engineering
and Applied Sciences, Martin Lewis, Stanford University, Mark Lynas, Cornell University,
Ted Nordhaus, Breakthrough Institute, Roger Pielke, Jr., University of Colorado, Boulder,
Rachel Pritzker, Pritzker Innovation Fund, Joyashree Roy, Jadavpur University, Mark
Sagoff, George Mason University, Michael Shellenberger, Breakthrough Institute, Robert
Stone, Filmmaker, and Peter Teague, Breakthrough Institute, “AN ECOMODERNIST
MANIFESTO,” http://www.ecomodernism.org/manifesto/)
Intensifying many human activities — particularly farming, energy extraction, forestry, and settlement — so that they use less
land and interfere less with the natural world is the key to decoupling human
development from environmental impacts. These socioeconomic and technological
processes are central to economic modernization and environmental protection.
Together they allow people to mitigate climate change, to spare nature, and to
alleviate global poverty. Although we have to date written separately, our views are increasingly discussed as a whole. We call ourselves ecopragmatists and
ecomodernists. We offer this statement to affirm and to clarify our views and to describe our vision for putting humankind’s extraordinary powers in the service of creating a good

Anthropocene. 1. Humanity has flourished over the past two centuries. Average life expectancy has increased from 30 to 70 years, resulting in a large and growing population able to live in many different environments. Humanity has made
extraordinary progress in reducing the incidence and impacts of infectious diseases,
and it has become more resilient to extreme weather and other natural disasters.
Violence in all forms has declined significantly and is probably at the lowest per capita
level ever experienced by the human species, the horrors of the 20th century and present-day terrorism notwithstanding. Globally,
human beings have moved from autocratic government toward liberal democracy
characterized by the rule of law and increased freedom. Personal, economic, and political liberties have
spread worldwide and are today largely accepted as universal values. Modernization liberates women from traditional gender roles, increasing their control of their
fertility. Historically large numbers of humans — both in percentage and in absolute terms —

are free from insecurity, penury, and servitude. At the same time, human flourishing has taken a
serious toll on natural, nonhuman environments and wildlife. Humans use about half of the planet’s ice-free land,
mostly for pasture, crops, and production forestry. Of the land once covered by forests, 20 percent has been converted to human use. Populations of many mammals, amphibians, and birds
have declined by more than 50 percent in the past 40 years alone. More than 100 species from those groups went extinct in the 20th century, and about 785 since 1500. As we write, only four
northern white rhinos are confirmed to exist. Given that humans are completely dependent on the living biosphere, how is it possible that people are doing so much damage to natural

systems without doing more harm to themselves? The role that technology plays in reducing humanity’s dependence on nature explains this paradox. Human technologies, from those that first enabled agriculture to replace hunting and gathering, to those that drive today’s globalized economy, have made humans less reliant

upon the many ecosystems that once provided their only sustenance, even as those
same ecosystems have often been left deeply damaged. Despite frequent assertions starting
in the 1970s of fundamental “limits to growth,” there is still remarkably little evidence that human population
and economic expansion will outstrip the capacity to grow food or procure critical
material resources in the foreseeable future . To the degree to which there are fixed physical boundaries to human
consumption, they are so theoretical as to be functionally irrelevant. The amount of solar radiation that hits the
Earth, for instance, is ultimately finite but represents no meaningful constraint upon human endeavors. Human civilization can flourish for centuries

and millennia on energy delivered from a closed uranium or thorium fuel cycle, or from hydrogen-deuterium fusion. With

proper management, humans are at no risk of lacking sufficient agricultural land for
food. Given plentiful land and unlimited energy, substitutes for other material inputs to
human well-being can easily be found if those inputs become scarce or expensive. There remain, however, serious
long-term environmental threats to human well-being, such as anthropogenic climate change, stratospheric ozone depletion,
and ocean acidification. While these risks are difficult to quantify, the evidence is clear today that they could cause significant

risk of catastrophic impacts on societies and ecosystems. Even gradual, non-catastrophic outcomes associated with these
threats are likely to result in significant human and economic costs as well as rising ecological losses. Much of the world’s population still suffers from more-immediate local environmental
health risks. Indoor and outdoor air pollution continue to bring premature death and illness to millions annually. Water pollution and water-borne illness due to pollution and degradation of

watersheds cause similar suffering. 2. Even as human environmental impacts continue to grow in the aggregate, a range of long-term trends are today driving significant decoupling of human well-being from environmental impacts. Decoupling occurs in both relative and absolute terms. Relative decoupling means that human environmental impacts rise at a slower rate than overall economic growth. Thus, for each unit of economic
output, less environmental impact (e.g., deforestation, defaunation, pollution) results. Overall impacts may still increase, just at a slower rate than would otherwise be the case. Absolute

decoupling occurs when total environmental impacts — impacts in the aggregate — peak and begin to decline, even as the economy continues to grow. Decoupling can be driven by both technological and demographic trends and usually results from a
combination of the two. The growth rate of the human population has already peaked. Today’s
population growth rate is one percent per year, down from its high point of 2.1 percent in the 1970s. Fertility rates in countries containing more than half of the
global population are now below replacement level. Population growth today is primarily driven by longer life spans and lower infant mortality, not by rising fertility rates. Given current

trends, it is very possible that the size of the human population will peak this century and then start to
decline. Trends in population are inextricably linked to other demographic and economic dynamics. For the first time in human history, over half the global population lives in cities.
By 2050, 70 percent are expected to dwell in cities, a number that could rise to 80 percent or more by the century’s end. Cities are characterized by both dense populations and low fertility

rates. Cities occupy just 1 to 3 percent of the Earth’s surface and yet are home to nearly four billion people. As such, cities both drive and symbolize the decoupling of humanity from nature, performing far better than rural economies in providing efficiently for material needs while reducing environmental impacts. The growth of cities
along with the economic and ecological benefits that come with them are inseparable from improvements in agricultural productivity. As agriculture has become more land and labor efficient,
rural populations have left the countryside for the cities. Roughly half the US population worked the land in 1880. Today, less than 2 percent does. As human lives have been liberated from
hard agricultural labor, enormous human resources have been freed up for other endeavors. Cities, as people know them today, could not exist without radical changes in farming. In contrast,

modernization is not possible in a subsistence agrarian economy. These improvements have resulted not only in
lower labor requirements per unit of agricultural output but also in lower land requirements. This is not a new trend: rising harvest yields have for

millennia reduced the amount of land required to feed the average person. The average
per-capita use of land today is vastly lower than it was 5,000 years ago, despite the fact that modern people enjoy a far richer diet.
Thanks to technological improvements in agriculture, during the half-century starting in the mid-1960s, the amount
of land required for growing crops and animal feed for the average person declined by one-half. Agricultural
intensification, along with the move away from the use of wood as fuel, has allowed many parts of the world to
experience net reforestation. About 80 percent of New England is today forested, compared with about 50 percent at the end of the 19th century. Over the
past 20 years, the amount of land dedicated to production forest worldwide declined by 50 million hectares, an area the size of France. The “forest transition” from net deforestation to net

reforestation seems to be as resilient a feature of development as the demographic transition that reduces human birth rates as poverty declines. Human use of many other resources is similarly peaking. The amount of water needed for the average diet has declined
by nearly 25 percent over the past half-century. Nitrogen pollution continues to cause eutrophication and large dead zones in places like the Gulf of Mexico. While the total amount of

nitrogen pollution is rising, the amount used per unit of production has declined significantly in developed nations. Indeed, in
contradiction to the often-expressed fear of infinite growth colliding with a finite planet,
demand for many material goods may be saturating as societies grow wealthier. Meat
consumption, for instance, has peaked in many wealthy nations and has shifted away from beef toward protein sources that are less land intensive. As demand for material goods is met,
developed economies see higher levels of spending directed to materially less-intensive service and knowledge sectors, which account for an increasing share of economic activity. This

dynamic might be even more pronounced in today’s developing economies, which may benefit from being late adopters of resource-efficient technologies. Taken together, these
trends mean that the total human impact on the environment, including land-use change, overexploitation, and
pollution, can peak and decline this century . By understanding and promoting these

emergent processes, humans have the opportunity to re-wild and re-green the Earth —
even as developing countries achieve modern living standards, and material poverty
ends. 3. The processes of decoupling described above challenge the idea that early human societies lived more lightly on the land than do modern societies. Insofar as past societies had
less impact upon the environment, it was because those societies supported vastly smaller populations. In fact, early human populations with much less

advanced technologies had far larger individual land footprints than societies have
today. Consider that a population of no more than one or two million North Americans hunted most of the continent’s large mammals into extinction in the late Pleistocene, while
burning and clearing forests across the continent in the process. Extensive human transformations of the environment continued throughout the Holocene period: as much as three-quarters
of all deforestation globally occurred before the Industrial Revolution. The technologies that humankind’s ancestors used to meet their needs supported much lower living standards with
much higher per-capita impacts on the environment. Absent a massive human die-off, any large-scale attempt at recoupling human societies to nature using these technologies would result in

an unmitigated ecological and human disaster. Ecosystems around the world are threatened today because
people over-rely on them: people who depend on firewood and charcoal for fuel cut down and degrade forests; people who eat bush meat for food hunt
mammal species to local extirpation. Whether it’s a local indigenous community or a foreign corporation that benefits, it is the continued dependence of

humans on natural environments that is the problem for the conservation of nature.
Conversely, modern technologies , by using natural ecosystem flows and services more efficiently, offer a real chance of reducing

the totality of human impacts on the biosphere. To embrace these technologies is to find paths to a good Anthropocene. The
modernization processes that have increasingly liberated humanity from nature are, of course, double-edged, since they
have also degraded the natural environment. Fossil fuels, mechanization and manufacturing, synthetic fertilizers and pesticides,
electrification and modern transportation and communication technologies, have made larger human populations and greater consumption possible in the first place. Had technologies not

improved since the Dark Ages, no doubt the human population would not have grown much either. It is also true that large, increasingly affluent urban populations have placed greater demands upon ecosystems in distant places –– the
extraction of natural resources has been globalized. But those same technologies have also made it possible for

people to secure food, shelter, heat, light, and mobility through means that are vastly
more resource- and land-efficient than at any previous time in human history.
Decoupling human well-being from the destruction of nature requires the conscious acceleration of emergent
decoupling processes. In some cases, the objective is the development of technological substitutes. Reducing deforestation and indoor air pollution requires the
substitution of wood and charcoal with modern energy. In other cases, humanity’s goal should be to use resources more

productively. For example, increasing agricultural yields can reduce the conversion of
forests and grasslands to farms. Humans should seek to liberate the environment from
the economy. Urbanization, agricultural intensification, nuclear power, aquaculture,
and desalination are all processes with a demonstrated potential to reduce human
demands on the environment, allowing more room for non-human species. Suburbanization, low-yield farming, and many forms of renewable energy
production, in contrast, generally require more land and resources and leave less room for nature. These patterns suggest that humans are as likely to spare nature because it is not needed to
meet their needs as they are to spare it for explicit aesthetic and spiritual reasons. The parts of the planet that people have not yet profoundly transformed have mostly been spared because
they have not yet found an economic use for them — mountains, deserts, boreal forests, and other “marginal” lands. Decoupling raises the possibility that societies might achieve peak human
impact without intruding much further on relatively untouched areas. Nature unused is nature spared. 4. Plentiful access to modern energy is an essential prerequisite for human development
and for decoupling development from nature. The availability of inexpensive energy allows poor people around the world to stop using forests for fuel. It allows humans to grow more food on
less land, thanks to energy-heavy inputs such as fertilizer and tractors. Energy allows humans to recycle waste water and desalinate sea water in order to spare rivers and aquifers. It allows
humans to cheaply recycle metal and plastic rather than to mine and refine these minerals. Looking forward, modern energy may allow the capture of carbon from the atmosphere to reduce
the accumulated carbon that drives global warming. However, for at least the past three centuries, rising energy production globally has been matched by rising atmospheric concentrations of
carbon dioxide. Nations have also been slowly decarbonizing — that is, reducing the carbon intensity of their economies — over that same time period. But they have not been doing so at a
rate consistent with keeping cumulative carbon emissions low enough to reliably stay below the international target of less than 2 degrees Centigrade of global warming. Significant climate
mitigation, therefore, will require that humans rapidly accelerate existing processes of decarbonization. There remains much confusion, however, as to how this might be accomplished. In
developing countries, rising energy consumption is tightly correlated with rising incomes and improving living standards. Although the use of many other material resource inputs such as
nitrogen, timber, and land are beginning to peak, the centrality of energy in human development and its many uses as a substitute for material and human resources suggest that energy
consumption will continue to rise through much if not all of the 21st century. For that reason, any conflict between climate mitigation and the continuing development process through which
billions of people around the world are achieving modern living standards will continue to be resolved resoundingly in favor of the latter. Climate change and other global ecological challenges
are not the most important immediate concerns for the majority of the world's people. Nor should they be. A new coal-fired power station in Bangladesh may bring air pollution and rising
carbon dioxide emissions but will also save lives. For millions living without light and forced to burn dung to cook their food, electricity and modern fuels, no matter the source, offer a pathway

to a better life, even as they also bring new environmental challenges. Meaningful climate mitigation is fundamentally a technological challenge. By this we mean that even dramatic limits to per capita global consumption would be
insufficient to achieve significant climate mitigation. Absent profound technological
change there is no credible path to meaningful climate mitigation . While advocates differ in the
particular mix of technologies they favor, we are aware of no quantified climate mitigation scenario in which

technological change is not responsible for the vast majority of emissions cuts. The specific
technological paths that people might take toward climate mitigation remain deeply contested. Theoretical scenarios for climate mitigation typically reflect their creators’ technological
preferences and analytical assumptions while all too often failing to account for the cost, rate, and scale at which low-carbon energy technologies can be deployed. The history of energy

transitions, however, suggests that there have been consistent patterns associated with the ways that

societies move toward cleaner sources of energy. Substituting higher-quality (i.e., less carbon-
intensive, higher-density) fuels for lower-quality (i.e., more carbon-intensive, lower-density) ones is how virtually all societies

have decarbonized, and points the way toward accelerated decarbonization in the
future. Transitioning to a world powered by zero-carbon energy sources will require energy technologies that are power dense and capable of scaling to many tens of terawatts to
power a growing human economy. Most forms of renewable energy are, unfortunately, incapable of doing so. The scale of land use and other environmental impacts necessary to power the
world on biofuels or many other renewables are such that we doubt they provide a sound pathway to a zero-carbon low-footprint future. High-efficiency solar cells produced from earth-
abundant materials are an exception and have the potential to provide many tens of terawatts on a few percent of the Earth’s surface. Present-day solar technologies will require substantial
innovation to meet this standard and the development of cheap energy storage technologies that are capable of dealing with highly variable energy generation at large scales. Nuclear fission
today represents the only present-day zero-carbon technology with the demonstrated ability to meet most, if not all, of the energy demands of a modern economy. However, a variety of
social, economic, and institutional challenges make deployment of present-day nuclear technologies at scales necessary to achieve significant climate mitigation unlikely. A new generation of
nuclear technologies that are safer and cheaper will likely be necessary for nuclear energy to meet its full potential as a critical climate mitigation technology. In the long run, next-generation

solar, advanced nuclear fission, and nuclear fusion represent the most plausible pathways toward the joint goals of climate
stabilization and radical decoupling of humans from nature. If the history of energy transitions is any guide, however, that transition will take time.

During that transition, other energy technologies can provide important social and
environmental benefits. Hydroelectric dams, for example, may be a cheap source of low-carbon power for poor nations even though their land and water footprint
is relatively large. Fossil fuels with carbon capture and storage can likewise provide substantial environmental benefits over current fossil or biomass energies. The ethical and

pragmatic path toward a just and sustainable global energy economy requires that
human beings transition as rapidly as possible to energy sources that are cheap, clean,
dense, and abundant. Such a path will require sustained public support for the
development and deployment of clean energy technologies , both within nations and between them, though international
collaboration and competition, and within a broader framework for global modernization and development. 5. We write this document out of deep love and emotional connection to the
natural world. By appreciating, exploring, seeking to understand, and cultivating nature, many people get outside themselves. They connect with their deep evolutionary history. Even when

people never experience these wild natures directly, they affirm their existence as important for their psychological and spiritual well-being. Humans will always

materially depend on nature to some degree. Even if a fully synthetic world were
possible, many of us might still choose to continue to live more coupled with nature
than human sustenance and technologies require. What decoupling offers is the
possibility that humanity’s material dependence upon nature might be less destructive.
The case for a more active, conscious, and accelerated decoupling to spare nature draws more on spiritual or aesthetic than on material or utilitarian arguments. Current and future
generations could survive and prosper materially on a planet with much less biodiversity and wild nature. But this is not a world we want nor, if humans embrace decoupling processes, need to
accept. What we are here calling nature, or even wild nature, encompasses landscapes, seascapes, biomes and ecosystems that have, in more cases than not, been regularly altered by human
influences over centuries and millennia. Conservation science, and the concepts of biodiversity, complexity, and indigeneity are useful, but alone cannot determine which landscapes to
preserve, or how. In most cases, there is no single baseline prior to human modification to which nature might be returned. For example, efforts to restore landscapes to more closely resemble
earlier states (“indigeneity”) may involve removing recently arrived species (“invasives”) and thus require a net reduction in local biodiversity. In other circumstances, communities may decide
to sacrifice indigeneity for novelty and biodiversity. Explicit efforts to preserve landscapes for their non-utilitarian value are inevitably anthropogenic choices. For this reason, all conservation
efforts are fundamentally anthropogenic. The setting aside of wild nature is no less a human choice, in service of human preferences, than bulldozing it. Humans will save wild places and
landscapes by convincing our fellow citizens that these places, and the creatures that occupy them, are worth protecting. People may choose to have some services — like water purification
and flood protection — provided for by natural systems, such as forested watersheds, reefs, marshes, and wetlands, even if those natural systems are more expensive than simply building
water treatment plants, seawalls, and levees. There will be no one-size-fits-all solution. Environments will be shaped by different local, historical, and cultural preferences. While we believe
that agricultural intensification for land-sparing is key to protecting wild nature, we recognize that many communities will continue to opt for land-sharing, seeking to conserve wildlife within
agricultural landscapes, for example, rather than allowing it to revert to wild nature in the form of grasslands, scrub, and forests. Where decoupling reduces pressure on landscapes and
ecosystems to meet basic human needs, landowners, communities, and governments still must decide to what aesthetic or economic purpose they wish to dedicate those lands. Accelerated
decoupling alone will not be enough to ensure more wild nature. There must still be a conservation politics and a wilderness movement to demand more wild nature for aesthetic and spiritual
reasons. Along with decoupling humankind’s material needs from nature, establishing an enduring commitment to preserve wilderness, biodiversity, and a mosaic of beautiful landscapes will

require a deeper emotional connection to them. 6. We affirm the need and human capacity for accelerated, active,
and conscious decoupling. Technological progress is not inevitable. Decoupling environmental impacts from
economic outputs is not simply a function of market-driven innovation and efficient response to scarcity. The long arc of human transformation

of natural environments through technologies began well before there existed anything
resembling a market or a price signal. Thanks to rising demand, scarcity, inspiration, and serendipity, humans have remade the world for millennia.
Technological solutions to environmental problems must also be considered within a broader social, economic, and political context. We think it is counterproductive for nations like Germany
and Japan, and states like California, to shutter nuclear power plants, recarbonize their energy sectors, and recouple their economies to fossil fuels and biomass. However, such examples

underscore clearly that technological choices will not be determined by remote international bodies but rather by national and local institutions and cultures. Too often, modernization is conflated, both by its defenders and critics, with capitalism, corporate power, and
laissez-faire economic policies. We reject such reductions. What we refer to when we speak of
modernization is the long-term evolution of social, economic, political, and
technological arrangements in human societies toward vastly improved material well-being, public health, resource productivity,
economic integration, shared infrastructure, and personal freedom. Modernization has liberated ever more people from

lives of poverty and hard agricultural labor, women from chattel status, children and ethnic minorities from oppression, and societies from capricious and arbitrary
governance. Greater resource productivity associated with modern socio-technological

systems has allowed human societies to meet human needs with fewer resource inputs
and less impact on the environment. More-productive economies are wealthier
economies, capable of better meeting human needs while committing more of their economic surplus to non-economic amenities, including better human health, greater human
freedom and opportunity, arts, culture, and the conservation of nature. Modernizing processes are far from complete , even in

advanced developed economies. Material consumption has only just begun to peak in
the wealthiest societies. Decoupling of human welfare from environmental impacts will require a sustained
commitment to technological progress and the continuing evolution of social,
economic, and political institutions alongside those changes. Accelerated technological
progress will require the active, assertive, and aggressive participation of private sector
entrepreneurs, markets, civil society, and the state. While we reject the planning fallacy of the 1950s, we continue
to embrace a strong public role in addressing environmental problems and accelerating
technological innovation, including research to develop better technologies, subsidies,
and other measures to help bring them to market, and regulations to mitigate
environmental hazards. And international collaboration on technological innovation and technology transfer is essential in the areas of agriculture and energy.
2AC – Transition Backwards
They say 2008 proves --- that’s the opposite.
Zizek 12 (Slavoj Zizek – philosopher, a researcher at the Department of Philosophy of
the University of Ljubljana Faculty of Arts and international director of the Birkbeck
Institute for the Humanities of the University of London. <KEN> “Capitalism: How the
Left Lost the Argument,” Foreign Policy. October 2012.
https://foreignpolicy.com/2012/10/08/capitalism/)
One might think that a crisis brought on by rapacious, unregulated capitalism would
have changed a few minds about the fundamental nature of the global economy.

One would be wrong. True, there is no lack of anti-capitalist sentiment in the world today, particularly as a crisis brought
on by the system’s worst excesses continues to ravage the global economy. If anything, we are witnessing an overload of critiques of the
horrors of capitalism: Books, newspaper investigations, and TV reports abound, telling us of companies ruthlessly polluting our environment,
corrupted bankers who continue to get fat bonuses while their banks are bailed out by taxpayer money, and sweatshops where children work
overtime.

Yet no matter how grievous the abuse or how indicative of a larger, more systemic failure, there’s a limit to how far these critiques go. The goal
is invariably to democratize capitalism in the name of fighting excesses and to extend democratic control of the economy through the pressure
of more media scrutiny, parliamentary inquiries, harsher laws, and honest police investigations. What is never questioned is

the bourgeois state of law upon which modern capitalism depends. This remains the sacred cow
that even the most radical critics from the likes of Occupy Wall Street and the World Social Forum dare not touch.

It’s no wonder, then, that the optimistic leftist expectations that the ongoing crisis would be a sobering moment — the awakening from a
dream — turned out to be dangerously shortsighted. The year 2011 was indeed one of dreaming dangerously, of the revival of radical
emancipatory politics all around the world. A year later, every day brings new proof of how fragile and inconsistent the awakening actually was.
The enthusiasm of the Arab Spring is mired in compromises and religious fundamentalism; Occupy is losing momentum to
such an extent that the police cleansing of New York’s Zuccotti Park even seemed like a blessing in disguise. It’s the same story

around the world: Nepal’s Maoists seem outmaneuvered by the reactionary royalist
forces; Venezuela’s "Bolivarian" experiment is regressing further and further into caudillo-run populism;
and even the most hopeful sign, Greece’s anti-austerity movement, has lost energy after the
electoral defeat of the leftist Syriza party.
It now seems that the primary political effect of the economic crisis was not the rise of the radical left, but of racist
populism, more wars, more poverty in the poorest Third World countries, and widening divisions between rich and
poor. For all that crises shatter people out of their complacency and make them question the fundamentals of their lives, the first

spontaneous reaction is not revolution but panic, which leads to a return to basics:
food and shelter. The core premises of the ruling ideology are not put into doubt. They are even more
violently asserted.
Turns warming
Crownshaw 18 (Timothy Crownshaw, PhD, Department of Natural Resource Sciences,
McGill University; Caitlin Morgan, Food Systems Graduate Program, University of
Vermont; Alison Adams, Rubenstein School of the Environment, University of Vermont;
Martin Sers, Faculty of Environmental Studies, York University; Natália Britto dos Santos,
Faculty of Environmental Studies, York University; Alice Damiano, Department of
Natural Resource Sciences, McGill University; Laura Gilbert, Department of Natural
Resource Sciences, McGill University; Gabriel Yahya Haage, Department of Natural
Resource Sciences, McGill University; Daniel Horen Greenford, Department of
Geography, Planning and Environment, Concordia University; “Over the horizon:
Exploring the conditions of a post-growth world”, The Anthropocene Review 1–25,
2018, https://doi.org/10.1177/2053019618820350)
Industrial emissions and the aerosol cooling effect
Near-term impacts to the climate system originating from macroeconomic disruptions remains a relatively unexplored topic, as the climate
change research community typically assumes a continuation of economic growth and stability in their scenarios (for example, IPCC, 2014b, and
UNEP, 2014b). However, industrial emissions will be significantly diminished during a period of
economic contraction following the end of growth. This will bring local environmental benefits in the form of
reduced air pollution but also a partial loss of the aerosol-induced cooling effect.3 The IPCC’s best estimate of the magnitude of

aerosol cooling is approximately half that of the warming from carbon dioxide in the
atmosphere (IPCC, 2013); clearly a significant counterbalance to the warming potential of
GHGs. Contraction and deindustrialization of the global economy will curtail these
cooling emissions, and thus complicate climate change policy and mitigation efforts.
Owing to the short residence time of aerosols in the atmosphere (Textor et al., 2006), an
increase in warming could manifest rapidly following a decline in industrial activity .
Changes in the rate and global distribution of industrial aerosol emissions have already caused significant shifts in localized cooling effects (IPCC,
2013; Kühn et al., 2014). Several studies have highlighted a potential increase in global warming as aerosol emissions are gradually reduced via
pollution control measures, finding that average temperatures will rise approximately an additional
1°C by 2100 as a consequence (Smith and Bond, 2014; Westervelt et al., 2015). While the magnitude is uncertain (Lewis and Curry,
2015; Rosenfeld et al., 2013), this additional warming may occur earlier and at a much faster rate

than expected due to falling emissions from industrial activities resulting from the end
of growth and subsequent economic contraction. This outcome could enhance climate impacts non-linearly, as human and natural
systems would have little time to adapt to a rapid change in the rate of warming (Smith et
al., 2015). As such, a relatively sudden increase in the pace of climate change and associated impacts

followed by a gradual long-term reduction may be a more realistic prospect than current
assumptions of a rising emissions trend in line with economic growth , partially mitigated by
technological innovation and declining emissions intensity of the global economy.
T Must Abandon Consumer Welfare
2AC – T – Must Abandon CWS
We meet ---

1 --- the aff closes the antitrust exemption for banks


Macey & Holdcroft 11 (Jonathan R. Macey is Sam Harris Professor of Corporate Law,
Corporate Finance, and Securities Law, Yale Law School. James P. Holdcroft, Jr. is Chief
Legal Officer, CLS Group. <KEN> “Failure Is an Option: An Ersatz-Antitrust Approach to
Financial Regulation,” The Yale Law Journal. Volume 120, No. 6. April 2011.
https://www.yalelawjournal.org/feature/failure-is-an-option-an-ersatz-antitrust-
approach-to-financial-regulation)
It remains the case, however, that unlike other mergers, bank mergers will be permitted, even if they are anticompetitive, as long as they
promote the public’s interest in stability. The Bank Merger Act exempted existing bank mergers, including

those in pending government suits, from section 1 of the Sherman Act and section 7 of the Clayton Act.81 In the 1966
amendments to the Bank Merger Act, banking agencies were prohibited from approving mergers “whose effect . . . may be substantially to
lessen competition,” or that would result “in restraint of trade.”82 And, as noted above, even when a merger is anticompetitive, regulators may
nonetheless approve it if they find that “the anticompetitive effects of the proposed transaction are clearly outweighed in the public interest by
the probable effect of the transaction in meeting the convenience and needs of the community to be served.”83

2 --- plan makes size a per se offense --- it isn’t one now
Young & Crews 19 (Ryan Young – M.A. in economics from George Mason, Senior Fellow
at the Competitive Enterprise Institute. Clyde Wayne Crews – MBA from William and
Mary, vice president for policy and a senior fellow at the Competitive Enterprise
Institute. <KEN> "The Case against Antitrust Law," Competitive Enterprise Institute. April
2019. https://cei.org/studies/the-case-against-antitrust-law/)
Under a Neo-Brandeisian standard, a company’s size could once again become a per se
offense, even if a breakup would make consumers worse off (in legalese, this means something is automatically illegal, even if it has good
intentions or beneficial consequences).23 Columbia University law professor Tim Wu, in his 2018 book The Curse of Bigness (titled for a famous
Brandeis expression), advocates returning to an anti-bigness standard. His arguments are a good summation of the general Neo-Brandeisian
world- view, and are worth examining further. The first antitrust legislation, Wu writes, “was clearly understood as a reaction to the rising
power of the monopoly trusts, such as the Standard Oil Company.”24 Seeing large business size as a persistent problem, Wu points to Louis
Brandeis, “whose voice is needed for what we confront today.”25

Expand means to increase in size or importance


Cambridge Dictionary ("Meaning of Expand in English,"
https://dictionary.cambridge.org/dictionary/english/expand)
to increase in size, number, or importance, or to make something increase in this way:
The air in the balloon expands when heated.

They expanded their retail operations during the 1980s.

“scope” means the aff can allow antitrust laws to cover new activities
Longman Dictionary ("Scope," https://www.ldoceonline.com/dictionary/scope)
1 [uncountable] the range of things that a subject, activity, book etc deals with scope of the need to define the scope of the investigation
measures to limit the scope of criminals’ activities beyond/outside/within the scope of something A full discussion of that issue is beyond the scope of this book. widen/broaden/extend etc the
scope of something Let us extend the scope of the study to examine more factors. narrow/limit etc the scope of something The court’s ruling narrowed the scope of the affirmative action
program. limited/wider etc in scope His efforts were too limited in scope to have much effect. 2 [uncountable] the opportunity to do or develop something scope for The scope for successful
gardening increases dramatically with a greenhouse. there is considerable/great/little etc scope for something There is considerable scope for further growth in the economy. 3 [singular]
informal a particular set of activities and the people who are involved in them SYN scene the music/cinema/club etc scope COLLOCATIONS ADJECTIVES broad The new book has a broader
scope. limited/narrow The scope of the research was quite limited. VERBS widen/broaden the scope of something The police are widening the scope of their investigation.

extend/ expand the scope of something They may extend the scope of the project.

antitrust law applies to most industries and includes industry-specific action. Prefer on
precision --- their interp can’t explain the existence of antitrust exemptions.
Stucke & Grunes 11 (Maurice Stucke – Associate Professor, University of Tennessee
College of Law. Allen P. Grunes – Partner, Brownstein Hyatt Farber Schreck, LLP. <KEN>
"Why More Antitrust Immunity for Media is Bad Idea," Northwestern University School
of Law. https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?
article=1164&context=nulr)
The federal antitrust laws apply across most industries and to nearly all forms of business organizations. A
number of statutory exemptions exist, however, including immunity for agriculture,10 export

activities,11 insurance,12 labor,13 fishing,14 defense preparedness,15 professional sports,16 small


business joint ventures,17 and local governments.18 Such antitrust immunity departs from Congress’s longstanding commitment to free
markets and open competition.19

anticompetitive practices are any practices that cement market position --- includes
monopolization and mergers, which the aff bans.
SICE 21 (Foreign Trade Information System, “Dictionary of Trade Terms,” 2021,
http://www.sice.oas.org/dictionary/cp_e.asp)
Anticompetitive practices A wide range of business practices in which a firm or group of
firms may engage in order to restrict inter-firm competition to maintain or increase their
relative market position and profits without necessarily providing goods and services at
a lower cost or of higher quality. These practices include price fixing and other cartel
arrangements, abuses of a dominant position or monopolization, mergers that limit
competition and vertical agreements that foreclose markets to new competitors.
T Private Sector
2AC – Private Sector
For example, Amazon
CBI 21 (<KEN> "What Amazon is Doing in Financial Services as Well as Fintech," CBI
Insights. April 2021. https://www.cbinsights.com/research/report/amazon-across-
financial-services-fintech/)
From payments and lending to insurance and checking accounts, Amazon is attacking
financial services from every angle without even applying to be a conventional bank. In
this report, we break down how these efforts impact merchants and consumers. We also dive into various initiatives Amazon is pursuing,
ranging from cashierless payment terminals to health insurance for sellers.

In 2017, Andreessen Horowitz general partner Alex Rampell said that of all the tech giants that could make a major move in financial services,

“Amazon is the most formidable. If Amazon can get you lower-debt payments or give you a bank account, you’ll buy
more stuff on Amazon.”

‘The’ only mandates the AFF must restrict entities with the ‘distinct characteristics’ of
the private sector.
Merriam Webster—(English language dictionary). The. https://www.merriam-
webster.com/dictionary/the. Accessed 9/17/21.
b—used as a function word to indicate that a following noun or noun equivalent is a
unique or a particular member of its class
“Private sector” refers to privately owned entities – can be singular
Clark 21 (Paul Clark, OSC Business Management HL, “Distinction between the private
and public sectors (AO2),” 2021, https://guide.fariaedu.com/business-management-
hl/unit-1-business-organisation-and-environment/1.2-types-of-
organisations/distinction-between-the-private-and-the-public-sectors-ao2)
The private sector refers to organisations owned, controlled and managed by private
individuals for the purpose of making a profit, e.g. Microsoft.
The public sector refers to organisations owned, controlled and managed by the
government to provide essential goods and services for the general public, e.g. a
national health service.
The balance between public and private ownership varies from country to country.
Public ownership today is less common than in the 20th century as private businesses
are considered more efficient. Moving firms from public ownership to private ownership
is called privatisation. Governments may partner with the private sector to operate
aspects of public services, e.g. health provision.
Adv CP
2AC – TBTJ A/O
That certainty is key to reverse perception of unfairness --- counterplan’s complex
process makes that impossible
Foster 15 (Sharon Foster – Associate Professor @ University of Arkansas Law School.
<KEN> “Too Big to Prosecute: Collateral Consequences Systemic institutions and the
Rule of Law,” Review of Banking & Financial Law. Vol. 34.
https://www.bu.edu/rbfl/files/2015/07/Foster.pdf)
C. Certainty
Laws relating to systemic institutions and, in particular the financial services sector, as well as the structure of systemic institutions, have been described as complex, contributing to the difficulty in prosecuting systemic institutions.197 For example, the Dodd-Frank Wall Street Reform and Consumer Protection Act is indicative of the uncertainty problem, consisting of over eight hundred pages of unintelligible language, which did little more than require regulatory agencies to promulgate more regulation.198 While people have access to Dodd-Frank,199 it was also promulgated in such a fashion that most people would not care to access it given its over eight hundred pages, nor understand it if one was ambitious enough to attempt to read it. This is the buried in transparency problem; make the laws voluminous, complex, and costly to administer to ensure the laws will not be understood and to discourage the public from legal relief in the courts.200 The buried in transparency problem is particularly applicable to laws relating to systemic institutions, and the uncertainty problem is further exacerbated by the policy of collateral consequences by increasing prosecutorial discretion.

Prosecutorial discretion, the decision of who to charge and the scope of the criminal charges, has been given broad deference by the United
States Supreme Court.201 This is partly due to separation of power concerns between the executive and judicial branches,202 as well as the
concern that the judiciary is ill-equipped to second-guess prosecutorial decisions.203 There are some constitutional limits to such discretion
under the Equal Protection clause of the United States Constitution, but the courts will apply the rational basis test204 and prosecutorial
discretion is presumed to have a rational basis due to the Court’s concern that subjecting prosecutors to such oversight may “chill law
enforcement” and “undermine prosecutorial effective-ness.” 205 Additionally, wide latitude is generally allowed under the Equal Protection
Clause for economic policy,206 which one could call the policy of collateral consequences based upon the fear of economic harm. Theoretically,
a decision on who to prosecute based upon “an unjustifiable standard such as race, religion, or other arbitrary classification”207 and vindictive
prosecutions,208 would be an abuse of discretion and violate the Constitution. Realistically, a finding of such abuse of discretion is rare.209

Given the limited review of prosecutorial discretion, the policy of collateral consequences as applied to systemic institutions is highly questionable as it has increased prosecutorial discretion, decreased legal certainty and, hence, undermined the rule of law. This policy has contributed to the lack of certainty and trust in the political, legal, and financial systems, because there is little certainty that the laws will be enforced.210 This lack of trust, real or perceived, is critical as recognized by the DOJ itself in its United States Attorneys’ Manual at section 9-28.100211 and recent court decisions suggesting some oversight of DPAs may be warranted.212

Perception of unaccountability collapses the rule of law --- extinction
Ramirez 13 (Steven A. Ramirez is Professor of Law and Associate Dean at Loyola
University of Chicago, where he also serves as Director of the Business Law Center. This
is the second book he has authored relating to the subprime mortgage crisis and its
meaning in terms of the rule of law. He previously served as an Enforcement Attorney
for the Securities and Exchange Commission and a Senior Attorney for the Federal
Deposit Insurance Corporation. <KEN> “The Potential for an Economic Rule of Law,” in
“Lawless Capitalism: the Subprime Crisis and the Case for an Economic Rule of Law,”
New York University Press. ISBN 978-0-8147-7650-6)
Still, a more durable economic rule of law, consistent with both notions of human rights as well as macroeconomic growth, seems within the reach of the law. The law should operate to stem economic despotism, which can harm social well-being as

much as political despotism. This means law must secure economically rational human capital investment, until costs exceed
benefits (if ever); otherwise, too many citizens suffer the despotism of impoverishment.31 Government should secure market development to
further empower all citizens and entrepreneurs; this at least ensures a minimum if not optimal macroeconomic opportunity for all. Further, the
law should secure important legal and regulatory infrastructure from elite subversion; the law can thereby avert economic meltdowns.
Longstanding norms of accountability must remain intact so that no individual may act
above the law. This places real limits on the economically mighty to impose massive
costs on others and society generally through the abuse of economic power.
Such a rule of law would meet the minimal definition proposed by Professor Tamanaha as well as the more robust vision of Li Shuguang. It
would stem abuses of power and inspire confidence that the economy secured fair competition for all. Nevertheless, it also demands more. Not
only must the law prevent abuses of power, but individuals must enjoy protection from morally reprehensible and economically objectionable
outcomes—that is, no individual should suffer stunted education or arbitrarily suffer from economic despotism. Under political despotism,
individuals may suffer arbitrary incarceration, lawless confiscation of property, or even death. Under economic despotism, individuals may
suffer seriously impaired health and the economic death of impoverishment. This more demanding rule of law would yield great economic
benefits in terms of growth and stability even beyond averting crises.32

Economists know that the rule of law plays a crucial role in macroeconomic performance. Yet they admit to
failing to define the rule of law.33 The Nobel laureate Robert Lucas, a pioneer of the study of economic growth, along with others previously
mentioned, holds that no other economic issue rivals growth in terms of its potential impact on human welfare. As previously shown, growth means more resources to meet challenges such as global warming, environmental degradation, starvation, infant mortality, and disease. Because the keys to growth involve human capital
formation, market development, and economically rationalized legal and regulatory infrastructure, the path to
maximum growth fundamentally runs through an optimal rule of law that constrains elite action
to impair growth for their own benefit. History and economics prove that these elements of growth cannot take hold unless secured
by law. The increasing attraction of economists to growth becomes easy to understand once one ponders the stakes for humanity. The value of
growth should drive visions of an economic rule of law because otherwise those with power may use the law to irrationally impose massive
costs on all others. In the end, this vision of an economic rule of law secures the same values as a political rule of law: individual autonomy,
integrity, and freedom. The neglect of these values led to exploitation and economic catastrophe at the same time during the subprime
debacle.34

These failures of law to stem the abuse of power and economic despotism cost trillions. The cost of the resulting financial
crisis of 2007–9 may not be known with certainty for years to come, but it will certainly amount to trillions of dollars in
financial losses and forgone economic growth, as well as massive government expenditures and bailouts. Since the end of 2008,
U.S. debt increased by trillions as the government strove mightily to save the financial sector and revive the economy.35 Globally this failure of
law led to trillions in additional debt across the world, threatening another, more dangerous crisis arising from sovereign debt defaults.36 A
more robust economic rule of law can prevent such crises by limiting the ability of governing elites to subvert legal and regulatory
infrastructure, exploit the disempowered, and neglect market development through the maintenance of a broad middle class.

The U.S. remains far from this vision of an economic rule of law. Abundant evidence proves that
the U.S. economy suffers from a second-rate legal system akin to that of Third World countries. Our
economic system suffers from corporate and financial elites untethered to legal
accountability and able to bend law for their private profit. Remarkably, the crisis of 2008–9 led
to very little accountability under criminal law, despite all of the reckless lending and reckless
sale of mortgage-backed securities, backed with recklessly originated mortgages. For example, two central figures from the crisis completely
escaped criminal accountability despite apparent evidence of securities fraud and other misconduct.

In early 2011, the government apparently terminated its criminal inquiry against Countrywide Financial CEO Angelo Mozilo. Mozilo raked in
$521 million from 2000 through 2008, even as he crashed his firm through predatory loans that he himself recognized as toxic and poisonous.
The SEC leveled against Mozilo civil claims of securities fraud that he settled on the eve of trial for $70 million (with a total personal expenditure
of $22.5 million).37 His leadership of Countrywide’s residential mortgage business later resulted in the largest predatory lending settlement in
history.38 A private civil fraud action led to a settlement payment of $600 million (paid by Bank of America, which acquired Countrywide). Yet
Mozilo retains the hundreds of millions he garnered in compensation as well as his liberty, and criminal authorities seem uninterested in
charging Mozilo with criminal fraud.39 Of course, Mozilo and Countrywide generously funded (and gave favorable mortgages to) leaders of
both political parties.40

The government similarly terminated a criminal inquiry against Joseph Cassano, the head of AIG Financial Products, a subsidiary of AIG. AIG
Financial Products wrote hundreds of billions in credit default swaps on mortgage instruments.41 When the mortgage market collapsed, AIG,
the world’s largest insurance company, also collapsed, resulting in a $182 billion federal bailout.42 Cassano assured investment analysts in
December 2007 that “it is very difficult to see how there can be any losses” arising from the CDS positions.43 One year later, AIG posted the
largest loss in corporate history as a direct result of the CDS.44 Indeed, at the very time of the statement to investment analysts, according to
the Financial Crisis Inquiry Commission, AIG had already posted $2 billion in collateral to Goldman Sachs to cover losses that Goldman projected
on securities guaranteed by AIG; Cassano and AIG failed to disclose this payment to investors.45 AIG paid Cassano $315 million for his gambling,
including a $1 million per month consulting agreement after termination.46 Cassano and AIG also showered both political parties with
contributions.47

These two examples constitute only the most high-profile instances of the failure of legal accountability. Not one senior manager at any of the
firms at the center of the crisis suffered even an indictment under the Obama administration. Instead of bringing criminal actions, Obama
administration officials actually lobbied state authorities to refrain from pursuing wrongdoing.48 Financial elites cognitively and culturally
captured the Obama White House.49 The savings and loan crisis of the 1980s spawned nearly 1,000 felony convictions.50 Given the massive
costs imposed by this crisis, only a failure of law explains such a failure of justice. As one commentator stated in mid-2010, “[T]he American
people should now get the justice we deserve, in the form of prosecuting the people on Wall Street who had major roles in causing the financial
crisis in the first place.”51 Simple accountability disintegrated in this crisis, and restoring the rule of law to finance will require criminal convictions of those responsible for the lawlessness in our banks.52 This historic
pacification of criminal accountability will dilute disincentives for criminal profits for decades to come. But the lack of criminal accountability
forms just one element of the breakdown in law.53
2AC – TBTM A/O
Banks are too big to manage --- causes money laundering to terrorists
Heineman 13 (Ben W. Heineman, Jr. is former GE General Counsel and is a senior fellow
at Harvard University’s schools of law and government. <KEN> "Too Big to Manage: JP
Morgan and the Mega Banks," Harvard Business Review. October 2013.
https://hbr.org/2013/10/too-big-to-manage-jp-morgan-and-the-mega-banks)
Every casual reader of business news knows that JP Morgan Chase & Co. is in a world of legal hurt. But, it is not alone. Many other major financial institutions — Bank of America, Citigroup, HSBC, Barclay’s, Wells Fargo, UBS, etc. — have their share
of big dollar controversies with regulators and private claimants. The immediate news coverage is focused on the
size of financial penalties for the institutions, on the potential civil or criminal culpability of bank officials and on the reputational harm to both
the bank and its senior officers.

But the profound underlying question is whether these major financial institutions could have prevented the welter of business and related
legal/accounting issues in the past and, more importantly, whether they can prevent such problems in the future. Of course, these institutions
are always challenging aspects of regulatory regimes and engaging in disputes about future laws. But, at the end of the day, it is bank leaders
and employees who must take the right business, legal and ethical actions under existing law. Are these huge major financial
institutions not just too big to fail, their leaders “too big to jail” (as some critics charge), but also “too big to manage”?
The range of problems in the financial sector is striking: Bad trades with unforeseen and poorly understood billion dollar losses. Poor controls over risk and valuations. Deceptive communication within the company and to the board. Flawed mortgage origination, loan modification and debt collection practices. Manipulation of energy markets. LIBOR rate rigging. Participation in money laundering that helps drug smugglers or terrorists. Questionable hiring of sons and daughters of Chinese officials. Some of these problems occurred before the 2008 crisis and
some since then. But they are not the regulatory esoterica that critics of Dodd-Frank worry about — if proven, these are core issues of wrong
doing.

JP Morgan is the biggest of them all with $2.3 trillion in assets, $1.1 trillion in deposits and approximately 260,000 employees, followed closely
by Bank of America (also beset by myriad legal problems). CEO Jamie Dimon consistently espouses the virtues of size and diversity. But,
although profitable, JP Morgan has either settled, is settling, is being investigated for, or is in litigation about virtually all the issues noted
immediately above and more, with consequences in the billions of dollars. Moreover, JP Morgan’s legal expenses since 2008 have totaled
more than $18 billion dollars (which does not include the enormous internal resources expended on these matters or the cost of settlements).
Yes, JP Morgan is profitable, but it would be a stronger institution without these issues and all their complexities. The expenditure of time,
alone, has been enormous.

Partly to calm the waters as it tries to navigate through its regulatory perfect storm, JP Morgan now states that it has no more important task
than addressing its current legal issues and preventing them in the future. In his letter to shareholders in this year’s Annual Report, Dimon said
that “we are now making our control agenda priority #1.” And, just days before the recent “London Whale” settlement with four different
regulatory agencies was announced, Dimon wrote an anticipatory letter to employees reiterating the primacy of the control agenda and also
announcing that the bank would add 5,000 employees in control functions (for a total of 15,000) and spend an additional $4 billion (1.5B on
actual outlays and $2.5B in additional reserves). These letters followed a company task force report in January that sharply criticized many
functions at many levels of the company for the “Whale” fiasco.

What, you may ask, is a control agenda? Its purpose is to prevent undue business risk, prevent violations of the
spirit and letter of formal rules (financial and legal) and to prevent transgressions of global standards (ethics) which an
organization imposes upon itself to enhance sound performance or to promote integrity. A control agenda seeks an
irreducible minimum in harmful mistakes, gross negligence and bad intentional acts. It has three broad activities: to prevent, to detect and to
respond. It must be led not by staff, but by business leaders who devote appropriate resources, hire outstanding people and embed the
prevent, detect and respond activities deeply in business operations. These business leaders must, however, be aided by highly competent
legal, financial, risk, compliance, audit and technology staff.
In a complex organization like JP Morgan, with many separate entities and lines of
business, an effective “control agenda” is a huge undertaking. It means “process mapping” the myriad
business functions; assessing business, legal and ethical risks at various points; mitigating that risk through education, checks and balances; and
ensuring that problems are discovered early and handled promptly. It is a vexing, complicated task which requires both outstanding leadership
and management. It also requires a significant investment of time and resources which, while sizeable, amounts to far less than the huge
resource drain which scandal can cause. Ultimately, it means having an open, transparent performance-with-integrity culture that encourages
but bounds business risk and that does not cut legal or ethical corners to make the numbers.

In his letters to shareholders and to employees, Dimon effectively admits that the bank had not properly addressed the broad set of control
issues in the past, but he states clearly the effort required when the control agenda is a first priority:

“Adjusting to the new regulatory environment will require an enormous amount of time, effort and resources… We have reprioritized our major
projects and initiatives, deployed massive new resources and refocused critical management time on this effort. We are ensuring that our
systems, practices, controls, technology and, above all, culture meet the highest standards…Eventually most of these new processes will be
embedded permanently in how we conduct our business.”

One would expect a daily buffeting from regulators (and the media) to concentrate the mind, and so Dimon’s words are hardly a surprise.
Given the enormous management effort already devoted to the myriad issues, he and the board have clearly concluded that they cannot fight
but must settle, if at all possible, and repair credibility and relationships with the regulators. And Dimon’s strong words will no doubt be
followed by detailed, complex actions — voluntarily adopted or required by government consent decrees — in a variety of areas: board
oversight; risk management; internal audit of control processes; internal financial reporting and review; compliance with formal rules;
education and training, etc.

But will JP Morgan’s attempt to correct past, systemic control issues actually reduce mistakes to the irreducible minimum, the best one can
hope for in so vast an institution? And will the intensity and focus of top management remain in 12 or 18 months when the current crisis has
passed? Of even greater importance, will the JP Morgan example send a signal to other institutions regarding the necessary step function
increase in resources, effort and leadership for an effective control agenda – and will that signal be received?

One of the hardest corporate decisions is figuring out what level of resources to invest in prevention: the return is “avoidance” of catastrophic
scandals (which can devour far more resources than the investment) and improved reputation with various constituencies. But these
avoidance and reputational benefits are hard to quantify when set against real outlays for people, systems and processes. And that can stop
needed control reform.

Ultimately, the issue of prevention is not about regulation. To be sure, a core set of regulations in an industry like
finance is necessary and will always exist: to impose important internal processes, to set substantive standards, to require disclosure to the
marketplace and to deter bad conduct both through the rules themselves and through enforcement. But all the rules in the
world don’t matter without strong CEO commitment, backed by the board, a culture of performance with integrity that is real
and permeates the company, and the resources, people and processes to do the right kind of work — based on
business reality not just rules — in all corners of the company.

The perils of JP Morgan, once esteemed as the best manager of risk among the elephants, reflect a bank that, in retrospect and by its own admission, does indeed appear to have been too big to manage. Whether the changes it is trying – or
being forced — to make will more effectively prevent these kinds of business, legal and ethical problems in the future is, of course, uncertain.
Whether its example will have preventative impact on other mega-banks is, of course, also unknown.

Causes nuclear terror AND rogue state prolif --- extinction
Kassenova 20 (Togzhan Kassenova – nonresident fellow in the Nuclear Policy Program at
the Carnegie Endowment. <KEN> "The Exploitation of the Global Financial Systems for
Weapons of Mass Destruction (WMD) Proliferation," Carnegie Endowment for
International Peace. March 2020.
https://carnegieendowment.org/2020/03/04/exploitation-of-global-financial-systems-
for-weapons-of-mass-destruction-wmd-proliferation-pub-81221)
1. WMD PROLIFERATION AS A SECURITY RISK1
Weapons of mass destruction – nuclear, biological, and chemical weapons – present a persistent risk to the U.S. and international security. If a 10-kiloton nuclear bomb, like the one tested by North Korea in 2013, is dropped in Washington, DC, a fireball of almost 500 feet in radius will cover the city.2 The radiation will reach such high levels within a half a mile radius that 50–90 percent of people could die without medical help – some of them within hours.

When it comes to preventing WMD proliferation, we need to be conscious of both state and non-state actors. North Korea continues to procure sensitive goods for its nuclear and missile program in defiance of sanctions. Iran is procuring missile-related goods. Agents working on behalf of Syria have sought chemical goods on the commercial market. Several groups, such as Al Qaeda and ISIS, demonstrated interest in acquiring a WMD capability. We do not have a full picture of who might be interested in obtaining a WMD capability in the future.
2. How Proliferation Networks Operate

Stealing or buying a ready-made weapon is a next to impossible feat. The main path to a WMD is to procure components, material, and technology and then build a weapon. Because most goods usable in a WMD program are dual-use in nature, with indispensable civilian purposes, they are available on the international commercial market.
The international community attempts to minimize the risk that trade in dual-use and military goods entails. The international export control
regimes and national export control systems are designed to regulate trade in sensitive items by requiring traders to obtain licenses.
Additionally, the international and unilateral sanctions regimes target known proliferators.

The goal of proliferators and their agents is to acquire goods that can contribute to WMD programs without being caught. Proliferators and
their networks continue to defy both export controls and sanctions.

Proliferation networks come in all sizes and shapes. They can be small or large, loose, or more organized. Those buying WMD-related goods can
be directly connected to proliferator states, or they can do it purely for profit by inserting themselves into the illicit market to make money.

Proliferators have perfected methods that help them stay under the radar.3 One of the standard techniques they use is to buy goods that are
slightly below the controlled threshold. This means that unless exporting companies are incredibly vigilant,4 they would not apply for an export
license and subject transaction to government scrutiny. However, these slightly inferior goods can still be used for nefarious purposes.

There is another method proliferators use to avoid government oversight and licensing—they pretend they are ordering goods for a domestic
company. In such cases, supplier companies do not have to apply for licenses.

To avoid export controls and sanctions, proliferators lie about the end-use and end-user and hide behind front and shell companies all the time.
They never declare that they are buying components for North Korea’s nuclear program, Iran’s missile program, or Syria’s chemical arsenal. For
example, they can tell a supplying company they need goods for scientific research or other peaceful purposes. In 2006, an Iranian company
ordered sensitive bioresearch equipment from Norway purportedly for a scientific laboratory. On closer look, an attentive Norwegian supplier
determined that the equipment Iranians sought was technically superior to what would be necessary for a civilian lab and that it did not fit the
physical layout of the laboratory.5

Increasingly, shipping companies and vessels are used prominently in sanction evasion. For example, Iran and North Korea falsify documents,
reflag vessels, and switch off automatic identification systems to avoid being discovered in the process of illicit transfers of goods.6

Supplier companies that provide goods to proliferators can be complicit or not complicit. Larger companies have resources to implement strong
internal compliance programs that help them detect any suspicious orders. But some companies, especially smaller ones, do not have resources
to invest in compliance and remain negligent. In some cases, supplier companies or individuals within know precisely what they are doing. They
do it either because of ideology (to support a sanctioned state) or for profit. In one notorious case, a U.S.-based company MKS Instruments sent
pressure transducers to its subsidiary in China after duly applying for a U.S. export license, thinking that the goods would be used in China. The
co-opted employee of the MKS Instruments’ subsidiary ordered transducers from an unsuspecting parent company and pretended they would
be used by Chinese companies but planned all along to ship those goods to Iran.7 Pressure transducers can be used in uranium enrichment
centrifuges, making possible the production of fissile material that can also be used in a nuclear weapon.

Proliferators prefer to buy good quality goods – mostly from the U.S., European, and Asian suppliers.
This means that in most cases, they have to pay for those goods through the formal financial
system, making financial institutions part of their proliferation schemes.
States CP
2AC – PDB – Agenda
Perm do both --- shields link to agenda politics because moderates will perceive plan
as bipartisan after all the states do it.
2AC – Theory – Preemption Fiat
Counterplans cannot fiat both state and federal government action ---
1 --- makes it impossible to get solvency deficits because the literature is about EITHER
federal government action OR state action, but not both.
2 --- rational policymaker --- none would assume fiating both levels --- proves the
counterplan isn’t an opportunity cost to the aff
Our interp solves their offense --- they still get the states counterplan, just can’t fiat
through preemption.
2AC – Credit Suisse
Counterplan can’t overturn Credit Suisse --- can’t create a fight between federal
regulators and federal antitrust authorities, so it can’t demarcate their authority.
That’s Kobayashi and Wright.
2AC – Modeling
Can’t solve modeling --- incoherence, international fora
Kovacic 15 (William E. Kovacic – Global Competition Professor of Law and Policy, George
Washington University Law School and Non-executive Director, United Kingdom
Competition and Markets Authority. J.D., 1978, Columbia University; B.A., 1974,
Princeton University. <KEN> “The United States and Its Future Influence on Global
Competition Policy,” George Mason Law Review. Vol. 22. Issue 5.
http://www.georgemasonlawreview.org/wp-content/uploads/22_5_Kovacic.pdf)
b. Institutional Multiplicity and Limited Policy Integration
The multiplicity of public agency decision-makers in the U.S. antitrust system greatly complicates
efforts to build a coherent and appealing brand. Two federal agencies, the state attorneys
general, and sectoral regulators such as the Federal Communications Commission (“FCC”) and the Federal Energy Regulatory Commission
share the competition policy portfolio.165 Because strong political and historical forces appear to have frozen in place the
existing distribution of authority among these agencies, the integration of policy must come through “contract” rather than through
“ownership.”166 The coherence and attractiveness of the U.S. competition law brand depends on the
willingness of various public agency actors—notably, the DOJ and the FTC—to cooperate to define common
principles, select programs consistent with those principles, and communicate messages that reinforce a shared
vision of U.S. competition law.167
Within this ecology of public agencies, important forms of cooperation take place today on a regular basis, whether in the development of
common enforcement guidelines by DOJ and the FTC, 168 the convening of public consultations by the two federal agencies on new
developments in competition economics and law, 169 the sharing of information by the DOJ and the FCC in reviewing telecommunications
mergers, 170 or in the work of the Multistate Task Force of the National Association of Attorneys General. 171 Without these forms of
cooperation, the disparate elements of the U.S. public enforcement regime would seize up and fail.

Existing levels of interagency cooperation play a vital role in the successful operation of the U.S. antitrust system, but they do not go far
enough. The U.S. regime of multiple actors can be likened to the passenger rail system that links the metropolitan centers from Boston to
Washington, D.C. In one sense, the system works adequately. Each day it moves large numbers of passengers from city to city. Nonetheless, the
system is constrained by an aged infrastructure that limits the speed of trains that transit the corridor. No amount of better, state-of-the-art
rolling stock can fix problems that reside in a merely adequate, but hardly superior, infrastructure. By comparison, the U.S. antitrust policy
framework works adequately, but below levels of effectiveness it could achieve with a better policy infrastructure.

The U.S. has no mechanism for engaging the national authorities and the states in the regular,
intensive discussions that take place between the European Commission and the national competition
authorities of the EU member states in the context of the European Competition Network (“ECN”).172 Nor
does the U.S. have the equivalent of the United Kingdom Competition Network (“UKCN”), which provides a forum for the United Kingdom
Competition and Markets Authority to meet regularly with the nation’s sectoral regulators to discuss the application of shared or related
competencies for competition law.173 The world outside the U.S. has worked harder to enact structural reforms to increase policy coherence,
including the construction of platforms to encourage interagency coordination and the development of common strategies.174 To my view, the
competition systems of jurisdictions such as the EU and the United Kingdom enjoy more coherent brands as a result.
2AC – Preemption
Counterplan is struck down.
Sykes 18 (Jay B. Sykes – Legislative Attorney for Congressional Research Service. <KEN>
“Banking Law: An Overview of Federal Preemption in the Dual Banking System,”
Congressional Research Service. January 2018.
https://crsreports.congress.gov/product/pdf/R/R45081)
Nonetheless, federal law preempts state laws that interfere with the powers of national banks. In
Barnett Bank of Marion County, N.A. v. Nelson, the Supreme Court held that the National Bank Act of 1864 (NBA)
preempts state laws that “significantly interfere” with a “national bank’s exercise of its powers”—a
standard that lower courts have applied to hold a wide variety of state laws
preempted. The Court has also issued two decisions on the preemptive scope of a provision of the NBA limiting “visitorial powers”
over national banks to the OCC, holding that the provision extends to the operating subsidiaries of national banks, but does not bar state
judicial law enforcement actions against national banks. Finally, the OCC has taken a broad view of the preemptive effects of the NBA, a view
that it has reaffirmed after the passage of the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (Dodd-Frank).

Use of antitrust law adds a second layer of preemption.
Greene & MacDonald 93 (Peter E. Greene – JD from the University of Connecticut. Gary
A. MacDonald – MBA & JD from the University of Michigan. <KEN> “The Jurisdiction of
State Attorneys General to Challenge Bank Mergers under the Antitrust Laws,” Banking
Law Journal. https://heinonline.org/HOL/LandingPage?
handle=hein.journals/blj110&div=46&id=&page=)
This article contends that state attorneys general do not have authority (i.e., jurisdiction) to commence an independent antitrust action to challenge bank mergers approved under the Bank Merger Act (BMA) or the Bank Holding Company Act (BHCA) (collectively, the Acts). Discussion begins with an explanation of the special regulatory framework applicable to bank mergers and how the Acts alter the applicability of Section 7 of the Clayton Act. A straightforward statutory interpretation is then employed to show that Congress intended a uniform system to govern the evaluation and approval of bank mergers and thus did not intend to provide a private right of action. Next, the authors explain how policy considerations underlying the BMA and BHCA, already articulated by the Supreme Court in analogous circumstances, support giving only the Justice Department the independent right to challenge bank
mergers. State attorneys general and private parties are not left without remedies, however. As long as they participate in the federal banking
agency's approval process they may seek review of the agency's decision. Finally, the issue of why the Acts preempt state antitrust law is
analyzed.
2AC – AT: State Suits CP
This counterplan should start at zero risk of solvency until they read ev about banking
--- obviously squo antitrust law doesn’t allow putting size caps on banks, or else a
liberal state would have tried it by now.

Courts strike down the counterplan
Markham 11 (Jesse W. Markham Jr. – Marshall P. Madison Professor of Law, The
University of San Francisco School of Law. <KEN> “Lessons for Competition Law from the
Economic Crisis: The Prospect for Antitrust Responses to the “To Big to Fail”
Phenomena,” Fordham Journal of Corporate and Financial Law. Vol. XVI (2011).
https://ir.lawnet.fordham.edu/jcfl/vol16/iss2/2/) *Footnote numbers deleted when text
was pasted

It seems paradoxical that antitrust law appears to have had nothing to say about the problem of firms becoming
too big to fail. The antitrust laws are uniquely addressed to the problem of maintaining healthy markets against distortion from excessive
aggregations of economic power. Yet, antitrust law has not intervened to prevent or redress the recent
outbreak of systemic threats caused directly by companies that are too big and too integral to the functioning of markets to
allow those firms to function normally without massive governmental infusions of capital. In some instances, these companies have
accumulated assets exceeding $1 trillion, and they cut across a broad swath of economic activity, including investment banking, depositary
banking," insurance, securities, mortgage lending, and automobile manufacturing. All of these industries are subject to antitrust law.
Furthermore, nearly all of the firms that have been considered too big to fail grew, in large part, by
acquisition activity that is within the reach of antitrust laws and subject to pre-transaction government
clearance. Yet, antitrust laws did nothing to intervene to prevent these firms from reaching potentially catastrophic dimensions. The Sherman Act surely was not enacted so that firms could become so big, and so economically and politically powerful, that the
mere possibility of their failure would impose unacceptable policy choices on the nation. Why, then, are so many firms too big to fail, and why
has antitrust not done its part to prevent the possibility of these catastrophic failures?

This paradox begins to unravel when one considers the ever narrowing reach of modern antitrust law. As currently interpreted by the courts, U.S. antitrust law is a shadow of its original self. Whatever animated their enactment, antitrust
laws no longer concern themselves with preventing bigness, and indeed tend instead to
encourage large-scale enterprise for efficiency's sake. Beginning with Continental T.V., Inc. v. GTE Sylvania, Inc., the antitrust laws in the United States began a steady process of judicial erosion to eliminate multiple and possibly conflicting policy objectives, distilling in their place the exclusive purpose of promoting consumer welfare through allocative and dynamic efficiency. With marginal and mostly theoretical exceptions, the efficient allocation of society's scarce resources through the use of existing technologies and the production of goods and services more efficiently using innovative new ones, comprise the sum total of the residual policy underpinnings of modern antitrust law. In light of these modern movements in antitrust law, it is perhaps not entirely surprising that antitrust law has not prevented the too-big-
to-fail problem, since consumer welfare may be enhanced, rather than harmed, by permitting firms to
become big and even indispensable.
DoJ DA
2AC – AT: Gerrymandering DA
DOJ has been stripped of gerrymandering authority – new court decision removes pre-
clearance map submissions
CNN 9-21 (Tierney, September 9, 2021, “Shelby County ruling could make it easier for
states to get away with extreme racial gerrymandering”,
https://www.google.com/search?client=firefox-b-1-d&q=Tierney+Sneed)
The 2013 Supreme Court ruling that gutted the Voting Rights Act still finds new ways to scramble the Justice Department's enforcement of the
landmark 1965 law. As legislation that restores a key element of the law makes its way toward a likely Senate GOP filibuster, the Justice
Department is heading into the first redistricting cycle in a half century without the Voting
Rights Act's so-called preclearance requirement. At stake is whether millions of minority
voters will have their political power protected from certain racial gerrymanders in elections ranging
from local school boards all the way up to US congressional seats. The preclearance requirement mandated that
states or localities with a history of racial voting discrimination get federal approval -- either
from the Justice Department or a court in DC -- for election policy changes, including the legislative maps that are redrawn
every 10 years. While only a part of the country was covered by the requirement, the process brought transparency to

how maps were being redrawn in several large states in the South, like Georgia and Texas, as well as in parts of New York and California. "It's
going to be a mess for the Department of Justice," William Yeomans, a former acting assistant attorney general
for civil rights who now lectures at Columbia Law School, told CNN. "I know they will do their best, but they really are
handicapped <hindered> by the loss of preclearance." For the first time in decades, the department
will in some ways be starting from scratch in understanding whether voters of color are being
discriminated against in redistricting. "Where previously you had the ability of the
Justice Department to gather the information, and process it, and make determinations
in a timely and resource-efficient manner, that is certainly lost," said Terry Ao Minnis, the senior director of the census and voting
programs for Asian Americans Advancing Justice - AAJC. The onset of the next redistricting cycle has created "added urgency" for Congress to
revive the requirement, Assistant Attorney General Kristen Clarke told Congress last month. The Justice Department "will
not have access to maps and other redistricting-related information from many jurisdictions where
there is reason for concern, even though this kind of information is necessary to assess where
voting rights are being restricted or inform how the department directs its limited enforcement resources," said Clarke,
who leads the department's civil rights division. How preclearance opened up the redistricting process in covered states The Supreme Court's
2013 ruling in Shelby County v. Holder opened the floodgates to a wave of restrictive voting laws in previously covered states. Now in those
states, map-drawers will be delineating districts without the Justice Department or a federal court looking over their shoulders. This
preclearance requirement had a deterrence effect on map-drawers, legal experts say. Not only did
legislatures know that those regimes would be scrutinized closely by the federal government, but the burden also was on the map-drawers to
prove to the department that their new districts would not adversely affect the ability of minority communities to elect the candidates of their
choice. In reviewing maps in the covered states, the department would invite members of those communities to weigh in on whether they
should be approved. Now, Clarke said in her written testimony last month, the absence of preclearance means there will be "less incentive" for
those jurisdictions to get community input on election rule changes, and local communities will have "less insight into the electoral process and
the process of making voting changes." The deterrence effect extended beyond just the covered states, said Thomas Saenz, the president of the
Mexican American Legal Defense and Educational Fund. Because the department made its determinations public when it concluded that a map
had violated the Voting Rights Act, those objection letters sent a signal to the non-covered jurisdictions what tactics might earn a legal
response. While only the jurisdictions in about a dozen fully or partially covered states had to go through preclearance before the Shelby
County ruling, the entire nation is subject to the Voting Rights Act's broader protections for minority voters and could face lawsuits for maps
that violate the law. "If you're doing something similar, then even if you're not in a covered situation you should pause, and consider whether
you want to go through with it, because the risk you will be sued by a private party or the government is pretty high at that point," Saenz said.
The cases implicating US congressional districts or state legislative maps attract much of the national attention, but the bulk of Voting Rights Act
enforcement has historically happened on an extremely local level, in places where school board districts and parish lines are alleged to have
violated the law. Private minority rights groups that monitor redistricting will now pay more attention to those extremely local maps, Saenz
said, because local jurisdictions that no longer have to go through the Justice Department review process may believe they can fly under the
radar with discriminatory maps and not get caught. A potential time crunch The Justice Department released
guidance last week reminding jurisdictions of their obligations to comply with the Voting Rights Act in redistricting. The law
prohibits intentional discrimination in redistricting, as well as line-drawing that has the effect of diluting the votes of minorities. While its
preclearance provision is not in effect, the department can still bring proactive lawsuits against noncompliant jurisdictions under a Voting Rights
Act provision known as Section 2. It can also file briefs in support of private groups that bring Section 2 cases. A Justice Department official told
reporters that the department has been preparing for the coming redistricting cycle for "some time." It could use "public records requests or
other formal requests from the department for maps, if we are not able to get them in the public sphere," the official -- whose anonymity the
department requested as a condition of the call -- told reporters. Getting that detailed information about a state's maps is a more complicated
endeavor than it may seem. For instance, to analyze whether a map is Voting Rights Act-compliant, one needs to look at past election results,
often at the precinct level, and not every state offers such information in a single database, according to Kathay Feng, the national redistricting
director for the voting rights group Common Cause. "You would have to go to each county, and each county would give it to you in a slightly
different format, and you have to figure out how that all could be put together," Feng said As it has waited for the restricting process to start in
earnest, the Justice Department has looked at data from the American Community Survey, the department official said, to identify potential hot
spots, based on where the largest demographic shifts in the country occurred. "We are doing our best to anticipate where there may be issues,
but based on past experience from past decades, things will come up that we will have to -- not necessarily things that were anticipated," the
Justice Department official said. The pandemic-related delays the Census Bureau faced in getting out its 2020 data -- the redistricting numbers were released about five months late -- are throwing another wrench into the
process, as there will be a real time crunch in some states between when the post-2020 maps are released and when the electoral
calendar gets underway. The census delays aside, preclearance created an incentive for covered

jurisdictions to submit their maps with enough time for the full Justice Department
review that would be needed before they could go into effect for the next election cycle. That will likely change now that Section
2 lawsuits are the only tool for blocking noncompliant maps, Nathaniel Persily, a Stanford Law School professor and redistricting expert, told CNN. "If you are a gerrymanderer, you are going to wait until the last minute," Persily said.
2AC – AT: Resources Link (Enforceability)
No link --- plan’s size cap is easy to administer and enforce --- doesn’t expend agency
resources. That’s Macey and Holdcroft.

Zaps litigation
Macey & Holdcroft 11 (Jonathan R. Macey is Sam Harris Professor of Corporate Law,
Corporate Finance, and Securities Law, Yale Law School. James P. Holdcroft, Jr. is Chief
Legal Officer, CLS Group. <KEN> “Failure Is an Option: An Ersatz-Antitrust Approach to
Financial Regulation,” The Yale Law Journal. Volume 120, No. 6. April 2011.
https://www.yalelawjournal.org/feature/failure-is-an-option-an-ersatz-antitrust-
approach-to-financial-regulation)
Second, as mentioned above, we fully acknowledge that breaking up the nation’s largest financial institutions likely will create costly inefficiencies. Legislation will be required to implement a breakup plan. Litigation may result. However, the relatively simple metric used in our proposal to determine the outer size of financial institutions
will reduce the transaction costs associated with implementing our proposal. And, at the end of
the day, the relevant policy question is not whether our plan has costs; rather, the relevant issue is whether the benefits of implementing our
proposal are greater than the costs. Moreover, the truly enormous, immediate, direct, long-lasting out-ofpocket expenses associated with
bailouts of financial institutions are clear. The potential costs of our plan, which come in the form of forgone efficiencies of an unspecified kind,
are ephemeral and can be reduced by innovation and competition.
2AC – NUQ + Lkt – Merger Wave
Non-unique and link turn --- merger wave crushes agency resources now, plan stops
the merger wave
Bloomberg Law 21 (“FTC’s Khan Says Merger Wave Is Straining Agency Resources (1).”
Bloomberg Law, 7/28/21, https://news.bloomberglaw.com/antitrust/ftcs-khan-says-
merger-wave-is-straining-agency-resources, Accessed 8/18/21, JMoore)
The head of the U.S. Federal Trade Commission said the antitrust agency is struggling to handle a merger boom that is rapidly consolidating industries across the economy. Chair Lina Khan told House lawmakers at a hearing Wednesday that antitrust officials are processing the highest number of merger filings in two decades.

“Although the FTC is working to review many of these deals, the sheer volume of transactions is significantly straining commission resources,” Khan said. “I am deeply concerned that the current merger boom will further exacerbate deep asymmetries of power across our economy, further enabling abuses.”
2AC – NUQ + Lkt – Size Cap
Non-unique and link turn --- squo rule of reason analysis tanks agency resources, plan replaces the rule of reason framework with a per se size cap.
Woodcock 21 (Ramsi A. Woodcock – Assistant Professor, University of Kentucky
Rosenberg College of Law, Secondary Appointment, Department of Management,
University of Kentucky Gatton College of Business and Economics. <KEN> “The Hidden
Rules of a Modest Antitrust,” Minnesota Law Review.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2896453) *SSRN says 2017,
article says 2021, defaulted to article because it’s in a journal
Antitrust enforcement budgets, understood to include both the budgets of government agencies such as the Federal Trade Commission and the
budgets of private antitrust plaintiffs, have been declining, once adjusted for growth in the size of the markets that enforcers police, since
World War Two.21 But rules of reason, even after their costs are reduced by burden-shifting reforms, will always be the
most expensive rules to enforce because rules of reason require enforcers to identify
consumer harm in addition to prohibited conduct when choosing cases to bring. 22 By contrast, the
enforcement of conventional rules that prohibit conduct regardless of effects—called per se rules in antitrust—requires
only the identification of prohibited conduct. The Court’s conversion of many per se rules of illegality
to rules of reason starting in the 1970s has therefore driven up the costs of enforcing the antitrust
laws at a time when the enforcement budget constraint has been tightening.23
2AC – Thumper
Thumpers everywhere
Tankersley & Kang 8-25 (Jim Tankersley – White House correspondent for The New York Times, with a focus on economic policy.
Cecilia Kang – covers technology and regulation and joined The Times in 2015. She is the co-author, along with Sheera Frankel of The Times, of
“An Ugly Truth: Inside Facebook's Battle for Domination.”<KEN> " Biden’s Antitrust Team Signals a Big Swing at
Corporate Titans," New York Times. Updated August 25, 2021. July 24, 2021.
https://www.nytimes.com/2021/07/24/business/biden-antitrust-amazon-google.html)

WASHINGTON — President Biden has assembled the most aggressive antitrust team in decades, stacking his administration with three legal crusaders as it prepares to take on corporate consolidation and market power with
efforts that could include blocking mergers and breaking up big companies.

Mr. Biden’s decision this past week to name Jonathan Kanter to lead the Justice Department’s antitrust division is the latest sign of his
willingness to clash with corporate America to promote more competition in the tech industry and across the economy. Mr. Kanter has
spent years as a lawyer fighting behemoths like Facebook and Google on behalf of rival companies.
If confirmed by the Senate, he will join Lina Khan, who helped reframe the academic debate over antitrust and now leads the Federal Trade Commission, and Tim Wu, a longtime proponent of breaking up Facebook and other large companies who is now the special assistant to the president for technology and competition policy.
The appointments show both the Democratic Party’s renewed antitrust activism and the Biden administration’s growing
concern that the concentration of power in technology, as well as other industries like pharmaceuticals,
agriculture, health care and finance, has hurt consumers and workers and stunted economic growth.
They also underscore that Mr. Biden is willing to use the power of his office and not wait for the tougher grind of congressional action, an approach that is both faster and potentially riskier. This month, he issued an executive order stuffed with 72 initiatives meant to stoke competition in a variety of industries, increase scrutiny of mergers and restrict
the widespread practice of forcing workers to sign noncompete agreements.
2AC – No Tradeoff (DoJ)
No internal link
Ramirez & Ramirez 17 (Mary Kreiner Ramirez is Professor of Law at Washburn
University School of Law. She is a former Prosecutor for the Department of Justice
Antitrust Division, where she prosecuted white-collar criminals, and a former Assistant
US Attorney for the District of Kansas. She has published numerous articles addressing
the challenges in combating white-collar crime. Steven A. Ramirez is Professor of Law
and Associate Dean at Loyola University of Chicago, where he also serves as Director of
the Business Law Center. This is the second book he has authored relating to the
subprime mortgage crisis and its meaning in terms of the rule of law. He previously
served as an Enforcement Attorney for the Securities and Exchange Commission and a
Senior Attorney for the Federal Deposit Insurance Corporation. <KEN> “The Case for the
Corporate Death Penalty: Restoring Law and Order on Wall Street,” New York University
Press. ISBN: 978-1-4798-8157-4)
Indeed, the DOJ in the past prosecuted Enron officers for derivatives trading, Ivan Boesky for insider trading, and Michael Milken for securities violations. The cases demanded
tremendous capabilities and resources. This crisis erupted in 2008, and the DOJ has had eight years to obtain
expertise and shift resources around. Consequently, if the DOJ lacked expertise and resources
to try the cases discussed in this book, it is because they wanted to lack them. One need look to only the
resources redirected to immigration, where large-scale detention of powerless immigrants is the new norm, over this same period to
understand the fluidity of the resource limitations (Preston 2013).
1AR Rd. 6
TBTF Adv
1AR – Econ ! -

New Great Depression ignites prolonged inequality and populism --- nationalism
ensures escalation
Qian 18 (Qian Liu – PhD in economics from Uppsala University. <KEN> “From Economic
Crisis to World War III,” Project Syndicate. November 2018. https://www.project-
syndicate.org/commentary/economic-crisis-military-conflict-or-structural-reform-by-
qian-liu-2018-11)
For example, during the Great Depression, US President Herbert Hoover signed the 1930 Smoot-Hawley
Tariff Act, intended to protect American workers and farmers from foreign competition. In the subsequent five years, global trade shrank
by two-thirds. Within a decade, World War II had begun.

To be sure, WWII, like World War I, was caused by a multitude of factors; there is no standard path to war. But there is reason to believe that
high levels of inequality can play a significant role in stoking conflict.
According to research by the economist Thomas Piketty, a spike in income inequality is often
followed by a great crisis. Income inequality then declines for a while, before rising again, until a new peak – and a new disaster.
Though causality has yet to be proven, given the limited number of data points, this correlation should not be taken lightly, especially with
wealth and income inequality at historically high levels.

This is all the more worrying in view of the numerous other factors stoking social unrest and diplomatic tension, including technological disruption, a record-breaking migration crisis, anxiety over globalization, political polarization, and rising nationalism. All are symptoms of failed policies that could turn out to be trigger points for a future crisis.

Goes nuclear, nationalism bypasses defense


Solt 11 (Frederick Solt, Ph.D. in Political Science from University of North Carolina at
Chapel Hill, currently Associate Professor of Political Science at the University of Iowa,
Assistant Professor, Departments of Political Science and Sociology, Southern Illinois at
the time of publication, “Diversionary Nationalism: Economic Inequality and the
Formation of National Pride,” The Journal of Politics, Vol. 73, No. 3, pgs. 821-830, July
2011, Available to Subscribing Institutions)
One of the oldest theories of nationalism is that states instill the nationalist myth in their citizens to divert their attention from great economic inequality and so forestall pervasive unrest. Because the
very concept of nationalism obscures the extent of inequality and is a potent tool for delegitimizing calls for
redistribution, it is a perfect diversion, and states should be expected to engage in more

nationalist mythmaking when inequality increases . The evidence presented by this


study supports this theory: across the countries and over time, where economic inequality is greater,
nationalist sentiments are substantially more widespread . This result adds considerably to our understanding of
nationalism. To date, many scholars have focused on the international environment as the principal source of threats that prompt states to generate nationalism; the importance of the domestic threat posed by economic inequality has been largely overlooked. However, at least in recent years, domestic inequality is a far more important stimulus for the generation of nationalist sentiments than the international
context. Given that nuclear weapons—either their own or their allies’—rather than the mass army now
serve as the primary defense of many countries against being overrun by their
enemies, perhaps this is not surprising : nationalism-inspired mass mobilization is simply no longer as necessary for protection as it once was (see
Mearsheimer 1990, 21; Posen 1993, 122–24). Another important implication of the analyses presented above is that growing economic inequality may increase ethnic conflict. States

may foment national pride to stem discontent with increasing inequality, but this
pride can also lead to more hostility towards immigrants and minorities . Though pride in the nation is
distinct from chauvinism and outgroup hostility, it is nevertheless closely related to these phenomena, and recent experimental research has shown that members of majority groups who express high levels of national pride can be nudged into intolerant and xenophobic responses quite easily (Li and Brewer 2004). This finding suggests that, by leading to the creation of more national pride, higher levels of inequality produce environments


favorable to those who would inflame ethnic animosities. Another and perhaps even more worrisome
implication regards the likelihood of war. Nationalism is frequently suggested as a
cause of war, and more national pride has been found to result in a much greater
demand for national security even at the expense of civil liberties (Davis and Silver 2004, 36–37) as well
as preferences for “a more militaristic foreign affairs posture and a more
interventionist role in world politics” (Conover and Feldman 1987, 3). To the extent that these preferences influence policymaking, the
growth in economic inequality over the last quarter century should be expected to lead to more
aggressive foreign policies and more international conflict. If economic inequality prompts states to generate
diversionary nationalism as the results presented above suggest, then rising inequality could make for a more dangerous

world. The results of this work also contribute to our still limited knowledge of the relationship between economic inequality and democratic politics. In particular, it helps explain the
fact that, contrary to median-voter models of redistribution (e.g., Meltzer and Richard 1981), democracies with higher levels of inequality

do not consistently respond with more redistribution (e.g., Bénabou 1996). Rather than allowing
redistribution to be decided through the democratic process suggested by such
models, this work suggests that states often respond to higher levels of inequality with more
nationalism. Nationalism then works to divert attention from inequality, so many citizens neither realize the extent of inequality nor demand redistributive policies. By prompting
states to promote nationalism, greater economic inequality removes the issue of redistribution from debate and therefore narrows the scope of democratic politics.
1ar – banks turn

They’ll rent-seek for “shock doctrine” policies --- their interest is in societal collapse!
Pope 20 (Chris Pope, assistant Professor at Kyoto Women's University, specializing in
East Asian politics. <KEN> “Constructing a New World Order: The Case for a Post-Crisis
International Settlement?” Tokyo and Kyoto: Science Council of Japan and Aoyama
Gakuin University. Pp.120-137. March 2020. DOI: 10.13140/RG.2.2.18095.28328)
Fifth, financial markets have been able to exploit the socioecological crisis for profit by displacing state
regulation in favor of international markets and commodifying and financializing natural resources (Fletcher 2012),
while ‘shock doctrine’ policies have been implemented by so-called ‘disaster capitalists ’ who

profit from privatizing public spaces, properties and services in the immediate aftermath of an
environmental catastrophe (Klein 2014). The presumption with market-based frameworks for climate mitigation such as cap-and-trade is that this very drive among businesses to make profit can be used as an engine for green growth. Whether or not it will prove
successful remains to be seen. However, the unpredictability of carbon feedback loops, tipping elements and the sensitivity of cumulative
Greenhouse Gas (GHGs) emissions make it extremely difficult for market forces—as well as the political and scientific community—to know, for
example, how exactly to tax carbon (Cai et al. 2015), or indeed how to (and even if we can or should) balance economic growth with the realities of ecological collapse (Keen 4 July 2019; forthcoming). In addition, various states, financial institutions and individual investors have sought to purchase land and other natural resources in poorer nations following

the GFC, in an effort to profit from the rise in prices of natural resources following oncoming
financial or socioecological collapse, which rather than operating to shape vested interests towards profiting from market

regulations aimed at green growth, makes socioecological collapse within the interests of these elite actors
(Funk 2014).

Too big to fail ensures access


Teachout & Khan 14 (Zephyr Teachout – Associate Law Professor, Fordham Law School.
Lina Khan – JD from Yale Law School in 2017, Fellow at New America. <KEN> “Market
Structure and Political Law: A Taxonomy of Power,” Duke Journal of Constitutional Law
& Public Policy. Vol. 9, Issue 1. https://scholarship.law.duke.edu/cgi/viewcontent.cgi?
referer=&httpsredir=1&article=1087&context=djclpp)
5. Too Big to Fail
Even in the absence of resources devoted to purchasing political influence, a company
with a large relative size to the economy will have power. Bank of America’s assets are
over 1 percent of the United States GDP.54 Exxon Mobil made $45 billion dollars in profits in 2008.55 When the
relative size of a company is significant—certainly anything approaching 1 percent of GDP—
democratic choices become constrained by the self-interest of the individual corporation. The relative
size makes it incumbent upon legislators to design laws that will at a minimum ensure the
stability of the company. Dominant firms breed uncertainty and instability in key resources—
and that uncertainty leads to political power.56
[Insert Footnote 56]

56. As Simon Johnson and James Kwak argue in their book, blogs, and articles, this structure reeks of oligarchy. Gigantic firms are a real threat to self-government. If big corporations can demand bailouts and dictate
policy it takes away the ability of the people to choose the policies they most want. The
policy is “chosen” by the people in the same way that someone with a gun to their head “chooses” to do what the holder of the gun tells them
to. See generally JOHNSON & KWAK, supra note 6.

[Exit Footnote 56]

If Lockheed Martin goes under and lays off all its employees, it will have an impact on the entire economy. Therefore, the largest companies, even without lobbying, can make demands of government based on the threat
of their own failure.
Companies that are large relative to the size of American GDP use this power by threatening to collapse or
leave if their demands are not met. After the recent financial crisis, because of the size (relative to the economy) of the
biggest banks and investment firms, politicians made the decision that they should not be allowed to fail and bring the country down with
them. Putting aside the banks' causal role in the crisis (which is itself arguably a function of relative size), imagine that there were
10,000 banks, instead of 5, facing restructuring. The government could have allowed some to fail while
others were restructured. Though the government might still have chosen to provide a bailout, it could have had more

bargaining power with the banks in determining the size of the bailout. You can think of
this kind of size as the “too big to fail” rent, a promised subsidy that enables cheap capital and that
cheapens the cost of seeking political power.

that allows corporations to oppose necessary climate solutions


Pope 20 (Chris Pope, assistant Professor at Kyoto Women's University, specializing in
East Asian politics. <KEN> “Constructing a New World Order: The Case for a Post-Crisis
International Settlement?” Tokyo and Kyoto: Science Council of Japan and Aoyama
Gakuin University. Pp.120-137. March 2020. DOI: 10.13140/RG.2.2.18095.28328)
Nonetheless, financialization and the centrality of the US dollar itself has further impacted on humanity’s political capacity to respond. To start,
governmental reliance on finance markets for short-term growth has allowed the
interests of transnational capital to override the preferences of domestic citizens everywhere (Blyth 2016:
175; Crouch 2004; Stockhammer 2012). For instance, multilateral agreements on trade , such as the Trans-Pacific

Partnership, are equipped with frameworks by which transnational corporations can sue states for

democratically-mandated policies in international courts of arbitration if new policies prevent businesses from attaining
profit under the conditions of existing trade agreements. This, in effect, makes the enactment of policies designed to

prevent socioecological collapse much more unlikely given the obvious link between production
and the exploitation of natural resources (Mathews 22 October 2014)
Kills bargaining capacity of workers --- makes it impossible to address climate
Pope 20 (Chris Pope, assistant Professor at Kyoto Women's University, specializing in
East Asian politics. <KEN> “Constructing a New World Order: The Case for a Post-Crisis
International Settlement?” Tokyo and Kyoto: Science Council of Japan and Aoyama
Gakuin University. Pp.120-137. March 2020. DOI: 10.13140/RG.2.2.18095.28328)
Second, another reason is that the politics of finance has led to the destruction of collective bargaining
capacities among the workforce. Supply-side economics and monetarism at the heart of the international political economy relies on

keeping prices and wages low. For self-explanatory reasons, perennially low wages are not something to which a given
workforce is likely to subscribe and thus neoliberal policy as well as US domestic and foreign policy (along with other advanced

nations) has been to reduce the bargaining capacities of workers whilst shrinking the earning

power of the state as well as its capacity to meaningfully intervene in markets. For many developing
nations, economic models have been premised upon maintaining a low currency valuation to the US dollar and increasing productivity over
exports by eradicating any existing social or work-place benefits and protections for workers whilst relying on unfree labor or wage slavery to
bring down the price of its exports in foreign markets (LeBaron 2018). A diminished political capacity for workers to collectively demand changes through demonstration, protest, and bargaining, not only to
improve their working conditions but to benefit their communities, is another cause of the inadequate levels of

pressure on politicians to contravene the interests of transnational capital and to


assure them that they can survive politically the consequences of doing so.
1ar – aerosols turn

Studies prove the effect is substantial – independently pushes past tipping points
Westervelt 15 (D. M. Westervelt, Program in Science, Technology, and Environmental
Policy, Woodrow Wilson School of Public and International Affairs, Princeton University;
L. W. Horowitz, Geophysical Fluid Dynamics Laboratory, National Oceanic and
Atmospheric Administration; V. Naik, UCAR/NOAA Geophysical Fluid Dynamics
Laboratory, Princeton; J.-C. Golaz, Geophysical Fluid Dynamics Laboratory, National
Oceanic and Atmospheric Administration; D. L. Mauzerall, Department of Civil and
Environmental Engineering, Princeton University; “Radiative forcing and climate
response to projected 21st century aerosol decreases” Atmos. Chem. Phys., 15, 12681–
12703, 2015, https://acp.copernicus.org/articles/15/12681/2015/acp-15-12681-
2015.pdf)
The projected decreases in aerosol emissions and optical depth lead to a decrease in the magnitude of the aerosol
direct and indirect radiative forcing. CM3 predicts an aerosol direct plus indirect radiative forcing of −1.8 W m−2 from the pre-industrial
period to present day, which is close to the upper range of −1.9 W m−2 used by the IPCC (Myhre et al., 2013). To test robustness of our results
to the strength of aerosol forcing, we run additional simulations in which the autoconversion threshold is lowered, resulting in a present-day
aerosol ERF of about −1.0 W m−2 . By 2100, the aerosol forcing is projected to diminish to between −0.21 W m−2 in RCP2.6 and −0.53 W m−2 in
RCP8.5. Our results therefore indicate that a large positive forcing, up to 1 W m−2 or greater, will result

from the projected decrease in aerosol and aerosol precursor emissions. This forcing has a strong
impact on climate, warming temperatures by as much as 1 K globally and up to 2–3 K
regionally in the standard CM3 RCP runs, with a much more modest response in the reduced aerosol forcing runs of 0.5 K globally.
Aerosol reduction-driven surface temperature response generally accounts for a large fraction
of the total all-forcing response. Ratios over East Asia can exceed 30 % in RCP8.5, and are even higher in the other
scenarios that do not feature as much greenhouse gas-induced climate warming. However, these ratios are closer to 10–20 % in most regions
and globally when we consider the reduced aerosol forcing RCP8.5 run. Global precipitation rates are projected to increase by as much as 3 % of
2005 values or 0.1 mm day−1 , with greater regional impacts. Inconsistent with Levy et al. (2013), we found significant local precipitation
changes co-located with areas of large aerosol decrease (e.g., East Asia). On the global scale, aerosol reduction-driven changes in AOD and
climate response trajectories do not vary significantly among the four RCPs, especially towards the end of the century, despite stemming from
nominally different scenarios. Mid-century variation in the climate response and radiative forcing trajectories closely follows the aerosol and
precursor emissions trajectories (and thus the energy use trajectories), even for climate parameters such as liquid water path and cloud droplet
effective radius.

Prefer data – aerosols basically stopped predicted warming between 2000 and 2010
Kaufmann 11 (Kaufmann, R. K., Department of Geography and Environment, Center for
Energy and Environmental Studies, Boston University; Kauppi, H., Department of
Economics, University of Turku, Finland; Mann, M. L., Department of Geography and
Environment, Center for Energy and Environmental Studies, Boston University; & Stock,
J. H. Department of Economics, Harvard University; (2011). Reconciling anthropogenic
climate change with observed temperature 1998-2008. Proceedings of the National
Academy of Sciences, 108(29), 11790–11793. doi:10.1073/pnas.1102467108)
Data for global surface temperature indicate little warming between 1998 and 2008 (1). Furthermore, global
surface temperature declines 0.2 °C between 2005 and 2008. Although temperature increases in 2009
and 2010, the lack of a clear increase in global surface temperature between 1998 and 2008 (1), combined with rising concentrations of
atmospheric CO2 and other greenhouse gases, prompts some popular commentators (2, 3) to doubt the existing understanding of the
relationship among radiative forcing, internal variability, and global surface temperature. This seeming disconnect may be one reason why the
public is increasingly sceptical about anthropogenic climate change (4).

Recent analyses address this source of scepticism by focusing on internal variability or expanding the list of forcings. Model simulations are
used to suggest that internal variability can generate extended periods of stable temperature similar to 1999–2008 (5). Alternatively, expanding
the list of forcings to include recent changes in stratospheric water vapor (6) may account for the recent lack of warming. But neither approach
evaluates whether the current understanding of the relationship among radiative forcing, internal variability, and global surface temperature
can account for the timing and magnitude of the 1999–2008 hiatus in warming.

Here we use a previously published statistical model (7) to evaluate whether anthropogenic emissions of radiatively active gases, along with
natural variables, can account for the 1999–2008 hiatus in warming. To do so, we compile information on anthropogenic and natural drivers of
global surface temperature, use these data to estimate the statistical model through 1998, and use the model to simulate global surface
temperature between 1999 and 2008. Results indicate that net anthropogenic forcing rises slower than

previous decades because the cooling effects of sulfur emissions grow in tandem with
the warming effects of greenhouse gas concentrations. This slow-down, along with declining solar insolation and a change
from El Nino to La Nina conditions, enables the model to simulate the lack of warming after 1998. These findings are not sensitive to a wide
range of assumptions, including the time series used to measure temperature, the omission of black carbon and stratospheric water vapor, and
uncertainty about anthropogenic sulfur emissions and its effect on radiative forcing (SI Appendix: Sections 2.4–7).

Results
Increasing emissions and concentrations of carbon dioxide receive considerable attention, but our analyses identify an important change in
another pathway for anthropogenic climate change — a rapid rise in anthropogenic sulfur emissions driven by large
increases in coal consumption in Asia in general, and China in particular. Chinese coal consumption more than doubles in the 4 y
from 2003 to 2007 (the previous doubling takes 22 y, 1980– 2002). In this four year period, Chinese coal consumption accounts for 77% of the
26% rise in global coal consumption (8). These increases are large relative to previous growth rates. For example, global coal consumption
increases only 27% in the twenty two years between 1980 and 2002 (8). Because of the resultant increase in anthropogenic sulfur emissions,
there is a 0.06 W∕m2 (absolute) increase in their cooling effect since 2002 (Fig. 1). This increase partly reverses a
period of declining sulfur emissions that had a warming effect of 0.19 W∕m2 between 1990 and 2002.

Peer-reviewed scientific lit goes aff


Michaels 14 (Chip Knappenberger is the assistant director of the Center for the Study of
Science at the Cato Institute, and coordinates the scientific and outreach activities for
the Center. He has over 20 years of experience in climate research and public outreach,
including 10 years with the Virginia State Climatology Office and 15 years as the
Research Coordinator for New Hope Environmental Services, Inc; Patrick J. Michaels is
the director of the Center for the Study of Science at the Cato Institute. Michaels is a
past president of the American Association of State Climatologists and was program
chair for the Committee on Applied Climatology of the American Meteorological Society.
He was a research professor of Environmental Sciences at University of Virginia for thirty
years. Michaels was a contributing author and is a reviewer of the United Nations
Intergovernmental Panel on Climate Change, which was awarded the Nobel Peace
Prize in 2007, “Oops: Got the Sign Wrong Trying to Explain Away the Global Warming
“Pause””, 6/26/14, http://www.cato.org/blog/oops-got-sign-wrong-trying-explain-away-
global-warming-pause)
Now, a new paper appearing in the peer‐reviewed scientific literature takes a deeper view
of aerosol emissions during the past 15 years and finds that, in net, changes in aerosol emissions over the period 1996–2010
contributed a net warming pressure to the earth’s climate. Kühn et al. (2014) write: Increases in Asian aerosol emissions

have been suggested as one possible reason for the hiatus in global temperature
increase during the past 15 years. We study the effect of sulphur and black carbon (BC) emission changes between 1996–2010 on the
global energy balance. We find that the increased Asian emissions have had very little regional or global effects, while the emission reductions
in Europe and the U.S. have caused a positive radiative forcing. In our simulations, the global‐mean aerosol direct radiative effect changes 0.06
W/m2 during 1996–2010, while the effective radiative forcing (ERF) is 0.42 W/m2. So in other words, rather than acting to slow

global warming during the past decade and a half as proposed by Kaufmann et al. (2011), changes in
anthropogenic aerosol emissions (including declining emissions trends in North America and
Europe) have acted to enhance global warming (described as contributing to a positive increase in the radiative
forcing in the above quote).
1ar – mindset shift
Assumes impossible mindset shift – nobody wants to give up on the economy - it's
human nature – neuroscience proves
Rees 14 (William E, PhD, FRSC UBC School of Community and Regional Planning,
ecological economist Professor Emeritus and former director of the University of British
Columbia’s School of Community and Regional Planning, “Avoiding Collapse,”
https://www.policyalternatives.ca/sites/default/files/uploads/publications/BC
%20Office/2014/06/ccpa-bc_AvoidingCollapse_Rees.pdf)
In theory, opting for this alternative should not be a difficult choice for Homo sapiens. Would an ostensibly intelligent, forward-thinking, morally

conscious, compassionate species continue to defend an economic system that wrecks its planetary

home, exacerbates inequality, undermines social cohesion, generates greater net costs than benefits and ultimately threatens to lead to systemic collapse?
Remarkably, the answer so far seems to be “ yes .” There are simply no strong voices for caution among contemporary leaders and certainly no
political constituencies for degrowth. There is no nascent plan for a World Assembly for

Mutual Survival. Humanity’s unique capacities for collective intelligence, rational analysis and planning ahead for the common good play no major role
in the political arena, particularly when they challenge conventional myths, corporate values and monied elites. On present evidence, there is little

possibility that anything like the proposals outlined above will be implemented in time for a smooth transition to
sustainability. Daly was right: “evidentally, things still have to get much worse before we will muster the courage and clarity to try to make them better.” 61 We are

our own worst enemy. People are naturally both short-sighted and optimistic and thus discount the future; we generally react
emotionally/instinctively to things that threaten our social status or political/economic


power; those most vested in the status quo therefore vigorously resist significant change;
corruption and greed (all but sanctioned by contemporary morality) overshadow the public interest. Mindless dedication to

entrenched beliefs is a particularly powerful blinder to otherwise obvious truths. History shows that the resultant
“Woodenheadedness...plays a remarkably large role in gov - ernment. It consists in assessing a situation in terms of preconceived fixed notions (i.e. ideology) while

ignoring any contrary signs. It is acting according to wish while not allowing oneself to be deflected by the facts.” 62 Neuroscientists have long
recognized the general phenomenon, but the means by which people become so deeply committed
to particular concepts has only recently been revealed. In the course of individual development, repeated social, cultural
and sensory experiences actually trace a semi-permanent record in the individual’s synaptic
circuitry — cultural norms, beliefs and values can acquire a physical presence in the
brain. Once entrenched, these neural structures alter the individual’s perception of
subsequent experiences. People tend to seek out situations, people and information that
reinforce their neural “presets.” Conversely, “when faced with information that does not
agree with their internal structures, they deny, discredit, reinterpret, or forget that
information.” 63
2008 proves --- it unleashed Neo Nazis, not the left, who became an introverted husk.
Varoufakis 15 – Former Greek Finance Minister, PhD in economics from University of
Essex specializing in Marxism and Political Theory (Yannis,
https://www.theguardian.com/news/2015/feb/18/yanis-varoufakis-how-i-became-an-
erratic-marxist, February 18th 2015, Yanis Varoufakis: How I became an erratic Marxist)
NAR
In 2008, capitalism had its second global spasm. The financial crisis set off a chain
reaction that pushed Europe into a downward spiral that continues to this day. Europe’s present situation
is not merely a threat for workers, for the dispossessed, for the bankers, for social classes or,
indeed, nations. No, Europe’s current posture poses a threat to civilisation as we know it . If my

prognosis is correct, and we are not facing just another cyclical slump soon to be overcome, the question that arises for radicals is this: should we

welcome this crisis of European capitalism as an opportunity to replace it with a better


system? Or should we be so worried about it as to embark upon a campaign for
stabilising European capitalism? To me, the answer is clear. Europe’s crisis is far less likely to give birth
to a better alternative to capitalism than it is to unleash dangerously regressive forces
that have the capacity to cause a humanitarian bloodbath, while extinguishing the hope for any progressive
moves for generations to come. For this view I have been accused, by well-meaning radical voices, of being “defeatist” and of trying to save an indefensible
European socioeconomic system. This criticism, I confess, hurts. And it hurts because it contains more than a kernel of truth. I share the view that this European
Union is typified by a large democratic deficit that, in combination with the denial of the faulty architecture of its monetary union, has put Europe’s peoples on a
path to permanent recession. And I also bow to the criticism that I have campaigned on an agenda founded on the assumption that the left was, and remains,
squarely defeated. I confess I would much rather be promoting a radical agenda, the raison d’être of which is to replace European capitalism with a different
system. Yet my aim here is to offer a window into my view of a repugnant European capitalism whose implosion, despite its many ills, should be avoided at all costs.
It is a confession intended to convince radicals that we have a contradictory mission: to arrest the freefall of European capitalism in order to buy the time we need
to formulate its alternative. Why a Marxist? When I chose the subject of my doctoral thesis, back in 1982, I deliberately focused on a highly mathematical topic
within which Marx’s thought was irrelevant. When, later on, I embarked on an academic career, as a lecturer in mainstream economics departments, the implicit
contract between myself and the departments that offered me lectureships was that I would be teaching the type of economic theory that left no room for Marx. In
the late 1980s, I was hired by the University of Sydney’s school of economics in order to keep out a leftwing candidate (although I did not know this at the time).
After I returned to Greece in 2000, I threw my lot in with the future prime minister George Papandreou, hoping to help stem the return to power of a resurgent
right wing that wanted to push Greece towards xenophobia both domestically and in its foreign policy. As the whole world now knows, Papandreou’s party not only
failed to stem xenophobia but, in the end, presided over the most virulent neoliberal macroeconomic policies that spearheaded the eurozone’s so-called bailouts
thus, unwittingly, causing the return of Nazis to the streets of Athens. Even though I resigned as Papandreou’s adviser early in 2006, and turned into his
government’s staunchest critic during his mishandling of the post-2009 Greek implosion, my public interventions in the debate on Greece and Europe have carried
no whiff of Marxism. Given all this, you may be puzzled to hear me call myself a Marxist. But, in truth, Karl Marx was responsible for framing my perspective of the
world we live in, from my childhood to this day. This is not something that I often volunteer to talk about in “polite society” because the very mention of the M-
word switches audiences off. But I never deny it either. After a few years of addressing audiences with whom I do not share an ideology, a need has crept up on me
to talk about Marx’s imprint on my thinking. To explain why, while an unapologetic Marxist, I think it is important to resist him passionately in a variety of ways. To
be, in other words, erratic in one’s Marxism. If my whole academic career largely ignored Marx, and my current policy recommendations are impossible to describe
as Marxist, why bring up my Marxism now? The answer is simple: Even my non-Marxist economics was guided by a mindset influenced by Marx. A radical social
theorist can challenge the economic mainstream in two different ways, I always thought. One way is by means of immanent criticism. To accept the mainstream’s
axioms and then expose its internal contradictions. To say: “I shall not contest your assumptions but here is why your own conclusions do not logically flow on from
them.” This was, indeed, Marx’s method of undermining British political economics. He accepted every axiom by Adam Smith and David Ricardo in order to
demonstrate that, in the context of their assumptions, capitalism was a contradictory system. The second avenue that a radical theorist can pursue is, of course, the
construction of alternative theories to those of the establishment, hoping that they will be taken seriously. My view on this dilemma has always been that the
powers that be are never perturbed by theories that embark from assumptions different to their own. The only thing that can destabilise and genuinely challenge
mainstream, neoclassical economists is the demonstration of the internal inconsistency of their own models. It was for this reason that, from the very beginning, I
chose to delve into the guts of neoclassical theory and to spend next to no energy trying to develop alternative, Marxist models of capitalism. My reasons, I submit,
were quite Marxist. When called upon to comment on the world we live in, I had no alternative but to fall back on the Marxist tradition which had shaped my
thinking ever since my metallurgist father impressed upon me, when I was still a child, the effect of technological innovation on the historical process. How, for
instance, the passage from the bronze age to the iron age sped up history; how the discovery of steel greatly accelerated historical time; and how silicon-based IT
technologies are fast-tracking socioeconomic and historical discontinuities. My first encounter with Marx’s writings came very early in life, as a result of the strange
times I grew up in, with Greece exiting the nightmare of the neofascist dictatorship of 1967-74. What caught my eye was Marx’s mesmerising gift for writing a
dramatic script for human history, indeed for human damnation, that was also laced with the possibility of salvation and authentic spirituality. Marx created a
narrative populated by workers, capitalists, officials and scientists who were history’s dramatis personae. They struggled to harness reason and science in the
context of empowering humanity while, contrary to their intentions, unleashing demonic forces that usurped and subverted their own freedom and humanity. This
dialectical perspective, where everything is pregnant with its opposite, and the eager eye with which Marx discerned the potential for change in what seemed to be
the most unchanging of social structures, helped me to grasp the great contradictions of the capitalist era. It dissolved the paradox of an age that generated the
most remarkable wealth and, in the same breath, the most conspicuous poverty. Today, turning to the European crisis, the crisis in the United States and the long-
term stagnation of Japanese capitalism, most commentators fail to appreciate the dialectical process under their nose. They recognise the mountain of debts and
banking losses but neglect the opposite side of the same coin: the mountain of idle savings that are “frozen” by fear and thus fail to convert into productive
investments. A Marxist alertness to binary oppositions might have opened their eyes. A major reason why established opinion fails to come to terms with
contemporary reality is that it never understood the dialectically tense “joint production” of debts and surpluses, of growth and unemployment, of wealth and
poverty, indeed of good and evil. Marx’s script alerted us these binary oppositions as the sources of history’s cunning. From my first steps of thinking like an
economist, to this very day, it occurred to me that Marx had made a discovery that must remain at the heart of any useful analysis of capitalism. It was the discovery
of another binary opposition deep within human labour. Between labour’s two quite different natures: i) labour as a value-creating activity that can never be
quantified in advance (and is therefore impossible to commodify), and ii) labour as a quantity (eg, numbers of hours worked) that is for sale and comes at a price.
That is what distinguishes labour from other productive inputs such as electricity: its twin, contradictory, nature. A differentiation-cum-contradiction that political
economics neglected to make before Marx came along and that mainstream economics is steadfastly refusing to acknowledge today. Both electricity and labour can
be thought of as commodities. Indeed, both employers and workers struggle to commodify labour. Employers use all their ingenuity, and that of their HR
management minions, to quantify, measure and homogenise labour. Meanwhile, prospective employees go through the wringer in an anxious attempt to
commodify their labour power, to write and rewrite their CVs in order to portray themselves as purveyors of quantifiable labour units. And there’s the rub. If
workers and employers ever succeed in commodifying labour fully, capitalism will perish. This is an insight without which capitalism’s tendency to generate crises
can never be fully grasped and, also, an insight that no one has access to without some exposure to Marx’s thought. Science fiction becomes documentary In the
classic 1953 film Invasion of the Body Snatchers, the alien force does not attack us head on, unlike in, say, HG Wells’s The War of the Worlds. Instead, people are
taken over from within, until nothing is left of their human spirit and emotions. Their bodies are shells that used to contain a free will and which now labour, go
through the motions of everyday “life”, and function as human simulacra “liberated” from the unquantifiable essence of human nature. This is something like what
would have transpired if human labour had become perfectly reducible to human capital and thus fit for insertion into the vulgar economists’ models. Every non-
Marxist economic theory that treats human and non-human productive inputs as interchangeable assumes that the dehumanisation of human labour is complete.
But if it could ever be completed, the result would be the end of capitalism as a system capable of creating and distributing value. For a start, a society of
dehumanised automata would resemble a mechanical watch full of cogs and springs, each with its own unique function, together producing a “good”: timekeeping.
Yet if that society contained nothing but other automata, timekeeping would not be a “good”. It would certainly be an “output” but why a “good”? Without real
humans to experience the clock’s function, there can be no such thing as “good” or “bad”. If capital ever succeeds in quantifying, and subsequently fully
commodifying, labour, as it is constantly trying to, it will also squeeze that indeterminate, recalcitrant human freedom from within labour that allows for the
generation of value. Marx’s brilliant insight into the essence of capitalist crises was precisely this: the greater capitalism’s success in turning labour into a commodity
the less the value of each unit of output it generates, the lower the profit rate and, ultimately, the nearer the next recession of the economy as a system. The
portrayal of human freedom as an economic category is unique in Marx, making possible a distinctively dramatic and analytically astute interpretation of
capitalism’s propensity to snatch recession, even depression, from the jaws of growth. When Marx was writing that labour is the living, form-giving fire; the
transitoriness of things; their temporality; he was making the greatest contribution any economist has ever made to our understanding of the acute contradiction
buried inside capitalism’s DNA. When he portrayed capital as a “… force we must submit to … it develops a cosmopolitan, universal energy which breaks through
every limit and every bond and posts itself as the only policy, the only universality the only limit and the only bond”, he was highlighting the reality that labour can
be purchased by liquid capital (ie money), in its commodity form, but that it will always carry with it a will hostile to the capitalist buyer. But Marx was not just
making a psychological, philosophical or political statement. He was, rather, supplying a remarkable analysis of why the moment that labour (as an unquantifiable
activity) sheds this hostility, it becomes sterile, incapable of producing value. At a time when neoliberals have ensnared the majority in their theoretical tentacles,
incessantly regurgitating the ideology of enhancing labour productivity in an effort to enhance competitiveness with a view to creating growth etc, Marx’s analysis
offers a powerful antidote. Capital can never win in its struggle to turn labour into an infinitely elastic, mechanised input, without destroying itself. That is what
neither the neoliberals nor the Keynesians will ever grasp. “If the whole class of the wage-labourer were to be annihilated by machinery”, wrote Marx “how terrible
that would be for capital, which, without wage-labour, ceases to be capital!” What has Marx done for us? Almost all schools of thought, including those of some
progressive economists, like to pretend that, though Marx was a powerful figure, very little of his contribution remains relevant today. I beg to differ. Besides having
captured the basic drama of capitalist dynamics, Marx has given me the tools with which to become immune to the toxic propaganda of neoliberalism. For example,
the idea that wealth is privately produced and then appropriated by a quasi-illegitimate state, through taxation, is easy to succumb to if one has not been exposed
first to Marx’s poignant argument that precisely the opposite applies: wealth is collectively produced and then privately appropriated through social relations of
production and property rights that rely, for their reproduction, almost exclusively on false consciousness. In his recent book Never Let a Serious Crisis Go to Waste,
the historian of economic thought, Philip Mirowski, has highlighted the neoliberals’ success in convincing a large array of people that markets are not just a useful
means to an end but also an end in themselves. According to this view, while collective action and public institutions are never able to “get it right”, the unfettered
operations of decentralised private interest are guaranteed to produce not only the right outcomes but also the right desires, character, ethos even. The best
example of this form of neoliberal crassness is, of course, the debate on how to deal with climate change. Neoliberals have rushed in to argue that, if anything is to
be done, it must take the form of creating a quasi-market for “bads” (eg an emissions trading scheme), since only markets “know” how to price goods and bads
appropriately. To understand why such a quasi-market solution is bound to fail and, more importantly, where the motivation comes from for such “solutions”, one
can do much worse than to become acquainted with the logic of capital accumulation that Marx outlined and the Polish economist Michal Kalecki adapted to a
world ruled by networked oligopolies. In the 20th century, the two political movements that sought their roots in Marx’s thought were the communist and social
democratic parties. Both of them, in addition to their other errors (and, indeed, crimes) failed, to their detriment, to follow Marx’s lead in a crucial regard: instead of
embracing liberty and rationality as their rallying cries and organising concepts, they opted for equality and justice, bequeathing the concept of freedom to the
neoliberals. Marx was adamant: The problem with capitalism is not that it is unfair but that it is irrational, as it habitually condemns whole generations to
deprivation and unemployment and even turns capitalists into angst-ridden automata, living in permanent fear that unless they commodify their fellow humans
fully so as to serve capital accumulation more efficiently, they will cease to be capitalists. So, if capitalism appears unjust this is because it enslaves everyone; it
wastes human and natural resources; the same production line that pumps out remarkable gizmos and untold wealth, also produces deep unhappiness and crises.
Having failed to couch a critique of capitalism in terms of freedom and rationality, as Marx thought essential, social democracy and the left in general allowed the
neoliberals to usurp the mantle of freedom and to win a spectacular triumph in the contest of ideologies. Perhaps the most significant dimension of the neoliberal
triumph is what has come to be known as the “democratic deficit”. Rivers of crocodile tears have flowed over the decline of our great democracies during the past
three decades of financialisation and globalisation. Marx would have laughed long and hard at those who seem surprised, or upset, by the “democratic deficit”.
What was the great objective behind 19th-century liberalism? It was, as Marx never tired of pointing out, to separate the economic sphere from the political sphere
and to confine politics to the latter while leaving the economic sphere to capital. It is liberalism’s splendid success in achieving this long-held goal that we are now
observing. Take a look at South Africa today, more than two decades after Nelson Mandela was freed and the political sphere, at long last, embraced the whole
population. The ANC’s predicament was that, in order to be allowed to dominate the political sphere, it had to give up power over the economic one. And if you
think otherwise, I suggest that you talk to the dozens of miners gunned down by armed guards paid by their employers after they dared demand a wage rise. Why
erratic? Having explained why I owe whatever understanding of our social world I may possess largely to Karl Marx, I now want to explain why I remain terribly
angry with him. In other words, I shall outline why I am by choice an erratic, inconsistent Marxist. Marx committed two spectacular mistakes, one of them an error
of omission, the other one of commission. Even today, these mistakes still hamper the left’s effectiveness, especially in Europe. Marx’s first error – the error of
omission was that he failed to give sufficient thought to the impact of his own theorising on the world that he was theorising about. His theory is discursively
exceptionally powerful, and Marx had a sense of its power. So how come he showed no concern that his disciples, people with a better grasp of these powerful
ideas than the average worker, might use the power bestowed upon them, via Marx’s own ideas, in order to abuse other comrades, to build their own power base,
to gain positions of influence? Marx’s second error, the one I ascribe to commission, was worse. It was his assumption that truth about capitalism could be
discovered in the mathematics of his models. This was the worst disservice he could have delivered to his own theoretical system. The man who equipped us with
human freedom as a first-order economic concept; the scholar who elevated radical indeterminacy to its rightful place within political economics; he was the same
person who ended up toying around with simplistic algebraic models, in which labour units were, naturally, fully quantified, hoping against hope to evince from
these equations some additional insights about capitalism. After his death, Marxist economists wasted long careers indulging a similar type of scholastic mechanism.
Fully immersed in irrelevant debates on “the transformation problem” and what to do about it, they eventually became an almost extinct species, as the neoliberal
juggernaut crushed all dissent in its path. How could Marx be so deluded? Why did he not recognise that no truth about capitalism can ever spring out of any
mathematical model, however brilliant the modeller may be? Did he not have the intellectual tools to realise that capitalist dynamics spring from the unquantifiable
part of human labour; ie from a variable that can never be well-defined mathematically? Of course he did, since he forged these tools! No, the reason for his error is
a little more sinister: just like the vulgar economists that he so brilliantly admonished (and who continue to dominate the departments of economics today), he
coveted the power that mathematical “proof” afforded him. If I am right, Marx knew what he was doing. He understood, or had the capacity to know, that a
comprehensive theory of value cannot be accommodated within a mathematical model of a dynamic capitalist economy. He was, I have no doubt, aware that a
proper economic theory must respect the idea that the rules of the undetermined are themselves undetermined. In economic terms this meant a recognition that
the market power, and thus the profitability, of capitalists was not necessarily reducible to their capacity to extract labour from employees; that some capitalists can
extract more from a given pool of labour or from a given community of consumers for reasons that are external to Marx’s own theory. Alas, that recognition would
be tantamount to accepting that his “laws” were not immutable. He would have to concede to competing voices in the trades union movement that his theory was
indeterminate and, therefore, that his pronouncements could not be uniquely and unambiguously correct. That they were permanently provisional. This
determination to have the complete, closed story, or model, the final word, is something I cannot forgive Marx for. It proved, after all, responsible for a great deal of
error and, more significantly, authoritarianism. Errors and authoritarianism that are largely responsible for the left’s current impotence as a force of good and as a
check on the abuses of reason and liberty that the neoliberal crew are overseeing today. Mrs Thatcher’s lesson I moved to England to attend university in
September 1978, six months or so before Margaret Thatcher’s victory changed Britain forever. Watching the Labour government disintegrate, under the weight of
its degenerate social democratic programme, led me to a serious error: to the thought that Thatcher’s victory could be a good thing, delivering to Britain’s working
and middle classes the short, sharp shock necessary to reinvigorate progressive politics; to give the left a chance to create a fresh, radical agenda for a new type of
effective, progressive politics. Even as unemployment doubled and then trebled, under Thatcher’s radical neoliberal interventions, I continued to harbour hope that
Lenin was right: “Things have to get worse before they get better.” As life became nastier, more brutish and, for many, shorter, it occurred to me that I was
tragically in error: things could get worse in perpetuity, without ever getting better. The hope that the deterioration of public goods, the diminution of the lives of
the majority, the spread of deprivation to every corner of the land would, automatically, lead to a renaissance of the left was just that: hope. The reality was,

however, painfully different. With every turn of the recession’s screw, the left became more
introverted, less capable of producing a convincing progressive agenda and, meanwhile,
the working class was being divided between those who dropped out of society and
those co-opted into the neoliberal mindset. My hope that Thatcher would inadvertently bring about a new political
revolution was well and truly bogus. All that sprang out of Thatcherism were extreme financialisation,

the triumph of the shopping mall over the corner store, the fetishisation of housing and Tony Blair. Instead of
radicalising British society, the recession that Thatcher’s government so carefully engineered, as part

of its class war against organised labour and against the public institutions of social
security and redistribution that had been established after the war, permanently
destroyed the very possibility of radical, progressive politics in Britain. Indeed, it rendered impossible
the very notion of values that transcended what the market determined as the “right” price. The lesson Thatcher taught me about the

capacity of a long-lasting recession to undermine progressive politics, is one that I carry with me into
today’s European crisis. It is, indeed, the most important determinant of my stance in relation to the crisis. It is the reason I am happy to

confess to the sin I am accused of by some of my critics on the left: the sin of choosing
not to propose radical political programs that seek to exploit the crisis as an
opportunity to overthrow European capitalism, to dismantle the awful eurozone, and to undermine the European Union of
the cartels and the bankrupt bankers. Yes, I would love to put forward such a radical agenda. But,

no, I am not prepared to commit the same error twice. What good did we achieve in Britain in the early 1980s by
promoting an agenda of socialist change that British society scorned while falling headlong into Thatcher’s neoliberal trap? Precisely none. What good will it do
today to call for a dismantling of the eurozone, of the European Union itself, when European capitalism is doing its utmost to undermine the eurozone, the
European Union, indeed itself? A Greek or a Portuguese or an Italian exit from the eurozone would soon lead to a fragmentation of European capitalism, yielding a

seriously recessionary surplus region east of the Rhine and north of the Alps, while the rest of Europe would be in the grip of vicious stagflation. Who do you think would benefit from this development? A progressive left, that will rise
Phoenix-like from the ashes of Europe’s public institutions? Or the Golden Dawn Nazis, the assorted neofascists, the
xenophobes and the spivs? I have absolutely no doubt as to which of the two will do best from a disintegration of the eurozone. I, for one, am not
prepared to blow fresh wind into the sails of this postmodern version of the 1930s. If this means that it is we, the suitably erratic Marxists, who
must try to save European capitalism from itself, so be it. Not out of love for European capitalism, for the
eurozone, for Brussels, or for the European Central Bank, but just because we want to minimise the unnecessary

human toll from this crisis. What should Marxists do? Europe’s elites are behaving today as if they understand
neither the nature of the crisis that they are presiding over, nor its implications for the future of European civilisation. Atavistically, they are choosing to plunder the
diminishing stocks of the weak and the dispossessed in order to plug the gaping holes of the financial sector, refusing to come to terms with the unsustainability of

the task. Yet with Europe’s elites deep in denial and disarray, the left must admit that we are just not ready to plug
the chasm that a collapse of European capitalism would open up with a functioning socialist
system. Our task should then be twofold. First, to put forward an analysis of the current
state of play that non-Marxist, well meaning Europeans who have been lured by the
sirens of neoliberalism, find insightful. Second, to follow this sound analysis up with
proposals for stabilising Europe – for ending the downward spiral that, in the end, reinforces
only the bigots. Let me now conclude with two confessions. First, while I am happy to defend as genuinely
radical the pursuit of a modest agenda for stabilising a system that I criticise, I shall not
pretend to be enthusiastic about it. This may be what we must do, under the present
circumstances, but I am sad that I shall probably not be around to see a more radical
agenda being adopted. My final confession is of a highly personal nature: I know that I run the risk of, surreptitiously, lessening the sadness
from ditching any hope of replacing capitalism in my lifetime by indulging a feeling of having become agreeable to the circles of polite society. The sense of self-
satisfaction from being feted by the high and mighty did begin, on occasion, to creep up on me. And what a non-radical, ugly, corruptive and corrosive sense it was.
My personal nadir came at an airport. Some moneyed outfit had invited me to give a keynote speech on the European crisis and had forked out the ludicrous sum
necessary to buy me a first-class ticket. On my way back home, tired and with several flights under my belt, I was making my way past the long queue of economy
passengers, to get to my gate. Suddenly I noticed, with horror, how easy it was for my mind to be infected with the sense that I was entitled to bypass the hoi polloi.
I realised how readily I could forget that which my leftwing mind had always known: that nothing succeeds in reproducing itself better than a false sense of
entitlement. Forging alliances with reactionary forces, as I think we should do to stabilise Europe today, brings us up against the risk of becoming co-opted, of
shedding our radicalism through the warm glow of having “arrived” in the corridors of power. Radical confessions, like the one I have attempted here, are perhaps
the only programmatic antidote to ideological slippage that threatens to turn us into cogs of the machine. If we are to forge alliances with our political adversaries

we must avoid becoming like the socialists who failed to change the world but succeeded in improving their private circumstances. The trick is to avoid the revolutionary maximalism that, in the end, helps the neoliberals bypass all
opposition to their self-defeating policies and to retain in our sights capitalism’s
inherent failures while trying to save it, for strategic purposes, from itself.
TBTM Add On
1AR – Yes Nuclear Terror
Stealing and illegal activities aren’t necessary
Jabr 18 (Ferris Jabr – contributing writer to The New York Times Magazine and Scientific
American, MA in Journalism from NYU. Alex Wellerstein – historian of science at the
Stevens Institute of Technology, PhD in the History of Science from Harvard. “This Is
What a Nuclear Bomb Looks Like,” New York Magazine. June 2018.
http://nymag.com/daily/intelligencer/2018/06/what-a-nuclear-attack-in-new-york-
would-look-like.html)
Once terrorists obtained the uranium, they would need only a small team of sympathetic engineers and physicists to build what is known as a
gun-type nuclear bomb, like the one dropped on Hiroshima. A gun-type nuke uses traditional explosives to fire a slug of uranium through a tube
directly into another chunk of uranium, fracturing huge numbers of atoms and unleashing a massive amount of energy. Compared to modern
nuclear missiles, which are far more powerful and complex, constructing a crude gun-type nuke is fairly straightforward. In 2002, when Joe
Biden was chairman of the Senate Foreign Relations Committee, he asked several nuclear laboratories whether
a terrorist group could construct an off-the-shelf nuclear weapon. Several months later, they gave
their answer: Without resorting to any illegal activities or drawing on classified information, and

using only commercially available parts, they had built a nuclear bomb that was
“bigger than a breadbox but smaller than a dump truck.” To underscore the danger, Biden had them bring
the device to the Senate.

Construction and transport are easy


Jabr 18 (Ferris Jabr – contributing writer to The New York Times Magazine and Scientific
American, MA in Journalism from NYU. Alex Wellerstein – historian of science at the
Stevens Institute of Technology, PhD in the History of Science from Harvard. “This Is
What a Nuclear Bomb Looks Like,” New York Magazine. June 2018.
http://nymag.com/daily/intelligencer/2018/06/what-a-nuclear-attack-in-new-york-
would-look-like.html)
What made the false alarm all the more frightening is just how plausible the prospect of a nuclear strike has become. The U.S. and Russia, both
of which maintain massive nuclear arsenals, are increasingly at odds. Iran has announced plans to ramp up its production of enriched uranium.
North Korea may already have nuclear missiles capable of striking anywhere in the U.S., and there is no way to know whether Trump’s
negotiations with Kim Jong-un will wind up increasing or decreasing the prospect of nuclear war. But the current state of dread, while entirely
understandable, has overshadowed two crucial realities about the threat of a nuclear calamity. First, a nuclear attack on the United States could
well come not from the skies but from the streets. Experts warn that it would be relatively easy for terrorists to build an “improvised nuclear bomb” and smuggle it into America. Building
a ten-kiloton bomb nearly as destructive as the one dropped on Hiroshima would require little more
than some technical expertise and 46 kilograms of highly enriched uranium — a
quantity about the size of a bowling ball.
The second reality we have failed to understand is what a nuclear detonation and its aftermath would actually look like. In our imaginations,
fueled by apocalyptic fictions like The Road and The Day After, the scale and speed of nuclear annihilation seem too vast and horrific to
contemplate. If nuclear war is considered “unthinkable,” that is in no small part because of our refusal to think about it with any clarity or
specificity. In the long run, the best deterrent to nuclear war may be to understand what a single nuclear bomb is capable of doing to, say, a city
like New York — and to accept that the reality would be even worse than our fears.

The Bomb
There are currently at least 2,000 tons of weapons-grade nuclear material stored in some 40
countries — enough to make more than 40,000 bombs approximately the size of the one that devastated
Hiroshima. Stealing the material would be challenging but far from impossible. Russia stockpiles

numerous bombs built before the use of electronic locks that disable the weapons in the event of
tampering. Universities that handle uranium often have lax security. And insiders at military

compounds sometimes steal radioactive material and sell it on the black market. Since
1993, there have been 762 known instances in which radioactive materials were lost
or stolen, and more than 2,000 cases of trafficking and other criminal activities.
Once terrorists obtained the uranium, they would need only a small team of sympathetic engineers and physicists to build what is known as a
gun-type nuclear bomb, like the one dropped on Hiroshima. A gun-type nuke uses traditional explosives to fire a slug of uranium through a tube
directly into another chunk of uranium, fracturing huge numbers of atoms and unleashing a massive amount of energy. Compared to modern
nuclear missiles, which are far more powerful and complex, constructing a crude gun-type nuke is fairly straightforward. In 2002, when Joe
Biden was chairman of the Senate Foreign Relations Committee, he asked several nuclear laboratories whether a terrorist group could
construct an off-the-shelf nuclear weapon. Several months later, they gave their answer: Without resorting to any illegal activities or drawing
on classified information, and using only commercially available parts, they had built a nuclear bomb that was “bigger than a breadbox but
smaller than a dump truck.” To underscore the danger, Biden had them bring the device to the Senate.

The last step in the process — smuggling the weapon into the United States — would be even easier. A ten-kiloton bomb, which would release as much energy as 10,000 tons of TNT, would be only seven feet long
and weigh about 1,000 pounds. It would be simple to transport such a device to
America aboard a container ship, just another unseen object in a giant metal box
among millions of other metal boxes floating on the ocean. Even a moderate amount of shielding would be enough to
hide its radioactive signature from most detectors at shipping hubs. Given all the naturally radioactive items that frequently trigger false alarms
— bananas, ceramics, Brazil nuts, pet deodorizers — a terrorist group could even bury the bomb in bags of Fresh Step or Tidy Cats to fool
inspectors if a security sensor was tripped. In 1946, a senator asked J. Robert Oppenheimer, the physicist who played a key role in the
Manhattan Project, what instrument he would use to detect a nuclear bomb smuggled into the United States. Oppenheimer’s answer: “A
screwdriver.” Amazingly, our detection systems have still not caught up to this threat: One would essentially have to open and visually inspect
every single crate and container arriving on America’s shores. Once the container ship reached a port like Newark, terrorists would have no trouble loading the concealed bomb into the back of an unassuming white
van and driving it through the Lincoln Tunnel directly into Times Square.
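A rough back-of-envelope check of the figures quoted in this card (assuming the article means metric tons and taking an HEU density of roughly 19 g/cm³; neither value is stated in the card) suggests the numbers are mutually consistent:

\[ \frac{2{,}000{,}000\ \text{kg}}{40{,}000\ \text{bombs}} = 50\ \text{kg per bomb} \approx 46\ \text{kg of HEU per Hiroshima-scale device} \]

\[ V = \frac{46{,}000\ \text{g}}{19\ \text{g/cm}^3} \approx 2{,}400\ \text{cm}^3, \qquad d = 2\left(\frac{3V}{4\pi}\right)^{1/3} \approx 17\ \text{cm} \]

That is, a sphere slightly smaller than a regulation bowling ball (about 22 cm across), which matches the “size of a bowling ball” comparison above.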

Five groups have capability and will


Hayes 18 (Peter Hayes – Director of the Nautilus Institute and Honorary Professor at the
Centre for International Security Studies at the University of Sydney. <KEN> "Non State
Terrorism and Inadvertent Nuclear War," January 18, 2018. Nautilus Institute for
Security and Sustainability. https://nautilus.org/napsnet/napsnet-special-reports/non-
state-terrorism-and-inadvertent-nuclear-war/)
For non-state actors to succeed at complex engineering project such as acquiring a nuclear weapons or
nuclear threat capacity demands substantial effort. Gary Ackerman specifies that to have a chance of succeeding, non-state actors

with nuclear weapons aspirations must be able to demonstrate that they control substantial resources, have a safe haven

in which to conduct research and development, have their own or procured expertise, are able to learn from failing and have the

stamina and strategic commitment to do so, and manifest long-term planning and ability to make rational
choices on decadal timelines. He identified five such violent non-state actors who
already conducted such engineering projects (see Figure 3), and also noted the important facilitating condition of
a global network of expertize and hardware. Thus, although the skill, financial, and materiel requirements of a

non-state nuclear weapons project present a high bar, they are certainly reachable.
Figure 3: Complex engineering projects by five violent non-state actors & Khan network Source: G. Ackerman, “Comparative Analysis of VNSA
Complex Engineering Efforts,” Journal of Strategic Security, 9:1, 2016, at: http://scholarcommons.usf.edu/jss/vol9/iss1/10/

Along similar lines, James Forest examined the extent to which non-state actors can pose a threat of nuclear terrorism.[10] He notes that such
entities face practical constraints, including expense, the obstacles to stealing many essential elements for nuclear weapons, the risk of
discovery, and the difficulties of constructing and concealing such weapons. He also recognizes the strategic constraints that work against
obtaining nuclear weapons, including a cost-benefit analysis, possible de-legitimation that might follow from perceived genocidal intent or use,
and the primacy of political-ideological objectives over long-term projects that might lead to the group’s elimination, the availability of cheaper
and more effective alternatives that would be foregone by pursuit of nuclear weapons, and the risk of failure and/or discovery before successful
acquisition and use occurs. In the past, almost all—but not all—non-state terrorist groups appeared to be restrained by a combination of high
practical and strategic constraints, plus their own cost-benefit analysis of the opportunity costs of pursuing nuclear weapons. However, should
some or all of these constraints diminish, a rapid non-state nuclear proliferation is possible.

Although only a few non-state actors such as Al Qaeda and Islamic State have exhibited such
underlying stamina and organizational capacities and actually attempted to obtain nuclear weapons-related skills, hardware,

and materials, the past is not prologue. An incredibly diverse set of variously motivated terrorist

groups exist already, including politico-ideological, apocalyptic-millenarian, politico-religious, nationalist-separatist, ecological, and
political-insurgency entities, some of which converge with criminal-military and criminal-scientist (profit based) networks; but also psycho-pathological mass killing cults, lone wolves, and ephemeral copy-cat non-state actors. The social, economic, and deculturating conditions that generate such entities are likely to persist and even expand.
