1AC
1AC – Federalism
Advantage 1: Federalism
Federal criminalization of marijuana destroys cross-agency collaboration at all levels
of government, weakens the rule of law, and wrecks effective federalist governance.
Malone 18 [Matthew A. Malone, Professor of Law, Lehigh University, Winter, 2018, “ARTICLE: FEDERAL MARIJUANA POLICY: HOMAGE
TO FEDERALISM IN FORM; POTEMKIN FEDERALISM IN SUBSTANCE,” 63 Wayne L. Rev. 215, 245-266, lexis]
The Controlled Substances Act, in form, pays fealty to federalism but, in substance, it does violence to its underlying principles in two respects.
First, despite the fact that the Controlled Substances Act allowed states the space in which to pursue their
own policy preferences, its treatment of marijuana as a Schedule I substance imposes difficult practical
obstacles for alternate state treatment of the substance. Whatever policy choices are made at the state
level, marijuana is an illegal substance under federal law--a fact whose consequences cause the Controlled Substances
Act to occupy the field in many practical respects. The Controlled Substances Act pays lip service to state authority. Second, tolerance for
violation of federal law, whether by state or federal authorities, is a sign of open disrespect for federal law.
Cultural norms have a significant influence on the level of voluntary compliance with the law. The
existence of effective deterrents to non-compliance and the reputational harm attendant to such non-
compliance are critical--and often reinforcing--components of an effective legal scheme that is predicated, in
large part, on voluntary compliance. Reputational harm [*262] is, in turn, dependent upon cultural norms and
transparency. Because the U.S. tax system depends, to a great extent, on voluntary compliance by
taxpayers, it offers an example of the effects of a cultural norm of compliance predicated, for the most part,
on deterrence. Levels of voluntary tax compliance are high in the United States and one would expect that this fact has contributed to a strong
cultural norm of compliance, thereby heightening the reputational harm of non-compliance. In fact, anecdotal evidence suggests that, absent
deterrence, levels of voluntary compliance would be quite low. For the most part, the third-party reporting requirements with respect to
wages, interest, dividends, and many other forms of income ensures an extremely high detection rate for noncompliance. Certain segments of
the taxpayer population whose noncompliance is not easily detected are notorious for noncompliance--small businesses, for example. Such
taxpayers have non-compliance rates of approximately fifty percent. [*263] Historically, income generated from foreign financial accounts has
not been subject to third party reporting, and non-reporting of such income was endemic. The I.R.S. estimated a gross tax gap of approximately
$450 billion in 2006, the vast majority of which was attributable to underreported income. Income not subject to third party reporting was
misreported at a fifty-six percent rate in that year. The I.R.S. issued an update to its 2006 report that estimated the annual tax gap at
approximately $458 billion during the years 2008 to 2010 and a sixty-three percent rate of misreporting by taxpayers not subject to third party
income reporting. The prevalence of a strong cultural norm of tax compliance faces several obstacles. The fact that tax information is confidential
inhibits the effects of reputational harm. High profile criminal prosecution or media attention to the tactics of publicly-traded corporations does
focus attention on reputational harm, but such cases are rare and, in any event, may not cause reputational harm. In the face of consumer
protests over its tax tactics in the United Kingdom, Starbucks recently decided to move its European headquarters there. However, Starbucks'
response [*264] was aberrational. Apple, in contrast, perceived little reputational harm stemming from its aggressive tax strategies. The
current legal state of affairs with respect to marijuana serves to diminish respect for federal law in
general. The fact that many states have made policy choices inapposite to long-standing federal policy
is neither unusual nor troublesome. What is problematic is the federal government's response to states' disparate policies.
Throughout the nation's history, state responses to federal law with which they disagree often have been
confrontational. State attempts to nullify federal law can be traced as far back as the late eighteenth century and over the course of our
history have implicated, inter alia, the Alien and Sedition Acts of 1798, taxation of the Bank of the United States, embargoes during the War of
1812, tariffs during the early part of the nineteenth century, and the Fugitive Slave Act. In modern times, states have attempted to thwart
school desegregation and federal gun control legislation. However, in such cases the federal government vigorously defended its prerogatives,
either in court or with threats of force. It is arguable that state-law legalization of marijuana is tantamount to
nullification. Although state legalization does not purport to invalidate federal law, such laws do explicitly sanction actions that the federal
government does not. In and of itself, this is not troubling and, in fact, is an example of federalism at work. The federal government's response,
however, is troubling. Instead of defending federal law--or quietly acquiescing to state policy--the federal
government has chosen to openly disregard its own law, as evidenced by the Department of Justice
memorandums to the U.S. Attorneys. Moreover, Congress itself has passed legislation that bars [*265] the
Department of Justice from using funds to enforce the Controlled Substances Act against activities that
are sanctioned under state law. The notion that a federal law enforcement authority would categorically announce that it will not enforce a
law under particular circumstances, and that Congress would defund any such efforts, is disquieting--particularly in this instance. The exercise of
prosecutorial discretion is commonplace and may be motivated by a number of factors such as resource constraints, laws whose effects in a
particular case are unintended or the result of poor legislative language, or societal shifts that command public support for such discretion. The
latter reason provides all the more justification in the face of a dysfunctional legislature that is incapable of reacting in a timely fashion to
societal changes. It appears that federal enforcement policy with respect to marijuana is a reaction to significant changes in social norms with
respect to marijuana usage, as reflected by the number of states that have legalized marijuana to some extent. Non-enforcement of laws under
such circumstances may be desirable if the societal benefits exceed the cost of lack of action. There is a cost of non-enforcement. Such actions
cannot help but erode citizens' respect for federal law. In effect, the federal government itself is signaling that federal law is not necessarily
binding. However, the benefits derived by current federal policy towards marijuana are minimal because it is, at best, a half-measure. To be
sure, state actors can take comfort in a reduced fear of criminal prosecution. This comfort, however, does not extend to the assurance that
contracts will be enforceable, to the ability to bank like other businesses, to the imposition of the same tax burden as taxpayers in general, or to
the availability of bankruptcy protection. The collateral effects of the Controlled Substances Act diminish, if not overwhelm, the discretion
exercised by the Department of Justice and the defunding of any efforts by the department not in keeping with such discretion. [*266] So what
has federal policy accomplished? It has disregarded federal law, yet it has maintained a host of infirmities on
state-sanctioned marijuana businesses. Moreover, federal policy is inviting state judges, sometimes at the urging of
state legislatures, to ignore federal law in their determination of public policy. Federal officials should exercise great
caution in this respect because if a state judge can ignore federal law due to the actions of federal officials in
the marijuana context, state judges can do so in other contexts. Moreover, I would imagine that once state judges
have crossed this Rubicon, it becomes more likely that they will do so again. V. CONCLUSION Current federal
policy regarding marijuana ostensibly pays homage to state policy preferences. However, the continued
status of marijuana as a Schedule I narcotic significantly impairs the ability of state policy preferences
to come to full fruition. If, as I and many believe, state preferences with respect to marijuana should be
respected, then the appropriate response is to amend federal law, not to ignore it and defund federal law
enforcement efforts. The refusal to take legislative action and, instead, to resort to politically more expedient means that openly
diminish the force of federal law can have dangerous, and broader, implications. The tax system evidences the difficulties caused by the failure
to entrench a norm of compliance not driven by deterrence, and policy makers should be careful in believing that the tax system is somehow
sui generis. The controversy over the enforcement of federal immigration laws and the open defiance of such laws by some state and local
authorities should give Congress and the Executive branch pause before taking any actions that encourage the belief that the law, in the right
circumstances, is appropriately ignored. Sometimes Congress gets lucky and avoids having to make politically-charged choices. The Supreme
Court bailed out Congress on the matter of same-sex marriage. It is unlikely that the Court will do the same here.
Current marijuana regime is highly unstable --- Murphy v. NCAA doesn’t solve
anything.
Young 20 (Ernest A. Young is the Alston & Bird Professor at Duke Law School, 2020, “Marijuana Federalism: Uncle Sam and Mary Jane”,
Chapter 3: “The Smoke Next Time: Nullification, Commandeering, and the Future of Marijuana Regulation,” Brookings Institution Press, ISBN
9780815737902, accessed 8/3/20)
Federal law has long prohibited the cultivation, sale, and consumption of marijuana.1 That prohibition has survived
legislative and administrative efforts at modification or repeal, as well as a variety of statutory and constitutional challenges in the courts.2 And
yet, as of 2019, thirty-three states have authorized the medicinal use of marijuana, and eleven states have authorized
recreational use.3 It is commonplace to speak of this recent trend in state law as “legalization” of marijuana, but that
is plainly not the case. As anyone who has taken a high school civics course could tell you, state decriminalization cannot
override the federal prohibition.4 Nonetheless, those who speak of “legalization” or “decriminalization” are not wrong, at least as a
practical matter. Although marijuana cultivation, sale, and use remain federal crimes, federal law enforcement has long depended on state and
local authorities to do the heavy lifting when it comes to drug enforcement. Under the anti-commandeering doctrine,5 Congress lacks
constitutional authority to compel state or local officers to cooperate in such enforcement. And in the absence of state and local cooperation,
federal authorities have so far proven unwilling to deploy sufficient resources to fill the gap. The dependence of federal drug enforcement on
state cooperation has enabled states like Colorado, Washington, and California not only to sit out the federal drug war on marijuana but,
effectively, to “nullify” federal law with which they disagree.6 This chapter begins by suggesting that state noncooperation with
federal drug policy is a form of nullification that may usefully be compared with South Carolina’s famous resistance to the federal
tariff in the 1830s. The Supreme Court’s recent decision in Murphy v. NCAA,7 which affirmed New Jersey’s power to remove state law
prohibitions on sports gambling in contravention of federal policy, likely strengthens the hand of states wishing to facilitate the
marijuana industry within their borders. I suggest, however, that the current marijuana regime remains highly unstable.
Just as the cooperative structure of modern federalism gives states leverage to influence national policy on
issues like marijuana, that same structure makes it difficult for states to foster a workable state legal regime for
activities that remain illegal under federal law. And now that at least some national authorities are less sympathetic to
marijuana legalization, this lurking instability may well flare into open conflict. MODERN-DAY NULLIFICATION:
COOPERATIVE FEDERALISM AND STATE MARIJUANA LEGALIZATION It is a truism of modern constitutional law that no state may
“nullify” federal law, so long as that law falls within Congress’s (very broad) enumerated powers. That is the lesson
of the famous Nullification Crisis of the 1830s.8 In 1832, the South Carolina legislature enacted an ordinance declaring that the federal tariffs of
1828 and 1832 were null and void within the boundaries of that state.9 Southerners viewed the tariff as not only bad policy but also
unconstitutional because its purpose was not so much to raise revenue as to protect domestic industries.10 But South Carolina’s protest, which
built on a strong theory of state sovereignty advanced by John C. Calhoun, went down to defeat. President Andrew Jackson rejected
nullification in principle, threatened to enforce the tariff by force, then undercut the state’s practical position by introducing new legislation to
radically lower that same tariff.11 No other state joined South Carolina, and, in fact, eight Southern state legislatures passed resolutions
condemning the South Carolinians’ action.12 To the extent that resolution of a political dispute can settle constitutional meaning, the tariff
dispute “decisively rejected” South Carolina’s claimed authority to nullify federal law.13 So
how can Colorado and other states
purporting to legalize marijuana nullify the federal Controlled Substances Act? The answer has to do with changes
in American federalism that, while weakening the theoretical sovereignty of the states on which Calhoun’s theory
relied, have rendered the national government frequently dependent on state cooperation in practice. Contemporary
constitutional doctrine largely eschews the nineteenth-century notion of “dual federalism,” which held that state and federal governments
reign over separate and mutually exclusive spheres of authority.14 Hence, in modern America, “virtually all governments are involved in
virtually all functions,” and “there is hardly any activity that does not involve the federal, state, and some local government in important
responsibilities.”15 Under this regime of “cooperative federalism,” federal programs like the Clean Air and Water Acts,
Medicaid, and telecommunications regulation are administered by state officials working in conjunction with federal
administrative agencies.16 Cooperative federalism is frequently seen as a tool of centralization, leading to
“concentration of political powers in the national government.”17 Such regulatory regimes allow federal agencies to
leverage their own limited resources, sometimes coopt state officials to adopt a more national perspective, and have gone
hand in hand with an expansion of federal authority that would shock our Constitution’s framers. But cooperative
federalism schemes do typically have a significant amount of play in the joints, which allows state officials tasked with implementing
federal mandates to put their own spin on national directives. And because cooperative federalism can leave
national authorities dependent on state cooperation to implement national law, state officials can sometimes
resist or undermine federal mandates. Jessica Bulman-Pozen and Heather Gerken have dubbed this sort of state behavior
“uncooperative federalism.”18 The drug enforcement regime is not technically one of cooperative federalism;
in theory, state and federal authorities each enforce their own distinct set of drug laws. But at least prior to
the wave of marijuana liberalizations, Congress and the states had criminalized largely the same
behaviors, and federal and state law enforcement pervasively cooperated in practice. The allocation of labor was
hardly equal, however; in practice, federal authorities played a decidedly secondary role. State and local law enforcement
personnel outnumber federal officers in this country by a factor of roughly ten to one,19 and drug enforcement must now compete with
terrorism and other national priorities for the attention of national officials. Federal authorities have thus tended to focus on
major distribution “kingpins” while leaving the overwhelming majority of minor drug offenses to state and local police.20 In 2007,
federal agents made 7,276 marijuana arrests—less than 1 percent of all American marijuana arrests that year.21 Federal marijuana policy thus
depends heavily on state and local enforcement. But contemporary constitutional doctrine makes clear that Congress may not require
state and local officials to participate in the administration of federal law if the state wishes them not to do so. Under the
“anti-commandeering” rule,22 Congress may request state assistance and offer considerable inducements to secure it,
but Congress may not require or coerce state officials to participate in cooperative federalism regimes.23 And
the Court’s latest anti-commandeering decision in Murphy makes clear that Congress may not prohibit states from repealing
criminal prohibitions in their own law.24 Hence, although federal marijuana laws remain legally binding in
those states purporting to legalize the drug, state and local officials in those jurisdictions have the right to engage in the
ultimate form of “uncooperative federalism”—they can, at the behest of state legislatures or referenda, simply go on
strike. Because federal authorities have insufficient resources to credibly enforce federal marijuana laws, the loss of state
cooperation effectively nullifies the federal prohibition. As Rob Mikos explains, “The federal government has too
few law enforcement agents to handle the large number of potential targets. Simply put, the expected sanctions for using or
supplying marijuana under federal law are too low, standing alone, to deter many prospective marijuana users or suppliers.”25 This “modern
day nullification” practiced by pro-marijuana states and sanctuary jurisdictions is quite different in its formal structure from Calhoun’s theory
that South Carolina tried to put into practice in the 1830s. The South Carolinians purported to hold the federal law legally invalid; their theory,
in essence, posited that state governments had their own unreviewable authority to interpret the Constitution and to make their
interpretations stick within their own borders.26 Colorado and like-minded states make no such claim, of course. Instead, they are
simply betting that, without state and local cooperation, federal authorities will be unwilling to deploy sufficient
resources to enforce the national marijuana laws on their own. So far, it has been a good bet. THE INSTABILITY OF THE
CURRENT REGIME The current “regime” of marijuana regulation—if one can call it that—has certain real advantages. Because the national
government lacks the resources or political will to fill the enforcement gap, individual states have been able to experiment with liberalized drug
policies. As Justice Louis Brandeis observed long ago, state-level experimentation is beneficial both because it is a means to “try novel social and
economic experiments,” and because it permits these experiments to be tried “without risk to the rest of the country.”27 State experiments
have, in other words, something to offer both proponents and skeptics of reform. The combination of the anti-commandeering doctrine with
limited national enforcement resources has allowed reform-minded states to add their voices to a debate heretofore dominated by national
interests and institutions. The present regime is considerably less attractive, however, as a practical framework for
regulation. Most proponents of marijuana liberalization do not seem to envision a laissez faire regime for
marijuana; rather, they tend to project a world in which marijuana remains restricted for certain persons
(especially minors), regulated as to production, composition, sale, and circumstances of its use, and taxed. The continuing illegality of
marijuana under federal law complicates all these objectives significantly. For that reason, the current modus
vivendi appears to be highly unstable. That instability arises from two specific sources: First, many of the regulatory
measures that legalizing states have undertaken (or might desire to adopt in future) may be preempted by the
federal law prohibition.28 Second, the viability of state marijuana legalization continues to depend on the
forbearance of federal law enforcement, but neither the willingness of the Justice Department to focus
on other priorities nor the unwillingness of Congress to devote more resources to marijuana
enforcement is guaranteed. Consider preemption first. Preemption might operate on state marijuana laws in at
least two ways. In the early days of state legalization efforts, many believed federal law preempted states from
removing their state-law prohibitions on marijuana use. The argument was that such state repeals effectively “authorized”
marijuana use, and that “affirmatively authorizing a use that federal law prohibits stands as an obstacle to the implementation and execution of
the full purposes and objectives of the Controlled Substances Act.”29 This makes a certain degree of sense, at least to the extent that one takes
“obstacle” preemption seriously.30 As discussed, state legalization throws an enormous practical wrench into the gears of federal marijuana
policy. The trouble with this argument, however, is that it suggests that federal law effectively compels states
to continue enforcing their laws against marijuana even if the states would like to repeal them. As Robert Mikos has
long argued, any such argument would violate the anti-commandeering doctrine by effectively requiring
states to implement national policy without their consent.31 The Supreme Court seemed to confirm Professor
Mikos’s reasoning in Murphy v. Nat’l Collegiate Athletic Ass’n.32 That decision invalidated the federal Professional and
Amateur Sports Protection Act (PASPA),33 which limited states’ ability to “authorize” sports gambling. Although the United States sought to
distinguish between state laws that simply repealed state gambling prohibitions and those that “authorized” sports gambling, Justice Samuel
Alito’s majority opinion held that this distinction made no difference.34 Using state marijuana laws as an example, the Court suggested that one
might sensibly describe state repeals of those laws as “authorizing” marijuana use.35 Because the federal statute in Murphy directed the states
not to authorize sports gambling laws, it violated the anti-commandeering doctrine.36 That doctrine, Justice Alito explained, forbids Congress
“the power to issue orders directly to the States.”37 Mikos seemed to speak for most observers when he concluded that “the Murphy
decision will provide states the clear precedent they need to debunk lingering claims that their marijuana
reforms are preempted because they ‘authorize’ drug activities federal law forbids.”38 Professor Mikos may well be right, but
aspects of the Murphy opinion may cut the other way. To invalidate the PASPA, the Court had to reject the
United States’ argument that the statute was simply a “valid preemption provision.”39 Such a provision,
Justice Alito said, must meet two requirements: (1) it must be an exercise of one of Congress’s enumerated
powers, and (2) it “must be best read as [a provision] that regulates private actors.”40 This latter requirement
derived from the basic principle inherent in the anti-commandeering doctrine that “the Constitution ‘confers upon Congress the power to
regulate individuals not States.’ ”41 To underscore the point, Justice Alito noted that “regardless of the language sometimes used by Congress
and this Court, every form of preemption is based on a federal law that regulates the conduct of private actors, not the States.”42 The PASPA
failed this requirement. “[I]t is clear
that the PASPA provision prohibiting state authorization of sports gambling
is not a preemption provision,” the Court said, “because there is no way in which this provision can be
understood as a regulation of private actors.”43 The PASPA neither “confer[red] any federal rights on private actors interested
in conducting sports gambling operations” nor “impos[ed] any federal restrictions on private actors.”44 The trouble
is that the
relevant provisions of the federal CSA are not directed at the states in the way § 3702(1) of the PASPA
was. The CSA regulates private actors by prohibiting them from cultivating, distributing, or consuming
marijuana. The CSA’s impact on state actors stems primarily from the argument that state “authorizations”
of marijuana production, sale, and consumption undermine the purposes of the national prohibition.
Murphy did not explicitly consider any implied preemption arguments of this kind. And the CSA’s basic
prohibition on private activity clearly passes Murphy’s test for a valid preemption provision. At the same
time, the Court did note that the PASPA contains a prohibition on private conduct somewhat similar to the
CSA’s prohibition.45 If the Court were inclined to think that such a private prohibition impliedly preempted state
authorizations, one would have thought the Court would have said so.46 But the PASPA scheme was odd enough, as it applied to
both public and private entities, that one should probably resist reading Murphy as deciding questions it did not explicitly purport to resolve.
The point here is simply that the implied preemptive effect of the CSA’s general prohibition on private drug
activity with respect to state efforts to authorize such activity may well be one of those unresolved questions. In any
event, even more difficult preemption questions arise as states seek to develop their own robust legal
regimes to regulate the legal use of marijuana after legalization. States do not wish to leave marijuana unregulated; they seek to license
distributors, regulate product quality, and restrict its use by certain persons (for example, minors) and persons engaged in certain activities (for
example, driving or operating heavy machinery). They also generally seek to foster the development of a thriving marijuana industry that will
provide jobs and tax revenue within the state. Certain models a state might adopt for achieving these ends seem plainly preempted under
current doctrine. Several states, for example, control the distribution of alcohol and facilitate its taxation by operating state-run liquor stores.
Similar state arrangements for marijuana would have the state actually conducting activity prohibited by
federal law; surely this would be preempted.47 It is not much more of a step to suggest that state laws encouraging
private sellers of marijuana—perhaps through various kinds of subsidies, tax exemptions, and the like—would likewise involve
state entities in directly encouraging the commission of federal crimes.48 Preemption thus does not
prevent a state from removing state-law prohibitions on marijuana use or from doing so in a more limited way that leaves
many restrictions in place. But it is likely to be an impediment to efforts to derive public benefits from
decriminalization by promoting a prosperous marijuana industry within a particular jurisdiction. The second set of difficulties facing state
legalization efforts arises from their continuing need for forbearance by national authorities. Federal
authorities do not have
sufficient resources to replace state and local officials no longer authorized to pursue marijuana producers, dealers, and
users. One should not assume, however, that federal enforcement resources will remain constant. To cite
an old example, Justice Joseph Story may well have thought he had fatally undercut enforcement of the
Fugitive Slave Act when he held in Prigg v. Pennsylvania49 that state and local law enforcement could not be
required to participate in repatriating accused fugitives to the South.50 But the national government proved willing to
expand federal resources as necessary to enforce the act, creating one of the nation’s earliest federal enforcement bureaucracies.51
Such an expansion of federal resources for marijuana enforcement seems unlikely given current conditions, but those conditions could change.
In any event, a Justice
Department hostile to state marijuana reforms could make life miserable for
reformers through selective, high-profile prosecutions for maximum in terrorem effect. And resource-intensive criminal
prosecutions are not the only way in which federal drug laws may bite. Nascent marijuana businesses may be
unable to access the banking world on account of federal prohibitions on financial transactions involving illegal activity.52 Although
the House passed a bill easing such restrictions for marijuana businesses just as this chapter went to press, that bill faced an uncertain future in
the Senate;53 hence, “[m]ost financial institutions are staying away from the cannabis industry because of the burdensome regulations and lack
of legal clarity, such as the potential for repercussions given that marijuana remains illegal on a federal basis.”54 Similarly, marijuana
businesses may face damaging federal tax consequences;55 and state ethics rules may prevent attorneys from counseling
persons who engage in activities that remain illegal under federal law.56 Likewise, individuals using marijuana in violation of
federal law may face significant employment or family law consequences, and persons on probation or parole
may find that marijuana use constitutes a violation of that status.57 All these collateral consequences of federal illegality may
well combine to undermine the development of anything like a thriving market of legitimate marijuana
businesses simply because state prohibitions have been removed. The development of such a market, with its
attendant need for long-term investment, will depend on broad confidence that the whole range of federal
regulatory and enforcement authorities—and not just the local U.S. Attorney’s office—will forbear enforcement of the
federal prohibition for the foreseeable future. The Obama administration—broadly sympathetic with the goals of state reformers—
was largely willing to forbear. Equally important, that administration was willing to articulate its policy of nonenforcement in official policy
statements.58 But Attorney General Jeff Sessions rescinded that guidance on January 4, 2018, and directed U.S. Attorneys to “follow well-established
principles that govern all federal prosecutions” when addressing marijuana-related offenses.59 Some U.S. Attorneys then signaled
that they would not alter their approach to marijuana prosecutions after the guidance was rescinded, but others vowed to pursue
marijuana criminal cases more vigorously.60 Attorney General William Barr, who replaced Sessions in February of 2019, has not issued any guidance of his own
and has told Congress that he favors a statutory solution over “just ignoring the enforcement of federal law.”61 Despite these shifts under the
Trump administration, we have not yet seen a spate of federal marijuana prosecutions in legalizing states. Perhaps this should not be surprising:
after all, a return to pre-Obama enforcement priorities would still leave marijuana low on the list of federal priorities. And although the current
Attorney General is evidently anti-legalization, he ultimately serves at the pleasure of a president who has repeatedly indicated that his
sympathies lie in the opposite direction.62 Finally, strong pro-marijuana sentiments in legalizing states no doubt lend weight and immediacy to
the political safeguards of federalism on this issue.63 With control of Congress so finely balanced and fiercely contested, one may doubt
whether a Republican administration would press aggressively on this issue anytime soon. Nonetheless, it would be hard to paint the longer-
term picture as anything other than cloudy on the executive forbearance question. The present correlation of political forces makes it unlikely
that federal enforcement will expand to fill the gap caused by state defections from the War on Drugs, but those forces are highly contingent.
To the extent that building a viable infrastructure of legal marijuana businesses requires long-term
investment, such investment remains an uncertain bet so long as marijuana remains illegal at the national
level. Finally, it is worth remembering that not everyone is Justice Oliver Wendell Holmes’s “bad man”—that is, motivated only by the fear of
sanctions.64 Even if adverse legal consequences are unlikely, some persons may have strong moral or religious aversions to lawbreaking.65
Moreover, the national government has interests in compliance with federal law that are independent of the policy objectives embodied in that law; it is not great for national authority generally, after all, if the states learn that they may ignore it with impunity. National authorities are unlikely to be indifferent to this basic institutional interest in compliance with national law indefinitely. President Dwight Eisenhower, for example, seems to have sent federal troops to Little Rock, Arkansas, in 1957 not so much out of personal support for the Supreme Court’s desegregation decisions as out of a perceived need to enforce the supremacy of federal law.66 For all these reasons, then, the current state of play on marijuana is best viewed as unstable—a temporary standoff, not an enduring settlement.67

THE PROSPECTS FOR STABLE STATE MARIJUANA LEGALIZATION
What might an enduring settlement look like, and what would it take to get there? To the tidy minds that seek national uniformity on every
question of significance, the current patchwork of marijuana regulation and de facto deregulation may seem like chaos. But tidy uniformity may
be overrated.68 Moreover, I have argued in other work that federalism offers a valuable safety valve on polarizing issues like marijuana policy by allowing individual jurisdictions to satisfy the public preferences predominating within their borders without requiring a divisive fight at the national level.69 If one agrees that the desirability of marijuana regulation is a subject upon which reasonable minds can differ, and that preferences on that question are unevenly distributed geographically among the several states, then the current flowering of state-by-state experimentation on that question may be valuable to the nation as a whole. And if that is true, then the inherent instability of those experiments should be troubling.
Most relevant to today's marijuana legalization debate is the Controlled Substances Act ("CSA"), aimed at "healing America's drug users,"16 which incorporates marijuana among its many listed illicit substances.17 Maximum penalties for marijuana possession, cultivation, and distribution range from one year to life in prison, with maximum fines from one thousand to eight million dollars depending on the amount of marijuana at issue and the circumstances underlying the conviction.18 The CSA is undoubtedly one of the most salient consequences of current Supreme Court jurisprudence regarding Congress' interstate commerce power. Notably, the Court found in Gonzales v. Raich that Congress did not overstep its constitutional authority by regulating the trade of illicit substances, including marijuana.19 Relying on Wickard v. Filburn,20 the Court held that even purely intrastate cultivation and distribution of marijuana is subject to federal regulation under the interstate commerce clause, and hence constitutionally controlled under the CSA.21 Even before Supreme Court jurisprudence dramatically extended Congress' ability to regulate illicit substances in interstate commerce, several commentators decried a federal "monopoly" over drug policy.22 Though the federal government has always possessed "an impressive array of tools to influence policymaking at lower levels of government,"23 recent developments in academia and state-based drug policies suggest that state authority and policy innovation have established a solid footing in the marijuana law paradigm, ranging from medical-use licensing to decriminalization. While recognizing the federal government's oversight role in
drug enforcement policy, this article ultimately argues for horizontal competition, at the expense of federal supremacy,25 in marijuana policy for several reasons. First, it is not clear that the federal government has constitutional authority to mandate state drug policy.26 Though preemption, through properly enacted federal law, plays an important role in drug enforcement, the federal government cannot require a state to enforce federal laws.27 Second, though the mere presence of federal enforcement undoubtedly affects state policymaking, the lack of federal enforcement resources strongly limits the feasibility of effective wide-scale federal enforcement. To be sure, drug laws are almost exclusively implemented and policed by state and local governments. As such, the likelihood of vertical competition from the federal government is reduced.29 Lastly, federal regulation is inefficient and burdensome, diminishing citizen autonomy, while
hindering innovation and consumer choice.30 Regardless of the federal government's involvement in drug policy, current state innovation
in marijuana legislation is undoubtedly significant. Presently, sixteen states as well as the District of Columbia have enacted legislation legalizing the possession, cultivation, and use of marijuana for the treatment of certain illnesses.31 Against this state regulatory backdrop loom the CSA and the potential for DEA and FBI enforcement. As previously mentioned, however, the federal government plays a very small role in the enforcement and prosecution of marijuana users, growers, and dispensaries; United States Attorneys manage only about one percent of all marijuana cases, leaving the rest to state enforcement.32 Given that federal resources are unable to manage the overwhelming drug caseload, and that many states have already shown their unwillingness to cede power over drug regulation, there is a significant possibility for competition between states in the enactment of innovative marijuana regulatory schemes and legalization policies. This dynamic is known as jurisdictional competition or, more simply, the market for laws.33 The basic premise of the jurisdictional competition paradigm is that governments compete to supply laws in order to support the influx of business and economic benefits, taxes, and citizenry.34 This legal market concept has been
applied most extensively in the corporate law context, focusing on Delaware's market dominance. The market concept has, however, found a receptive audience in the fields of environmental law, tax, bankruptcy, trusts, and family law.35 This article embarks on an analysis of the competitive framework over drug lawmaking authority and enforcement. While recognizing a continuing federal oversight role, it ultimately argues for decentralized regulation over marijuana policy. Using alcohol regulation as a guiding example, this article argues for state authority over marijuana regulation, with localized enforcement and state discretion over local policymaking authority. Notwithstanding the ban on possession, cultivation, distribution, and use, there are a number of regulatory
mechanisms states can implement outside of absolute illegality. For instance, states can institute penalty schemes by varying sentencing guidelines or establishing statutory penalty frameworks that differentiate between misdemeanor and felony violations.37 In addition, some states have employed alternative sentencing schemes, experimenting with drug treatment courts and probation dependent upon the successful completion of a rehabilitation program. Outside of varying penalties, states have an array of legalization options, ranging up to marijuana licenses for medical use.39 40 Not to mention, several states have instituted decriminalization laws wherein possession and use are either legal or considered a misdemeanor, while distribution and trafficking remain criminal.41
Enhanced forfeiture is also an interesting option for reform that has potential incentive effects not only for criminal possessors but for state coffers. As laws currently stand, asset forfeiture "provides a significant incentive for state and local governments both to allocate substantial resources to drug enforcement and to cooperate with federal agencies."43 On the other hand, from a marijuana user's perspective, reform initiatives aimed at limiting state and federal ability to confiscate property in conjunction with drug seizures may be a considerable incentive to relocate.44 Given the myriad of potential decentralized alternatives for marijuana regulation, there is significant room for jurisdictional competition among state and municipal governments for citizens, businesses, tax revenues, and reduced violent crime. On the other hand, the drug debate is never quite so clear-cut; there are significant political45 and moral considerations, e.g., addiction, rehabilitation, and health care expenditures, as well as the potential for decreased economic productivity in the wake of potentially rampant drug abuse. Given the complex considerations involved, the next Part
will introduce a new theory of decentralized marijuana regulation modeled partly after state alcohol regulation, while accounting for possible spillover effects, interest group influence, and political incentives unique to the market for marijuana.

II. Invigorating the Market for Marijuana Laws - Embracing a Decentralized Role for Regulation

With fundamentally different individual and political viewpoints in the marijuana debate, citizen autonomy should be at the forefront of the regulatory policymaking agenda, providing an avenue for increased individual choice and more efficient and innovative lawmaking. Accordingly, the core argument in this article promotes the redistribution of marijuana regulatory authority away from the federal government and into the hands of the states and local authorities. After first outlining the current regulatory framework, this Part argues for the rejection of federal control over marijuana policymaking. Noting the federal government's failure to account for state innovation and autonomy, the first section utilizes public choice theory to establish a state-based framework akin to alcohol regulation following the Twenty-First Amendment. The following section explains criticisms of such a position, but ultimately dispels these analyses in favor of the state as central decision-maker. The next section, however, points out, and expands upon, two well-founded critiques of consolidated state control so as to build on the decentralization framework, placing state and local politics at the forefront of the marijuana regulatory regime.

A. Federal Involvement in Drug Policy

Ironically, current drug policy can best be described by the "Cooperative Federalism"
framework. This regulatory depiction is "a combination of federal policy mandates and inducements (such as conditional grants) that require or provide strong financial incentives for states to implement the federal policy."46 National policy issues that are not only resource intensive but also respond to hypothetical state-to-state externalities further buoy the federally dominated regulatory regime.47 Sparked by states enacting reactive policies to a particular problem, the federal government responds at the behest of states and interest groups most invested in the issue.48 On one hand, states concerned about the capacity to fund these programs, and about the ability to successfully implement the programs if other states do not conform, push the federal government to enact a national program. On the other hand, federal politicians can garner the political support of vocal interest groups,50 while only paying for part of the overarching program. In the context of drug policy, "Cooperative Federalism" is illustrated by the pioneer states that first prohibited marijuana, and the resulting federal program, implemented through the CSA and the "War on Drugs." As the "Cooperative Federalism" framework predicts, the drug regulation dynamic balances "federal desires for control (and hence political support) and . . . ." This paradigm, however, does not leave adequate room for state innovation in the drug policy arena.53 Rather, this paradigm is only responsive to the dynamic wherein states and the federal government express views that are in agreement, or at the very least, that can be squared through political compromise.54 As a result, states are left either to venture on their own, in defiance of federal policy initiatives, or to maintain some complicity with the "War on Drugs."

B. Federal Drug Regulation Is Hampering the Market for Marijuana Policy

Amidst the federal drug policy debate,
there are abundant theories for optimal policymaking and response to population preferences. These theories are based on principles of federalism, public choice, and efficient competition, and range from strong federal control to variations of hybrid state-federal policymaking that either attempt to explain the current dynamic or argue for a shift in regulatory policy to better serve democracy, autonomy, and efficiency. The federal government has instituted a "War on Drugs," stemming from political and moral opposition that predominantly began in the 1970s. Out of political necessity and increasing violence attributed to drug trafficking, the federal executive branch invested ever-increasing resources into drug enforcement. Rather than stemming drug use, the federal drug regime led to divergent state drug policies56 and an Executive Order that retreats from the strictures of the CSA.57 This drug regime also confuses the citizenry and retail merchants as to how the federal government will react to marijuana use, possession, and distribution. In contrast to federal marijuana laws, alcohol policy covering use and distribution is largely left to the states.60 It is relevant, if not absolutely necessary, to compare the maladies documented from alcohol prohibition in the 1920s in an effort to engender a new era of efficient and autonomous marijuana policymaking in the hands of state and local governments.

1. The Pitfalls of Prohibition

The federal government has three basic positions it can take in response to a given state policy: active support, neutrality, or active discouragement. In the drug regulation arena, the federal government's traditional stance has been based almost entirely on how closely state policy resembles the "War on Drugs" paradigm.62 In contrast, the federal government's role in the alcohol arena strongly supports state efforts at policymaking,63 where the only independent roles for the federal government lie in labeling, taxation, and interstate distribution.66
Current United States alcohol policy is hardly surprising given Prohibition's sordid past. On January 16, 1920, the Eighteenth Amendment to the Constitution went into effect and prohibited "the manufacture, sale, or transportation of intoxicating liquors . . . for beverage purposes. . . ."67 Within a few short years, alcohol use once again became rampant, but was now unregulated, dangerous, and controlled by organized crime; prisons were overpopulated, and corruption among public officials was unprecedented.68 Prohibition's failure is distinctly ironic, given its lofty social and public health goals. Indeed, the "noble experiment," as it came to be known, "was undertaken to reduce crime and corruption, solve social problems, reduce the tax burden created by prisons and poorhouses, and improve health and hygiene in America."69 These goals are similarly idealized by the CSA,70 which has been espoused as no less than the protector of the nation's health and public welfare.71 Ignoring the pitfalls of Prohibition and the social ills befalling blind adherence to rigid moral high ground not only ignores potential economic boons due to product taxation and retail sale, but also leaves "controlled" substances to be bartered for in the underground market, adulterated by drug dealers, and subject only to regulation through the criminal underworld.72 Despite the noble ideals pushed by Prohibitionists in an effort to rid society of the social ills created by alcohol, the homicide rate increased by seventy-eight percent during Prohibition, all other crimes increased by twenty-four percent, and arrests for drunkenness and disorderly conduct increased by forty-one percent.73 In essence, "[m]ore crimes were committed because [P]rohibition destroy[ed] legal jobs, creat[ed] black-market violence, divert[ed] resources from enforcement of other laws, and greatly increase[d] the prices people ha[d] to pay for the prohibited goods."74 The analogy to marijuana is striking considering the enormous rate of violent crime attributed to drug trafficking, yet the exorbitant number of inmates in United States prisons are incarcerated as a result of simple drug possession.75

2. A State-Based Solution to Federal Marijuana Prohibition

In response to the historically apt analogy to Prohibition and the
arguable shortcomings inherent in the current federal drug regime, some commentators argue for adoption of the "Constitutional Alternative."76 Finding support in basic public choice theory, supporters of the "Constitutional Alternative" argue for a basic reversion of authority to the states, wherein "the power to control the manufacture, distribution, and consumption of all psychoactives" would be under state control. Strongly resembling the current federal-state dynamic over alcohol distribution, this arrangement would, however, leave the regulation of interstate drug distribution to the federal government.79 Rather than uniform federal control, the goal is "'permitting the states to choose drug-control strategies more in line with the preferences and circumstances of their citizens.'"80 This state-based framework is supported by two overarching policy rationales: (1) citizen choice; and (2) policy innovation.81 Decentralization would promote more autonomy among the United States population to choose the laws and regulations that fit their lifestyle preferences, so that if a "resident of one state does not
like the rules imposed by the majority there, he is free to move to a state whose laws better suit his preferences or circumstances."82 For example, if a nation consisted of one hundred people, forty of whom want marijuana to be legalized and sixty who would opt to retain the status quo, the ban on marijuana possession will remain in place, as it is in the CSA, leaving forty citizens unhappy with the law.83 If, however, the nation were divided into separate states, each with the power to enforce its own laws, then more citizens would be content with the nation's regulatory policy.84 For instance, if one state contains fifty residents who favor the status quo and ten residents who would opt for legalization, while another state contains ten residents favoring continued illegality and thirty who would opt for legalization, then one state will opt to maintain marijuana's illegal status, while the other will opt for some form of legalization.
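The head-counting in this hypothetical can be verified with a minimal sketch. This is an editorial illustration, not part of the quoted article; the state labels "A" and "B" are invented here, and the population figures are taken directly from the example above.

```python
# Verifying the majority-rule arithmetic from the two-state hypothetical.
# Each state adopts the policy its own majority prefers; "satisfied"
# counts residents whose preference matches the governing policy.

states = {
    "A": {"ban": 50, "legalize": 10},   # majority favors the status quo
    "B": {"ban": 10, "legalize": 30},   # majority favors legalization
}

# One uniform national rule: the nationwide majority (60 of 100) keeps the ban,
# so only the 60 ban supporters are satisfied.
national_satisfied = sum(s["ban"] for s in states.values())

# State-by-state rule: each state follows its own local majority.
state_satisfied = sum(max(s.values()) for s in states.values())

print(national_satisfied)  # 60 satisfied under a uniform federal ban
print(state_satisfied)     # 80 satisfied under decentralized policy
```

The tallies match the passage: decentralization raises the number of satisfied citizens from sixty to eighty, leaving only twenty unhappy.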
Simple arithmetic provides that eighty of the nation's citizenry will be satisfied, while twenty are still unhappy with the policy.86 Adding in the option of citizen mobility and minimal relocation costs, states with divergent political considerations can experiment with new, and possibly more optimal, regulatory policy. In stark contrast, a purely unitary federal policy only gives the political process one shot to respond to
social needs.89 As Justice Brandeis' famous dissent points out, "[i]t is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve
as a laboratory; and try novel social and economic experiments without risk to the rest of the country."90 The simplistic example above shows how the policy innovation rationale easily fits into the public choice model, wherein two states adopting different policies can adapt, amend, or reject their own policies in response to the consequences, both positive and negative, displayed by their peer state's policy choices.91 Policymakers should take heed; just as Prohibition failed to cure, and even exacerbated, the social ills it attempted to curtail, the federal reign over marijuana law could do the same; it has already created an enormous taxpayer burden while leading to increased violent crime and addiction.92 Though federal legislators may lose the political soapbox federal regulation so conveniently provides, repeal of the CSA (as it relates to marijuana) will lead to the same benefits we saw following enactment of the Twenty-First Amendment:93 reduced corruption and organized crime, job creation, and invigorated addiction support programs.

III. Spillover Effects and Negative Externalities: Evaluating the Criticisms of Consolidated State Control

Commentators have not unanimously rejoiced at the prospect of bolstering state power in drug policymaking. Rejecting the
"Constitutional Alternative" approach to United States drug policy, Michael O'Hear argues that the federal government must "adopt a clear, coherent policy towards state innovation"95 through the adoption of a theory of government control he labels the "Competitive Alternative."96 O'Hear critiques the purely state-based policymaking approach, arguing that it may actually "reduce the degree of decentralization in national drug policy by consolidating state control, and . . . [producing] perverse incentives that warrant federal intervention."97 The first section of this Part outlines O'Hear's concerns with an outright reversion of federal power to state regulatory authority. The following section attempts to rebut O'Hear's most salient critiques by utilizing traditional theory in the field of jurisdictional competition. The next section follows with an analysis of the focal points of O'Hear's "Competitive Alternative," evaluating the federal media machine and asset forfeiture laws. Finally, this Part attempts to reconcile and incorporate some of O'Hear's most salient and practical points with this article's approach to state control over marijuana policy. O'Hear first argues that consolidated state control may lead to less local autonomy than under the "Cooperative Federalist" regime. This is so because much of the support for federal drug control goes directly to localities, e.g., monetary grants, referral for federal prosecution, and equitable sharing statutes that allow local enforcement to keep some of the proceeds of drug confiscations.99 Local autonomy may be engendered by federal prosecutorial incentives as well, where United States Attorneys are subject to political pressures and must address local needs.100 At the very least, O'Hear argues
that state regulatory control would not clearly do a better job of regulatory policymaking than the current regime, stating that "[n]otwithstanding the benefits of decentralization, federal control may still be justified on the basis of 'Race to the Bottom' pressures or spillover effects."101 He argues that dominant state regulatory authority may create a "Race to the Bottom" market failure wherein states will create continually relaxed marijuana regulation laws in an effort to garner tax revenues from legalized sale and distribution.102 The critique further predicts that "spillover effects" may undermine the workability of such a decentralization framework because states that relax their drug policy may create problematic negative externalities in "neighboring get-tough states."103 O'Hear points out that a significant part of the cost of marijuana lies in the risk and subterfuge involved in the illegal trafficking regime, which inflates the price.104 Consequently, when states legalize the process, prices will deflate, attracting potential users in neighboring states, states that maintain the illegality of marijuana use, possession, and distribution.105 In response to the alleged failings of state regulatory dominance, O'Hear argues for implementation of his own "Competitive Alternative." Though still grounded in a presumption of decentralized policymaking, O'Hear additionally focuses on reducing federal distortion of drug policy information and increasing local political control over federal drug enforcement. That presumption of decentralized authority is in many ways hard to dispute. Moreover, hypothetical fears can be assuaged, and state-based authority validated, by analogy to the current alcohol regulation framework, which would take nothing more than repeal of the CSA as it relates to marijuana control. Further, the main argument for federalization,107 one recognized by O'Hear,108 often lies in an attempt to curtail negative externalities and potential "races to the bottom" among states.109 It is unclear, however, that federal regulation would be the answer, even if these market failures existed. More relevant to this discussion is the uncertainty that state-based marijuana policy is likely to lead to the problems
highlighted in O'Hear's critique.

1. Federal Regulation May Not Be the Answer to a Race to the Bottom for Marijuana Laws

As previously discussed, O'Hear points out the likelihood of a "Race to the Bottom," and potential spillover effects resulting from the decentralization of drug policy.110 A common solution to these state-based market failures is preemptive federal regulatory control, but it is not clear that the "Race to the Bottom" is the market failure that it is presumed to be. The typical argument for federal authority is simple; where federal regulation preempts state policymaking in the field, states will no longer be able to engage in an inefficient policy battle with negative social utility.113 Revesz used federal authority in environmental policy to rebut the preemption rationale: [F]ederal environmental standards can have adverse effects on other state programs. Such secondary effects must be considered in evaluating the desirability of federal [regulation directed at the] negative effects of interstate competition. Recall that the central tenet of race-to-the-bottom claims is that competition will lead to the reduction of social welfare; the assertion that states enact suboptimally lax environmental standards is simply a consequence of this more basic problem. In the face of federal environmental regulation, however, states will continue to compete for industry by adjusting the incentive structure of other state programs. Federal regulation thus will not solve the prisoner's dilemma.114 Revesz simply points out that regulation and social
welfare are not created in a vacuum. The government should, and does, regulate in a complex matrix of policies involving a number of different variables that all impact each other. To take
one of the variables that suffers from market failure and impose a uniform federal standard upon it does not necessarily lead to increased social welfare on the whole. In essence, desirable
regulation is too complex to achieve through piecemeal centralization; it is akin to plugging the dam with a federal forefinger while watching the wall fissure just out of reach. Unfortunately for
federalism and state autonomy, the theoretical result from such an approach is complete centralization in the federal government.115 So what is to be made of the environmental-marijuana
analogy? Revesz points to competing regulatory variables in the environmental arena, like workers' rights and corporate taxation, which are inevitably tied to industry location decisions.116
Thus, when several variables play into corporate decision-making, one state-based regulatory change is unlikely to provide the incentives necessary to propagate a "Race to the Bottom." A
possible counterargument to the application of this analogy here may elucidate a number of distinctions in marijuana regulation. For example, political decisions in the environmental arena
are often aimed at maintaining the status quo-keeping industry in place or simply combating more stringent environmental policies-while progressive marijuana regulation runs against the
status quo. Thus, rather than Revesz's world of environmental regulations playing a small factor in business incentives, marijuana regulation may play out differently. To be sure, political
inertia is undoubtedly an important consideration when confronting change. Here, however, it is less than certain that the pivotal "status quo" distinction makes a difference in the theoretical
argument; or practically whether it creates a barrier at all. Rather, it seems that the anti-drug status quo is less of a political fallback and more of a public perception and interest group driver
that would be balanced in a jurisdictional competition framework. In fact, it is more likely that the complexities of regulatory dynamics would be more robust in the market for marijuana than
contemplated in Revesz's critique of federal oversight in the environmental arena. Comparing the market for marijuana laws to the environmental law patchwork, there are several apparent
variables in a complex regulatory scheme that would play against a centralization argument. Simply speaking, one such variable lies in economic growth itself. Much like pollution, if a state is
not allowed to provide for legal marijuana sales-and hence benefit from economic growth and taxes-the state may loosen standards in other areas to compensate. Further, drug tourism is not
an unheard-of phenomenon; it is seen internationally, as well as in states that allow for purchase without local citizenship.117 Federalized drug prohibition could thus lead to overly lax
enforcement in tourism related to other vice goods like gambling or prostitution. Furthermore, it is plausible that, given the extensive prison overpopulation and the overwhelming burden
faced by enforcement authorities, policymakers will institute overly lenient penalties for non-drug crimes, or prosecutors may simply not enforce crimes to the full extent of the law. In sum, just as Revesz argues that federal oversight is an unwise option for corrective regulation in the environmental arena, preemptive regulation in the marijuana arena is similarly disjunctive. Even if a "Race to the Bottom" does exist for marijuana laws, federal oversight may lead to inefficient regulation in other economic areas, especially tourism, in addition to penal laws and their enforcement.

2. It Is Not Clear That Jurisdictional Competition for Marijuana Laws Will Lead to a Race to the Bottom Among States

The preceding discussion may be largely irrelevant, however, if marijuana policy is not conducive to such market failures. Consider, then, how these economic failures play out in the realm of marijuana policy. In the criminal justice arena, scholars focus extensively on the effects of penalties
on crime displacement and jurisdictional infighting that may lead to inefficient collective-action problems. This market failure contemplates peer jurisdictions "spending increasingly high resources on their criminal justice system[s] simply to deflect crime to their neighbors."118 Indeed, "in recent decades [states] have shown increasing awareness of the criminal justice policies of their sister states."119 Scholars utilizing this approach are apt to recognize the need for federal oversight to eliminate the state "race" to overly harsh criminal penalties.120 As previously discussed, a similar argument has been heavily cited and remarked upon in the environmental field, noting the argument for federal regulation to circumvent a state-industrial "Race to the Bottom" over pollution standards.121 The clearly established "Race to the Bottom" argument in other areas can certainly be applied to criminal justice standards, wherein criminals are assumed to be rational actors who will commit crimes in the jurisdictions where the costs associated with illegal activity are the lowest.122 When one state implements stricter criminal laws or penalties, it is posited that criminals will at least consider relocating to a jurisdiction with more lenient standards.123 In the face of criminal displacement, recipient states that presumably do not want the social ills associated with more criminals among their populace will respond in kind and institute even harsher penalties in an effort to displace the criminal population within their borders.124 This established model, however, only reasonably applies to criminal activities with little to no societal benefits; for instance, violent crimes, sex crimes, and larceny. In contrast,
regardless of the negative effects of drug use itself, a large proportion of the negative societal consequences of criminal drug activity are due to the nature of illicitness itself. To be sure, while drug use may lead to community costs in the form of increased health care outlays, rehabilitation, and reduced economic productivity, the overwhelming demand for drugs creates an enormous underground market,125 policed by drug dealers, street gangs, organized crime syndicates, and drug cartels. Whereas government-sanctioned markets are transparent and regulated, underground "shadow economies"126 lead to regulation by those who control distribution: the criminal underworld and organized crime syndicates. The end result is a drug trade that leads to overwhelming violence, not just in manufacturing countries, but also in the developed countries that fuel the demand for these illicit substances.127 On one hand, federal regulation of drug markets has led to remarkable societal consequences in the form of crime and violence. On the other hand, criminal justice theorists suggest a potential "Race to the Bottom" leading to overly harsh criminal penalties. It is not clear, however, that a "Race to the Bottom" will occur in the marijuana market. Empirics and logic suggest a successful and societally beneficial market for drug legalization.128 For example, in contrast to state exile of pedophiles and violent criminals, states stand to benefit from increased tax revenues,129 less violent crime,130 and significant economic growth by taking an already existing market aboveground.131 In order for
the "Race to the Bottom" theory to attach, there must be negative externalities sufficiently realized to incentivize states to change their laws in an attempt to remedy those externalities. First,
consider Teichman's theory of overly strict regulation to effectively exile criminals from within a jurisdiction. This is hardly a far-fetched theory. Rather, state and local policies regarding ex-convicts have shown just such an effort to exile criminals through bussing and relocation efforts.132 Taking the next step, altering penal laws to move criminals to other jurisdictions is also plausible. However, this theory's application in the realm of marijuana laws is less than certain and seemingly far-fetched. The negative externalities associated with criminal activities
seemingly stem mostly from violence and economic losses through theft. Though addiction, medical problems, homelessness and vagrancy undoubtedly contribute to the attacks against
legalization, these factors exist whether marijuana is legal or illegal, as we have seen for decades. But if a jurisdiction legalizes marijuana, the violent crime variable will presumably be
eliminated as the market moves out of the hands of organized crime and into retail outlets.133 The more relevant question is whether marijuana use will increase with legalization and, if it
does, whether the negative impacts of citizen use will outweigh the benefits, such that the jurisdiction will seek to move users outside its boundaries. Even assuming that most of the populace
will begin to use, or even abuse, marijuana, it does not necessarily follow that there will be far-reaching negative public impacts. It is certainly possible that worker productivity may decrease while accidents, DUIs, and addiction rehabilitation needs increase. It is also necessary to consider the moral stigma and negative externalities associated with inter-jurisdictional trafficking.134 Policymakers must balance these negative implications with the possible benefits of taxation, reduced prison populations, increased citizen autonomy and happiness, and
reduced violent crime through elimination of the drug underworld. In contrast to the unsavory criminal activities noted by Teichman, where the criminal element moves from one jurisdiction
to another, unwanted by all, marijuana users and would-be distributors would bring both benefits and possible detriments to a jurisdiction, leaving state and local government to make the
decisions jurisdictional competition theorists argue should be made by decentralized government in order to further efficient and innovative lawmaking. Even if Teichman's "Race to the
Bottom" for overregulation does not apply to the market for marijuana laws, an argument could be made that the opposite may be true: under-regulation incited by jurisdictions competing for tax revenues, drug tourism, and economic growth. But just as liquor laws faced the Teetotalers in the early 20th century, progressive drug policy faces a strong check through opposition in the religious right and parent advocacy groups, among many others. The marijuana policy battlefield offers a multitude of variables for policymakers to balance as they attempt to appease competing interests, favoring policymaking instituted by the states, with constituencies influenced by a variety of considerations including corporate, retail, and direct taxation; citizen
autonomy and happiness; economic growth; and reduced crime and prison populations.135 Driving anti-marijuana legislation are various interest groups intent on entrenching the status quo.
For example, the biggest contributors to Partnership for a Drug-Free America are the Prison Industrial Complex, Big Pharmaceutical, Big Tobacco, and the alcohol manufacturing industry.136 If
under-regulation is the concerning factor in a "Race to the Bottom" analysis, these major interest groups will play a strong role in combating increasingly lenient marijuana policy. Considering that a "Race to the Bottom" may end in overly restrictive or overly lenient lawmaking depending on the interests at play, the aforementioned competing interests should be robust enough to avoid a "race" in either direction. Given the extent of politically salient variables in play, state autonomy in policymaking would seem particularly apt in the context of marijuana policy. Indeed, principles of federalism suggest that states be able to choose the laws most applicable to the characteristics of the jurisdiction, "thereby giving mobile citizens many different regulatory regimes from which to choose when selecting a place to live."137 Stepping outside the realm of theory, reality has similarly not played out the way an under-regulating "Race to the Bottom" would dictate. Only sixteen states138 have enacted progressive marijuana regulation in the face of the current administration's tolerant Executive Order139 and general federal reliance on state enforcement.140
Though this article's proposed solution would remove the supposed federal barrier, possibly giving hesitant states the last push necessary to enter the "race" to legalization, a map of current
drug laws indicates that the impetus for progressive marijuana laws is likely more strongly tied to geographical ideologies and preferences than fear of the federal government's stance on drug
laws.141 For instance, the most progressive laws tend to be on the West Coast: California, Nevada, Oregon, Washington, Hawaii, Alaska, Montana, and Colorado.142 In contrast, southern "Bible Belt" states have the strictest stance on marijuana, with essentially zero-tolerance laws in Texas, Louisiana, Alabama, South Carolina, Georgia, Florida, Arkansas, Oklahoma, and Tennessee.143 While hardly conclusive evidence of ideological preference influencing marijuana policymaking, the religious, tobacco, and prison industrial interest groups' stranglehold over the Southeast may well keep states in this region from entertaining progressive legalization laws, even if the federal government leaves the picture. Given the nature of competition for marijuana laws, state autonomy seems to be the best alternative in an effort to achieve the greatest public
welfare. In the absence of over-burdensome negative externalities and a race to overly strict or overly lax
marijuana laws, the federal government's role should be limited to international traffic cop and
interstate referee. Even if the aforementioned market failures do exist in a competitive framework for marijuana regulation, it is wholly unclear that the
federal government's role as uniform legislator is the proper solution where states have other
regulatory avenues to exploit in an effort to establish economic growth and constituent appeasement.
Even Teichman concedes that "the U[nited] S[tates] government has a dismal track record when it comes to criminal justice, very often manifesting an irrational 'tough on crime' attitude irrespective of legislative context."144 Prohibition's catastrophic failure should give policymakers keen background insight into marijuana's current federal regulatory future, opening the door for state and local authority with the repeal of the CSA's prohibition on marijuana use, possession, and distribution.

C. The Competitive Alternative's Practical Concerns

Though the market for marijuana policy likely includes the competing interests necessary to avoid the problems encountered in state-based markets, O'Hear nonetheless makes several salient suggestions for creating an efficient model for decentralization of marijuana policy and enforcement, regardless of the federal government's ultimate policymaking role.
The "Competitive Alternative" first highlights federal policies and practices that distort the political debate over drug policy, hampering state and local efforts that conflict with the federal "War on Drugs."146 Federal control inhibits state-based policy on a number of fronts. For instance, the federal marketing machine places an overwhelmingly negative spin on marijuana and progressive drug enforcement policy.147 This federal message stifles alternatives to the current status quo, including decriminalization or medical marijuana programs.148 In response, the "Competitive Alternative" posits that federal funds for advertising and marketing might be decentralized and turned over to the states to use at their discretion, or at the very least, with minimal federal funding conditions attached.149 In addition to revamping the federal media machine, O'Hear articulates a need for local oversight over federal enforcement.150 This point harkens to the limited federal resources for drug policy implementation, yet acknowledges the overarching need for occasional federal enforcement and prosecution. O'Hear proposes a possible reform, requiring a local official, such as a District Attorney, to approve federal prosecutions within municipal boundaries so as to establish "systematic checks on federal enforcement discretion."151 While O'Hear maintains some federal control, he does not discount the need for local policymaking and enforcement. The "Competitive Alternative" keenly looks to
the incentives driving the municipal actors who forge drug policy. Highlighting the danger of spillover effects or a potential "Race to the Bottom," the
"Competitive Alternative" calls for continued federal supremacy, with local control in the political and enforcement regimes, while systematically overhauling the federal media machine. In the
end, there may be no perfect regulatory scheme, but if the past several decades of drug regulation have shown us anything, it is that the United States fosters a vastly inefficient and over-budgeted federal drug regime imposed at the expense of state innovation. Recent years have shown expansive state-based marijuana law reform, and the federal government should respond in
turn, ceding regulatory authority to the states and local governments.

D. The Give and Take - Putting the Competitive Alternative to Work

"[W]ithin our system of government, state control stands not as an endpoint on the decentralization spectrum, but as a midpoint between federal and local control."157 Indeed, O'Hear argues that the same tenets justifying decentralization to the states support further reversion to local governments.158 For example, citizen mobility is greater at the local level than across state lines and, rather than fifty state-level policy innovators, localities would provide tens of thousands of opportunities for experimentation.159 O'Hear's "Competitive Alternative" makes local authorities the gatekeepers to federal enforcement authority.160 Further, disassembling the federal media machine and eliminating the misaligned forfeiture laws are central propositions of the "Competitive Alternative."161 While the previous section made the case for the "Constitutional Alternative," supporting a strong decentralization framework, this section analyzes the applicability of O'Hear's "Competitive Alternative" in an attempt to improve the state-based framework and respond to some of the likely shortcomings inherent in over-expansive decentralization.

1. Questioning the Localist Paradigm

The "Competitive Alternative" pushes strongly for extensive decentralization, past the state level and on to local authorities, while maintaining a co-extensive federal regime.162 Local
governments, however, lack the financial resources of states and have insufficient economies of scale to justify expensive enforcement mechanisms.163 In addition, while it is easier for
criminals to cross municipality lines, local enforcement jurisdiction only extends to local boundaries.164 Most importantly, local governments rely on the state to provide an overarching
criminal code and prison system. In O'Hear's defense, he does acknowledge these problems and notes a possible solution of state funding while allowing for local implementation at municipalities' discretion.166 O'Hear argues that municipal decentralization accounts for local implementation instead of the state in the same way it does for state authority vis-à-vis the
federal government; essentially the argument is that if some decentralization is good, then more is better.167 The consequences, however, of policymaking authority may not affect local
governments in the same way they do state governments. Indeed, citizen autonomy is undoubtedly benefited by even more localized policymaking, increasing the policy choices of United
States citizens from fifty states to tens of thousands of counties or municipalities. But the ultimate answer may lie in the incentives already encountered by the entrenchment and proliferation
of the federal "War on Drugs" in the first place; policymakers seek to gain political clout with their constituencies while paying for as little of the program as possible.168 Just as federal
legislators do not want to foot the bill for drug enforcement without the political windfall that comes with it, state legislators do not want to provide the implementation funds for policies that
they may not agree with. Given that the states currently enforce the majority of marijuana violations, implement and fund the penal institutions, and would be the main beneficiaries of state
corporate, sales, and direct drug taxes, the lawmaking authority and implementation should remain with the states, rather than localities that do not have the means to implement their own
policy choices. This is not to say that states could not relinquish exclusive control, leaving authority with the local government, just that they would not be forced to do so, as O'Hear seems to
argue. Just as the Twenty-First Amendment places plenary control in the hands of the states, repeal of the CSA's marijuana restrictions would leave authority and implementation solely to state discretion. While some states may pass policymaking authority down to localities, such an outcome would not be required, allowing state legislators to make the decision as to where state funds and the resulting political consequences go. It is also unclear why O'Hear posits the need for local authorities to serve as gatekeepers to federal enforcement authority169 as opposed to a
purely state-based mechanism, removing the need for federal enforcement in intrastate marijuana policy. While the local-federal cooperative would put more power in the hands of local
authorities, the "Competitive Alternative" uses a roundabout mechanism for empowering local politicians, while still supporting federal entrenchment. Indeed, rather than bolstering extensive
bureaucracy and the resulting squabble between state and federal officials (not to mention the looming threat of federal bullying of local District Attorneys), an alternative would be for states and local governments to maintain concurrent enforcement authority, keeping the federal government out of intrastate marijuana issues. In sum, O'Hear's localized enforcement regimes seem
less responsive to the shortcomings of state-based regulatory authority, and more to amending some pitfalls in the federally dominated regulatory model. For instance, O'Hear argues for
localization on one hand in making local law enforcement accountable to the local community, yet his framework notes that "local police would become answerable not only to federal law
enforcement authorities, but also to local leaders who stand outside the law enforcement establishment."171 Rather than decentralization and the workability of a state-local dichotomy in
incentivizing efficient enforcement allocation, the "Competitive Alternative" seemingly adapts the current federal framework by instituting a more localized federal regime, appeasing
decentralization advocates while tiptoeing around the status quo.

2. Learning From the Competitive Alternative

The "Competitive Alternative" makes a good point about the perverse incentives generated by current forfeiture laws and articulates a very workable idea in the form of redirection to a state general fund.172 Because municipal actors respond to drug policies at the
ground level, forfeiture and sharing laws incentivize local enforcement personnel to over-enforce drug laws in an effort to boost local coffers with the proceeds from drug busts. Rather than
redirecting all enforcement to the state, O'Hear smartly recognizes the ability to redirect assets to the state level.173 The "Competitive Alternative" also cogently points to the problems with the federal anti-drug media machine, which distorts the issues surrounding marijuana legislation and pits reform groups against politicians responding to the federal anti-drug stance. O'Hear sensibly argues for federal advertising funding to be directed instead to state marketing budgets or to Congressional spending bills.175 The
importance of this directive, however, may be limited under a "Constitutional Alternative" framework, as the federal government plays such a limited role in marijuana enforcement that
continued federal advertising spending would be unlikely. Unlike the alcohol regulatory context, however, there are still many other drugs that would fall under the purview of the CSA,
maintaining the federal government's incentive to continue its campaign against illegal drugs. Thus, it does appear that some control over the federal media machine is necessary, and directing
at least a portion of its funds as it relates to marijuana is imperative. In addition, stipulations as to the federal content and the overarching "War on Drugs" message would be essential to
fostering state innovation and adoption of progressive marijuana policies. Ultimately, O'Hear's "Competitive Alternative" argument, while putting forth strong ideas for specific reforms, is
seemingly unresponsive to any purported shortcomings of state-based regulatory authority. Instead of elucidating the decentralization regime he purports to stand behind, O'Hear makes
adjustments to much of the federally entrenched framework we see in place today, without accounting for the reality, and necessity of, state innovation and competition in the market for
marijuana. Nonetheless, O'Hear makes cogent points about the federal role in enforcement, incentives, and media content. Accordingly, this article recognizes the need to adopt reformed
forfeiture laws, asset redirection, and redistributed government media funding so as to properly set the stage for state-based jurisdiction over marijuana laws.

IV. A Final Concern Raised by Decentralization: The "Race to Nowhere"

The previous Part set out to raise, and refute, some of the most salient concerns surrounding state consolidation of marijuana policy. Among the most
prominent arguments against divergent state-based policymaking is the "Race to the Bottom" effect garnered by individualized competition for (or against) an identifiable social policy
repercussion. As discussed previously, the variety of interests inherent in the market for marijuana does not lend it to a race toward overly stringent or lenient regulation and the increasingly inefficient outcomes inherent in one-upping neighboring states. Interesting, though, is the possibility of a race not to the bottom, but simply to one extreme or the other, giving a jurisdiction an all-or-nothing choice: legalization or a complete ban. This Part will first explain a hypothetical in which jurisdictions opt for either full legalization or complete illegalization in an effort to avoid the criminal element entrenched in illegal drug distribution.

B. Undercutting the Assumptions Necessary to Effectuate a "Race to Nowhere"

Clearly, the aforementioned result is not optimal for a state that, all else being equal, chooses decriminalization,
medical marijuana, or drug treatment programs over full legalization or a complete ban. Policymakers faced with an all or nothing choice will opt for the lesser of two evils, whatever that
choice might be, but inefficient regardless. This hypothetical, however, rests on several assumptions, none of which can be fully realized in a world of bundled laws and complex regulatory
frameworks. For the "Race to Nowhere" to occur there must be citizen mobility, full information, and unrealized benefits from the centrist choice. First, consider the ability and willingness of
citizens to move from one jurisdiction to another based on the marijuana policies within the state. With more than fourteen million marijuana users in the United States, this is hardly a trifling
variable.176 But of those fourteen million users, it is entirely unclear how many would choose to move based on the legality of their marijuana use when all they have ever known is a
complete ban. Moreover, it is questionable how many would choose to relocate at the expense of families, jobs, and geographic ties. Assuming that many users choose to remain in a total-ban
jurisdiction, State A, criminal distributors would have a market in both State A and State C, the intermediate, decriminalized, jurisdiction. Given this counterargument to full mobility, we can
expect a viable criminal distribution market spread across both abolitionist and intermediate jurisdictions. One part of the hypothetical should remain true, however, in that the criminal
element would remain displaced in State B, where distribution is legal, because the criminal distribution chain would be overwhelmed by regulated retail sales. Unlike mobility, full information
is more likely to come to fruition in this hypothetical. With the overwhelming use of the Internet and the salience of the marijuana policy debate, both consumers and distributors are likely
fully aware of the relevant policies in place. On the demand side, any consumer making a decision to move jurisdictions based on the marijuana policy is undoubtedly informed of the law when
making such a decision. Even if not making a mobility decision based on another jurisdiction's marijuana laws, it seems likely that a drug user, accustomed to illicit substance use and avoiding
enforcement, will be aware of current policy and upcoming changes to policy. On the supply side, just as we would expect a businessman to know the regulations and laws that apply to the
business, drug dealers or legal dispensaries will know the law, how to avoid or comply with it, and surely be abreast of changes in policy. The true uncertainty in full information is more likely
to be through the lens of the policymaker. A legislator faced with battling interest groups may be more informed on highly specific issues and less apprised of the indirect criminal costs
associated with marijuana distribution and displacement from other jurisdictions. The costs and benefits of a proposed intermediate policy are probably the most difficult to project and account for in
a hypothetical "Race to Nowhere." For such a race to occur, the hypothetical assumes that the benefits associated with a centrist marijuana policy choice would be outweighed by criminal
activity within its borders based on the policy actions of State A and State B. However, given uncertain citizen mobility and possible criminal disbursement between State C and State B, the
costs associated with such a choice may be limited. Further, state policymakers may not have full information on the consequences of their decisions relative to increased criminal distributor
influx into the jurisdiction. Moreover, even if these two factors are fully realized, legislators may find that the benefits of an intermediate policy outweigh the costs of any criminal influx. For
instance, reduced enforcement costs on minor possession may be redistributed to enforcement on distributors and trafficking or simply used for drug treatment. The intermediate policy itself
may be focused on public health, instituting drug courts, or rehabilitation,177 rather than turning a blind eye to addiction as many abolitionist states do, or simply promoting use as a legalization state does. The "Race to Nowhere" is likely not a foregone conclusion, though it relies on several assumptions that are almost impossible to predict. Focusing on the analogy to alcohol regulation leads
to the conclusion that the race is at least plausible, though limited. While states are free to implement
their own alcohol policies, none has kept alcohol completely illegal; some states, however, maintained prohibition for several years
following enactment of the Twenty-First Amendment. But some states do allow counties and municipalities to enact their own
alcohol restrictions, and many have done so, opting for complete bans within county lines; restricted alcohol sales on certain days of the week; or requiring distribution through
government suppliers.179 While some of these limitations are less than a complete ban, and clearly not full legalization, neither are they akin to decriminalization where one side of the
economic chain, consumption, is legalized and the other side, distribution, is criminal. Centrist alcohol ordinances, such as Sunday sales and government distribution, would not be expected to
garner a bootlegging criminal element. Criminals are unlikely to move to take advantage of a one-day black market or to attempt to circumvent government distribution when consumers can
easily accommodate the law and still consume alcohol. Liquor law regulation in this context has not progressed toward decriminalization or substance abuse programs in lieu of criminal
punishment. Rather, alcohol policies reflect complete bans or legalization with retail restrictions. The alcohol analogy, though not perfectly aligned to marijuana regulation, seems to support a "Race to Nowhere." Even if a "Race to Nowhere" exists, the cure is not federal regulation. Prohibition and its aftermath tell us that much. Beyond the alcohol regulatory analogy, the past generations of over-
enforcement; billions of dollars of federal taxpayer money; seeming absence of a "Race to the Bottom" or
substantial negative externalities; exceedingly high violent crime rates associated with illicit drugs; and unclear federal enforcement policy lead to the conclusion that decentralization is the
best regulatory stance for marijuana laws.

V. Conclusion

Currently, more than 24.8 million people are eligible to receive medical marijuana licenses under state laws, and approximately 730,000 people actually do.180 Medical marijuana markets exist in seven states: California, Colorado, Michigan, Montana, Oregon, Washington, and New Mexico, and five more will open this year in Arizona, Maine, New Jersey, Rhode Island, and the District of Columbia.181 Economically speaking, the marijuana marketplace is projected to more than double within the next five years.182 Outside of the capitalist retail market for marijuana, the question remains whether there is a viable
market for innovative state laws. As addressed in Part II.A.-B., the federal regime over marijuana laws is hampering innovation and efficient policymaking, leaving overly harsh federal laws that go largely unenforced in practice and by Executive Order. State legislators and enforcement authorities are left in the dark, and United States citizens are faced with an unclear state-federal dichotomy by which distribution may be illegal but consumption is decriminalized. Even more striking, dispensary
operators may be legally licensed by the state and yet subject to federal enforcement for violation of the Controlled Substances Act.183 Even outside the lack of clarity and poor suitability of federal authority, we should expect a fairly robust "market" for marijuana laws where there are competing interests, an informed and reactive populace, and a primed state-to-state competition for economic
growth and citizenry.184 Further, as discussed in Part III.B, there does not seem to be reason to expect marijuana policies to be ill-suited for efficient competition by promoting a "Race to the Bottom"185 through imperfect information, negative externalities, or power inequalities between suppliers and consumers of laws.186 In addition, federalization as a remedy to an unclear problem stifles innovation and experimentation, replacing jurisdictional competition with regulatory oversight and unwavering rules.187 Most simply, a given legal system would prefer state laws if the "market" has the ability to produce efficient laws and will not inflict market failures leading to overly stringent or lax regulation.188 In the market for marijuana laws, one would expect to encounter less need for consistency, uniformity, and correction of market failure because jurisdictional "markets" in the drug trade will presumably be transparent and consumers will have relatively full information, while states will have appropriate incentives to optimize laws.189 The legalization, decriminalization, and medicalization of marijuana undoubtedly comprise a story in its early
chapters. As states continue to adopt progressive marijuana laws, the legal marijuana industry continues to grow, and the executive branch ignores the strictures of the CSA, the structure of the law remains unsettled, and many are waiting to see how the policymaking game will play out. Interest groups and lobbyists are no strangers to this game,
pitting Big Tobacco, Big Pharma, the Prison Industrial Complex, and the Religious Right against a progressive populace and state legislators looking to fill their recession-ragged coffers while
cutting back on drug-induced violence. The federal regulatory regime and the politically motivated and maintained "War on Drugs" costs American taxpayers billions of dollars a year in a
seemingly fruitless attempt to rid the American populace of the social and moral hazards of drug use. Yet the social ills of marijuana use stem almost entirely from its illicitness,190 inducing
violent organized crime but causing fewer deaths each year than alcohol191 or tobacco use;192 marijuana's addiction rate is also a mere pittance compared to nicotine addiction.193 The
United States system of federalism is premised on extensive state autonomy, leading to experimentation and innovation in policymaking, concurrent with the citizenry's ability to choose the
laws they want applied by locating in a jurisdiction with the bundle of laws they find most appealing. In accordance with this paradigm, we have already seen the bulwark of progressive
marijuana laws enacted on the West Coast,194 and almost no innovation in the Southeast, seemingly in line with population ideologies in those respective locales.195 Cutting the federal
government entirely out of marijuana regulation and enforcement is neither plausible, nor advisable. The drug trade is too international to limit federal involvement and states rely on federal
enforcement where distribution and trafficking crosses state lines. Economies of scale also empower the federal government to utilize powerful resources in an effort to keep pace with well-
funded drug syndicates. Further, federal legislators have too much at stake in the drug debate to let it go entirely. As seen in the alcohol regulatory scheme, we can expect to see Congress
utilize its spending power to incentivize states to act in accordance with federal objectives.196 That being said, two central arguments from the "Competitive Alternative" give informed
guidance to Congress, arguing to rein back on forfeiture laws and simultaneously cut spending on federal media campaigns against marijuana use.197 Ultimately, it seems the
marijuana train has left the station and has the momentum necessary to establish its legitimacy in the
United States. The million-dollar question then is how it will be regulated. From the standpoint of
history and logic, state authority is the best vehicle for public welfare, citizen autonomy, and efficient
regulation.
their survival is threatened by the values of the free world, epitomized by the United States. And they are
thriving as the U.S. has retreated. The global freedom index has declined for ten consecutive years. No
one likes to talk about the United States as a global policeman, but this is what happens when there is no cop
on the beat.
American leadership begins at home, right here. America cannot lead the world on democracy and human
rights if there is no unity on the meaning and importance of these things. Leadership is required to make that case clearly and
powerfully. Right now, Americans are engaged in politics at a level not seen in decades. It is an opportunity
for them to rediscover that making America great begins with believing America can be great.
The Cold War was won on American values that were shared by both parties and nearly every American.
Institutions that were created by a Democrat, Truman, were triumphant forty years later thanks to the courage of a
Republican, Reagan. This bipartisan consistency created the decades of strategic stability that is the great
strength of democracies. Strong institutions that outlast politicians allow for long-range planning. In
contrast, dictators can operate only tactically, not strategically , because they are not constrained by the balance of
powers, but cannot afford to think beyond their own survival. This is why a dictator like Putin has an advantage in
chaos, the ability to move quickly. This can only be met by strategy, by long-term goals that are based on
shared values , not on polls and cable news.
The fear of making things worse has paralyzed [prevented] the United States from trying to make things better.
There will always be setbacks, but the United States cannot quit. The spread of democracy is the only proven
remedy for nearly every crisis that plagues the world today. War, famine, poverty, terrorism–all are
generated and exacerbated by authoritarian regimes. A policy of America First inevitably puts American security last.
American leadership is required because there is no one else , and because it is good for America. There is no
weapon or wall that is more powerful for security than America being envied, imitated, and admired around
the world . Admired not for being perfect, but for having the exceptional courage to always try to be better. Thank you.
1AC – Plan
The United States federal government should enact substantial criminal justice reform
in the sentencing and policing of criminal statutes pertaining to the possession and
distribution of marijuana.
1AC – Treaties
Advantage 2: Treaties
The Drug Conventions will collapse now---a wave of defections is inevitable absent
reform---2021 is key to steer the conventions away from blanket drug prohibition
Pascual 20 [Alfredo Pascual, Marijuana Business Daily, “In major shift, UN drug chief questions whether control treaties involving
cannabis are out of date,” Feb 27, 2020, https://mjbizdaily.com/in-major-shift-un-drug-chief-questions-whether-control-treaties-involving-
cannabis-are-out-of-date/]
The president of the narcotics enforcement agency of the United Nations is questioning whether the
agency’s decades-old drug conventions are outdated given global policy developments in recent years
involving drugs such as cannabis. During a presentation Thursday for the International Narcotics Control Board’s (INCB) 2019 annual report,
President Cornelis P. de Joncheere discussed the developments taking place with regard to cannabis and synthetic drugs. “We have some
fundamental issues around the conventions that state parties will need to start looking at,” he said, adding, “We
have to recognize
that the conventions were drawn up 50 and 60 years ago.” Joncheere said 2021 is “an appropriate time to
look at whether those are still fit for purpose, or whether we need new alternative instruments and
approaches to deal with these problems.” Next year will mark the 60-year anniversary of the 1961 Single Convention on Narcotic Drugs. Kenzi
Riboulet-Zemouli, an independent expert on U.N. drug policy, told Marijuana Business Daily that the
INCB “is the most
authoritative international institution on drug policy – and also the most conservative in its
interpretation of the conventions. “Having the head of the INCB suggesting that the conventions are not
fit for the challenges of the 21st century is already breaking a strong taboo.” Riboulet-Zemouli specifically
highlighted as “unprecedented and unexpected” the INCB president’s mention of possible “new instruments.” “It is possible and
feasible for the international community to update international law,” Riboulet-Zemouli said. “Taboo is the
only reason why there has not been any discussion about a new, a different or another drug treaty since
1988. “Now that this taboo has been broken, perspectives will open.” Joncheere’s comments come as the United
Nations Commission on Narcotic Drugs (CND) – the agency’s main drug policymaking body – is scheduled to meet next week in Vienna to
discuss the World Health Organization’s cannabis recommendations. “One thing is certain: If
the CND rejects the
recommendations of WHO on cannabis, the divide between governments will increase,” Riboulet-Zemouli said.
“If the deadlock surrounding cannabis policy reform persists in the coming years, it will likely
accelerate the end of the policy regime of the conventions as a whole.” Joncheere’s predecessor as INCB head,
Viroj Sumyai, took the reins of a medical cannabis company in Thailand last week. The 2019 INCB report: Thursday’s INCB report for 2019
insisted – as it does every year – that recreational marijuana contravenes international drug control treaties. The Vienna-based agency serves
as the independent and quasi-judicial monitoring body for the implementation of U.N. drug conventions. One of the INCB’s functions is to
make recommendations about how to comply with the treaties. INCB statements have gradually become more progressive over the
years. For instance, the new 2019 report mentions “human rights” more than any previous INCB report in recent years. However, the latest
report’s foreword notes that the INCB “remains concerned at the legislative developments permitting the use of cannabis for ‘recreational’
uses.” “Not only are these developments in contravention of the drug control conventions and the commitments made by States parties, the
consequences for health and well-being, in particular of young people, are of serious concern,” the foreword reads. North America The report
mentions that “measures to decriminalize or legalize cannabis are proliferating in North America” and warns
that consumption is increasing. The report cites the sales of edibles in Canada and legalization in the U.S. state of
Illinois as examples. It notes that, in Canada, “the number of first-time users of cannabis in 2019 was nearly double the estimated number of
first-time users in 2018, when non-medical cannabis was not yet legal.” Europe According to the INCB, “the discussion of different approaches
to regulating cannabis has figured prominently in the policy debate on drug control across Europe.” The report mentions “ an increasing
number of European countries” that are “exploring” or have already “established” medical cannabis
programs. “The majority of European countries allow cannabis to be used only for medical and scientific purposes, in keeping with their
obligations,” the report notes. However, the INCB shows concern about “steps underway toward the legalization of
the non-medical use of cannabis that included the legalization of the cultivation, distribution and use of cannabis for such purposes, notably in
the Netherlands and Luxembourg.” “The developments in a few countries that have legalized or permitted the use of cannabis for
non-medical purposes or that have tolerated its legalization at the subnational level are undermining the universal adherence to the three
international drug control conventions and the commitment to their goals and objectives,” the report warns. New Zealand
gets a special mention in the report because of that nation’s upcoming recreational marijuana
legalization referendum. The INCB reiterated that “any and all legislative or regulatory measures aimed at the legalization of cannabis
for non-medical purposes are inconsistent” with the international drug control treaties. The agency “will continue to monitor policy and legal
developments in New Zealand pertaining to drug control and encourages the Government of New Zealand to continue its constructive dialogue
with the Board to ensure consistency with the drug control conventions.”
US criminalization makes treaty collapse inevitable --- but the plan provides the US
credibility for reforming the conventions.
Tackeff 18 [Michael Tackeff, B.A., 2012, Brown University, Providence, Rhode Island; M.A., 2012, Brown University, Providence, Rhode
Island; Candidate for Doctor of Jurisprudence, 2018, Vanderbilt University Law School, “NOTE: Constructing a "Creative Reading": Will US State
Cannabis Legislation Threaten the Fate of the International Drug Control Treaties?,” 51 Vand. J. Transnat'l L. 247, 294-295, January, 2018, lexis]
With attitudes changing so quickly, the United States must address the tensions between the drug
control treaties it signed and the human rights principles animating the entire international system.
Cannabis simply presents the best opportunity to rethink the treaties, given that United States domestic
law is changing with such speed. An essential argument the United States should make brings these human rights concerns to the
fore and keeps them on the table for future discussions about other narcotics. Simply ignoring the treaties does nothing for
US credibility abroad - cannabis is an issue where the United States can take the lead in rethinking its
international obligations and maintain its position as a chief proponent of international drug control.
Human rights abuses in the name of drug control are carried out every day in American neighborhoods and cities,
and an acknowledgement of this reality gives the United States [*282] credibility to leverage in leading a
formal reassessment of the treaties. Although international drug control regimes and human rights law developed side by side,
the drug control treaties were produced and modified in "an artificial legal vacuum." That is, the two systems
developed concurrently without comingling. Drug policy was specifically omitted from mention in the U.N. Charter;
the signatories understood the subject to rest under the umbrella of development rather than law enforcement.
In more recent years, the U.N. General Assembly has adopted resolutions calling for drug control efforts
that comply with the human rights provisions of the Charter. The principles in the U.N. Charter and the more recent moves in
the General Assembly represent the wiser conception of human rights as applied to drug control. Articulating a human rights approach to drug
control is no mere abstract exercise. "Human rights law recognises that without certain civil and political rights being guaranteed, economic,
social and cultural rights will remain out of reach." A human rights approach to drug control would entail elevating well-being and harm
reduction over a demand and supply reduction strategy: "placing demand and supply reduction as overall objectives or strategy pillars is to
confuse goals, processes and outcomes. The goal, for example, is not demand reduction per se. It is, among other concerns, improved health.
And the strategy is not demand reduction; it is, for example, prevention and treatment." Allowing certain US
states to regulate marijuana is a step in the right direction in bringing the drug treaties into line with
human rights norms, because regulation removes the drug from criminal markets and lifts its users out
of the criminal justice system. The conventions, in a sense, are backwards because they do not
contemplate curing the root causes of drug abuse. In fact, they perpetuate the problem by continuing to
require local criminalization in the face of overwhelming evidence that the criminal market for narcotics remains robust.
Although the drug treaties are silent as to [*283] proportionate penalties for drug use, "if a measure
cannot or has not achieved its stated aim, can it be considered necessary or proportionate?" Cannabis is
the best place to start.
Even if not enforced, the CSA creates legal conflicts that smother the industry
Kamin 14 – Sam Kamin, Professor and Director, Constitutional Rights and Remedies Program,
University of Denver, Sturm College of Law; J.D., Ph.D., University of California, Berkeley, “Cooperative
Federalism and State Marijuana Regulation”, University of Colorado Law Review, Fall, 85 U. Colo. L. Rev.
1105, Lexis
II. A Step in the Right Direction - But Problems Remain By forestalling, at least for now, the threat of federal injunctive suit, the second Cole memo removed much of the uncertainty that has
governed federal-state interaction in this area for the last five years. Although the memo was too long in coming, it made clear that the federal government would give the states an
opportunity to prove themselves capable of managing the negative externalities of marijuana legalization, regulation, and taxation. As such, it is a positive, cooperative vision for the future of
marijuana regulation in this country. n32 But the second Cole memo did not - and no similar memorandum could - remove the ancillary
consequences of marijuana remaining a Schedule I narcotic under the CSA. As marijuana-law reform moves
from a focus on medical use to an increasing emphasis on adult or recreational use, it confronts the consequences of marijuana's continuing
federal prohibition. This Part sets forth some of the principal problems caused by marijuana's continued prohibition before turning to a solution in the next Part. A.
Consequences for the Industry 1. Contracting Because marijuana remains illegal at the federal level,
much of the predictability that comes from enforceable contracts is unavailable to marijuana
practitioners. In 2012, for example, an Arizona state court refused to enforce a loan agreement between
two Arizona residents and a Colorado marijuana dispensary on the basis that the contract was void as
against public policy. n33 Although this ruling had the effect of [*1114] providing a windfall to the illegally-operating dispensary, the court felt itself without recourse; so long
as the trafficking of marijuana remains illegal under federal law, contracts designed to facilitate that conduct remain void. This result reminds us why the enforceability of
contracts is important not just to the parties but to society more generally. When those who have loaned $ 500,000 (the amount in issue in the Arizona
case) to a cash business find themselves without recourse to the courts, they might be tempted to engage in what the law euphemistically refers to as "self-help." Everyone is better off when such disputes can be resolved in the courts. 2. Banking Marijuana businesses are also denied the
most basic of business needs: access to banking services. As has been widely reported, n34 threats of money-laundering prosecution
from the federal government n35 have made banks gun-shy about lending to marijuana businesses.
Currently, in Colorado, no bank will do business with marijuana businesses. n36 There are many negative
consequences of withholding banking services from marijuana businesses. Principally, the lack of
banking services keeps marijuana businesses operating in the shadows of society. As cash businesses,
they are targets for violent crime. Faced with this ever-present threat, marijuana business operators are left with [*1115] a Hobson's choice: they can either remain
cash businesses and accept the risk and stigma that comes with that, or they can attempt to bank surreptitiously, through the use of their personal accounts or holding companies designed to
purge the taint of marijuana transactions. These latter options, of course, open practitioners to the same threat of money-laundering charges that led to the unavailability of banking services in
the first place. The governors of Colorado and Washington appealed to the federal government for assistance with this problem, n37 and in February of 2014 the Department of Justice and the
Department of Treasury's Financial Crimes Enforcement Network released memos purporting to permit banks to do business with those in the marijuana industry. n38 However, the
banking memos, like the second Cole memo which preceded it, stopped short of removing the specter of future enforcement
actions. n39 One leading bank official was immediately quoted as saying, "We're still not going to bank them." n40 3. Legal Services The legal minefield described in the
previous Section calls out for experienced legal counsel to help marijuana practitioners negotiate the complicated, ever-changing web of marijuana rules and regulations. Marijuana's
continuing illegality makes the provision of these legal services particularly fraught, however. As long as
marijuana remains a prohibited substance - and as long as the CSA continues to criminalize those who
aid and abet marijuana distribution or [*1116] join in a conspiracy to distribute it - lawyers who assist their
marijuana clients in setting up or running marijuana businesses necessarily put themselves at risk. Although
the second Cole memo declares that states decriminalizing marijuana would generally be permitted to enforce marijuana laws themselves, the specter of federal
prosecution of marijuana lawyers for aiding and abetting the illegal conduct of their clients continues
to loom. Model Rule of Professional Conduct 1.2(d) n41 and its state analogs prohibit attorneys from knowingly facilitating criminal conduct. A literal reading of that rule would
preclude a lawyer from providing any assistance - e.g., drafting contracts, negotiating leases - to clients whom the attorney knows are engaged in on-going violations of the CSA. In fact, there is
a split of authority among those states that have considered whether providing legal services to the marijuana industry violates a lawyer's obligations under the rules of professional
responsibility. n42 Colorado, having previously found such conduct to violate its state ethics rules, n43 later amended those [*1117] rules to explicitly permit lawyers to serve marijuana
industry clients. n44 As I have argued elsewhere, I believe that other, countervailing policy considerations argue against such a literal reading of Rule 1.2(d) and its state-law equivalents. n45
Because states that are legalizing marijuana - either for medical patients or for adult users - are creating a complex regulatory apparatus, fairness requires the assistance of lawyers in
navigating that system. Without the assistance of competent counsel, a state regulatory regime becomes a trap for the unwary. Furthermore, denying competent legal counsel to those
engaged in the marijuana industry can have profound distributive effects. Powerful actors will be able either to secure legal assistance or to proceed without it; those without the same means
will necessarily be disadvantaged and subject to considerable risk. Nonetheless, marijuana's continuing federal illegality means that
attorneys may be unwilling to serve those who are in critical need of legal services. B. Consequences for
Marijuana Users While negative externalities discussed above primarily affect marijuana practitioners, the consequences are no less profound for those simply wishing to
consume marijuana in compliance with their state's laws. These consequences are real and will persist so long as marijuana remains
prohibited by the CSA; promises from the federal government to let the states [*1118] take the lead in
marijuana enforcement simply do not undo the consequences of federal prohibition. 1. Employment
Currently, one of the biggest impediments to the legalization of marijuana in the states is the fact that those who test positive for marijuana can lose
their employment even if their conduct is entirely consistent with state law. In Colorado, both state n46 and federal courts n47
have held that Colorado's "lawful off-duty conduct" statute does not govern the consumption of marijuana. Because the possession of marijuana remains illegal under federal law, these courts
have reasoned that consuming marijuana is not "lawful" conduct, even if it does not violate state law. Furthermore, the Colorado courts have concluded that an individual fired for testing
positive for marijuana is ineligible for unemployment benefits under the same reasoning, even if that individual is a marijuana patient acting in compliance with state law. n48 2.
Probation/Parole Similarly, state courts have used marijuana's continuing illegality at the federal level to
deny otherwise qualified criminal defendants probation or parole. n49 Because it is generally a standard condition of supervised release -
either following a term of imprisonment or in lieu of one - that the defendant agree to commit no new offenses during the period of [*1119] release, n50 courts have held that a defendant's
positive test for marijuana permits his re-arrest. Unless or until legislatures in marijuana states make explicit provision for marijuana use consistent with state law, n51 the federal prohibition will continue to jeopardize probation and parole for these defendants. 3. Public Benefits A
number of other public benefits, from public housing to student loans to government employment, are
conditioned on the recipient's abstinence from illegal-drug use. For example, the federal program that helps fund local public housing
agencies (PHAs) forbids those agencies from admitting into public housing facilities families that include members who use marijuana. n52 While PHAs have the discretion not to evict residents
who use medical marijuana, n53 that discretion does not extend to admitting marijuana users into public housing even where their use is compliant with state law. A single medical marijuana
patient, in other words, can make an entire [*1120] family ineligible to receive public housing, as long as marijuana remains illegal under federal law. 4. Conclusion This non-exhaustive list of
examples of consequences makes clear that the continued prohibition of marijuana at the federal level leads to unsettled
expectations, not just for those trying to make a living in the marijuana industry but also for those who would take advantage of state laws permitting marijuana use. Deputy
Attorney General Cole stated that federal policy is to let states achieve federal goals through the taxing and regulation
of marijuana rather than state-level prohibition, but the criminality of marijuana at the federal level
makes such experimentation essentially impossible in practice. The following Part proposes a cooperative federalism approach to
marijuana regulation. If states that wish to opt out of the CSA are permitted to do so, if that law simply does not apply within those states,
then they will truly be able to function as laboratories of ideas with regard to marijuana regulation and
taxation. III. A Solution: Making the Second Cole Memo Law The second Cole memo is a cooperative step toward solving the apparent contradiction created when states legalize a drug
that the federal government continues to prohibit. This concluding Part sketches a solution that I hope to expand upon in a later article. n54 I propose that Congress
amend the CSA in a manner that allows states to opt out of its marijuana provisions. The federal
government has already set forth the criteria to be used in determining whether a state is regulating
marijuana in a manner consistent with federal priorities. Under this approach, Congress would authorize
the Attorney General, or some other executive official, to certify that a state is regulating marijuana in a manner
consistent with federal priorities . n55 Upon certification, the state's regulations would [*1121] become the
sole regulations governing marijuana within that state. Those state provisions, rather than the CSA,
would then apply to the manufacture, distribution, and use of marijuana. n56 While this approach might
closely resemble the status quo in which states are allowed to experiment with marijuana legalization
so long as they keep in mind and help achieve federal goals, it has one crucial difference. Under the
current approach, states are allowed to experiment with marijuana law reform through an act of
prosecutorial grace. Those using, selling, or manufacturing marijuana under state law are not subject to
criminal prosecution simply because federal prosecutors have chosen not to prosecute them. This
decision can be undone by yet another memo. A newly elected president may chart a new policy
course or may invoke the wiggle-room written into the second Cole memo. Thus, those using or selling
marijuana pursuant to state law could be arrested and prosecuted without any change in federal law.
But more than that, the problem with the status quo is that marijuana possession, manufacture, and
distribution remain illegal under the second Cole memo. Even if the government keeps its promise not to intervene
in states that have enacted robust marijuana regulations, the continuance of federal marijuana
prohibition has a profound effect in those states. Only by making marijuana truly legal in those states,
by allowing qualified states to opt out of the CSA, can the [*1122] states truly be empowered to chart their
own policy direction.
Biodiversity loss causes extinction---invisible tipping points and it’s a conflict magnifier
Phil Torres 16, the founder of the X-Risks Institute, an affiliate scholar at the Institute for Ethics and
Emerging Technologies, “Biodiversity Loss and the Doomsday Clock: An Invisible Disaster Almost No One
is Talking About” 2016. http://www.commondreams.org/views/2016/02/10/biodiversity-loss-and-
doomsday-clock-invisible-disaster-almost-no-one-talking-about
But there’s another global catastrophe that the Bulletin neglected to consider — a catastrophe that will almost certainly have conflict-
multiplying effects no less than climate change. I’m referring here to biodiversity loss — i.e., the reduction in the total number of species, or in
their population sizes, over time. The fact is that in the past few centuries, the
loss of biological diversity around the world has
accelerated at an incredible pace. Consider the findings of a 2015 paper published in Science Advances. According to this study, we’ve
only recently entered the early stages of the sixth mass extinction event in life’s entire 3.5 billion year history. The
previous mass extinctions are known as the “Big Five,” and the last one wiped out the dinosaurs some 65 million years ago. Unlike these past
tragedies, though, the current mass extinction — called the “Holocene extinction event” — is almost entirely the result of a one species in
particular, namely Homo sapiens (which ironically means the “wise man”). But biodiversity loss isn’t limited to species extinctions. As the
founder of the Long Now Institute, Stewart Brand, suggests in an article for Aeon, one could argue that a more pressing issue is the reduction in
population sizes around the globe. For example, the 3rd Global Biodiversity Report (GBO-3), published in 2010, found that the total abundance
of vertebrates — a category that includes mammals, birds, reptiles, sharks, rays, and amphibians — living in the tropics declined by a whopping
59% between 1970 and 2006. In other words, the population size of creatures with a spine more than halved in only 36 years. The study also
found that farmland birds in Europe have declined by 50% since 1980, birds in North America have declined by 40% between 1968 and 2003,
and nearly 25% of all plant species are currently “threatened with extinction.” The latter statistic is especially worth
noting because many people suffer from what’s called “plant blindness,” according to which we fail “to recognize the importance of plants in
the biosphere and in human affairs.” Indeed, plants form the very bottom of the food chains upon which human life ultimately depends. Even
more disturbing is the claim that amphibians “face the greatest risk” of extinction, with “42%
of all amphibian species … declining
in population,” as the GBO-3 reports. Consistent with this, a more recent study from 2013 that focused on North America found that
“frogs, toads and salamanders in the United States are disappearing from their habitats … at an alarming and rapid rate,” and are projected to
“disappear from half of the habitats they currently occupy in about 20 years.” The
decline of amphibian populations is
ominous because amphibians are “ecological indicators” that are more sensitive to environmental
changes than other organisms. As such they are the “canaries in the coal mine” that reflect the overall health of the ecosystems in
which they reside. When they start to disappear, bigger problems are sure to follow. Yet another comprehensive
survey of the biosphere comes from the Living Planet Report — and its results are no less dismal than those of the GBO-3. For example, it finds
that the global population of vertebrates between 1970 and 2010 dropped by an unbelievable 52%. Although the authors refrain from making
any predictions based on their data, the reader is welcome to extrapolate this trend into the near future, noting that as ecosystems
weaken, the likelihood of further population losses increases. This study thus concludes that humanity would “need
1.5 Earths to meet the demands we currently make on nature,” meaning that we either need to reduce our collective consumption and adopt
less myopic economic policies or hurry up and start colonizing the solar system. Other studies have found that 20%
of all reptile
species, 48% of all the world's primates, 50% of all freshwater turtles, and 68% of plant species are
currently threatened with extinction. There’s also talk about the Cavendish banana going extinct as a result of a fungus, and
research has confirmed that honey bees, which remain “the most important insect that transfers pollen between flowers and between plants,”
are dying out around the world at an alarming rate due to what’s called “colony collapse disorder” — perhaps a good metaphor for our
technologically advanced civilization and its self-destructive tendencies. Turning to the world’s oceans, one finds few reasons for optimism here
as well. Consider the fact that atmospheric carbon dioxide — the byproduct of burning fossil fuels — is not only warming up the oceans, but it’s
making them far more acidic. The resulting changes in ocean chemistry are inducing a process known as “coral bleaching,” whereby coral loses
the algae (called “zooxanthellae”) that it needs to survive. Today, roughly 60% of coral reefs are in danger of becoming underwater ghost
towns, and some 10% are already dead. This has direct consequences for humanity because coral reefs “provide us with food, construction
materials (limestone) and new medicines,” and in fact “more than half of new cancer drug research is focused on marine organisms.” Similarly,
yet another study found that ocean acidification is becoming so pronounced that the shells of “tiny marine snails that live along North
America’s western coast” are literally dissolving in the water, resulting in “pitted textures” that give the shells a “cauliflower” or “sandpaper”
appearance. Furthermore, human-created pollution that makes its way into the oceans is carving out vast regions in which the amount of
dissolved oxygen is too low for marine life to survive. These regions are called “dead zones,” and the most recent count by Robert Diaz and his
colleagues found more than 500 around the world. The biggest dead zone discovered so far is located in the Baltic Sea, and it’s been estimated
to be about 27,000 square miles, or a little less than the size of New Hampshire, Vermont, and Maryland combined. Scientists have even
discovered an “island” of trash in the middle of the Pacific called the “Great Pacific Garbage Patch” that could be up to “twice the size of the
continental United States.” Similar “patches” of floating plastic debris can be found in the Atlantic and Indian oceans as well, although these are
not quite as impressive. The point is that “Earth’s final frontier” — the oceans — are becoming vast watery graveyards for a huge diversity of
marine lifeforms, and in fact a 2006 paper in Science predicts that there could be virtually no more wild-caught seafood by 2048. Everywhere
one looks, the biosphere is wilting — and a single bipedal species with large brains and opposable thumbs is almost entirely
responsible for this worsening plight. If humanity continues to prune back the Tree of Life with reckless abandon, we could be forced to
confront a global disaster of truly unprecedented proportions. Along these lines, a 2012 article published in Nature and
authored by over twenty scientists claims that humanity could be teetering on the brink of a catastrophic,
irreversible collapse of the global ecosystem. According to the paper, there could be “tipping points” — also
called “critical thresholds” — lurking in the environment that, once crossed, could initiate radical and
sudden changes in the biosphere. Thus, an event of this sort could be preceded by little or no warning:
everything might look more or less okay, until the ecosystem is suddenly in ruins. We must, moving forward,
never forget that just as we’re minds embodied, so too are we bodies environed, meaning that if the environment implodes under
the weight of civilization, then civilization itself is doomed. While the threat of nuclear weapons deserves serious attention from
political leaders and academics, as the Bulletin correctly observes, it’s even more imperative that we focus on the broader “contextual
problems” that could inflate the overall probability of wars and terrorism in the future. Climate change and
biodiversity loss are both conflict multipliers of precisely this sort, and each is a contributing factor that’s exacerbating the
other. If we fail to make these threats a top priority in 2016, the likelihood of nuclear weapons — or some other
form of emerging technology, including biotechnology and artificial intelligence — being used in the future will only increase.
The most obvious possible efficiency gain for the Cannabis industry may be for general marijuana
production to shift from the use of artificial lighting to natural sunlight. The ability of the population of
marijuana growers to do that depends on outward pressures of prohibition. Continued federal
prohibition of Cannabis by the United States government will force growers and crops to remain
hidden. If cultivation took place openly, outdoor production and greenhouses using light deprivation and supplementation techniques might thrive as major forms of marijuana production. For sunlight-isolated environments, electrical efficiency to power lighting may be improved by new technology and cultivation methods. New HID lamps have adjustable power settings that can mimic daylight cycles (or “seasons” for batch crops) while reducing total power consumed for crop production. Also, with two dedicated indoor spaces, continuous production may be achieved. Immature plants are grown in one room under an 18 hour per day vegetation light cycle, and when mature they are moved to another room under a 12 hour per day flowering light cycle. The continuous production method may be practiced for indoor or greenhouse locations with light deprivation and supplementation techniques.12 Developments in LED array grow
lighting may boost crop production by providing specific spectral outputs at greatly reduced power draw. The time of use for electricity consumption may also be an important efficiency consideration for some indoor cultivators. Public utilities may charge higher rates for electricity consumed during daily peak demand times. Electrical power systems that rely on renewable energy,
like solar or wind, have the best efficiencies when direct current (DC) electricity is used while the resource is available – that is, when the sun is shining or the wind is blowing. Some indoor grows may be able to schedule their peak electricity demand times to coincide with times that their electricity resource is most available.

Environmental Impact Reductions from Methods

While
electricity used to power indoor Cannabis cultivation is a high quality form of energy, so are the marijuana buds produced. Quality measurements for medical marijuana are important considerations when evaluating the energy efficiencies of indoor grows. (Footnote 12: The hours of light and darkness received by plants inside a greenhouse may be controlled by using light deprivation curtains and supplemental low-power LED lighting.) Another valuable asset that indoor marijuana cultivation provides is continuous production. With skilled gardener attention and plant selection, a small-scale indoor medical marijuana cultivation facility can harvest several plants per day, providing a continuous fresh supply of product to patients. Proper disposal of solid waste and
sewage generated by Cannabis production is an important environmental impact consideration. The lifetime of a typical indoor marijuana grow operation evaluated in Chapter 3 is estimated to be around five years. Equipment such as light bulbs may be replaced every one or two grow cycles (about 98 days per cycle). Other items that are disposed of at the end of a grow operation
include electrical wiring, ballasts, reflectors, ventilation ducting, fans, pots, hoses, and various construction materials (drywall, paint, plastic, insulation, et cetera). Waste products that are continually produced at an indoor Cannabis production facility include packaging of input materials (soil, fertilizers, and other additives) and unused organic (plant) materials. Soil, roots, stems, and
leaves are all byproducts of indoor Cannabis production that have value: embodied energy of past photosynthesis. Perhaps if collection efforts were coordinated, this biomass could be used for applications such as gasifier fuel, paper pulp raw material, or at least compost. Future research in investigating the energy intensity of different marijuana cultivation styles should consider
plant strain, or variety, as a significant predictor variable for bud weight yield. The practice of cutting a clone from a plant during its vegetative stage may also have an ultimate effect on the bud weight yield for that plant; future studies on energy input related to plant yield should take this into consideration. Possible energy efficiency gains and environmental impact reductions in
marijuana production methods are specific to individual cultivation practices and business operations. These decisions for growers depend on the valued output of their production (such as quality of marijuana buds, variety of products, consistent availability, or price). Each cultivation operation may find ways to improve efficiency and reduce environmental impacts that are unique
to that style of cultivation. As federal, state, and local governments continue to expend efforts regulating the Cannabis industry, it should be noted that the largest identified inefficiency in production practices – clandestine electricity use for artificial lighting – was instigated by attempted regulation (Bienenstock, 2008; Mills, 2011).

5.4 Regulations

Cannabis prohibition causes
wasted resources due to the nature of black market production and law enforcement’s inability to control it. In Humboldt County, California, differences in marijuana prohibition laws on national, state, and local levels have created confusion among law enforcement officers, business proprietors, and city officials. According to Mark Peterson, the Humboldt County Sheriff’s Office
deputy dedicated full-time to policing commercial marijuana growing operations in 2010, the evolution of prohibition laws and their enforcement over the past few decades has left a situation where small amounts of cultivation, distribution, and consumption of Cannabis are locally tolerated. While the Compassionate Use Act of 1996 (otherwise known as Proposition 215) declared
medicinal use legal in California, marijuana is federally classified as a Schedule I substance, the most restrictive category under the Controlled Substances Act of 1970. This disparity in regulation, along with increasing nationwide demand and production of medical marijuana, has fueled the black market (NORML, 2001; Sifaneck, et al., 2007). Clandestine indoor grow operations have
evolved in response to drug enforcement and eradication efforts (Humboldt County Sheriff’s Office, 2010). Problems that have arisen as a direct result of these “grow-ops” in Humboldt County include “improper and dangerous electrical alterations and use,” that disproportionately affect residential neighborhoods. This has further caused “an increase in response costs, including
code enforcement, building, land use, fire, and police staff time and expenses” (City of Eureka, 2010). Also, a steady increase in residential electricity consumption in Humboldt County over the past several years (Figure 9, page 36) may be directly linked to the practice of growing marijuana with artificial lights instead of sunlight. There has been a lack of data for policy makers,
growers, and consumers about the energy efficiency of different Cannabis cultivation styles. What is the energy intensity for marijuana grown in different locations, with different additives, and different plant genetics? Cultivators must choose whether to grow under artificial lights indoors, or under sunlight either outdoors or in a greenhouse. Decisions must be made on an irrigation
source and watering method, as well as the use of fertilizers, pesticides, herbicides, fungicides, or other chemical additives. Depending on availability, growers may also have a varied choice in crop genetics. Marijuana strains may be selected based on their potency (profile of cannabinoids present in the plant’s essential oils) or mass yield of flower buds. A grower may also choose
plant genetics based on the variety’s morphological characteristics or flowering photoperiod. Certain plant varieties may better match a particular geographic region’s daylight cycle, or crops with a short-cycle photoperiod may be grown under high-intensity indoor lights for a quick return on investment. As long as there exists a black market to keep marijuana sales prices high,
cultivators can consider inefficient production costs as standard business expenses (Decorte, 2010; Sifaneck, et al., 2007). Legitimate medical marijuana consumers are confronted with limited choice in selecting the quality of medicine they are purchasing; there are no standards in reporting the growth source and style such as those for consumer products regulated by the Food and Drug Administration (FDA) and the United States Department of Agriculture (USDA). Programs such as Clean Green Certified offer qualified third-party evaluations of Cannabis production as self-described below: Clean Green Certified is a program modeled on the USDA National Organic Program, ensuring environmentally clean and sustainable methods. Clean Green inspects all
inputs, from seed or clone selection, soil, nutrients, pesticides, mold treatments, dust control, and source of electricity, to methods of harvesting and processing. This program reduces the environmental impact of Cannabis crops, ensures legality, and regulates what chemicals go into ingested medicine. A certified operation is licensed to use the Clean Green Certified label on their
products after an annual review requiring yearly on-site inspections and third party laboratory testing (Clean Green Certified, n.d.). While third-party certification benefits marijuana consumers with information, it burdens cultivators with the cost of certification. United States regulatory structures exist for food products, medicines, and medicinal herbs; 13 these programs are funded
by taxpayers to protect consumers (FDA, 2012; USDA, 2012). Future regulation of the United States marijuana industry may follow these precedents. With smaller scale, farmers-market style Cannabis distribution, consumers may know their growers. This type of interaction has “regulated” marijuana sales and consumption until now, and may also be an essential consideration for
any future regulation. In addition to consumer protection, future regulations may protect against environmental impacts caused by Cannabis cultivation. Municipalities within Humboldt County have drafted legislation aiming to reduce greenhouse gas emissions associated with energy use. The City of Arcata has plans that specifically address energy efficiency, renewable energy, sustainable transportation, waste and consumption reduction, along with other methods of reducing environmental impacts caused by energy use (City of Arcata, 2006). (Footnote 13: Medicinal herbs are regulated by the FDA as “dietary supplements.”) A November 2012 ballot initiative to tax residential electricity consumers that use over 600 percent of PG&E’s baseline was directed at local
indoor Cannabis growers. Caution must be exercised with regulation such as this; taxation may simply relocate indoor grows to neighboring parts of the county, negating any greenhouse gas reduction attempts. In addition, unintended consequences such as an increase in use of diesel generators or a downswing in the local economy could occur. If taxes were coupled with incentive
programs such as municipal assisted financing for commercial and residential renewable energy power systems, local growers might stay and become more energy efficient in their cultivation practices. Municipalities could use tax revenues to offer low interest loans to high-end electricity users who install solar or wind powered electricity generation; this strategy would help to build
a network of local renewable energy resources. As of 2013, nineteen US states have enacted laws for medical marijuana, while laws regarding prosecution and sentencing for possession of marijuana vary widely across all 50 states (National Organization for Reform of Marijuana Laws, 2013). As prohibition laws continue to change, so will the dynamics of Cannabis as a commodity on
the black market and a legal and regulated one. With an expected continued and possible increasing production of marijuana locally in Humboldt County, statewide in California, and across the United States, harmful and wasteful practices may be avoided by shaping personal and commercial marijuana cultivation techniques with education programs and appropriate policy and
regulations for cultivation facilities and equipment (Bot, 2001; McQuiston, Parker, & Spitler, 2005; Schirmann, 2007). The potential impacts of the federal legalization of Cannabis on energy use and climate change mitigation should be considered and further evaluated. In Humboldt County, California, local regulation of the production, consumption, and trade of Cannabis is hindered
by federal prohibition (City of Arcata, 2012). There is an immeasurable delicate balance between the production of this black-market commodity and the local economy. Legalizing production, consumption, and trade of marijuana at the federal level could have the unintended consequences of sending Humboldt into the next “bust” phase of its historical “boom and bust” economic
cycle (Widick, 2009; Poor, 2013).

5.5 Suggested Future Research

The plant had adapted more brilliantly to its strange new environment than anyone could have anticipated. For Cannabis, the drug war is what global warming will be for much of the rest of the plant world, a cataclysm that some species will turn into a great opportunity to expand their range. Cannabis has thrived
on its taboo the way another plant might thrive in a particularly acid soil. [Michael Pollan, The Botany of Desire, 2001]

Suggested Research on Efficiency Gains

For indoor Cannabis cultivation requiring electricity for lighting, new LED technology may offer increasingly lower investment costs, and much lower power consumption. LED arrays have improved efficiency over HID lighting at
the cost of a significant reduction in spectral output; each LED bulb within an array emits a narrow wavelength band (for example, red 660 nanometers or blue 450 nanometers). LED arrays for horticultural applications have been designed with many individual bulbs, emitting several different wavelengths, to provide maximum spectral output of photosynthetically active radiation
(PAR) (Yeh & Chung, 2009). However, the peak light frequencies needed for photosynthesis may differ from those needed in reactions to produce a Cannabis plant’s secondary metabolites, such as cannabinoids and terpenes present in the essential oils of marijuana buds and leaves. Further research on LED grow lighting should include an analysis on the energy intensity of cultivation
and quality characteristics of the crop produced. A 2008 HortScience article reports that horticultural LED arrays can provide three times more light output per watt of input power on an equivalent area basis (Morrow, 2008). Using this estimate with the same assumptions used for the small-scale indoor Cannabis cultivation economic feasibility analysis presented in Section 3.3, a comparison of electricity used for Cannabis crops grown under different proportions of HID and LED lighting is summarized in Table 12 below. Calculations are based on plants grown under four 1000-watt lights, starting with a four-week vegetation cycle using 18 hours of light per day, followed by an eight-week flowering cycle using 12 hours of light per day. Emissions for each scenario were estimated using a factor of 0.559 pounds of carbon-dioxide-equivalent emissions per kilowatt-hour of electricity generated (2005-2009 average reported by PG&E).

Table 12: Indoor Cannabis Cultivation Potential Electricity and Emissions Savings with LED Lighting
HID Lighting | LED Lighting | Electricity Use per Cycle (kWh/cycle) | Associated Emissions per Cycle (lb CO2e/cycle)
100% | 0% | 4,700 | 2,600
75% | 25% | 3,900 | 2,200
50% | 50% | 3,100 | 1,800
25% | 75% | 2,400 | 1,300
0% | 100% | 1,600 | 880

Pollution intensity values (pounds of carbon dioxide equivalents per pound of product) are not estimated in Table 12 above because of uncertainties in product yield. Based on the indoor Cannabis cultivator’s rule of thumb that one pound of marijuana buds will be produced per each 1000-watt HID light per grow cycle (D. Brownfield, personal communication, February 24, 2010), the estimated energy use and associated emissions presented above would correspond to about four pounds of marijuana buds produced. Future studies are needed to determine what ratio of HID to LED lighting may be acceptable to minimize electricity use while still providing a comparable quantity and quality of the marijuana produced. Different genetic varieties, or strains, of marijuana have different photoperiods, and different profiles of medicinal compounds. Plants that take longer to grow may have more value in the active ingredients that particular strain produces. Energy efficient production characteristics of different strains should be compared to the value of the marijuana flowers they produce. Laboratory testing methods for medical marijuana include gas chromatography (GC) with flame ionization detection (FID) to quantify cannabinoids. The two cannabinoids most studied for medical use are tetrahydrocannabinol (THC) and cannabidiol (CBD). Table 13 shows ranges of THC and CBD that are typically detected in marijuana samples analyzed by GC with FID at a Cannabis testing laboratory (Pure Analytics, 2011).

Table 13: Typical Cannabinoid Profiles of Laboratory Tested Marijuana (SOURCE: Pure Analytics, percentages by dry weight)
Level | THC | CBD
low | 3-10% | 0-2%
medium | 10-16% | 3-5%
high | 17-20% | 6-14%
very high | 21+% | ---

Laboratory tests for medicinal potency, as well as contamination with molds, pesticides, or other chemical additives, may be needed for a thorough evaluation of energy intensities at different Cannabis cultivation operations (Mozingo, 2012).
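The Table 12 scenarios follow directly from the stated assumptions: four 1000-watt lamps, a four-week vegetation cycle at 18 hours of light per day, an eight-week flowering cycle at 12 hours per day, LED arrays drawing roughly one-third the power of HID for equivalent light output (Morrow, 2008), and PG&E's 0.559 lb CO2e/kWh factor. A minimal sketch of that arithmetic, for readers who want to check or extend the scenarios (variable names are ours, not the thesis's):

```python
# Reproduce the Table 12 arithmetic from the thesis's stated assumptions.
lamp_kw = 4 * 1.0                        # four 1000 W lamps = 4 kW connected load
hid_kwh = lamp_kw * (28 * 18 + 56 * 12)  # kWh per HID cycle: 4 wk veg + 8 wk flower
led_kwh = hid_kwh / 3                    # all-LED case: ~1/3 the power (Morrow, 2008)
EF = 0.559                               # lb CO2e per kWh (PG&E 2005-2009 average)

for frac_led in (0.0, 0.25, 0.5, 0.75, 1.0):
    kwh = (1 - frac_led) * hid_kwh + frac_led * led_kwh
    print(f"{1 - frac_led:.0%} HID / {frac_led:.0%} LED: "
          f"{kwh:,.0f} kWh/cycle, {kwh * EF:,.0f} lb CO2e/cycle")
```

Rounded to two significant figures, these results reproduce every row of Table 12 (the 100% HID case works out to 4,704 kWh and about 2,630 lb CO2e, reported in the table as 4,700 and 2,600).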
If being hidden was not a factor in Cannabis cultivation, outdoor methods coupled with light
deprivation or supplementation techniques in greenhouses could prove to be the most efficient and
highest quality means of production. In the absence of a national or global black market, the sizes of marijuana markets would
likely be local; that is, production, trade, and consumption would be about the same scale as local farmers markets. Future economic analyses
of Cannabis cultivation depend on the dynamic status of local, state, and federal legislation, and the presence or absence of inflated black
market prices. As mentioned previously, further research should examine the effects of predictor variables such as plant variety, light source,
and cultivation inputs on response variables such as bud yield, cycle time, and product quality.
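The predictor/response study suggested above amounts to a multiple-regression design. A minimal sketch of what such an analysis could look like, using entirely synthetic data; the variable names, codings, and effect sizes below are illustrative assumptions, not measurements from any real cultivation operation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of grow records

# Hypothetical predictor variables
light_led = rng.integers(0, 2, n)    # light source: 0 = HID, 1 = LED
strain = rng.integers(0, 3, n)       # plant variety, coded 0-2
nutrient = rng.uniform(0.5, 2.0, n)  # cultivation input (relative dose)

# Hypothetical response: bud yield per plant (lb), assumed true effects plus noise
yield_lb = (1.0 - 0.10 * light_led + 0.05 * strain + 0.20 * nutrient
            + rng.normal(0.0, 0.05, n))

# Ordinary least squares fit (design matrix with an intercept column)
X = np.column_stack([np.ones(n), light_led, strain, nutrient])
beta, *_ = np.linalg.lstsq(X, yield_lb, rcond=None)
print(dict(zip(["intercept", "light_led", "strain", "nutrient"], beta.round(3))))
```

A real analysis would substitute measured data and report standard errors; the point is only the shape of the study design the thesis recommends, in which yield, cycle time, or quality is regressed on variety, light source, and inputs.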
2AC
2AC – AT: T-Decriminalization
1. Removing marijuana from Schedule I classification is sentencing and/or policing CJR
Chung et al. 18 [Ed Chung is the vice president for Criminal Justice Reform at the Center, Maritza Perez is the senior policy analyst for
Criminal Justice Reform at the Center, and Lea Hunter is a research assistant for Progress 2050 and Criminal Justice Reform at the Center,
“Rethinking Federal Marijuana Policy,” Center for American Progress, May 1, 2018, https://www.americanprogress.org/issues/criminal-
justice/reports/2018/05/01/450201/rethinking-federal-marijuana-policy/]
A smart and fair criminal justice and public safety strategy
For states that have liberalized marijuana laws, an inherent tension with federal law exists, as possession, distribution, and cultivation of
marijuana remain federal criminal offenses across the country. Marijuana is classified as a Schedule I drug, the most
serious of five categories pursuant to the Controlled Substances Act.8 These categories, or
schedules, are based on the drug’s medical value and susceptibility for abuse. Schedule I substances have been determined by the federal
government to have no accepted medical use and a high potential for abuse—a description that does not apply to marijuana, as evidenced by a growing body of research.9 This classification also attaches serious criminal penalties for criminal offenses and can trigger mandatory minimum sentences.10 Other Schedule I drugs include heroin, fentanyl, and ecstasy, which, unlike
marijuana, regularly contribute to overdose deaths.
Decriminalization versus legalization
Decriminalization means that the possession of small amounts of marijuana will trigger lower or no
criminal penalties, although fines and citations may still be levied. In New York, for example, the possession of a small amount of
marijuana for recreational use will not lead to an arrest, but the state criminalizes marijuana consumption in public view.11
Generally, the possession of larger amounts and trafficking of marijuana remain criminally illegal under this
system. Many jurisdictions have chosen to decriminalize marijuana in order to prioritize higher-level
crimes and cut down on justice-related costs.
2. Policing is the use of state power for law enforcement --- includes management and
control of minor offenses
Scott Tighe and William Brown 15 – both from Western Oregon University (“The Militarization of Law Enforcement: Bypassing the
Posse Comitatus Act,” Justice Policy Journal, Volume 12, Number 2 (Fall),
http://www.cjcj.org/uploads/cjcj/documents/jpj_militarization_of_law_enforcement_-_fall_2015.pdf //DH
The Bureau of Justice Statistics describes law enforcement as a collection of agencies responsible for maintaining public order and enforcing the
law (BJS, 2015). Michalowski (1985, p. 170) defines policing as “the use of state power by delegated authorities
for the purposes of law enforcement and the maintenance of order.” Law enforcement is commonly perceived as the
implementation of laws that accommodate the majority of people. Order maintenance typically includes the management
and control of minor offenses and behavioral/social disruptions that may threaten the status quo, which
includes individuals, businesses (including corporations), and other organizations who benefit by keeping social, economic, and political
arrangements stable (Shelden, et al., 2016).
3. Sentencing reform is reducing the amount of people sent to prison and amount of
time spent in prison
Serano 18 [David A. Serano is a Maui defense attorney, “Passing Criminal Justice Reform: Will Congress Finally Pass Criminal Justice Reform?,” David Serano blog, https://www.davidserenolaw.com/passing-criminal-justice-reform/, Sentencing reform] nw
Sentencing reform refers to fixing the “front end.” It targets reducing the amount of people sent to
prison and the amount of time people spend in prison by changing what happens before they are
locked up, meaning when offenders are arrested, prosecuted, and sentenced. Sentencing reform aims
to ensure the punishment fits the crime by reducing mandatory minimum sentences and giving judges
more discretion to give a sentence considering the offender’s history and circumstances surrounding the
case instead of handing out terms based on the charges.
AT: Bunkers CP
Bunkers don’t solve---they’d never survive.
Klara 18 [Robert Klara, journalist for History. Nuclear Fallout Shelters Were Never Going to Work. September 1,
2018. https://www.history.com/news/nuclear-fallout-shelters-were-never-going-to-work]
Perhaps not surprisingly, the trouble with such crude accommodations became obvious almost immediately. Mere months into the program, reports emerged of leaking water drums and shelters that had never received any supplies. In a New York
Times story in June of 1963, a Harlem woman asked, “Who’d want to go down there?” referring to the fetid tenement cellar meant to serve as her shelter space.
The “rats are as big as dogs,” she said. “If fallout came, I’d just run.” In fact, the untenability of the shelters was public knowledge before they
had even opened. A November 1961 story on the front page of The Washington Post bemoaned that most of the designated shelters would be little more than
“cold, unpleasant cellar space, with bad ventilation and even worse sanitation.” Conditions were a serious problem, but location was a bigger one. Two-thirds
of the fallout shelters in the U.S. were in “risk areas”—neighborhoods so close to strike targets that they’d likely never
survive an attack in the first place. In New York, for example, most of the government shelters could be found in Manhattan and Brooklyn—despite
the fact that a 20-megaton hydrogen bomb detonated over Midtown would leave a crater 20 stories deep and drive a
firestorm all the way to the center of Long Island. Even out there, Life magazine said, occupants of a fallout shelter “might be barbequed.” What were the feds
thinking? According to Kenneth D. Rose, author of the book One Nation Underground, defense officials placed their faith in the
counterforce doctrine, a game theory that held that atomic war would be waged with only military installations as targets. But that was wishful thinking. “It wouldn’t take much for the whole theory to totally go south,” Rose said. “If a bomber missed its target and hit a city by mistake, then of course the gloves would come off and both sides would concentrate on cities as
well.” The shelters’ dubious utility also hinged on the shaky bet that the Soviets would drop only one bomb on a
city like New York, an assumption that Khrushchev himself later ridiculed in his memoirs. If he’d managed to get “one or two big ones”
into Gotham, wrote the Soviet Premier, “there wouldn’t be much of New York left.” And Americans knew it. Anyone who read the newspapers understood not just
that an inbound ICBM would leave them only 15 minutes, if that long, to get to a fallout shelter—but also that few structures in the city would survive a strike
anyway. As Steven R. David, professor of international relations at Johns Hopkins University, observes: “People reasoned, when faced with the prospect of nuclear war, climbing into a shelter probably wasn’t going to do that much good.” In fact, mere weeks after the shelter
program got started, The Washington Post was already reporting “a public feeling of helplessness” about civil defense. In January of 1962, Life magazine
encapsulated the sentiments of many when it quoted a bank teller named Dorothy Gannaway. “An attack wouldn’t be one bomb, it would be many,” she said. “We’d die in those shelters.”
Bunkers fail
Gao 11/19 – studied political and computer science at Grinnell College and is a frequent commentator
on defense and national-security issues. (Charlie, “Can Russia's Bunkers Really Save Moscow from
Nuclear War?” National Interest, November 19, 2019, https://nationalinterest.org/blog/buzz/can-
russias-bunkers-really-save-moscow-nuclear-war-97302)//RP
The “sphere” style of bunker was developed as a way to improve the survivability of shallow bunkers since shallow
bunkers are cheaper to build than deeper ones. To attain greater survivability, an outer bunker is made in the form of a sphere. This sphere is
placed inside a shallow circular shaft. Shock absorbers are placed around the sphere connecting into an internal bunker. Those absorbers
cushion the occupants from the shock waves of a nuclear explosion. Other bunkers that use similar technology in which the
central bunker is suspended on shock absorbers in a central structure might also be present, with various variations on the shape of the
central bunker. “Cylinder” and “Nut bolt” (hexagonal) types are also rumored to exist. The infamous “metro-2” bunker style is
laid out similarly to the older “metro” style but is deeper underground for greater blast resistance and secrecy. It was said to be
built in two phases, with the first being in the 1970s and 1980s, called D-6, and the second being between 1990–2000 by the TIS (OAO
Трансинжстрой) firm, which also builds civilian metro stations. However, most sources reporting on Metro-2 are
speculative , with the primary ones being reports of hobbyists who may have stumbled upon some Metro-2 entrances or exits or a 1990s
DIA report on the system. Despite the vast number of bunkers , recent advances in fuzing technology for
nuclear weapons are threatening to make the minimum civil defense standard obsolete. As fuzing
technology improves, such as that used on the American Super Fuze, it’s more likely that pressure levels experienced
by the civil defense bunkers will far exceed their design rating.
More ev.
AP 18 (Associated Press, 1-18-2018, “Cold War-era nuclear fallout shelters are useless“, New York Post,
https://nypost.com/2018/01/18/cold-war-era-nuclear-fallout-shelters-are-useless/, accessed 9-13-2019) jd
NEW YORK — A generation of Americans knew just what to do in the event of a nuclear attack — or during a
major false alarm, like the one over the weekend in Hawaii: Take cover in a building bearing a yellow fallout shelter
symbol. But these days, that might not be the best option, or even an option at all. Relics from the Cold
War, the aging shelters that once numbered in the thousands in schools, courthouses and churches
haven’t been maintained . And conventional wisdom has changed about whether such a shelter system
is necessary for an age when an attack is more likely to come from a weak rogue state or terrorist group
rather than a superpower. “We’re not in a Cold War scenario. We are in 2018,” said Dr. Irwin Redlener, head of the National Center
for Disaster Preparedness at Columbia University’s Earth Institute. “We’re not facing what we were facing 50 years ago when the Soviet Union
and the US had nuclear warheads pointed at each other that would devastate the world. There’s a threat, but it’s a different type of threat
today.” People weren’t sure what to do Saturday when Hawaii mistakenly sent a cellphone alert warning of an incoming ballistic missile and
didn’t retract it for 38 minutes. The state had set up the missile warning infrastructure after North Korea demonstrated its missiles had the
range to reach the islands. Drivers abandoned cars on a highway and took shelter in a tunnel. Parents huddled in bathtubs with their children.
Students bolted across the University of Hawaii campus to take cover in buildings. The false alarm is the perfect time to talk about what to do in
such an emergency, Redlener said, because most of the time people don’t want to talk about it. At all. “But it’s a real possibility,” he said. “City
officials should be talking about what their citizens should do if an attack happened. And it’s a necessity for individuals and families to talk
about and develop their own plan of what they would do.” New Yorkers who were asked this week about where they would seek shelter during
a missile attack said they had no idea. “The only thing I can think is, I would run,” said Sabrina Shephard, 45, of Manhattan. “Where we would
run, I don’t know, because I don’t know if New York has any bomb shelters or anything.” The
fallout shelters, marked with metal
signs bearing a logo similar to, but slightly different from, the symbol for radiation — three joined
triangles inside a circle — were set up in tens of thousands of buildings nationwide in the early 1960s
amid the nuclear arms race. In New York City alone there were believed to be about 18,000. The locations were chosen because they
could best block radioactive material. Anything could be a shelter as long as it was built with concrete, cinder blocks or brick, had no windows
and could be retrofitted quickly with supplies, an air filtration system and potable water. But
the idea was controversial from the
start, especially since one of the scenarios at the time, a full-scale nuclear war between the US and the
Soviet Union, would have left few survivors. By the 1970s, the concept was abandoned. A FEMA spokeswoman said the agency
doesn’t even have current information on where the shelters are located. New York City education officials announced last month they are
taking down the fallout shelter signs at schools. In Minot, North Dakota, just a few miles from the base where dozens of US missiles are at the
ready, a few fallout shelter signs remain, but their status as viable refuges isn’t known.
Case
There is obviously an ethics problem here --- should we kill billions of people for a small probability that we can repropagate?
Nuclear war is the number one existential risk.
Sandberg 14 — Anders Sandberg, James Martin Research Fellow at the Future of Humanity Institute at Oxford University, holds a Ph.D.
in Computational Neuroscience from Stockholm University, 2014 (“The Five Biggest Threats To Human Existence,” Popular Science, May 29th,
Available Online at http://www.popsci.com/article/science/five-biggest-threats-human-existence, Accessed 10-07-2014)
In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope
are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we
face risks, called existential risks , that threaten to wipe out humanity . These risks are not just for big disasters, but for the
disasters that could end history. Not everyone has ignored the long future though. Mystics like Nostradamus have regularly tried to calculate the
end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers
built other long-term futures to warn, amuse or speculate. But had these pioneers or futurologists not thought about humanity’s future, it would not have changed
the outcome. There wasn’t much that human beings in their place could have done to save us from an existential crisis or even cause one. We are in a more
privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are
developing technologies that may help mitigate, or at least, deal with them. Future imperfect Yet, these risks remain understudied. There is a sense of
powerlessness and fatalism about them. People have been talking apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing
anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know
examples of, and underestimate events we cannot readily recall). If
humanity becomes extinct, at the very least the loss is
equivalent to the loss of all living individuals and the frustration of their goals. But the loss would
probably be far greater than that. Human extinction means the loss of meaning generated by past
generations, the lives of all future generations (and there could be an astronomical number of future
lives) and all the value they might have been able to create . If consciousness or intelligence are lost, it might mean that value itself
becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from
becoming reality. And we must not fail even once in this pursuit . With that in mind, I have selected what
I consider the five biggest threats to humanity’s existence . But there are caveats that must be kept in mind, for this list is not final.
Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan project
nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities
also change over time – sometimes because we are concerned about the risks and fix them. Finally, just because something is possible and potentially hazardous,
doesn’t mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma ray bursts that result from the explosions of
galaxies. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to
bad public health. 1. Nuclear war While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear
stockpiles are down from the peak they reached in the Cold War, it
is a mistake to think that nuclear war is impossible. In
fact, it might not be improbable . The Cuban Missile crisis was very close to turning nuclear. If we
assume one such event every 69 years and a one in three chance that it might go all the way to being
nuclear war, the chance of such a catastrophe increases to about one in 200 per year . Worse still, the
Cuban Missile crisis was only the most well-known case. The history of Soviet-US nuclear deterrence is
full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems
implausible that the chances would be much lower than one in 1000 per year . A full-scale nuclear war
between major powers would kill hundreds of millions of people directly or through the near aftermath
– an unimaginable disaster. But that is not enough to make it an existential risk. Similarly the hazards of fallout are often exaggerated – potentially
deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout,
but are in practice hard and expensive to build. And they are physically just barely possible. The
real threat is nuclear winter – that is, soot
lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate
simulations show that it could preclude agriculture across much of the world for years. If this scenario
occurs billions would starve, leaving only scattered survivors that might be picked off by other threats
such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot the outcomes may be very different, and we currently
have no good ways of estimating this. 2. Bioengineered pandemic Natural pandemics have killed more people than wars. However, natural pandemics are unlikely
to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not
favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe. Unfortunately we can now
make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more
lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted. Right
now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make
diseases worse. Most work on bioweapons have been done by governments looking for something controllable, because wiping out humanity is not militarily useful.
But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten
the apocalypse using bioweapons beside their more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on. The
number of fatalities from bioweapon attacks and epidemic outbreaks looks like it has a power-law distribution – most attacks have few victims, but a few kill many.
Given current numbers the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than
terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful in the future nastier
pathogens become easier to design. 3. Superintelligence Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we
left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and
organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-
intelligence software. The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly
achieve disastrous ends. There is no reason to think that intelligence itself will make something behave nice and morally. In fact, it is possible to prove that certain
types of superintelligent systems would not obey moral rules even if they were true. Even more worrying is that in trying to explain things to an artificial intelligence
we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do
that we might not understand all the implications of what we wish for. Software-based intelligence may very quickly go from below human to frighteningly
powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more
computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance. It has been proposed that an
“intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in
potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly
set. The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a
whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current
societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would
actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one
most likely to either be massive or just a mirage. This is a surprisingly under-researched area. Even in the 50s and 60s when people were extremely confident that
superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more
likely is that they just saw it as a remote future problem. 4. Nanotechnology Nanotechnology is the control over matter with atomic or molecular precision. That is in
itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the
potential for abuses that are hard to defend against. The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything. That would
require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it, by default. Maybe some maniac would eventually
succeed, but there are plenty of more low-hanging fruits on the destructive technology tree. The most obvious risk is that atomically precise manufacturing looks
ideal for rapid, cheap manufacturing of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous
weapons (including facilities to make even more) arms races could become very fast – and hence unstable, since doing a first strike before the enemy gets a too
large advantage might be tempting. Weapons can also be small, precision things: a “smart poison” that acts like a nerve gas but seeks out victims, or ubiquitous
“gnatbot” surveillance systems for keeping populations obedient seems entirely possible. Also, there might be ways of getting nuclear proliferation and climate
engineering into the hands of anybody who wants it. We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be
potentially disruptive just because it can give us whatever we wish for. 5. Unknown unknowns The most unsettling possibility is that there is something out there
that is very deadly, and we have no clue about it. The silence in the sky might be evidence for this. Is the absence of aliens due to the fact that life or intelligence is
extremely rare, or that intelligent life tends to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that
didn’t help. Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We
do not know about any such threats (none of the others on this list work like this), but they might exist. Note that just because something is unknown it doesn’t
mean we cannot reason about it. In a remarkable paper Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per
year, based on the relative age of Earth. You
might wonder why climate change or meteor impacts have been left off this
list. Climate change, no matter how scary , is unlikely to make the entire planet uninhabitable (but it could
compound other threats if our defences to it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian
species survives for about a million years. Hence, the
background natural extinction rate is roughly one in a million per
year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our
continued existence . The availability heuristic makes us overestimate risks that are often in the media,
and discount unprecedented risks. If we want to be around in a million years we need to correct that .
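Sandberg's "one in 200 per year" figure is straightforward arithmetic on the two assumptions he states in the card (one Cuban-Missile-style crisis every 69 years, a one-in-three chance such a crisis goes nuclear), and it can be checked directly. A minimal sketch; the inputs are the card's assumptions, not independently established data:

```python
# Back-of-envelope check of Sandberg's nuclear-war probability estimate.
# Assumptions (from the card): one near-miss crisis per 69 years, and a
# one-in-three chance that such a crisis escalates to full nuclear war.

crisis_interval_years = 69      # years between Cuban-Missile-scale crises
p_escalation = 1 / 3            # chance a given crisis goes nuclear

p_war_per_year = (1 / crisis_interval_years) * p_escalation
print(f"Annual probability: {p_war_per_year:.5f} (about 1 in {1 / p_war_per_year:.0f})")
# 1/207 per year, which the card rounds to "one in 200 per year".

# The card's comparison point: the background natural extinction rate
# of roughly one in a million per year for a mammalian species.
p_natural = 1 / 1_000_000
print(f"Nuclear-war estimate is ~{p_war_per_year / p_natural:,.0f}x the natural rate")
```

This is why Sandberg can say even his conservative floor of one in 1,000 per year is "much lower" a bar than the natural extinction rate: both his estimates sit thousands of times above the one-in-a-million baseline.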
2AC – AT: Spark – Top Level
Nuclear war causes extinction --- checks fail.
Starr 15 (Steven; Associate member of the Nuclear Age Peace Foundation and has been published by the Bulletin of the Atomic Scientists,
expert on the environmental consequences of nuclear war, and in 2011, he made an address to the U.N. First Committee describing the dangers
that nuclear weapons and nuclear war poses to all nations and peoples, “Nuclear War: An Unrecognized Mass Extinction Event Waiting To
Happen,” 3/1/15, https://ratical.org/radiation/NuclearExtinction/StevenStarr022815.html, accessed 10/6/16)
Thank you Helen. In 1945, Albert Einstein said, "The release of atomic power has changed everything except our way of thinking." In 2015, seventy years later, we are still stockpiling nuclear weapons in preparation for nuclear war. Our continued willingness to allow huge nuclear arsenals to exist clearly shows that we have not fundamentally grasped the most important truth of the nuclear age: that a nuclear war is not likely to be survived by the human species. A war fought with 21st century nuclear weapons would be more than just a great strategic catastrophe in human history. If we allow it to happen, such a war would be a mass extinction event. Extinction is an event of utter finality, and a nuclear war that could cause human extinction should really be considered as the ultimate criminal act. It certainly would be the crime to end all crimes. Nuclear war fought with US and Russian strategic nuclear arsenals would leave Earth uninhabitable: radioactive fallout from bombs, ruined nuclear power plants, and destruction of the ozone layer. The world's leading climatologists tell us that nuclear war now threatens our continued existence as a species. Their studies predict that a large nuclear war would make the Earth too cold and dark to even grow food. Their findings make it clear that not only humans, but most large animals and many other forms of complex life would likely vanish forever in a nuclear darkness of our own making. The environmental consequences of nuclear war would attack the ecological support systems of life at every level. Radioactive fallout, produced not only by nuclear bombs, but also by the destruction of nuclear power plants and their spent fuel pools, would poison the biosphere. Millions of tons of smoke would act to block most sunlight from reaching Earth's surface, creating Ice Age weather conditions that would last for decades, and would destroy Earth's protective ozone layer. Yet the political and military leaders who control nuclear weapons strictly avoid any direct public discussion of the consequences of nuclear war. The research done by climatologists predicts that virtually any nuclear war, fought with even a fraction of the operational and deployed nuclear arsenals, will leave the Earth essentially uninhabitable. Existential Threat of Nuclear War is Unrecognized--Leaders of Nuclear Weapon States do not publicly recognize or discuss the threat their nuclear arsenals pose to continued human existence. Some of the more recent of these scientific studies appeared in print almost 9 years ago. Yet their predictions have not been publicly acknowledged or discussed by any American or Russian President, nor by any of their top military leaders. In fact, none of the current leaders of any of the nations that possess nuclear weapons have ever made such a public acknowledgment. It is not clear that these leaders are even aware of the findings of this research, since they have consistently refused to meet with the scientists who did the studies. No Nuclear Weapon State has ever attempted to evaluate what consequences the detonation of their nuclear arsenals would have upon the global biosphere and ecosystems. The existential danger of strategic nuclear arsenals is not part of the global debate on nuclear weapons. As a result, the grave threat to continued human existence has not been mentioned during American presidential campaigns or debates for more than 40 years. Such considerations have never been included
in any military planning or as part of any national strategic review of US military force requirements. The existential danger of strategic nuclear arsenals is not part of the global debate on nuclear weapons. Why is this? According to our best scientists, the deployed arsenals of nuclear
weapons pose a clear and present danger to the survival of our species. How is it that our leaders are unable or unwilling to even talk about this grave danger? Absence of public education and awareness of nuclear war: American public schools do not teach about the effects of nuclear
weapons or the consequences of nuclear war; public is no longer aware of the grave danger posed by nuclear war; American political leaders are also generally uninformed about the size and capabilities of the US nuclear arsenal In the 1980s, the American public was generally aware of
the existential threat posed by nuclear war. That awareness no longer exists today. This is in part because US public schools do not teach students about nuclear weapons. A couple of generations of Americans have grown up with essentially no knowledge of the effects or consequences
of nuclear war. This may be why our political and military leaders continue to focus upon the numbers of nuclear weapons rather than the consequences of their use. This makes no sense when a single ballistic missile now carries almost three times more nuclear explosive power than all
the bombs that were detonated during World War 2. A universal ignorance of basic nuclear facts ultimately creates a very dangerous situation, because leaders who are unaware that nuclear war can end human history are likely to lack the gut fear of nuclear war that’s needed to prevent
them from leading us into a nuclear holocaust. I teach an online class on nuclear weapons at the University of Missouri [Nuclear Weapons: Environmental, Health and Social Effects], and I get smart students but virtually none of them that come into my class know anything about nuclear
weapons; they don’t know the difference between an atomic bomb and a strategic nuclear weapon, they don’t know that large arsenals of strategic nuclear weapons even exist. Without this basic knowledge, it is almost impossible for anyone to understand the immense dangers posed by
nuclear war. Thus I am now going to take some time to explain these facts, to try to insure my message today is clear. Immense Destructive Power: Hiroshima Atomic Bomb: 15,000 tons TNT; Strategic Nuclear Weapon: 15,000,000 tons of TNT Atomic bombs were the first nuclear weapons
to be invented. It was an atomic bomb that destroyed the Japanese city of Hiroshima at the end of the World War 2. A typical atomic bomb has an explosive power of about 15,000 tons of TNT, also called 15 kilotons. Atomic bombs are much less powerful than the hydrogen bombs, or
thermonuclear weapons, which were invented in the 1950s. These “super-bombs” often had explosive powers one thousand times greater than atomic bombs. The photo compares the relative size of the mushroom cloud produced by an atomic bomb to a mushroom cloud produced a
thermonuclear weapon that had an explosive power of 15 million tons of TNT, which was tested by the United States in 1954. Today, thermonuclear bombs are called “strategic nuclear weapons,” and they generally have an explosive power ranging from 100,000 tons to more than 1
million tons of TNT. Of course, while an atomic bomb is much less powerful than a strategic bomb, it will still completely destroy any city. Nuclear Firestorm from an Atomic Bomb: 15,000 tons of TNT explosive power; Fire zone: 3 to 5 sq miles (7 to 13 km) This graphic illustrates the
detonation of an atomic bomb above Midtown Manhattan, where we are now, in the heart of New York City. The solar temperatures of the bomb would almost instantly ignite a nuclear firestorm covering 3 to 5 square miles. It was an atomic bomb of this size—close to this size at least—
that destroyed the city of Hiroshima in 1945. City of Hiroshima before the first atomic bomb was dropped on it on August 6, 1945 It is shocking to see what the atomic bomb did to Hiroshima. Images presented on Hiroshima: the first city destroyed by a nuclear weapon from Nuclear
Darkness, Global Climate Change & Nuclear Famine – The Deadly Consequences of Nuclear War CLICK AN IMAGE TO VIEW HI RES PANORAMA Hiroshima Panorama #1 Hiroshima Peace Memorial Museum, Photo by Shigeo Hayashi - RA119-RA134 Hiroshima Panorama #2 360 degree view
span Hiroshima Peace Memorial Museum, Photo by H.J. Peterson - K-HJP001-K-HJP013 Hiroshima Panorama #4 360 degree view span Hiroshima Peace Memorial Museum, Photo by Shigeo Hayashi A723-A742 More than 4 square miles of the city were utterly destroyed, transforming it
into a barren wasteland. Nuclear Firestorm from a Strategic Weapon: 800 kiloton warhead (800,000 tons of TNT); 90 sq miles certain fire zone (230 sq km): 152 sq miles probable fire zone (389 km) (to see this detonation go from street level to covering hundreds of square miles, go to
http://www.nucleardarkness.org/nuclear/nuclearexplosionsimulator/ and click “detonate”) Atomic Bomb compared with Thermonuclear (strategic) Bomb However, the firestorm produced by a strategic nuclear weapon is vastly larger than that produced by an atomic bomb. This graphic
illustrates the most likely size of a fire zone, created by an 800 kiloton strategic nuclear warhead. This graphic shows it also being detonated above where we are now in New York. [See: “What would happen if an 800-kiloton nuclear warhead detonated above midtown Manhattan?” by
over a total area of approximately 90 to 152 square miles. 20 to 30 minutes after the detonation, these fires would have joined together to form a single, immense firestorm. Air temperatures in the fire zone would be above the boiling point of water. Hurricane force winds would blow towards the center of the firestorm, driving the
The US and Russia together maintain a total of more than 800 launch-ready ballistic missiles,
which can be fired with less than 15 minutes warning, and will strike their targets in 30 minutes or less . Once
they are launched, they cannot be recalled from flight. These missiles are armed with a total of about 2400 strategic nuclear warheads, which have a combined explosive power of approximately 808 million tons of TNT. That’s a lot of TNT to visualize. The explosive power of 808 million
tons of TNT is easier to visualize if you have something to compare it to. 2.7 million tons of TNT of all bombs exploded in WWII; 808 million tons of TNT of all US & Russian launch-ready N-weps or, 300 TIMES the power of all bombs exploded in WWII The small red dot on the left side of
this figure represents 2.7 million tons of TNT, which is estimated to be the total explosive power of all the bombs exploded by all the armies of the world during the 5 years of World War 2. The large red circle represents the 808 million tons of TNT explosive power of US and Russian
launch-ready nuclear weapons. This amount is 300 times greater than the explosive power of all the bombs exploded during World War 2. It would require less than one hour for these launch-ready weapons to all detonate over their targets. Russian and US nuclear briefcases (called
``nuclear footballs'') In order to make sure that the US and Russian presidents can give the order to launch their missiles in less than one minute, both presidents are constantly accompanied by a military officer carrying a “nuclear briefcase”. The briefcase contains something similar to a
Destruction of nuclear power plants and spent fuel: Another catastrophic consequence of nuclear war is likely to result from the targeting or destruction of nuclear power plants. There are more than 400 commercial nuclear power plants in the world. They hold some of the largest man-made concentrations of radioactivity on the planet, and the largest portion of it resides in the used or spent fuel rods that are stored in spent fuel pools next to the reactors. All spent fuel pools are outside primary containment. Spent fuel pools contain 5 to 7 times more radioactivity than the reactor. The rods in a fuel pool contain 5 to 7 times more radioactivity than is inside the nuclear reactor. However, these pools are all located outside the large concrete containment vessels that house the reactors. This means any catastrophic release of radioactivity from the pools will not be contained, but will be released into the atmosphere and the environment. Spent fuel rods continue to produce huge amounts of heat even after they are removed from the reactor core. Thus spent fuel pools must constantly operate large cooling systems to remove this heat, otherwise the water in the pool will boil away. Even if the spent fuel pools are not directly targeted in a nuclear war, or during wartime, they would probably still be destroyed by the long-term loss of off-site electrical power, which is required to run their cooling systems. In other words, the destruction of the electrical grid in a nuclear war would almost inevitably lead to the failure of spent fuel cooling systems, and the subsequent destruction of the spent fuel they contain. Ruptured or burning fuel rods would release huge amounts of radioactivity, including a radioactive isotope called cesium-137. Chernobyl: Meltdown of one reactor that contained 3% of the Cesium-137 in one spent fuel pool. We unfortunately have had experience with what happens when cesium-137 is released by a catastrophic nuclear accident. The destruction
of the Chernobyl reactor, which contained about 3% of the cesium-137 found in most spent fuel pools, created an uninhabitable radioactive exclusion zone that covers more than 1,100 square miles. The key to this map of the Chernobyl exclusion zone [hi resolution version] specifies that contamination greater than 40 curies per square kilometer makes an area uninhabitable. While I was studying this some years ago, I remembered that cesium-137 has 88 curies per gram. So less than half a gram of cesium-137, if it is made into smoke and spread over a square kilometer, means you can't live there for centuries. That translates into about 1.2 grams per square mile. An American dime weighs 2.7 grams. So you can take half the weight of a dime of cesium-137, spread it over Central Park, and nobody can live there again.
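The back-of-the-envelope arithmetic in this passage can be checked directly. A minimal sketch: the 88 Ci/g specific activity and the 40 Ci/km² uninhabitability threshold are the values quoted in the talk; the unit conversion and dime mass are standard figures.

```python
# Check the cesium-137 contamination arithmetic from the talk.
CI_PER_GRAM_CS137 = 88.0         # specific activity of cesium-137 quoted in the talk (Ci/g)
UNINHABITABLE_CI_PER_KM2 = 40.0  # threshold from the Chernobyl exclusion-zone map key
KM2_PER_MI2 = 2.58999            # square kilometers per square mile
DIME_GRAMS = 2.7                 # mass of a US dime, as stated in the talk

# Grams of cesium-137 needed to contaminate one square kilometer to the threshold
grams_per_km2 = UNINHABITABLE_CI_PER_KM2 / CI_PER_GRAM_CS137  # "less than half a gram"

# Converted to grams per square mile
grams_per_mi2 = grams_per_km2 * KM2_PER_MI2                   # "about 1.2 grams"

print(f"{grams_per_km2:.2f} g/km^2, {grams_per_mi2:.2f} g/mi^2, "
      f"{grams_per_mi2 / DIME_GRAMS:.2f} of a dime per square mile")
```

The script reproduces both quoted figures: roughly 0.45 grams per square kilometer and roughly 1.2 grams per square mile, a bit under half the mass of a dime.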
All this incredible destruction would lead to what scientists describe as a mass extinction event. Five mass extinction events in geologic history, in which at least 50% of animal species died. There is considerable evidence in the fossil record that there have been 5 mass extinction events in geologic history. In these events, at least 50% of all animal species were wiped out. Afterwards, it took millions of years for animal life to recover to its previous levels of abundance. Asteroid impact. The most recent of these mass extinction events took place 66 million years ago, when an asteroid about 6 miles in diameter struck the Earth near the Yucatan peninsula. Its impact vaporized the Earth's crust and formed a 112-mile-wide crater in the Gulf of Mexico. The super-heated material vaporized in the crater was blasted far above the Earth's atmosphere and then rained down upon the entire planet. Dinosaurs were broiled alive. The debris from the asteroid heated Earth's upper atmosphere to about 2,700 degrees Fahrenheit, a temperature hot enough to melt steel. For a number of hours, the temperatures in the lower atmosphere climbed to about 700 degrees Fahrenheit and the dinosaurs were broiled alive. Impact Winter: all forests on Earth set on fire; smoke layer forms above cloud level; all sunlight blocked by smoke for years; prolonged cold and dark. At the same time, all the forests of the world caught fire and burned. The smoke from the burning forests rose above cloud level into the stratosphere, where it surrounded and engulfed the Earth. The smoke formed a global stratospheric smoke layer, which remained in place for many years. I read that if you had been there, you could have held your hand in front of your face and you would not have seen it. Because the smoke layer almost completely blocked warming sunlight, it created Ice Age weather conditions on Earth. This period of cold and dark has come to be known as the "Impact Winter". Earth after Impact Winter. The Impact Winter devastated life on Earth: no land animal larger than a squirrel is observed to have survived, and 75% of all species on Earth were eliminated.
This scenario became newly relevant about 30 years ago, when scientists began warning that a nuclear war would also create a global stratospheric smoke layer, which would cause a "Nuclear Winter" similar to the Impact Winter. New studies confirm the original 1980s findings: the
detonations would ignite immense nuclear firestorms, which would produce as much as 150 to 180
million tons of smoke. This smoke would rise above cloud level and encircle the Earth. Because the smoke could not be rained out, it would remain in the stratosphere for many years. The smoke layer would vastly increase the amount of harmful ultraviolet (UV-B) light reaching Earth. The loss of warming sunlight would
create weather conditions colder than those experienced at the height of the last Ice Age. Temperatures would fall below freezing every day for one to three years in the continental interiors of North America and Eurasia. Growing seasons would be eliminated for at least a decade or
longer. Nothing would grow, including the food crops we depend upon. Most humans and animals would starve to death. Smoke surrounds the Earth after a strategic nuclear war. An animated graphic produced by Dr. Luke Oman of NASA's Goddard Space Flight Center illustrates 150 million tons of smoke entering the stratosphere after a war fought with the strategic nuclear arsenals of the US and Russia. In less than 2 weeks, the smoke surrounds the Earth. In a few more weeks, the smoke acts to block 70% of all sunlight reaching the surface of the Northern Hemisphere. An illustration of a farmer standing in his ruined cornfield gives some idea of how dark the sky in North America would appear following a war fought with strategic nuclear weapons: on a cloudy day at noon, the light levels would be similar to what we now experience at midnight during a full moon. The image was adapted from a 2009 Scientific American article by Drs. Toon and Robock ["Local Nuclear War, Global Suffering," Scientific American, 2009], published a few years after their peer-reviewed studies warning of the existential dangers of nuclear war.{+} In 2011, Dr. Robock
wrote in Nature magazine that he had made multiple requests to meet with officials of the Obama administration, so that he and Dr. Toon could discuss the findings of their studies. All of his requests were denied. [{+} See: Robock, Oman, et al, 2007: “Climatic consequences of regional
nuclear conflicts.” Atmos. Chem. Phys., 7, 2003-2012; Robock, Oman, et al, 2007: “Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences.” J. Geophys. Res., 112, D13107, doi:10.1029/2006JD008235;
Robock, Toon, et al, 2007: “The continuing environmental threat of nuclear weapons: Integrated policy responses needed.” EOS, 88, 228, 231, doi:10.1029/2007ES001816; Toon, Robock, et al, 2007: “Consequences of regional-scale nuclear conflicts.” Science, 315, 1224-1225.] It is time for
the leaders of the nuclear weapon states to publicly acknowledge that nuclear arsenals threaten continued human existence. I could provide a lot more information and examples of leaders refusing to meet with scientists who only wish to warn them about the grave dangers posed by nuclear war. But further proof is unnecessary, simply because none of the leaders of the nuclear weapon states have publicly acknowledged that a war fought with their nuclear arsenals could end human history. It is time for them to do so. It is also time for us to demand that they do so. I began this talk with a quote by Albert Einstein, and I would like to close with another, one that we all should remember. Einstein said: The world is a dangerous place to
live; not because of the people who are evil, but because of the people who don’t do anything about it. Don’t be one of them. Please, do whatever you can to help prevent nuclear war. Thank you.
Nuclear war triggers a nuclear winter --- that collapses ag and leads to an ice-age that
causes extinction.
Robock 7 (Alan, PhD in Meteorology from the Massachusetts Institute of Technology, Professor of Environmental Sciences at Rutgers University, “Nuclear
winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences” in the Journal of Geophysical
Research https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2006JD008235)NFleming
[19] As found by Robock et al. [2007] for a 5 Tg case, the black carbon particles in the aerosol layer for the 150 Tg case are heated by
absorption of shortwave radiation and lofted into the upper stratosphere. The aerosols quickly spread globally and produce a
long‐lasting climate forcing (Figure 1). They end up much higher than is typical of weakly absorbing volcanic
sulfate aerosols, which typically are just above the tropopause [Stenchikov et al., 1998]. As a result, the soot aerosols
have a very long residence time and continue to affect surface climate for more than a decade . The
mass e‐folding time for the smoke is 4.6 years, as compared to 1 year for typical volcanic eruptions [Oman et
al., 2006a] and 1 week for tropospheric aerosols. After 4.6 years, the e‐folding time is reduced, but is still longer than that
of volcanic aerosols. In addition to the lofting of the smoke by solar absorption, another reason for this difference is that volcanic sulfate
aerosols are larger, with an effective radius of 0.5 μm, and thus they have a higher settling velocity than the smaller smoke aerosols. This
long smoke aerosol lifetime is different from results found in previous nuclear winter simulations ,
which either fixed the vertical extent of the aerosols [Turco et al., 1983] or used older‐generation climate
models with limited vertical resolution and low model tops [Aleksandrov and Stenchikov, 1983; Covey et al., 1984;
Malone et al., 1986], artificially limiting the particle lifetimes. [Figure 1 caption: Changes in visible optical depth and net downward shortwave radiation at the surface for the 150 Tg case. Although the maximum forcing is in the Northern Hemisphere during the first summer, the aerosols rapidly spread around the globe, producing large solar radiation reductions in both hemispheres.] [20] The maximum change in net global average surface shortwave radiation for the 150 Tg case
is −100 W m−2 (Figure 2). This negative forcing persists for many years, with the global average value still at −20 W m−2 even 10 years after the
initial smoke injection. This forcing greatly exceeds the maximum global average surface forcing of −4 W m−2 for the
1991 Mt. Pinatubo volcanic eruption [Kirchner et al., 1999; Oman et al., 2005], the largest of the 20th century, also shown in Figure 2. The
volcanic forcing disappeared with an e‐folding time of only 1 year, and during the first year averaged −3.5 W m−2 (Figure 2).
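The persistence contrast the card draws between soot and volcanic sulfate follows from simple exponential decay. A minimal sketch: the 4.6-year smoke and 1-year volcanic e-folding times are taken from the card, while the bare-exponential decay model is the standard e-folding approximation, not Robock's full simulation.

```python
import math

def fraction_remaining(t_years: float, efold_years: float) -> float:
    """Fraction of an aerosol burden left after t years, given its e-folding time."""
    return math.exp(-t_years / efold_years)

SMOKE_EFOLD = 4.6     # stratospheric soot e-folding time, per Robock et al. (years)
VOLCANIC_EFOLD = 1.0  # typical volcanic sulfate aerosols (years)

for t in (1, 4.6, 10):
    smoke = fraction_remaining(t, SMOKE_EFOLD)
    volcanic = fraction_remaining(t, VOLCANIC_EFOLD)
    print(f"year {t:>4}: smoke {smoke:.1%} remaining vs volcanic sulfate {volcanic:.3%}")
```

Under this approximation roughly 11% of the soot burden is still aloft after a decade, while volcanic sulfate is effectively gone, consistent with the card's claim that the smoke "continues to affect surface climate for more than a decade."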
[Figure 2 caption: Change of global average surface air temperature, precipitation, and net downward shortwave radiation for the 5 Tg [Robock et al., 2007], 50 Tg, and 150 Tg cases. Also shown for comparison is the global average change in downward shortwave radiation for the 1991 Mt. Pinatubo volcanic eruption [Oman et al., 2005], the largest volcanic eruption of the 20th century. The global average precipitation in the control case is 3.0 mm/day, so the changes in years 2–4 for the 150 Tg case represent a 45% global average reduction in precipitation.] [21]
The effects of the smoke cloud on surface temperature are extremely large (Figure 2). Stratospheric
temperatures are also severely perturbed (Figure 3). A global average surface cooling of −7°C to −8°C
persists for years, and after a decade the cooling is still −4°C (Figure 2). Considering that the global
average cooling at the depth of the last ice age 18,000 years ago was about −5°C, this would be a
climate change unprecedented in speed and amplitude in the history of the human race. The
temperature changes are largest over land . Maps of the temperature changes for the Northern Hemisphere summers for the
year of smoke injection (year 0) and the next year (year 1) are shown in Figure 4. Cooling of more than −20°C occurs over large
areas of North America and of more than −30°C over much of Eurasia, including all agricultural regions .
There are also large temperature changes in the tropics and over Southern Hemisphere continents . Large
climatic effects would occur in regions far removed from the target areas or the countries involved in the conflict.
[Figure 3 caption: Change in global average temperature (°C) profile for the 150 Tg case from the surface to 0.02 mbar [80 km]. The semiannual periodicity at the top is due to enhanced heating during the summers in each hemisphere.] [Figure 4 caption: Surface air temperature changes for the 150 Tg case averaged for June, July, and August of the year of smoke injection and the next year. Effects are largest over land, but there is substantial cooling over oceans, too. The warming over Antarctica in year 0 is for a small area, is part of normal winter interannual variability, and is not significant. Also shown as red circles are two locations in Iowa and Ukraine, for which time series of temperature and precipitation are shown in Figures 5 and 7.]
[22] As examples of the actual temperature changes in important grain‐growing regions, we have plotted the time series of daily minimum air
temperature for grid points in Iowa, United States, at 42°N, 95°W, and in Ukraine at 50°N, 30°E (Figure 5). For both locations (shown in Figure
4), minimum temperatures rapidly plummet below freezing and stay there for more than a year. In
Ukraine, they stay below freezing for more than two years. Clearly, this would have agricultural implications. [Figure 5 caption: Time series of daily minimum temperature from the control, 50 Tg, and 150 Tg cases for two important agricultural regions, (top) Iowa, United States, at 42°N, 95°W, and (bottom) Ukraine at 50°N, 30°E. These two locations are shown on the map in Figure 4.] [23] As a result of the cooling of the Earth's surface, evapotranspiration is reduced and the global hydrological cycle is weakened. In addition, Northern Hemisphere summer monsoon circulations collapse, because the driving
continent‐ocean temperature gradient does not develop. The resulting global precipitation is reduced
by about 45% ( Figure 2). As an example, Figure 6 shows a map of precipitation change for the Northern Hemisphere summer one year
after the smoke injection. The largest precipitation reductions are in the Intertropical Convergence Zone and in areas affected by the North
American, Asian, and African summer monsoons. The small areas of increased precipitation are in the subtropics in response to a severely
weakened Hadley Cell. Figure 7 shows time series of monthly precipitation for the same Iowa location as shown in Figure 5, and it is clear that
these large precipitation reductions would also have agricultural implications.
Growth path dependence and elites block a quick transition --- default to behavioral
psychology.
Hubert BUCH-HANSEN 18. Associate Professor, Department of Business and Politics, Copenhagen Business School. “The
Prerequisites for a Degrowth Paradigm Shift: Insights from Critical Political Economy.” Ecological Economics 146: 157-63. Emory Libraries.
Still, the
degrowth project is nowhere near enjoying the degree and type of support it needs if its policies
are to be implemented through democratic processes . The number of political parties, labour unions,
business associations and international organisations that have so far embraced degrowth is modest to
say the least. Economic and political elites, including social democratic parties and most of the trade
union movement, are united in the belief that economic growth is necessary and desirable . This
consensus finds support in the prevailing type of economic theory and underpins the main contenders in
the neoliberal project, such as centre-left and nationalist projects . In spite of the world's multidimensional crisis, a
pro-growth discourse in other words continues to be hegemonic: it is widely considered a matter of
common sense that continued economic growth is required . It is also noteworthy that economic and political
elites, to a large extent, continue to support the neoliberal project, even in the face of its evident
shortcomings. Indeed, the 2008 financial crisis did not result in the weakening of transnational financial
capital that could have paved the way for a paradigm shift . Instead of coming to an end, neoliberal
capitalism has arguably entered a more authoritarian phase (Bruff, 2014). The main reason the power of
the pre-crisis coalition remains intact is that governments stepped in and saved the dominant fraction
by means of massive bailouts. It is a foregone conclusion that this fraction and the wider coalition behind
the neoliberal paradigm (transnational industrial capital, the middle classes and segments of organized
labour) will consider the degrowth paradigm unattractive and that such social forces will vehemently
oppose the implementation of degrowth policies (see also Rees, 2014: 97). While degrowth advocates envision a future in which market forces play a less prominent role than
they do today, degrowth is not an antimarket project. As such, it can attract support from certain types of market actors. In particular, it is worth noting that social enterprises, such as cooperatives (Restakis, 2010), play a major role in the degrowth vision. Such enterprises are defined by
being ‘organisations involved at least to some extent in the market, with a clear social, cultural and/or environmental purpose, rooted in and serving primarily the local community and ideally having a local and/or democratic ownership structure’ (Johanisova et al., 2013: 11). Social
enterprises currently exist at the margins of a system, in which the dominant type of business entity is profit-oriented, shareholder-owned corporations. The further dissemination of social enterprises, which is crucial to the transitions to degrowth societies, is – in many cases – blocked or
delayed as a result of the centrifugal forces of global competition (Wigger and Buch-Hansen, 2013). Overall, social enterprises thus (still) constitute a social force with modest power. Ougaard (2016: 467) notes that one of the major dividing lines in the contemporary transnational
capitalist class is between capitalists who have a material interest in the carbon-based economy and capitalists who have a material interest in decarbonisation. The latter group, for instance, includes manufacturers of equipment for the production of renewable energy (ibid.: 467). As
mentioned above, degrowth advocates have singled out renewable energy as one of the sectors that needs to grow in the future. As such, it seems likely that the owners of national and transnational companies operating in this sector would be more positively inclined towards the
degrowth project than would capitalists with a stake in the carbon-based economy. Still, the prospect of the “green sector” emerging as a driving force behind degrowth currently appears meagre. Being under the control of transnational capital (Harris, 2010), such companies generally
embrace the “green growth” discourse, which ‘is deeply embedded in neoliberal capitalism’ and indeed serves to adjust this form of capitalism ‘to crises arising from contradictions within itself’ (Wanner, 2015: 23). In addition to support from the social forces engendered by the
production process, a political project ‘also needs the political ability to mobilize majorities in parliamentary democracies, and a sufficient measure of at least passive consent’ (van Apeldoorn and Overbeek, 2012: 5–6) if it is to become hegemonic. As mentioned, degrowth enjoys little
support in parliaments, and certainly the pro-growth discourse is hegemonic among parties in government.5 With capital accumulation being the most important driving force in capitalist societies, political decision-makers are generally eager to create conditions conducive to production
and the accumulation of capital (Lindblom, 1977: 172). Capitalist states and international organisations are thus “programmed” to facilitate capital accumulation, and do as such constitute a strategically selective terrain that works to the disadvantage of the degrowth project. The main
advocates of the degrowth project are grassroots, small fractions of left-wing parties and labour unions as well as academics and other citizens who are concerned about social injustice and the environmentally unsustainable nature of societies in the rich parts of the world. The project is
thus ideationally driven in the sense that support for it is not so much rooted in the material circumstances or short-term self-interests of specific groups or classes as it is rooted in the conviction that degrowth is necessary if current and future generations across the globe are to be able to live well. Moreover, the advocates of degrowth do not possess instruments that enable them to force political decision-makers to listen to –
let alone comply with – their views. As such, they are in a weaker position than the labour union
movement was in its heyday, and they are in a far weaker position than the owners and managers of
large corporations are today (on the structural power of transnational corporations, see Gill and Law, 1989). 6. Consent. It is also safe
to say that degrowth enjoys no “passive consent” from the majority of the population . For the time being,
degrowth remains unknown to most people. Yet, if it were to become generally known, most people
would probably not find the vision of a smaller economic system appealing. This is not just a matter of degrowth being ‘a missile word that backfires’
because it triggers negative feelings in people when they first hear it (Drews and Antal, 2016). It is also a matter of the actual content of the degrowth project. Two issues in particular should be mentioned in this context. First, for many, the anti-capitalist sentiments embodied in the
degrowth project will inevitably be a difficult pill to swallow. Today, the vast majority of people find it almost impossible to conceive of a world without capitalism. There is a ‘widespread sense that not only is capitalism the only viable political and economic system, but also that it is now
impossible to even imagine a coherent alternative to it’ (Fisher, 2009: 2). As Jameson (2003) famously observed, it is, in a sense, easier to imagine the end of the world than it is to imagine the end of capitalism. However, not only is degrowth – like other anti-capitalist projects – up against
the challenge that most people consider capitalism the only system that can function; it is also up against the additional challenge that it speaks against economic growth in a world where the desirability of growth is considered common sense. Second, degrowth is incompatible with the
lifestyles to which many of us who live in rich countries have become accustomed. Economic growth in the Western world is, to no small extent, premised on the existence of consumer societies and an associated consumer culture most of us find it difficult to completely escape. In this
culture, social status, happiness, well-being and identity are linked to consumption (Jackson, 2009). Indeed, it is widely considered a natural right to lead an environmentally unsustainable lifestyle – a lifestyle that includes car ownership, air travel, spacious accommodations, fashionable
clothing, an omnivorous diet and all sorts of electronic gadgets. This Western norm of consumption has increasingly been exported to other parts of the world, the result being that never before have so many people taken part in consumption patterns that used to be reserved for elites
(Koch, 2012). If degrowth were to be institutionalised, many citizens in the rich countries would have to adapt to a materially lower standard of living. That is, while the basic needs of the global population can be met in a non-growing economy, not all wants and preferences can be fulfilled (Koch et al., 2017). Undoubtedly, many people in the rich countries would experience various limitations on their
consumption opportunities as a violent encroachment on their personal freedom. Indeed, whereas many
recognize that contemporary consumer societies are environmentally unsustainable, fewer are prepared
to actually change their own lifestyles to reverse/address this . At present, then, the degrowth project is in its
“deconstructive phase”, i.e., the phase in which its advocates are able to present a powerful critique of the prevailing neoliberal project and
point to alternative solutions to crisis. At this stage, not enough support has been mobilised behind the degrowth project for it to be elevated
to the phases of “construction” and “consolidation”. It is conceivable that at some point, enough people will become sufficiently discontent
with the existing economic system and push for something radically different. Reasons for doing so could be the failure of the system to satisfy
human needs and/or its inability to resolve the multidimensional crisis confronting humanity. Yet, various
material and ideational
path-dependencies currently stand in the way of such a development, particularly in countries with
large middle-classes. Even if it were to happen that the majority wanted a break with the current
system, it is far from given that a system based on the ideas of degrowth is what they would demand.
Even if it doesn’t cause extinction---collapse of civilization dooms future generations’
life prospects---it also answers the aliens argument, because they won’t come
Seth D. Baum and Anthony M. Barrett, 18. Global Catastrophic Risk Institute. 4/3/18, Global
Catastrophic Risk Institute Working Paper 18-2 “A Model for the Impacts of Nuclear War” p. 6
https://dx.doi.org/10.2139/ssrn.3155983 Accessed 10/27/19
An important consideration in this paper is the possibility of nuclear war resulting in a global
catastrophe. In general terms, a global catastrophe is understood to be a major harm to
global human civilization . Some studies have focused on catastrophes resulting in human extinction,
including early discussions of nuclear winter (Sagan 1983). Several studies posit minimum damage
thresholds such as the death of 10% of the human population (Cotton-Barratt et al. 2016), the death of
25% of the human population (Atkinson 1999), or 10^4 to 10^7 deaths or $10^9 to $10^12 in damages
(Bostrom and Ćirković 2008). Other studies define global catastrophe as an event that exceeds the
resilience of global human civilization, resulting in its collapse (Maher and Baum 2013; Baum and
Handoh 2014).
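The thresholds surveyed above span several orders of magnitude, which a quick comparison makes concrete. A minimal sketch: the threshold definitions come from the card, while the ~7.6 billion world-population figure for 2018 is my assumption, not from the paper.

```python
WORLD_POP_2018 = 7.6e9  # assumed 2018 world population (not from the paper)

# Minimum-damage thresholds from the literature surveyed in the card
thresholds = {
    "Cotton-Barratt et al. 2016 (10% of population)": 0.10 * WORLD_POP_2018,
    "Atkinson 1999 (25% of population)": 0.25 * WORLD_POP_2018,
    "Bostrom & Cirkovic 2008, low end (1e4 deaths)": 1e4,
    "Bostrom & Cirkovic 2008, high end (1e7 deaths)": 1e7,
}

# Print from smallest to largest threshold
for name, deaths in sorted(thresholds.items(), key=lambda kv: kv[1]):
    print(f"{name}: {deaths:.2e} deaths")
```

The percentage-based definitions sit two to five orders of magnitude above the absolute-count definitions, which is why the studies cited disagree about what counts as a "global catastrophe."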
A case can be made for focusing on catastrophes causing permanent harm to human civilization. If
members of future generations are counted equally , then permanent harm would be an especially
large impact , due to the potentially large number of people in future generations. Early studies of this
idea focused on human extinction, which is clearly a permanent harm (e.g., Sagan 1983; Parfit 1984).
More recent scholarship recognizes that comparable permanent harm can come from the permanent
collapse of human civilization or other long-term declines (e.g., Bostrom 2002; Maher and Baum
2013). The potential for nuclear war to cause either type of permanent harm is an important question,
which this paper will consider in some detail.
Long term trends are driving sustainable capitalist development – no reason this can’t
continue, their limits to growth arguments are empirically unsupported
Brook et al. 15—professor of environmental sustainability at the University of Tasmania (Barry, with John Asafu-Adjaye, University of
Queensland, Linus Blomqvist, Breakthrough Institute, Stewart Brand, Long Now Foundation, Ruth DeFries, Columbia University, Erle Ellis,
University of Maryland, Baltimore County, Christopher Foreman, University of Maryland School of Public Policy, David Keith, Harvard University
School of Engineering and Applied Sciences, Martin Lewis, Stanford University, Mark Lynas, Cornell University, Ted Nordhaus, Breakthrough
Institute, Roger Pielke, Jr., University of Colorado, Boulder, Rachel Pritzker, Pritzker Innovation Fund, Joyashree Roy, Jadavpur University, Mark
Sagoff, George Mason University, Michael Shellenberger, Breakthrough Institute, Robert Stone, Filmmaker, and Peter Teague, Breakthrough
Institute, “AN ECOMODERNIST MANIFESTO,” http://www.ecomodernism.org/manifesto/, dml)
Intensifying many human activities — particularly farming, energy extraction, forestry, and settlement — so that they use less land and interfere less with the natural world is the key to decoupling human development from environmental impacts.
These socioeconomic and technological processes are central to economic modernization and
environmental protection. Together they allow people to mitigate climate change , to spare nature ,
and to alleviate global poverty . Although we have to date written separately, our views are increasingly discussed as a whole. We call ourselves ecopragmatists and ecomodernists. We offer this statement to affirm and to clarify
Average life expectancy has increased from 30 to 70 years, resulting in a large and growing population able to live in many different environments. Humanity has made
extraordinary progress in reducing the incidence and impacts of infectious diseases , and it has become
more resilient to extreme weather and other natural disasters. Violence in all forms has declined
significantly and is probably at the lowest per capita level ever experienced by the human species, the horrors of the 20th century notwithstanding. Personal, economic, and political liberties have spread, as has democracy characterized by the rule of law and increased freedom. At the same time, human flourishing has taken a serious toll on natural, nonhuman environments and wildlife. Humans use about half of the
planet’s ice-free land, mostly for pasture, crops, and production forestry. Of the land once covered by forests, 20 percent has been converted to human use. Populations of many mammals, amphibians, and birds have declined by more than 50 percent in the past 40 years alone. More
than 100 species from those groups went extinct in the 20th century, and about 785 since 1500. As we write, only four northern white rhinos are confirmed to exist. Given that humans are completely dependent on the living biosphere, how is it possible that people are doing so much
damage to natural systems without doing more harm to themselves? The role that technology plays in reducing humanity's dependence on nature explains this paradox. Human technologies, from those that first enabled agriculture to replace hunting and gathering, to those that drive today's globalized economy, have made humans less reliant upon the many ecosystems that once provided their only sustenance, even as those same ecosystems have often been left deeply damaged. Despite frequent assertions starting in the 1970s of fundamental “limits to growth,” there is still remarkably little evidence that human population and economic expansion will outstrip the capacity to grow food or procure critical material resources in the foreseeable future. To the degree to which there are fixed physical boundaries to human consumption, they are so theoretical as to be functionally irrelevant.
With proper management, humans are at no risk of lacking sufficient agricultural land for food. Given plentiful land and unlimited energy, substitutes for other material inputs to human well-being can easily be found if those inputs become scarce. Climate change and other global ecological challenges could nonetheless have catastrophic impacts on societies and ecosystems. Even gradual, non-catastrophic outcomes associated with these threats are likely to result in significant human and economic costs as well as rising
ecological losses. Much of the world's population still suffers from more-immediate local environmental health risks. Indoor and outdoor air pollution continue to bring premature death and illness to millions annually. Water pollution and water-borne illness likewise continue to harm millions; these burdens are tied to both economic and demographic trends and usually result from a combination of the two. The growth rate of the human population has already peaked: today's rate of population growth is down to one percent per year, from its high point of 2.1 percent in the 1970s. Fertility rates in countries containing more than half of the global population are now below replacement level, and the human population will peak this century and then start to decline. Trends in population are inextricably linked to other demographic and economic dynamics. For the
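The quoted growth rates can be made concrete with the standard doubling-time rule T = ln(2)/r. A minimal sketch: the 2.1% and 1% annual rates are the figures from the card; the formula itself is the textbook exponential-growth approximation.

```python
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Doubling time for exponential growth at the given annual rate."""
    return math.log(2) / annual_growth_rate

peak_rate = 0.021    # 2.1% per year, the 1970s high point (from the card)
current_rate = 0.01  # 1% per year, today's rate (from the card)

print(f"At 2.1%/yr the population doubles every {doubling_time_years(peak_rate):.0f} years")
print(f"At 1.0%/yr it doubles every {doubling_time_years(current_rate):.0f} years")
```

Halving the growth rate roughly doubles the doubling time, from about 33 years at the 1970s peak to about 69 years today.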
first time in human history, over half the global population lives in cities. By 2050, 70 percent are expected to dwell in cities, a number that could rise to 80 percent or more by the century’s end. Cities are characterized by both dense populations and low fertility rates. Cities occupy just one to three percent of the Earth’s surface, yet they both drive and symbolize the decoupling of humanity from nature, performing far better than rural economies in providing efficiently for material needs while reducing environmental impacts. The growth of cities along with the economic and ecological benefits that come with them are inseparable from improvements in agricultural productivity. As agriculture has become more land and
labor efficient, rural populations have left the countryside for the cities. Roughly half the US population worked the land in 1880. Today, less than 2 percent does. As human lives have been liberated from hard agricultural labor, enormous human resources have been freed up for other endeavors. Thanks to technological improvements in agriculture, humans have over millennia reduced the amount of land required to feed the average person. The average per-capita use of land today is vastly lower than it was 5,000 years ago, despite the fact that modern people enjoy a far richer diet.
Many parts of the world are experiencing net reforestation. About 80 percent of New England is today forested, compared with about 50 percent at the end of the 19th century. Over the past 20 years, the amount of land dedicated to production forest worldwide declined by 50 million hectares. Human use of many other resources is similarly peaking. The amount of water needed for the average diet has declined by nearly 25 percent over the past half-century. Even on a finite planet, demand for many material goods may be saturating as societies grow wealthier. Meat consumption, for instance, has peaked in many wealthy nations and has shifted away from beef toward protein sources that are less land intensive. As demand for material goods is met, developed economies see higher levels of spending directed to materially less-intensive service and knowledge sectors,
all of which suggests that the total human impact on the environment, including land-use change, overexploitation, and pollution, can peak and decline this century. By understanding and promoting these emergent processes, humans have the opportunity to re-wild and re-green the Earth — even as developing countries achieve modern living standards, and
material poverty ends. 3. The processes of decoupling described above challenge the idea that early human societies lived more lightly on the land than do modern societies. Insofar as past societies had less impact upon the environment, it was
because those societies supported vastly smaller populations. In fact, early human populations with much less advanced technologies had far larger individual land footprints than societies have today. Consider that a population of no more than one or two million North Americans hunted most of the continent’s large mammals into extinction in
the late Pleistocene, while burning and clearing forests across the continent in the process. Extensive human transformations of the environment continued throughout the Holocene period: as much as three-quarters of all deforestation globally occurred before the Industrial Revolution.
The technologies that humankind’s ancestors used to meet their needs supported much lower living standards with much higher per-capita impacts on the environment. Absent a massive human die-off, any large-scale attempt at recoupling human societies to nature using these
technologies would result in an unmitigated ecological and human disaster. Ecosystems around the world are threatened today because people over-
rely on them: people who depend on firewood and charcoal for fuel cut down and degrade forests; people who eat bush meat for food hunt mammal species to local extirpation. Whether it’s a local indigenous community or a foreign corporation that benefits, it is the continued dependence of humans on natural environments that is the problem for the conservation of nature. Conversely, modern technologies, by using natural ecosystem flows and services more efficiently, offer a real chance of reducing the totality of human impacts on the biosphere. To embrace these technologies is to find paths to a good Anthropocene. The modernization
processes that have increasingly liberated humanity from nature are, of course, double-edged, since they have also degraded the natural environment. Fossil fuels, mechanization and manufacturing, synthetic fertilizers and pesticides, electrification and modern transportation and communication technologies, have made larger human populations and greater consumption possible in the first place.
Those populations have placed greater demands upon ecosystems in distant places –– the extraction of natural resources has been globalized. But those same
technologies have also made it possible for people to secure food, shelter, heat, light, and mobility
through means that are vastly more resource- and land-efficient than at any previous time in human
history. Decoupling human well-being from the destruction of nature requires the conscious acceleration of emergent decoupling processes. The objective should be to use resources more productively. For example, increasing agricultural yields can reduce
the conversion of forests and grasslands to farms. Humans should seek to liberate the environment from
the economy. Urbanization, agricultural intensification, nuclear power, aquaculture, and desalination are all processes with a demonstrated potential to reduce human demands on the environment, allowing more
room for non-human species. Suburbanization, low-yield farming, and many forms of renewable energy production, in contrast, generally require more land and resources and leave less room for nature. These patterns suggest that humans are as likely to spare nature because it is not
needed to meet their needs as they are to spare it for explicit aesthetic and spiritual reasons. The parts of the planet that people have not yet profoundly transformed have mostly been spared because they have not yet found an economic use for them — mountains, deserts, boreal
forests, and other “marginal” lands. Decoupling raises the possibility that societies might achieve peak human impact without intruding much further on relatively untouched areas. Nature unused is nature spared. 4. Plentiful access to modern energy is an essential prerequisite for human
development and for decoupling development from nature. The availability of inexpensive energy allows poor people around the world to stop using forests for fuel. It allows humans to grow more food on less land, thanks to energy-heavy inputs such as fertilizer and tractors. Energy
allows humans to recycle waste water and desalinate sea water in order to spare rivers and aquifers. It allows humans to cheaply recycle metal and plastic rather than to mine and refine these minerals. Looking forward, modern energy may allow the capture of carbon from the
atmosphere to reduce the accumulated carbon that drives global warming. However, for at least the past three centuries, rising energy production globally has been matched by rising atmospheric concentrations of carbon dioxide. Nations have also been slowly decarbonizing — that is,
reducing the carbon intensity of their economies — over that same time period. But they have not been doing so at a rate consistent with keeping cumulative carbon emissions low enough to reliably stay below the international target of less than 2 degrees Centigrade of global warming.
Significant climate mitigation, therefore, will require that humans rapidly accelerate existing processes of decarbonization. There remains much confusion, however, as to how this might be accomplished. In developing countries, rising energy consumption is tightly correlated with rising
incomes and improving living standards. Although the use of many other material resource inputs such as nitrogen, timber, and land are beginning to peak, the centrality of energy in human development and its many uses as a substitute for material and human resources suggest that
energy consumption will continue to rise through much if not all of the 21st century. For that reason, any conflict between climate mitigation and the continuing development process through which billions of people around the world are achieving modern living standards will continue to
be resolved resoundingly in favor of the latter. Climate change and other global ecological challenges are not the most important immediate concerns for the majority of the world's people. Nor should they be. A new coal-fired power station in Bangladesh may bring air pollution and
rising carbon dioxide emissions but will also save lives. For millions living without light and forced to burn dung to cook their food, electricity and modern fuels, no matter the source, offer a pathway to a better life, even as they also bring new environmental challenges. Meaningful
climate mitigation is fundamentally a technological challenge. By this we mean that even dramatic limits to per capita global consumption would be insufficient: behavioral change is not responsible for the vast majority of emissions cuts. The specific technological paths that people might take toward climate mitigation remain deeply contested.
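The claim above, that limiting per capita consumption cannot by itself deliver deep mitigation, can be illustrated with the standard Kaya identity (emissions = population × GDP per person × energy intensity × carbon intensity). This is a minimal sketch; the function name and every number below are round illustrative assumptions, not figures from the card.

```python
# Kaya identity: CO2 = population x GDP/person x energy/GDP x CO2/energy.
# All inputs are round, illustrative assumptions (not the manifesto's data).

def annual_co2_gt(pop_bn, gdp_per_cap_k, mj_per_dollar, g_co2_per_mj):
    """Annual CO2 emissions in gigatonnes from the four Kaya factors."""
    gdp = pop_bn * 1e9 * gdp_per_cap_k * 1e3    # world GDP, $/yr
    energy = gdp * mj_per_dollar                # primary energy, MJ/yr
    return energy * g_co2_per_mj / 1e15         # grams -> gigatonnes

baseline = annual_co2_gt(8.0, 12.0, 5.0, 70.0)      # ~33.6 Gt/yr
austerity = annual_co2_gt(8.0, 9.0, 5.0, 70.0)      # 25% consumption cut
clean_energy = annual_co2_gt(8.0, 12.0, 5.0, 10.0)  # 7x lower carbon intensity

# A drastic 25% cut in per-capita consumption trims emissions by only 25%;
# decarbonizing the energy supply cuts them by ~86%. The technology term dominates.
print(baseline, austerity, clean_energy)
```

Because emissions are a product of four factors, a percentage cut in any one factor removes only that percentage; multi-fold reductions in carbon intensity are the only lever large enough for deep mitigation.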
Theoretical scenarios for climate mitigation typically reflect their creators’ technological preferences and analytical assumptions while all too often failing to account for the cost, rate, and scale at which low-carbon energy technologies can be deployed. The history of energy transitions,
however, suggests that there have been consistent patterns associated with the ways that societies move toward cleaner sources of energy. Substituting higher-quality (i.e., less carbon-intensive, higher-density) fuels for lower-quality (i.e., more carbon-intensive, lower-density) ones is how virtually all societies have decarbonized, and points the way toward accelerated decarbonization in the future. Transitioning to a world powered by zero-carbon energy sources will require energy technologies that are power dense and capable of scaling to many tens of terawatts to power a growing human economy.
Most forms of renewable energy are, unfortunately, incapable of doing so. The scale of land use and other environmental impacts necessary to power the world on biofuels or many other renewables are such that we doubt they provide a sound pathway to a zero-carbon low-footprint
future. High-efficiency solar cells produced from earth-abundant materials are an exception and have the potential to provide many tens of terawatts on a few percent of the Earth’s surface. Present-day solar technologies will require substantial innovation to meet this standard and the
development of cheap energy storage technologies that are capable of dealing with highly variable energy generation at large scales. Nuclear fission today represents the only present-day zero-carbon technology with the demonstrated ability to meet most, if not all, of the energy
demands of a modern economy. However, a variety of social, economic, and institutional challenges make deployment of present-day nuclear technologies at scales necessary to achieve significant climate mitigation unlikely. A new generation of nuclear technologies that are safer and cheaper will likely be necessary for nuclear energy to meet its full potential. During that transition, other energy technologies can provide important social and environmental benefits. Hydroelectric dams, for example, may be a cheap source of low-carbon power for poor nations even though their land and water footprint is relatively large. Fossil fuels with carbon capture and storage can likewise provide substantial environmental benefits over current fossil or biomass energies. The ethical and pragmatic path toward a just and sustainable global energy economy requires that human beings transition as rapidly as possible to energy sources that are cheap, clean, dense, and abundant. Such a path will require sustained public support for the development and
deployment of clean energy technologies, both within nations and between them, through international collaboration and competition, and within a broader framework for global modernization and development. 5.
We write this document out of deep love and emotional connection to the natural world. By appreciating, exploring, seeking to understand, and cultivating nature, many people get outside themselves. They connect with their deep evolutionary history. Even when people never experience wild nature directly, they affirm its existence as important for their well-being. Humans will always materially depend on nature to some degree. Even if a fully synthetic world were possible, many of us might still choose to continue to live more coupled with nature than human sustenance and technologies require. What decoupling offers is the possibility that humanity’s material dependence upon nature might be less destructive. The case
for a more active, conscious, and accelerated decoupling to spare nature draws more on spiritual or aesthetic than on material or utilitarian arguments. Current and future generations could survive and prosper materially on a planet with much less biodiversity and wild nature. But this is
not a world we want nor, if humans embrace decoupling processes, need to accept. What we are here calling nature, or even wild nature, encompasses landscapes, seascapes, biomes and ecosystems that have, in more cases than not, been regularly altered by human influences over
centuries and millennia. Conservation science, and the concepts of biodiversity, complexity, and indigeneity are useful, but alone cannot determine which landscapes to preserve, or how. In most cases, there is no single baseline prior to human modification to which nature might be
returned. For example, efforts to restore landscapes to more closely resemble earlier states (“indigeneity”) may involve removing recently arrived species (“invasives”) and thus require a net reduction in local biodiversity. In other circumstances, communities may decide to sacrifice
indigeneity for novelty and biodiversity. Explicit efforts to preserve landscapes for their non-utilitarian value are inevitably anthropogenic choices. For this reason, all conservation efforts are fundamentally anthropogenic. The setting aside of wild nature is no less a human choice, in
service of human preferences, than bulldozing it. Humans will save wild places and landscapes by convincing our fellow citizens that these places, and the creatures that occupy them, are worth protecting. People may choose to have some services — like water purification and flood
protection — provided for by natural systems, such as forested watersheds, reefs, marshes, and wetlands, even if those natural systems are more expensive than simply building water treatment plants, seawalls, and levees. There will be no one-size-fits-all solution. Environments will be
shaped by different local, historical, and cultural preferences. While we believe that agricultural intensification for land-sparing is key to protecting wild nature, we recognize that many communities will continue to opt for land-sharing, seeking to conserve wildlife within agricultural
landscapes, for example, rather than allowing it to revert to wild nature in the form of grasslands, scrub, and forests. Where decoupling reduces pressure on landscapes and ecosystems to meet basic human needs, landowners, communities, and governments still must decide to what
aesthetic or economic purpose they wish to dedicate those lands. Accelerated decoupling alone will not be enough to ensure more wild nature. There must still be a conservation politics and a wilderness movement to demand more wild nature for aesthetic and spiritual reasons. Along with that, we affirm the need and human capacity for accelerated, active, and conscious decoupling. Technological progress is not inevitable. Decoupling environmental impacts from economic outputs is not simply a function of market-driven innovation and efficient response to scarcity. The long arc of human transformation of natural environments through technologies began well before there existed anything resembling a market or a price signal. Thanks to rising demand, scarcity, inspiration, and serendipity, humans have remade the world for millennia. Technological solutions to environmental problems must also be considered within a broader social, economic,
and political context. We think it is counterproductive for nations like Germany and Japan, and states like California, to shutter nuclear power plants, recarbonize their energy sectors, and recouple their economies to fossil fuels and biomass. However, such examples underscore clearly
that modernization is too often conflated, both by its defenders and critics, with capitalism, corporate power, and laissez-faire economic policies. We reject such reductions. What we refer to as modernization has liberated ever more people from lives of poverty and hard agricultural labor, women from chattel status, children and ethnic minorities from oppression, and societies from capricious and arbitrary governance. Greater resource productivity has allowed human societies to meet human needs with fewer resource inputs and less impact on the environment. More-productive economies are wealthier economies, capable of better meeting human needs while committing more of their economic surplus to non-economic amenities, including the conservation of nature. Modernization is far from complete, even in advanced developed economies. Material consumption has only just begun to peak in the wealthiest societies. Decoupling of human welfare from environmental impacts will require a sustained commitment to technological progress and the continuing evolution of social, economic, and political institutions alongside those changes.
Accelerated technological progress will require the active, assertive, and aggressive participation of private sector entrepreneurs, markets, civil society, and the state. While we reject the planning fallacy of the 1950s, we continue to embrace a strong public role in addressing environmental problems and accelerating technological innovation, including research to develop better technologies, subsidies and other measures to help bring them to market, and regulations to mitigate environmental hazards. And international collaboration on technological innovation and technology
transfer is essential in the areas of agriculture and energy.
Warming is irreversible and only growth solves---CCS and renewables are key.
Chichilnisky 16 – Professor of Economics and of Statistics at Columbia University and Visiting Professor at Stanford University, and was the
architect of the Kyoto Protocol carbon market (9-1-2016, being interviewed by Marcus Rolle, freelance journalist specializing in environmental
issues and global affairs, “Reversing Climate Change: Interview with Graciela Chichilnisky,”
http://www.globalpolicyjournal.com/blog/01/09/2016/reversing-climate-change-interview-graciela-chichilnisky)//cmr
GC: Green capitalism is a new economic system that values the natural resources on which human survival depends. It fosters a harmonious
relationship with our planet, its resources and the many species it harbors. It is a new type of market economics that addresses both equity and
efficiency. Using carbon negative technology™ it helps reduce carbon in the atmosphere while fostering economic development in rich and
developing nations, for example in the U.S., EU, China and India. How does this work? In a nutshell, Green Capitalism requires the creation of global limits or property rights, nation by nation, for the use of the atmosphere, the bodies of water and the planet’s biodiversity, and the creation of new markets to trade these rights, from which new economic values and a new concept of economic progress emerge, updating GDP, as is now generally agreed is needed. Green Capitalism is needed now to
help avert climate change and achieve the goals of the 2015 UN Paris Agreement, which are very ambitious and universally
supported but have no way to be realized within the Agreement itself. The Carbon Market and its CDM play critical roles in the foundation of
Green Capitalism, creating values to redefine GDP. These are needed to remain within the world’s “CO2 budget” and avoid catastrophic climate
change. As I see it, the building blocks for Green Capitalism are then as follows; (1) Global limits nation by nation in
the use of the planet’s atmosphere, its water bodies and biodiversity - these are global public goods. (2) New global markets to trade
these limits, based on equity and efficiency. These markets are relatives of the Carbon Market and the SO2 market. The new markets create new measures of economic values and update the concept of GDP. (3) Efficient use of Carbon
Negative Technologies to avert catastrophic climate change by providing a smooth transition to clean energy and ensuring economic
prosperity in rich and poor nations. These building blocks have immediate practical implications in reversing climate change and can assist the
ambitious aims of Paris COP21 become a reality. MR: What is the greatest advantage of the new
generation technologies that can
capture CO2 from the air? GC: These technologies build carbon negative power plants, such as Global Thermostat, that clean the
atmosphere of CO2 while producing electricity. Global Thermostat is a firm that is commercializing a technology that takes CO2 out of air and
uses mostly low cost residual heat rather than electricity to drive the capture process, making the entire process of capturing CO2 from the
atmosphere very inexpensive. There is enough residual heat in a coal power plant that it can be used to capture twice as much CO2 as the plant
emits, thus transforming the power plant into a “carbon sink.” For example, a
400 MW coal plant that emits 1 million tons of
CO2 per year can become a carbon sink absorbing a net amount of 1 million tons of CO2 instead. Carbon
capture from air can be done anywhere and at any time, and so inexpensively that the CO2 can be sold
for industrial or commercial uses such as plastics, food and beverages, greenhouses, bio-fertilizers,
building materials and even enhanced oil recovery , all examples of large global markets and profitable opportunities. Carbon
capture is powered mostly by low (85°C) residual heat that is inexpensive, and any source will do. In particular, renewable (solar) technology
can power the process of carbon capture. This can help advance solar technology and make it more cost-efficient. This
means more
energy, more jobs, and it also means economic growth in developing nations, all of this while cleaning
the CO2 in the atmosphere. Carbon negative technologies can literally transform the world economy. MR:
One final question. You distinguish between long-run and short-run strategies in the effort to reverse climate change. Would carbon negative
technologies be part of a short-run strategy? GC: Long-run strategies are quite different from strategies for the short-run. Often long-run
strategies do not work in the short run and different policies and economic incentives are needed. In the long run
the best climate change policy is to replace fossil fuel sources of energy that by themselves cause 45% of the global emissions, and to plant
trees to restore if possible the natural sources and sinks of CO2. But the
fossil fuel power plant infrastructure is about 87% of
the power plant infrastructure and about $45-55 trillion globally. This infrastructure cannot be replaced
quickly, certainly not in the short time period in which we need to take action to avert
catastrophic climate change. The issue is that CO2 once emitted remains hundreds of years in the atmosphere and we have emitted so much that unless we actually remove the CO2 that is already there, we cannot remain long within the carbon budget, which is the concentration of CO2 beyond which we fear catastrophic climate change. In the short run, therefore, we face significant time pressure. The IPCC indicates in its 2014 5th Assessment Report that we must actually remove the carbon that is already in the atmosphere and do so in massive quantities, this century (p. 191 of 5th Assessment Report). This is what I called a carbon negative approach, which works for the short run. Renewable energy is the long run solution. Renewable energy is too slow for a short run resolution since replacing a $45-55 trillion power plant infrastructure with renewable plants could take decades. We need action sooner than that. For the short run we need carbon negative technologies that capture more carbon
than what is emitted. Trees do that and they must be conserved to help preserve biodiversity. Biochar does that. But trees and other
natural sinks are too slow for what we need today. Therefore, negative carbon is needed now as part of a blueprint for
transformation. It must be part of the blueprint for Sustainable Development and its short term manifestation that I call Green
Capitalism, while in the long run renewable sources of energy suffice. Wind, biofuels, nuclear, geothermal, and hydroelectric energy are in limited supply and cannot replace fossil fuels. Global energy today is
roughly divided as follows: 87% is fossil, namely natural gas, coal, oil; 10% is nuclear, geothermal, and hydroelectric, and
less than 1% is solar power — photovoltaic and solar thermal. Nuclear fuel is scarce and nuclear technology is generally considered
dangerous as tragically experienced by the Fukushima Daiichi nuclear disaster in Japan, and it seems unrealistic to seek a solution in the nuclear
direction. Only solar energy can be a long term solution: Less than 1% of the solar energy we receive on earth can be transformed into 10 times the fossil fuel energy used in the world today. Yet we need a short-term strategy that accelerates long run renewable energy, or we will defeat long-term goals. In the short term, as the IPCC validates, we need carbon
negative technology, carbon removals. The short run is the next 20 or 30 years. There is no time in this period
of time to transform the entire fossil infrastructure — it costs $45-55 trillion (IEA) to replace and it is slow to build. We
need to directly reduce carbon in the atmosphere now. We cannot use traditional methods to remove CO2 from smokestacks (called often
Carbon Capture and Sequestration, CCS) because they are not carbon negative as is required. CCS works but does not suffice because it only
captures what power plants currently emit. Any level of emissions adds to the stable and high concentration we have today and CO2 remains in
the atmosphere for years. We need to remove the CO2 that is already in the atmosphere, namely air capture of CO2 also called carbon
removals. The solution is to combine air capture of CO2 with storage of CO2 into stable materials such as
biochar, cement, polymers, and carbon fibers that replace a number of other construction materials such as metals. The most recent
BMW automobile model uses only carbon fibers rather than metals. It is also possible to use CO2 to produce renewable gasoline, namely gasoline produced from air and water. CO2 can be separated from air and hydrogen separated from water, and their
combination is a well-known industrial process to produce gasoline. Is this therefore too expensive? There are new technologies using
algae that make synthetic fuel commercially feasible at competitive rates. Other policies would involve combining air
capture with solar thermal electricity using the residual solar thermal heat to drive the carbon capture process. This can make a solar plant
more productive and efficient so it can out-compete coal as a source of energy. In summary, the
blueprint offered here is a
private/public approach, based on new industrial technology and financial markets, self-funded and using profitable greenmarkets, with securities that utilize carbon credits as the “underlying” asset, based on the KP CDM, as well as new markets for biodiversity and water providing abundant clean energy to stave off impending and actual energy crises in developing nations, fostering mutually beneficial cooperation for industrial and developing nations. The blueprint proposed provides the two sides of the coin, equity and efficiency, and can assign a critical role for women as stewards for human survival and sustainable development. My vision is a carbon negative economy that represents green capitalism in resolving the Global Climate negotiations and the North–South Divide. Carbon negative power plants and capture of CO2 from air ensure a clean atmosphere together with innovation and more jobs and exports: the more you produce and create jobs, the cleaner the atmosphere becomes. In practice, Green Capitalism means economic growth that is harmonious with the Earth’s resources.
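The 400 MW coal-plant example earlier in this card reduces to simple arithmetic: if residual heat can drive capture of twice the plant's own emissions, the plant flips from a 1 Mt/year source into a 1 Mt/year sink. A quick check using only the interview's own figures:

```python
# Figures from the interview: a 400 MW coal plant emitting 1 Mt CO2/yr,
# with residual heat sufficient to drive capture of twice what it emits.
emitted_mt = 1.0                  # Mt CO2 emitted per year
captured_mt = 2.0 * emitted_mt    # interview's claim: capture = 2x emissions
net_mt = emitted_mt - captured_mt # negative => the plant is a net sink
print(net_mt)  # -> -1.0: a net sink of 1 Mt CO2 per year, as stated
```

The sign convention is the whole argument: any capture ratio above 1.0 makes the plant carbon negative rather than merely lower-emitting.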
stockpiles are down from the peak they reached in the Cold War, it
is a mistake to think that nuclear war is impossible. In
fact, it might not be improbable. The Cuban Missile crisis was very close to turning nuclear. If we assume one such event every 69 years and a one in three chance that it might go all the way to being nuclear war, the chance of such a catastrophe increases to about one in 200 per year. Worse still, the
Cuban Missile crisis was only the most well-known case. The history of Soviet-US nuclear deterrence is
full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems
implausible that the chances would be much lower than one in 1000 per year. A full-scale nuclear war
between major powers would kill hundreds of millions of people directly or through the near aftermath
– an unimaginable disaster. But that is not enough to make it an existential risk. Similarly the hazards of fallout are often exaggerated – potentially
deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout,
but are in practice hard and expensive to build. And they are physically just barely possible. The
real threat is nuclear winter – that is, soot
lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate
simulations show that it could preclude agriculture across much of the world for years. If this scenario occurs, billions would starve, leaving only scattered survivors that might be picked off by other threats
such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot the outcomes may be very different, and we currently
have no good ways of estimating this. 2. Bioengineered pandemic Natural pandemics have killed more people than wars. However, natural pandemics are unlikely
to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not
favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe. Unfortunately we can now
make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more
lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted. Right
now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make
diseases worse. Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful.
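Returning to the risk arithmetic in the nuclear-war passage above: its per-year figures can be reproduced, and compounded over a century, in a few lines. The sketch uses only the essay's own assumptions (one Cuban-Missile-scale crisis every 69 years, a one-in-three escalation chance, and a one-in-1000 per-year floor):

```python
# Risk figures from the essay: one serious crisis every 69 years,
# and a 1-in-3 chance such a crisis escalates to nuclear war.
per_year = (1 / 69) * (1 / 3)
print(round(1 / per_year))  # -> 207, i.e. roughly "one in 200 per year"

# Small annual probabilities compound: probability of at least one
# nuclear war within 100 years, for the essay's high and low estimates.
for p in (per_year, 1 / 1000):
    print(round(1 - (1 - p) ** 100, 2))  # -> 0.38, then 0.1
```

The compounding step is why the essay treats even a one-in-1000 annual chance as serious: over a century it accumulates to nearly a one-in-ten chance of catastrophe.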
But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten
the apocalypse using bioweapons beside their more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on. The
number of fatalities from bioweapon attacks and epidemic outbreaks looks like it has a power-law distribution – most attacks have few victims, but a few kill many.
Given current numbers the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than
terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful in the future nastier
pathogens become easier to design. 3. Superintelligence Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we
left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and
organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-
intelligence software. The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly
achieve disastrous ends. There is no reason to think that intelligence itself will make something behave nice and morally. In fact, it is possible to prove that certain
types of superintelligent systems would not obey moral rules even if they were true. Even more worrying is that in trying to explain things to an artificial intelligence
we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do
that we might not understand all the implications of what we wish for. Software-based intelligence may very quickly go from below human to frighteningly
powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more
computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance. It has been proposed that an
“intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in
potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly
set. The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a
whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current
societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would
actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one
most likely to either be massive or just a mirage. This is a surprisingly under-researched area. Even in the 50s and 60s when people were extremely confident that
superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more
likely is that they just saw it as a remote future problem. 4. Nanotechnology Nanotechnology is the control over matter with atomic or molecular precision. That is in
itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the
potential for abuses that are hard to defend against. The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything. That would
require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it, by default. Maybe some maniac would eventually
succeed, but there are plenty of more low-hanging fruits on the destructive technology tree. The most obvious risk is that atomically precise manufacturing looks
ideal for rapid, cheap manufacturing of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous
weapons (including facilities to make even more) arms races could become very fast – and hence unstable, since doing a first strike before the enemy gets too large an advantage might be tempting. Weapons can also be small, precision things: a "smart poison" that acts like a nerve gas but seeks out victims, or ubiquitous
"gnatbot" surveillance systems for keeping populations obedient seem entirely possible. Also, there might be ways of getting nuclear proliferation and climate
engineering into the hands of anybody who wants it. We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be
potentially disruptive just because it can give us whatever we wish for. 5. Unknown unknowns The most unsettling possibility is that there is something out there
that is very deadly, and we have no clue about it. The silence in the sky might be evidence for this. Is the absence of aliens due to life or intelligence being extremely rare, or to intelligent life tending to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that
didn’t help. Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We
do not know about any such threats (none of the others on this list work like this), but they might exist. Note that just because something is unknown it doesn’t
mean we cannot reason about it. In a remarkable paper Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per
year, based on the relative age of Earth. You
might wonder why climate change or meteor impacts have been left off this
list. Climate change, no matter how scary , is unlikely to make the entire planet uninhabitable (but it could
compound other threats if our defences to it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian
species survives for about a million years. Hence, the
background natural extinction rate is roughly one in a million per
year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our
continued existence. The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.
--- AT: Geoengineering
Governments aren’t looking to fund geoengineering – the public is against it and too
focused on renewables
Goodman 15 (Bryce Goodman, Clean Tech Entrepreneur, 02/17/2015 11:59 pm EST, “Geoengineering and the Fight Against Climate
Change: An Interview with David W. Keith”, published by Huffington Post in partnership with Generation Change NRG, <
http://www.huffingtonpost.com/bryce-goodman/geoengineering-and-the-fi_b_6680948.html>)
Unsurprisingly, there are a number of uncertainties and undesirable side-effects with this plan and
some oppose even studying geoengineering. To date, there has been no major publicly funded
research program in geoengineering. However, while the NAS report concluded that deploying
geoengineering now would be "irrational and irresponsible", it was broadly supportive of public research to improve
"understanding of the physical potential and technical feasibility of geoengineering approaches". That's one of the things about geoengineering that is so striking: people aren't just against geoengineering practice, they're also against geoengineering research. I cannot really think of another scientific field where this is the case. Do you have a sense of why this is? Trying to do this kind of deliberate intervention is a step that is different from what humanity has done before. Of
course you can argue that we have transformed the environment in all sorts of ways for agriculture, etc. But this is the first thing that is really planetary scale with a deliberate effect. Another part has to do with the very strong, politically motivated commitment by some people in the climate activist world to only talk about emissions mitigation. They want to
talk about renewable energy and nothing else. And while I think that large scale use of renewables is a very sensible thing to
do, I think that this attitude is a kind of dangerous monomania. Steve Rayner has said it is like the Southern Baptist attitude towards sexual
education--if you don't talk about it people won't do it. Geoengineering is relatively cheap--you've said a program costing $1 billion a year could
have substantial effects. So in theory a single country--or wealthy person for that matter--could decide to start deploying this tomorrow. Should
geoengineering only proceed with a formal treaty and the blessing of the UN? We
need international dialogue and
collaboration but I'm not sure we need a formal treaty. And if geoengineering does happen I think the dynamic will be very
simple. Some countries will do it--likely not the US--other countries will publicly say "we decry these actions without a UN treaty" but privately
be happy because someone else is taking the heat and they get the benefit. So then what's
holding you back on conducting
your proposed research? The government won't fund it. And I think it's important in a democracy that
these experiments go through a proper external risk assessment with substantial public funding.
US will never invest more money in geoengineering – it’s too risky and other countries
will never agree
Revkin 15 (Andrew C. Revkin, February 12, 2015, 5:13 PM, “Why Hacking the atmosphere won’t happen anytime soon”, published by the
New York Times, < http://dotearth.blogs.nytimes.com/2015/02/12/why-hacking-the-atmosphere-wont-happen-any-time-soon/>)
It’s worth spending some more time on the National Academy of Sciences reports on geoengineering prospects and
concerns — the concerns mainly being about adding sun-blocking particles to the atmosphere to counteract global warming driven by the
buildup of heat-trapping greenhouse gases. I loved what the climate scientist Raymond Pierrehumbert had to say in Slate yesterday. His views
are particularly notable not only because he was one of the report’s authors but also because of his unbridled language in describing the
process and his conclusions: The nearly two years’ worth of reading and animated discussions that went into this study have convinced me
more than ever that
the idea of “fixing” the climate by hacking the Earth’s reflection of sunlight is wildly, utterly,
howlingly barking mad. In fact, though the report is couched in language more nuanced than what I myself would prefer, there is really
nothing in it that is inconsistent with my earlier appraisals. Even the terminology used in the report signals a palpable change in the framing of
the discussion. The actions discussed for the most part are referred to as “climate intervention,” rather than “climate engineering” (or the
common but confusing term geoengineering). Engineering is something you do to a system you understand very well, where you can try out
new techniques thoroughly at a small scale before staking peoples’ lives on them. Hacking the climate is different—we have only one planet to
live on, and can’t afford any big mistakes. In case you missed it, I covered the release of the report and its main findings here. Clive Hamilton,
the Australian ethics professor who wrote “Earth Masters,” a manifesto against geoengineering, came away from the report with a deeper,
darker concern, saying that its call for more research essentially legitimizes the basic idea. Hamilton, who is listed as a reviewer of the Academy
report, put his thesis this way in an Op-Ed article today in The Times: The report is balanced in its assessment of the science. Yet by
bringing
geoengineering from the fringes of the climate debate into the mainstream, it legitimizes a dangerous
approach. Given that the Central Intelligence Agency was one of the main sponsors of the Academy report on atmospheric intervention and
a companion volume on carbon dioxide removal from air, there’s also plenty of room for conspiracy theories. But jump to what Eli Kintisch
wrote yesterday in Science, and you’ll see what a tiny arena this has been: Since
2006, when Nobel Prize–winning
geochemist Paul Crutzen called for climate engineering research, scientific societies, a number of high-
level panels and prominent lawmakers have endorsed federal funding for the field. But the United
States has never established a formal mechanism to support studies of either type of geoengineering,
and agencies have distributed just a few million dollars to researchers . The biggest funder of geoengineering
research has been a nonprofit fund supported by billionaire Bill Gates, which has disbursed some $8.5 million for research and meetings since
2007. Personally, I see value in further research on both sides of the intervention question — on ways to draw CO2 from the air and on sun-
blocking options, many of which can be tested at small scale. I
don’t see the research legitimizing climate interventions
and, in fact, the reports demonstrate that such studies help clarify why it’s a very bad idea.
Pierrehumbert’s prime concern (there are plenty more, all legitimate) is that any sun-blocking
intervention done at climate scale would have to continue unabated for millenniums, or until CO2
removal was in high gear — or risk climatic whiplash if veils of reflective materials dissipated. That
should be enough to deter any countries from going global with such efforts. But I’ve long seen plenty of
other reasons why this is almost assuredly a nonstarter in any case. The main one is diplomatic , not
technological. Who sets the thermostat? Here’s how I summarized that issue in a 2007 post: It’s been hard enough figuring out
how to slow an unintended human-induced warming. How hard will it be to agree on strategies for
an engineered cooling? If you see any scenario that would result in a lone actor hacking the sky, let me know. Otherwise, I stand by a
bet I proposed on Facebook today (and have made many times before): I’d bet $1,000 that no country initiates atmospheric
geo-engineering beyond the small-research scale in my lifetime. I know. I’m pushing 60, so that’s not necessarily a very long
span, but you get the idea. With aging in mind, I’ll conclude with a little “same as it ever was” reflection. I can’t believe it, but this is my 30th
year reporting on sun-blocking substances and human-driven climate change. One of the first conversations I had about adding sulfur to the
atmosphere to counteract global warming was, appropriately, with Edward Teller — yes, the physicist and hydrogen bomb pioneer who was
one of the inspirations for Stanley Kubrick’s “Dr. Strangelove.” I first interviewed him in 1985 for a Science Digest cover story I was writing on
another type of climate intervention by humans — the hypothesized “nuclear winter” that could follow a nuclear war. (Read the article in full
here.) Around that time, he had already noted rough estimates of how many jumbo jets full of sulfur compounds would be required each year
to counteract global warming. (In 1997 he wrote on that idea in The Wall Street Journal.) In the end, here’s how I described this question in
“Global Warming: Understanding the Forecast,” my first book on climate change: Some economists, scientists, and planners look at the
historical record and conclude that our ingenuity will get us through any coming climate change, and that the immediate cost of preventing —
or at least slowing — any man-made change is unacceptably high. Moreover, they say, there is always the possibility that the models are wrong,
and that the world is actually going to warm only moderately. More research is needed before costly changes are made. Much more research.
But given our current lack of understanding of the existing global system, most scientists say that the last
thing we should consider is adding another variable to the equation. More nasty surprises would surely
be in store. Same as it ever was, indeed. For a bit more explication, and a chuckle, here’s a great student-created primer on the
geoengineering basics that I wrote about in 2008 (as with everything, there’s room for improvement; find the fun misspelling):
the amount of carbon dioxide in the atmosphere. Or they could reflect more sunlight back into space. This is called
climate intervention or geoengineering, and it's very controversial in scientific and
environmental circles. Geoengineering poses all kinds of problems. Directly removing carbon dioxide from the air is a very slow process, and the
removed gas would need to be stored somewhere. Fertilizing the ocean with tiny bits of iron would encourage phytoplankton to grow
and consume CO2, but it could alter the ocean environment in unknown ways. Spraying sulfur dioxide particles — which spew
out of erupting volcanoes naturally — would reflect sunlight, but doing so would also thin the ozone layer, change rain patterns and potentially encourage
international conflict. Other ideas, such as painting rooftops and streets white, might help. But no strategy to increase reflection would solve problems such as
ocean acidification, which would continue so long as carbon dioxide levels are high. The financial cost of researching, building and
operating climate intervention projects would be formidable. Facilities to remove carbon dioxide
from the air, the report notes, might well cost more than simply replacing polluting power
plants with renewables. Cheaper techniques such as blasting sulfur dioxide into the sky would require perpetual effort absent serious cuts in carbon
dioxide output. The longer the world hesitates to put global emissions on a downward slope, the harder the cleanup task will be. Given the risks associated with climate intervention, it makes no sense to bet the climate on the ingenuity of future generations. But the planet should have a backup plan — or, at least, the beginnings of a backup plan. The National Academy report points out that policymakers and
scientists have hardly embarked on the basic research, analysis and planning. Further research
could encourage people to put more faith in the potential of geoengineering, even though
cutting carbon dioxide emissions remains the better option . That could help those who resist greenhouse emissions
reductions. But there's also a risk that those same people will win out anyway, leaving the world with little recourse absent a backup plan .
Congress has
not been wise in its handling of the domestic discretionary budget over the past several years,
shortsightedly declining to invest in important research and infrastructure. With the resources available, low-carbon
energy technologies should remain the funding priority. But the National Academy report should at least put climate intervention on the table as worthy of some
support.
--- AT: Space – No Alien Diseases
Alien disease wouldn’t harm anyone because it would be contained and inert, but the
discovery would be so meaningful we should risk it
Warmflash 15 [David Warmflash, astrobiologist postdoc at NASA, MD, science lead for the U.S. team of the Planetary Society's Phobos
Living Interplanetary Flight Experiment.] “Might astronauts bring back a deadly disease from Mars?” Genetic Literacy Project, 9 April 2015
(https://geneticliteracyproject.org/2015/04/08/might-astronauts-bring-back-a-deadly-disease-from-mars/) – MZhu
When we talk about isolation of Mars samples and returning astronauts, it’s really just a matter of precaution until we’re
sure what we’re dealing with. But from an evolutionary perspective , it’s extremely unlikely that
microorganisms native to Mars, or another world in our Solar System, will be harmful to human
health. There are different ways that a microorganism can cause disease. The most feared kind of microbial disease is infectious disease. By infection, we mean that the microorganisms actually thrive inside the human
body. This is the most unlikely scenario for an ET microbe. Microbes that infect humans are able to do
so, because they co-evolved with us, or in some cases with other animals who serve as hosts. In the case
of Ebola, the virus reached humans because it was already thriving inside bats and other “bush meat” in Africa. If an organism is going
to infect your lungs and cause pneumonia, it must already be living in an environment similar to that of
your lungs–warm and wet. That happens with the bacterium that causes tuberculosis, but it’s not going to happen with
anything living on Mars, a cold, dry environment even more so than Antarctica. Another way that microorganisms can
cause disease is by releasing a toxin into the environment and humans then get exposed to the toxin. Two examples on Earth,
both from the same genus of bacteria, are botulism and tetanus. Compared with infectious disease, releasing a chemical that happens to be
toxic to humans is quite a bit more realistic when considering possible organisms on another planet, such as Mars. When
dealing with
Martian materials, there will be a lot of containment procedures and other precautions , and the
material will be tested for toxicity . It’s a real concern, but with the toxin kind of disease there is no issue of
the organism spreading from person to person, causing an outbreak . You do not catch botulism or
tetanus from another person. You get botulism by eating food that has been contaminated and tetanus
from getting pricked with something that has been contaminated . But when we consider harm, we must think also
about harm to our environment. While there should be no similarity between the warm, wet human body and the cold, dry Martian
environment, there certainly can be environments on Earth where Mars life might thrive if carried here by a probe or human mission.
Environmental ecology and biospheres on Earth are notoriously complex, so we don’t want to release a native Martian microbe on Earth,
particularly in “Mars-like” regions of our planet. That’s something to keep in mind as we move forward, toward a Mars sample return mission,
but as noted earlier containment is going to be extremely tight. As
for disease, considering everything, the risk is fairly
low, and alongside that risk we also must keep sight of the benefits. What will knowledge of the
existence of a biosphere on another planet do to our perspective on biology ? It could work wonders in
that area, giving us unexpected insights and launching biology into a new era. At the same time, knowing
that the planet just next door to us also is a home to life, we could be sure that we inhabit a cosmos in
which life is extremely common. We could expect worlds with breathable atmospheres because of life forms using photosynthesis to make food, worlds orbiting nearby stars that we might eventually colonize
without the need for pressure domes. And it would increase the likelihood that eventually we’ll come across an extraterrestrial
civilization.
--- AT: Space Col
Space colonization is necessary to guarantee the survival of the human race
Britt 1 -- Senior Science Writer (Robert Roy, Space.com, “The Top 3 Reasons to Colonize Space”
http://www.space.com/missionlaunches/colonize_why_011008-4.html) // DCM
<It's no secret. Sooner
or later, Earth's bell will be rung. A giant asteroid or comet will slam into the planet,
as has happened many times before, and a deadly dark cloud will envelop the globe, killing much of
whatever might have survived the initial impact.
"We live on a small planet covered with the bones of extinct species, proving that such catastrophes do
occur routinely," says J. Richard Gott, III, a professor of astrophysics at Princeton and author of "Time Travel in
Einstein's Universe."
Gott cites the presumably hardy Tyrannosaurus rex, which lasted a mere 2.5 million years and was the victim of an asteroid attack, as an
example of what can happen if you don't plan ahead.
But spacerocks may not be the only threat. Epidemics, climatological or ecological catastrophes or even
man-made disasters could do our species in, Gott says. And so, he argues, we need a life insurance policy to
guarantee the survival of the human race.
"Spreading out into space gives us more chances," he says.
And the time is now: History instructs that technological hay should be made while the economic sun shines.
"There is a danger we will end the human space program at some point, leaving us stranded on the
Earth," Gott warns. "History shows that expensive technological projects are often abandoned after awhile. For
example, the Ancient Egyptians quit building pyramids. So we should be colonizing space now while we
have the chance.">
--- AT: Particle Accelerators
Cosmic rays thump.
Saplakoglu 18 — Yasemin Saplakoglu (Staff Writer, biomedical engineering bachelors from the University of Connecticut and a science
communication graduate certificate from the University of California, Santa Cruz), 10-5-2018, “No, Particle Accelerators Will Not Destroy the
Planet, But Humans Might,” Live Science, https://www.livescience.com/63759-future-threats-to-humanity.html
"The stakes are very high this century," said British cosmologist Martin Rees . "It's the first century when human
beings … can determine the planet's future." [10 Technologies That Will Transform Your Life] For the past couple of days, news
outlets have been reporting that Rees' new book "On the Future: Prospects for Humanity " (Princeton University Press,
2018) makes a rather spectacular claim: If things go wrong, particle accelerators that slam subatomic particles
together at immense speeds — like the Large Hadron Collider near Geneva, Switzerland — could turn
Earth into a dense sphere or black hole. In fact, Rees told Live Science in a recent interview, his book claims the
opposite: The probability of this happening is very, very low . The idea of the LHC forming mini-black
holes has been circulating for a while and is not something to worry about, he said. "I think people quite rightly
thought about this question before they did the experiments, but they were reassured," he said. The reassurance mainly comes from the fact
that nature
already performs such experiments — to an extreme. Cosmic rays , or particles with much
higher energies than those created in particle accelerators , frequently collide in the galaxy, and haven't
yet done anything disastrous like rip space apart, Rees said. "It's not stupid to think about these things,
but on the other hand, they're not serious worries ," he said. But in contrast, "if you're doing something where you have no
guidance from nature, then you’ve got to be a bit careful." It's in these cases that technology can be a realistic threat for the future, he said.
When nature doesn't know the answer Gene editing, for example, can yield new organic products that don't exist in nature, Rees said.
Sometimes, if "you tinker with a virus, then of course you can't be quite sure what the consequences are," he said. "It may well be that you can
create a form of a virus which has not arisen through natural mutations." There's much conversation around gene drives, for example —
modifications that are being considered for mosquitoes to reduce disease transmission. Gene drives essentially tweak the genetic code to alter
the likelihood of inheriting certain traits, and can lead to "unpredictable environmental effects," he said. Technology is also making it easier for
one person's actions to have far-reaching consequences, he said. "Just a few people anywhere in the world can cause something which has
global consequences in a way they couldn’t [before]," Rees said. One example is a cyberattack. Technology also does incredible things,
especially in medicine and space travel. And as such, "things can go extremely well," Rees said. "But there are all these hazards along the way
because of misuse of technologies." The second major threat to the future is our collective influence on the
climate, environment and biodiversity, he said. So, it's important to have international conversations about how to combat the
pressures humanity has placed on the world, he added. And it's much easier to solve the world's problems, such as by
combating climate change, than by packing up our things and going to a new planet , he said. "It’s a dangerous
illusion to think that we can escape the world's problems by going to Mars," Rees said. In fact, robots — who will likely be better-adapted to
space travel than humans — will mostly be the ones exploring the cosmos. [Super-Intelligent Machines: 7 Robotic Futures] Rees doesn't
think robots are truly a threat for the future. "I don't worry as much as some people do about AI taking over," Rees said.
Humans evolved from earlier primates because of natural selection, and the traits that were favored were intelligence and aggression, he said.
Electronics "are not engaged in a struggle for survival as in Darwinian selection , so there's no reason why
they should be aggressive," he said. For that reason, they probably won't kill off the human race and expand into the universe. That
would be too "anthropomorphic" of them, he said. "They might just want to sit and think," he said.
Yet, humans can’t create that energy --- assumes labs and testing.
Worrall 18 — Eric Worrall (staff writer for Watts Up With That), 10-2-2018, “Forget Climate Change – Large Hadron Collider Set to
Destroy the World,” https://wattsupwiththat.com/2018/10/02/forget-climate-change-large-hadron-collider-set-to-destroy-the-world/
Renowned Cosmologist Professor Martin Rees thinks a particle accelerator experiment gone awry could
destroy the world – though there are good reasons to doubt the significance of this risk. Fun though it is to
contemplate these outlandish possibilities, there is a good reason to doubt whether any of these possibilities are a
significant risk. Every day the Earth is bombarded by untold billions of cosmic ray particles emitted long
ago by violent distant cosmic events such as the formation of black holes. Many of the particles which
strike the Earth are orders of magnitude more energetic than anything we are ever likely to produce .
Some particles like the infamous “Oh-my-god” particle which struck Earth in 1991 with an energy of 3×10^8
TeV, hitting us at 99.99999999999999999999951% of the speed of light defy explanation – we shall likely
never find a way to produce particle energies of that magnitude (for comparison the Large Hadron Collider, Earth’s
most powerful particle accelerator, produces particles at around the 4TeV range). The point is the Earth has already been
struck many times by particles of a very broad range of energies, including the range of energies used
by particle physicists. If anything bad was going to happen due to a collision between particles of a
specific energy, it should have already happened long ago when a cosmic ray of that energy struck the
Earth. On the other hand we have the Fermi Paradox – the mystery of the missing aliens. One possible explanation for why our universe
seems so empty of intelligent alien life is that (almost?) all technological civilisations make a common mistake – they reach a level of technology
which enables them to commit an act which results in their own destruction. One possible candidate for that act of self destruction is a high
energy particle physics experiment which goes horribly wrong. I haven’t read Professor Rees’ book, so for all I know he has an explanation for
the cosmic ray flaw in the “particle experiment will destroy the world” theory. But for now I’m not going to be losing any sleep over this alleged
risk.
such actions against us. What about an ASI inadvertently causing our extinction by turning us into paperclips, or
tiling the entire Earth's surface with solar panels? Such scenarios imply yet another emotion -- the feeling of valuing
or wanting something . As the science writer Michael Chorost adroitly notes, when humans resist an AI from
undertaking any form of global tiling, it "will have to be able to imagine counteractions and want to
carry them out ." Yet, "until an AI has feelings, it's going to be unable to want to do anything at all , let
alone act counter to humanity's interests and fight off human resistance ." Further, Chorost notes, "the minute an
A.I. wants anything, it will live in a universe with rewards and punishments -- including punishments from us for behaving badly. In order to
survive in a world dominated by humans, a nascent A.I. will have to develop a humanlike moral sense
that certain things are right and others are wrong. By the time it's in a position to imagine tiling the
Earth with solar panels, it'll know that it would be morally wrong to do so."[ 15] From here Chorost builds on an
argument made by Peter Singer in The Expanding Circle (and Steven Pinker in The Better Angels of Our Nature[ 16] that I also developed in The
Moral Arc[ 17] and Robert Wright explored in Nonzero[ 18]), and that is the propensity for natural intelligence to evolve moral emotions that
include reciprocity, cooperativeness, and even altruism. Natural intelligences such as ours also include the capacity to reason, and once you
are on Singer's metaphor of the "escalator of reason" it can carry you upward to genuine morality and concerns about harming others.
"Reasoning is inherently expansionist. It seeks universal application," Singer notes.[ 19] Chorost draws the implication: " AIs
will have to
step on the escalator of reason just like humans have, because they will need to bargain for goods in a
human-dominated economy and they will face human resistance to bad behavior ."[ 20] Finally, for an AI
to get around this problem it would need to evolve emotions on its own, but the only way for this to
happen in a world dominated by the natural intelligence called humans would be for us to allow it to
happen , which we wouldn't because there's time enough to see it coming . Bostrom's "treacherous
turn" will come with road signs ahead warning us that there's a sharp bend in the highway with enough time for us to grab the
wheel. Incremental progress is what we see in most technologies, including and especially AI, which will
continue to serve us in the manner we desire and need . Instead of Great Leap Forward or Giant Fall Backward, think
Small Steps Upward. As I proposed in The Moral Arc, instead of Utopia or dystopia, think protopia , a term coined by the
futurist Kevin Kelly, who described it in an Edge conversation this way: "I call myself a protopian, not a Utopian. I believe in progress in an
incremental way where every year it's better than the year before but not by very much -- just a micro amount."[ 21] Almost all progress in
science and technology, including computers and AI, is of a protopian nature. Rarely, if ever, do technologies lead to either Utopian or
dystopian societies. Pinker agrees that there
is plenty of time to plan for all conceivable contingencies and build
safeguards into our AI systems. "They would not need any ponderous 'rules of robotics' or some
newfangled moral philosophy to do this, just the same common sense that went into the design of
food processors, table saws, space heaters, and automobiles." Sure, an ASI would be many orders of
magnitude smarter than these machines, but Pinker reminds us of the AI hyperbole we've been fed for
decades: "The worry that an AI system would be so clever at attaining one of the goals programmed into it (like
commandeering energy) that it would run roughshod over the others (like human safety) assumes that AI will
descend upon us faster than we can design fail-safe precautions . The reality is that progress in AI is
hype-defyingly slow , and there will be plenty of time for feedback from incremental implementations,
with humans wielding the screwdriver at every stage." [ 22] Former Google CEO Eric Schmidt agrees, responding to the
fears expressed by Hawking and Musk this way: "Don't you think the humans would notice this, and start turning off
the computers ?" He also noted the irony in the fact that Musk has invested $1 billion into a company called OpenAI that is "promoting
precisely AI of the kind we are describing."[ 23] Google's own DeepMind has developed the concept of an AI off-switch,
playfully described as a "big red button" to be pushed in the event of an attempted AI takeover. "We have proposed
a framework to allow a human operator to repeatedly safely interrupt a reinforcement learning agent while making sure the agent will not
learn to prevent or induce these interruptions," write the authors Laurent Orseau from DeepMind and Stuart Armstrong from the Future of
Humanity Institute, in a paper titled "Safely Interruptible Agents." They even suggest a precautionary scheduled shutdown every night at 2 AM
for an hour so that both humans and AI are accustomed to the idea. "Safe interruptibility can be useful to take control of a
robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily
use it to achieve a task it did not learn to perform or would not normally receive rewards for this."[ 24] As well, it is good to keep in
mind that artificial intelligence is not the same as artificial consciousness . Thinking machines may not
be sentient machines. Finally, Andrew Ng of Baidu responded to Elon Musk's ASI concerns by noting (in a jab at the entrepreneur's
ambitions for colonizing the red planet) it would be "like worrying about overpopulation on Mars when we have not even set foot on the planet
yet."[ 25] Both Utopian and dystopian visions of AI are based on a projection of the future quite unlike anything history has given us. Yet, even
Ray Kurzweil's "law of accelerating returns," as remarkable as it has been, has nevertheless advanced at a pace that has allowed for considerable
ethical deliberation with appropriate checks and balances applied to various technologies along the way. With time, even
if an
unforeseen motive somehow began to emerge in an AI we would have the time to reprogram it before
it got out of control. That is also the judgment of Alan Winfield, an engineering professor and co-author of the Principles of Robotics, a
list of rules for regulating robots in the real world that goes far beyond Isaac Asimov's famous three laws of robotics (which were, in any case,
designed to fail as plot devices for science fictional narratives).[ 26] Winfield points out that all of these doomsday scenarios depend
on a long sequence of big ifs to unroll sequentially: "If we succeed in building human equivalent AI and if that AI acquires a
full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or
maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not
impossible, is improbable ."[ 27]
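Winfield's "long sequence of big ifs" is, at bottom, a multiplication of probabilities. A minimal sketch of that reasoning, using illustrative placeholder probabilities rather than any estimates from the card:

```python
# Even if each step in the doomsday chain were a coin flip, the joint
# probability of the whole chain shrinks multiplicatively. The 0.5 values
# are placeholders for illustration, not estimates from Winfield.
from math import prod

steps = {
    "human-equivalent AI is built":          0.5,
    "it fully understands its own design":   0.5,
    "it self-improves to superintelligence": 0.5,
    "it starts consuming resources":         0.5,
    "humans fail to pull the plug":          0.5,
}

joint = prod(steps.values())
print(f"Joint probability of the full chain: {joint:.5f}")  # 0.5^5 = 0.03125
```

The point is structural: a conclusion that requires every "if" to come true inherits the product of their probabilities, not the average.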
--- AT: Nanotech – Grey Goo
No risk of grey goo
Rachel Smith, 3/27/2018. Recognized by The Solomon R. Guggenheim Foundation as an 'Emerging International Talent', described by
News Limited as 'One of Australia's brightest thinkers on the perpetual challenge of urban planning' and shortlisted as a 2012
TED Global Fellow. “Will nanobots destroy the world?” The Naked Scientists.
https://www.thenakedscientists.com/articles/questions/will-nanobots-destroy-world.
Chris Smith put this somewhat apocalyptic question, from Ted on Facebook, to material scientist Rachel Smith.
Starting with, what is a nanobot?
Rachel - This, I think, comes from an idea which was originated by a guy called Eric Drexler who is one of the fathers of nanotechnology.
Nano is this millionth of a millimetre length scale and Eric Drexler came up with the idea of these tiny little robots that he hoped would be really
useful so they could essentially make things for us on that tiny scale. He thought well, what you need is the robots to be able to make more
robots, and then they can make more of the useful things.
But then he started to worry about maybe if the robots can make more of themselves, then they can make
more and more of themselves, and kind of eat up everything in the lab they’re in. Then having eaten the lab
they’re in sort of set off across the city consuming, and so there’s this idea of the grey goo where everything in the world
gets turned into nanobots.
Having said all that, I should probably reassure your listeners that I don’t think it’s very likely to happen . In
terms of stuff that scientists can make at the moment, we’re talking about things that don’t self-replicate,
they don’t re-make themselves. But they can self-assemble; they can build themselves in the first place. Those sorts
of processes: they‘re workable - we do them in my lab. But you have to provide very much exactly the right
ingredients and exactly the right conditions. By which I might mean the temperature or the pressure,
those kinds of things.
So with current technologies I don’t think we need to be scared at all.
However, it would be stupid to say oh, this is physically impossible, because we know about self-replicating entities, but what we’d have
to design deliberately is something that comes with its own powerpack, that is completely adaptable to
all sorts of different environments, to different chemical species being available, and that also carries
all the information it needs in itself. And yeah, we’re starting to design life, and life did evolve but it
took quite a long time for life then to start from some kind of puddle on the barren Earth and turn into
what we have now. And it’s not turned out to be grey goo so I think we’re probably safe .
--- AT: Environmental Toxicity
No global toxification impact --- the first warrant is environmental security.
Hough 14 [Rupert, Environmental Scientist with Expertise in Risk Modelling and Exposure Assessment and PhD from Nottingham
University, February, “Biodiversity and human health: evidence for causality?” Biodiversity and Conservation, Vol. 23 No. 2, pg. 272-3/AKG]
Large country-level
assessments (e.g. MEA 2005; Huynen et al. 2004; Sieswerda et al. 2001) must be interpreted with some
caution. Data measured at country-level are likely to mask regional and local-level effects. Apart from the fact that
there are limitations to regression analysis in providing any proof of causality, least squares regression models
assume linear relationships between reductions in biodiversity and human health and thus imply a linear relationship between loss of
biodiversity and the provision of relevant ecosystem goods and services. A number of authors, however, have suggested that
ecosystems can lose a proportion of their biodiversity without adverse consequences
to their functioning (e.g. Schwartz et al. 2000). Only when a threshold in the losses of biodiversity is reached does the provision of ecosystem goods and
services become compromised. These models also tend to assume a positive relationship between socio-economic development and loss of biodiversity. One
problem with this expectation is that the loss
in biodiversity in one country is not by definition the result of socio-economic
developments in that particular country, but could also be the result of socio-economic developments in other
parts of the world (Wackernagel and Rees 1996). Furthermore, the use of existing data means researchers can only make
use of available indicators. Unlike for human health and socio-economic development, there are no broadly accepted core-set
of indicators for biodiversity (Soberon et al. 2000). The lack of correlation between biodiversity indicators (Huynen et al. 2004) shows
that the selected indicators do not measure the same thing, which hinders interpretation of results. Finally,
there is likely to be some sort of latency period between ecosystem imbalance and any resulting health
consequences. To date, this has not been investigated using regression approaches. Finally, it is thought that provisioning services are more crucial for
human health and well-being than other ecosystem services (Raudsepp-Hearne et al. 2010). Trends in measures of human well-being are clearly correlated with food
provisioning services, and especially with meat consumption (Smil 2002). While ~60% of the ecosystem services assessed by the MEA
were found to be in decline, most of these were regulating and supporting services , whereas the majority of
expanding services were provisioning services such as crops, livestock and aquaculture (MEA 2005). Raudsepp-
Hearne et al. (2010) investigated the impacts on human well-being from decreases in non-food ecosystem services using national-scale data in order to reveal
human well-being trends at the global scale. At the global scale, forest cover, biodiversity, and fish stocks are all decreasing; while water crowding (a measure of
how many people shared the same flow unit of water placing a clear emphasis on the social demands of water rather than physical stress (Falkenmark and
Rockström 2004)), soil degradation, natural disasters, global temperatures, and carbon dioxide levels are all on the rise, and land is becoming increasingly subject
to salinization and desertification (Bennett and Balvanera 2007). However, across
countries, Raudsepp-Hearne et al. (2010) found no
correlation between measures of wellbeing and the available data for non-food ecosystem services , including
forest cover and percentage of land under protected-area status (proxies for many cultural and regulating services),
organic pollutants (a proxy for air and water quality), and water crowding index (a proxy for drinking water availability; Sieswerda et al.
2001; WRI 2009). This suggests there is no direct causal link between biodiversity decline and health; rather, the
relationship is a ‘knock-on’ effect. I.e. if biodiversity decline affects mankind’s ability to produce food, fuel and fibre, it will therefore impact on human health and
well-being. As discussed in the introduction, the fact that humans need food, water and air to live is an obvious one. All these basic provisions can be
produced in a diversity-poor environment. Therefore, to understand whether there is a potential causality relationship between
biodiversity in its own right and human health, we need to move beyond the basic provisioning services.
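Hough's objection to least-squares models can be illustrated numerically: if ecosystem services only degrade past a biodiversity-loss threshold, a linear fit misstates the relationship at both ends. The data below are synthetic, invented purely to illustrate the point:

```python
# A threshold (non-linear) biodiversity-services relationship fitted with
# ordinary least squares, showing how the linear model misreads it.

def ols(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Threshold response: services hold at 1.0 until 60% biodiversity loss,
# then collapse linearly to zero.
loss = [i / 20 for i in range(21)]  # 0.0 .. 1.0
services = [1.0 if x <= 0.6 else 1.0 - 2.5 * (x - 0.6) for x in loss]

slope, intercept = ols(loss, services)
print(f"Linear fit: services = {slope:.2f} * loss + {intercept:.2f}")
print(f"Fit at 0% loss:   {intercept:.2f} (true value: 1.00)")
print(f"Fit at 100% loss: {slope + intercept:.2f} (true value: 0.00)")
```

The fitted line predicts services above 1.0 where none were lost and substantial services remaining at total loss, which is exactly the interpretive hazard the card identifies in country-level regressions.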
1AR
1AR – Rainout
No rainout --- nuclear black carbon absorbs radiation from the sun, creating a
stratospheric lofting effect that circumvents rainout --- Their studies only assume
initial rainout.
Mills et al. 14 (Michael Mills - Taine G. McDougal Professor of Engineering, the Chair of the Department of Materials Science and
Engineering. Owen Toon - Professor of atmospheric and oceanic sciences and fellow at the Laboratory for Atmospheric and Space Physics at the
University of Colorado Boulder. Julia Lee-Taylor – CIRES Research Associate at the University of Colorado, Boulder, Researcher and project
scientist for The National Center for Atmospheric Research. Alan Robock – Climatologist and Distinguished Professor in the Department of
Environmental Sciences at Rutgers University, New Jersey. <MKIM> “Multidecadal global cooling and unprecedented ozone loss following a
regional nuclear conflict” 1/4/14. DOA: 7/21/19. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2013EF000205)
As in previous studies of this scenario [Robock et al., 2007b; Mills et al., 2008], the BC
aerosol absorbs SW radiation, heating
the ambient air, inducing a self-lofting that carries most of the BC well above the tropopause .
CESM1(WACCM) has 66 vertical layers and a model top of ∼145 km, compared to 23 layers up to ∼80 km for the GISS ModelE used by Robock
et al. [2007b] and 39 layers up to ∼80 km for SOCOL3 used by Stenke et al. [2013]. As Figure 1 shows, we
calculate significantly
higher lofting than Robock et al. [2007b, compare to their Figure 1b], penetrating significantly into the mesosphere ,
with peak mass mixing ratios reaching the stratopause (50–60 km) within 1 month and persisting throughout the first year.
This higher lofting, in conjunction with effects on the circulation we discuss later, produces significantly longer residence
times for the BC than those in previous studies. At the end of 10 years, our calculated visible-band optical depths from the BC
persist at 0.02–0.03, as shown in Figure 2. In contrast, Robock et al. [2007b] calculate optical depths near 0.01 only at high latitudes after 10
years, a level that our calculations do not reach for 15 years. 3.2. BC Burden, Rainout, and Lifetime During
the first 4 months, 1.2–1.6 Tg of the 5 Tg of BC
is lost in our 50 nm experiment ensemble, and 1.6 Tg in our 100 nm experiment, mostly due to rainout in the first
few weeks as the plume initially rises through the troposphere (Figure 3a). This is larger than the 1.0 Tg initially lost in
the study of Mills et al. [2008], which used a previous version of WACCM. This is likely due to the difference in our initial distribution of BC
compared to that previous study, which injected 5 Tg into a single column at a resolution four times as coarse as ours. The more
concentrated BC in the previous study likely produced faster heating and rise into the stratosphere, mitigating
rainout. Our calculated rainout contrasts with the lack of significant rainout calculated by the GISS ModelE [Robock et al., 2007b], which
assumes that BC is initially hydrophobic and becomes hydrophilic with a 24 h e-folding time scale. The mass burden reaching the stratosphere
and impacts on global climate and chemistry in our calculations would doubtless be greater had we made a similar assumption to the GISS
ModelE. Stenke et al. [2013] calculate an initial rainout of ∼2 Tg in their interactive 5 Tg simulations, which assumed BC radii of 50 and 100 nm
in two separate runs. After initial rainout, the mass e-folding time for our remaining BC is 8.7 years for the average
of our 50 nm experiment ensemble and 8.4 years for our 100 nm experiment, compared to the 6 years reported by Robock et al. [2007b], ∼6.5
years by Mills et al. [2008], 4–4.6 years reported by Stenke et al. [2013], and 1 year for stratospheric sulfate aerosol from typical volcanic
eruptions [Oman et al., 2006]. Due
to this longer lifetime, after about 4.8 years the global mass burden of BC we
calculate in our ensemble is larger than that calculated by the GISS ModelE, despite the initial 28% rainout loss.
After 10 years, we calculate that 1.1 Tg of BC remains in the atmosphere in our 50 nm experiment ensemble and 0.82 Tg in our 100 nm
experiment, compared to 0.54 Tg calculated by the GISS ModelE and 0.07–0.14 Tg calculated by SOCOL3. The
long lifetime that we
calculate results from both the very high initial lofting of BC to altitudes where removal from the
stratosphere is slow, and the subsequent slowing down of the stratospheric residual circulation. The Brewer-Dobson circulation is
driven by waves whose propagation is filtered by zonal winds, which are modulated by temperature gradients [Garcia and Randel, 2008]. As
explained by Mills et al. [2008], the BC both heats the stratosphere and cools the surface, reducing the strength of the stratospheric
overturning circulation. Figure 4 shows the vertical winds in the lower stratosphere, which bring new air up from the troposphere and drive the
poleward circulation, for the control and BC runs. The middle-atmosphere heating and surface cooling reduce the average velocity of tropical
updrafts by more than 50%. This effect persists more than twice as long as in Mills et al. [2008], which did not include any ocean cooling effects.
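The card's figures are internally consistent, which can be checked with a simple exponential-decay calculation. This is an analytic sketch assuming the ~28% initial rainout and 8.7-year e-folding time quoted above apply uniformly after the initial loss:

```python
# Consistency check on the Mills et al. numbers: ~28% of the 5 Tg of black
# carbon rains out initially, and the remainder decays with an e-folding
# time of 8.7 years (50 nm ensemble). Does that reproduce the ~1.1 Tg the
# authors report remaining after 10 years?
import math

initial_tg = 5.0
rainout_fraction = 0.28   # initial rainout loss cited in the card
e_folding_years = 8.7     # 50 nm ensemble average
years = 10

remaining = initial_tg * (1 - rainout_fraction) * math.exp(-years / e_folding_years)
print(f"BC remaining after {years} years: {remaining:.2f} Tg")  # ~1.1 Tg
```

The result (~1.14 Tg) matches the card's 1.1 Tg figure, reinforcing that the long stratospheric lifetime, not the initial rainout, drives the multidecadal impact.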
--- AT: Nanotech – Extinction
Fuel --- can’t self-replicate, but even if they can it doesn’t cause extinction
Shere 16 (Jeremy Shere, “Grey Goo Attack”, 4/2/2016, http://indianapublicmedia.org/amomentofscience/grey-goo-attack-2/)
Attack of the Killer Robots Nanotechnology scientists dream of some day creating robots the size of molecules, or even turning molecules into
machines that could roam the human body and perform all sorts of useful tasks. But some nanotechnology theorists and science fiction
aficionados imagine a more ominous possibility . What if one of these tiny robots were given the ability to self-
replicate? All it would take is a single malfunction and the robots would consume everything in the
galaxy as they multiply out of control until all that was left was a shapeless, robotic mass called “grey
goo.” Worst Case Scenario Now, before you go heading for the hills with a year’s supply of water and a survival guide, understand that the
death-by-robot scenario is just that—a scenario, and a pretty fanciful one to boot. First, we’re nowhere near the point of being
able to create a self-replicating nano-machine. But even if such machines do one day exist, they would
have a hard time taking over the universe for one simple reason: fuel . Even microscopic machines need
an energy source. Inorganic matter such as rocks and minerals wouldn’t do the trick because they just
don’t contain stuff that the machines could break down and use for power. But what if a mad scientist
created a robot that fed on organic materials such as sunlight and living things? Not to worry . Natural
life forms have had around four billion years of training to compete for resources ; the killer robots
probably wouldn’t stand much of a chance against such streamlined competitors. Plus, if the robots
were made from organic materials, they might be preyed on by bacteria or other predators.
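Shere's "fuel" objection amounts to the difference between unchecked exponential doubling and resource-limited growth. A toy comparison, with all numbers illustrative rather than drawn from the card:

```python
# Pure exponential doubling (the grey-goo fear) versus logistic growth
# capped by available feedstock (the "fuel" constraint). All parameters
# here are invented for illustration.

def exponential(n0, doublings):
    """Population after a number of unchecked doublings."""
    return n0 * 2 ** doublings

def logistic_step(n, r, carrying_capacity):
    """Discrete logistic update: growth slows as resources run out."""
    return n + r * n * (1 - n / carrying_capacity)

n_exp = exponential(1, 50)  # 50 unchecked doublings
n_log = 1.0
for _ in range(50):
    n_log = logistic_step(n_log, r=0.7, carrying_capacity=1e9)

print(f"Unchecked doubling after 50 generations: {n_exp:.2e}")  # ~1.1e15
print(f"Resource-limited growth, same span:      {n_log:.2e}")  # plateaus near 1e9
```

The doubling scenario explodes past any physical bound, while the resource-limited population saturates at its carrying capacity, which is the structural reason fuel scarcity defuses the runaway-replication story.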