
1NC

Off
1NC – CP
The United States should implement a single payer general healthcare system
that extends the Amish exemption to Social Security to single payer healthcare.
Universal single payer healthcare wrecks religious freedom, but Amish mutual
aid solves case
Frederic Fransen 06 [Frederic Fransen has a Ph.D. from the Committee on Social Thought at
the University of Chicago, and is a member of First Mennonite Church in Indianapolis, Indiana.]
“A ‘Two Kingdoms’ Approach to Health Care.” Mennonite Life, vol. 61, no. 1, 2006.
https://mla.bethelks.edu/ml-archive/2006Mar/fransen.php

The Amish and Health Care

So what do the Amish have to do with any of this? The second problem of third-party payer
systems is that they take responsibility for health and health care away from moral and ethical
communities, where Anabaptist principles would suggest it belongs, and place them into
morally "neutral" hands. The Amish have resisted this across the board, and were the United
States ever to move to a compulsory single payer health insurance system, I have no doubt the
Amish would refuse to participate, on theological grounds.

Although I am no expert on the Amish, I believe that they are exemplary in continuing to live out
the core principles of the Schleitheim Confession, including their application to health care
policy. In describing them as exemplary, I do not intend to advocate giving up electricity or modern transportation, but rather to
encourage us to consider ways in which we might bring our own practice as Mennonites into closer conformity with the principles of
Schleitheim. First, assuming that few readers know much about the details of Amish practices with regard to health care, let me offer a sketch of the principles of Amish health care policy as I understand them. The Amish exhibit a great deal of personal and communal responsibility over issues of health care for their community. Most
births occur at home or in Amish birthing centers, with many deliveries conducted by midwives.
They avoid buying health insurance, but rather pay out of pocket when they need to use outside
medical facilities or see doctors. With regard to various health care protocols, they generally accept the principles of
modern medicine, but are reluctant to accept radical measures to prolong life, preferring to let nature run its course and to die at
home in the care of their family. With regard to health care innovation, the Amish apply a kind of precautionary principle. That is,
rather than allowing a new technique or procedure to be used without question, they examine it to see whether it is in keeping with
their moral values. If they are persuaded that it is, then they will adopt it. Thus, although they were initially skeptical about the value
of many immunizations, they have been persuaded that it is important for public health to do so, and now generally receive
vaccinations. It is noteworthy that in persuading the Amish to be immunized, it was important to argue that it would be harmful
to others for them not to be vaccinated, and it was in keeping with their desire to be helpful to strangers, rather than out of concern
for their own safety, that compelled them to accept immunization programs. For the same reason, the Amish donate organs above
the rate of the general population, and frequently allow tissue samples to be taken for genetic studies, something for which their
close-knit communities make them especially well suited. Mentally ill members are integrated into the work of the community if
possible, and often cared for at home, although the Amish have also begun to establish their own facilities to care for the severely
mentally ill. For our purposes, what is most relevant is the Amish refusal to participate in insurance schemes. In their successful
appeal to be exempted from Social Security, a group of Amish bishops wrote the following:

We feel the Social Security Act and Old Age Survivors Insurance [OASI] is abridging and
infringing to our religious freedom. We believe in giving alms in the church according to Christ's
teaching.
It has been our Christian concern from the birth of our church group to supply those of our group who have a need, financial or
otherwise. . . . Our faith has always been sufficient to meet the needs as they come about, and we feel the present OASI is an
infringement on our responsibilities.
Two points are relevant. First, the Amish differentiate anonymous insurance from mutual aid within the community. Thus, there is
no sense in which they expect individuals simply to look out for themselves. Second,
they view forced participation in
national insurance plans as a threat to their religious beliefs, by undermining their
responsibilities within their community. In addition, there is evidence that Amish insistence on
out-of-pocket, fee-for-service health care has kept their outlays lower than that of the
population at large — sometimes significantly lower, as in the case of maternity services — not
only because they economize on usage, but also because, to a certain extent, they have been
able to work around the existing, Byzantine price structure created by our Byzantine third party
payer system.

“Universal” means without exception; “general” is distinct


The Law Dictionary n.d. “Universal Definition and Legal Meaning.”
https://thelawdictionary.org/universal/#:~:text=UNIVERSAL%20Definition%20%26%20Legal%20Meaning&text=Having%20relation%20to%20the%20whole,6S%20Iowa%2C%20619%2C%2028%20N.%20W.

UNIVERSAL Definition & Legal Meaning

Definition & Citations:

Having relation to the whole or an entirety; pertaining to all without exception; a term more
extensive than “general,” which latter may admit of exceptions. See Blair v. Howell, 68 Iowa
619, 28 N. W. 199; Koen v. State, 35 Neb. 676, 53 N. W. 595, 17 L. R. A. 821.

Independently, religious freedom victories for individual groups spill over to broader protections
Lewis 19 [Andrew R. Lewis is an Associate Professor of Political Science at the
University of Cincinnati. He is the author of The Rights Turn in Conservative Christian Politics:
How Abortion Transformed the Culture Wars (Cambridge, 2017), which was awarded the
American Political Science Association’s Hubert Morken Award for the best book on Religion and
Politics. He has published more than a dozen peer-reviewed articles and has written for The
New York Times, The Washington Post, and FiveThirtyEight.] "The Transformation of the
Christian Right’s Moral Politics." Forum, vol. 17, no. 1, 1 Apr. 2019, pp. 25-44, doi:10.1515/for-2019-0001.

Importantly, the growing salience of religious freedom for the in-group, conservative Christians, seems to have
spillover effects regarding support for religious freedom for everyone, much like free speech.
National survey data asking about support for religious freedom is limited, but the General Social Survey has asked
about support for the political rights of atheists, which serves as a decent proxy. Figure 3 shows the
trends from 1972 to 2016. As in Figure 1 above, evangelicals consistently trail the non-evangelical population, but evangelicals’
tolerance for atheists has increased at a greater rate over the span, closing much of the gap. Like free
speech, evangelicals under 50 (dotted line) now nearly match non-evangelicals’ support for atheists. In addition, experimental
evidence suggests that when evangelicals are primed by statements of clergy members to consider
their own religious freedom rights, they respond with greater tolerance toward least-liked
groups (Djupe, Lewis, and Jelen 2016). [Figure 3: Evangelical and Non-Evangelical Support for Religious Liberty of Atheists over Time, Using 3-Item Tolerance Scale (0–3). General Social Survey, 1972–2016.] Though public opinion
research shows some positive spill-over effects, there remain challenges. Muslims, along with atheists, are consistently at the top of
the least-tolerated groups in American politics (Kalkan, Layman, and Green 2018), and conservative Christians have a particular bias
against Muslims (Karpowitz, Monson, and Patterson 2016). White evangelicals, in particular, even view their own travails as worse
than Muslims, rating that they face higher levels of discrimination (Green 2017b). Such an exclusive view of religious freedom has
only polarized their rights claims more. Thus, enhanced in-group rights consciousness, both in the legal and public opinion realms, seems to have promoted growing support for the political rights of others. Nevertheless, important limitations remain.

Those are modeled internationally


Mike Pompeo 21. Former Secretary of State; currently a distinguished fellow at the Hudson
Institute. “America’s Heritage of Religious Freedom Matters to the World”.
https://www.standingforfreedom.com/2021/04/americas-heritage-of-religious-freedom-matters-to-the-world/
And no person can ever be true to any faith that believes in the dignity of all human life if they do not act out of concern for those
whose dignity is assailed because of their faith. Sadly, our religious freedoms are being eroded here at home.

Our preservation of religious freedom has crucial implications for our foreign policy, because
whether we are committed to our founding principles at home affects our ability to engage with and lead
our allies and partners around the world — our ability to be that “shining city on a hill.”

As Secretary, I formed the Commission on Unalienable Rights to examine just this connection. I asked a diverse set of scholars who
agreed to serve on the Commission, led by Mary Ann Glendon, to furnish advice on human rights policy grounded in both our
nation’s founding principles as well as the 1948 Universal Declaration of Human Rights. The Commission’s final report reminded
Americans of what is best in our country’s traditions, while also inviting other peoples and nations to draw on their own heritages in
order to renew a shared dedication to human rights. Just months after the Commission’s report was published, I traveled to
Indonesia and met with Nahdlatul Ulama, the largest independent Muslim organization in the world, to discuss with them the
common ground of the report’s findings.

Unfortunately, today there are numerous authoritarian governments that violate their people’s rights to
religious liberty. In China, under the rule of the Chinese Communist Party, we see a modern-day example of a government
that cares little for the inherent freedoms of its people and dismisses their unique dignity. Thin claims of religious extremism
have been invoked by the CCP to justify the oppression of religious and ethnic minorities among their own people. The most
egregious example is the ongoing, shocking, and horrifying
treatment of the Uighur population in Xinjiang that has
included, among other atrocities, slave labor, and forced sterilization.
During my tenure as Secretary of State, the State Department designated CCP activity in Xinjiang as genocide. It is a good thing that
the current administration has confirmed this important designation. What we see in Xinjiang today echoes the tyranny and
persecution the CCP inflicts on the province of Tibet and what we also have seen from them in Hong Kong.

Christians throughout China continue to suffer under the CCP’s rule as well. Congregations that are not sanctioned and
monitored by the CCP are outlawed and any scripture not approved by the party is deemed illegal. China’s example should
clarify for all that a society that lacks regard for religious liberty will soon see its political liberties
disappear.
To address the assault on religious freedom both in China and around the world, I convened the Ministerial to Advance Religious
Freedom for three straight years. These were the largest human rights events ever held at the State Department. The Ministerial
brought together leaders from every corner of the globe to discuss the issues threatening religious freedom
and to find solutions together. It proved that religious liberty matters to so many around the world and that it is an issue where
America can and must lead.
President Reagan once said, “If we lose freedom here, there is no place to escape. This is the last stand on earth.” I saw that as your
Secretary of State around the globe. I am confident that the American star will shine across the heavens so long as
we keep a proper understanding of our God-given rights at the center of our unending quest to secure freedom for our
own people and all of mankind.

Religious pluralism diffuses global bloc formation---repression causes World War III.
Khan ’16 [Amjad; March 22; Adjunct Professor of Law at the University of California, Los
Angeles, J.D. from Harvard University; Harvard Law School National Security Journal, “Religious
Freedom as a National Security Imperative: A New Paradigm,”
https://harvardnsj.org/2016/03/religious-freedom-as-a-national-security-imperative/]
I. Introduction

“Stated simply: There is not a single nation in the world that both respects religious freedom and poses a security threat to the United States.” [1]
In the 1660s, Congregationalists tortured and hanged four Quaker missionaries in Boston Common.[2] Yet a century later, Americans introduced a standard of religious liberty
that remains a model for statecraft up to the modern day. As Thomas Farr notes, “theology and politics” – not secularism alone – can claim responsibility for this revolution. In
Farr’s view, this explains why American-Muslims, despite being exposed to Wahhabi doctrines, have not been radicalized like European-Muslims have. [3] Strangely, America has
failed to take lessons from its own past when tackling national security issues. Consider that before September 11, 2001, foreign policy experts viewed the Taliban’s actions to
have no bearing on national security. “Yet the very same conditions of religious intolerance that were appalling to human rights advocates were appealing to al Qaeda.” [4] The
Economist has noted: “The strange thing is that when America has tried to tackle religious politics abroad – especially jihadist violence – it has drawn no lessons from its
domestic success. Why has a country so rooted in pluralism made so little of religious freedom?”[5]

The Obama administration has defined national security in terms of domestic intelligence, economy, education, energy, federal deficit reduction, healthcare, immigration,
infrastructure, natural disasters, organized crime, science and innovation, trade, and travel.[6] It has not, however, explicitly considered religious pluralism as a key component
of its national security strategy (NSS).  Even the most recent NSS makes a mere passing reference towards the importance of protecting the rights of religious minorities.
[7] Moreover, scholarship that approaches NSS from this lens does so sparingly, and more so from a geopolitical perspective.

Given the ambiguity surrounding the meaning of “national security,” and upon grounds addressed in this Article, national
security would be best served by including the establishment and maintenance of global religious
pluralism as a central component of its strategy. That religious expression should generally be factored into
national security is not a recent notion. In his inaugural address in 1881, President James Garfield addressed the preservation of
national security while addressing what he perceived to be a threat the Mormon Church posed to American life.[8]

Accordingly, this Article proffers a new national security paradigm (NNSP) whereby religious pluralism can be established and can
further national security interests by laying emphasis on the country’s common and statutory law.  First, this Article provides an
overview of the current understanding and weaknesses of national security as well as an introduction and justification for the
implementation of the proposed NNSP.  Second, this Article analyzes how the NNSP could be implemented in Pakistan as a case
study.  Specifically, the Article examines Article 20 of Pakistan’s Constitution, which by its very express meaning restricts religious
freedom in the name of national security.[9]  According to Pakistan’s Constitution, Pakistani national security is inextricably
intertwined with religious expression.  Therefore, U.S. NSS must also approach American-Pakistani diplomatic relations from this
lens.

II. Defining National Security

A. Current Paradigm

Why have policymakers failed to incorporate religious pluralism into U.S. NSS?  One study concludes, “U.S. government officials
are often reluctant to address the issue of religion, whether in response to a secular U.S. legal and
political tradition [or] because religion is perceived as too complicated or sensitive. Current U.S. government frameworks for
approaching religion are narrow, often approaching religions as problematic or monolithic forces, overemphasizing a terrorism-
focused analysis of Islam and sometimes marginalizing religion as a peripheral humanitarian or cultural issue.”[10] Indeed, officials
such as Henry Kissinger[11] and Madeleine Albright[12] have not been reticent about this position either.

To date, no agreed upon definition of national security exists.[13] The National Security Act of 1947 does not provide a definition,
despite referring to “national security” more than 100 times.[14] Nor does the PATRIOT Act.[15] This Article, therefore, adopts the
following definition of national security as a starting point, put forth by Judge James E. Baker, former Chief Judge to the United
States Court of Appeals for the Armed Forces: “[National security concerns itself with events that] (1) threaten drastically and over a
relatively brief span of time to degrade the quality of life for the inhabitants of a state, or (2) threaten significantly to narrow the
range of policy choices available to the government of a state.”[16]
B. A New National Security Paradigm

Risks of Excluding Religious Pluralism from NSS

Policy makers cannot assume that national security objectives can be achieved through
democratization, while leaving religious pluralism unaddressed. Thomas Farr notes that following 2001,
when a democratic constitution and government were introduced into Afghanistan, the persecution of Afghani women and minority
Shiite Muslims significantly declined.[17] Religious pluralism, however, remains elusive. “The Afghan government no longer tortures
people on the basis of religion, but it continues to bring charges against apostates and blasphemers, including officials and
journalists seeking to debate the teachings of Islam.”[18]

Additionally, in 1999, the House International Relations Committee was briefed regarding regimes or non-state actors considered
gross violators of religious freedom. With the exception of Burma, those very regimes or non-state actors would eventually threaten
United States national security; namely, the Afghani Taliban, China, Iran, Iraq, North Korea, Saudi Arabia, Serbia, and Sudan.
[19] William Inboden, former Senior Director for Strategic Planning on the National Security Council, notes that the relationship
between religious pluralism and national security extends back much further:

“Including World War II, every major war the United States has fought over the past 70
years has been against an enemy that also severely violated religious freedom … from the
Nazi Reich cult, to atheistic communism, to Serbian Orthodox nationalism, to Arab
Baathism, to Islamist theocracy, to militant jihadism as practiced by Hezbollah or al Qaeda. They
ranged from
superpowers, to fragile states, to global ideological movements, to transnational
terrorist organizations. Yet one of the very few characteristics that all shared was an abiding hostility to
religious freedom.”[20]

A recent study of 143 countries concludes, “to the extent that governments and societies restrict religious freedoms, physical persecution and conflict increase.”[21] It is alarming, then, that religious freedom is severely
restricted for 70% of the world’s population – and Islam is the religion of the majority in most of these countries.[22]

Religious intolerance also implies a sacred right to iconoclasm and becomes central to the identity of rogue
elements. The TTP, for example, has explicitly called for, “replac[ing] the English system of democracy with Islamic Shariah”
because “the Pakistani system has nothing to do with Islam.”[23]

Merits of Including Religious Pluralism into the NSS

As weaker factions of society enjoy fundamental civil liberties, the state’s legitimacy is reinforced.
[24] In this scenario, a culture of human rights could develop and institutionalize domestically.[25] As
international relations scholar Peter Katzenstein notes, “security interests are defined by actors who respond to cultural
factors.”[26] Religious
pluralism would help neutralize monolith doctrines, thereby preventing the
formation of related power blocs.
The NNSP is also in harmony with mainstream political theory and American statecraft philosophy.  In his 1939 State of the Union
Address, President Franklin Delano Roosevelt acknowledged the proposed NNSP by noting the global threat to three “indispensable”
American institutions:

“The first is religion. It is the source of the other two—democracy and international good
faith … In a modern civilization, all three—religion, democracy and international good faith—complement and support
each other … Where freedom of religion has been attacked, the attack has come from sources opposed to democracy …
And where religion and democracy have vanished, good faith and reason in international affairs
have given way to strident ambition and brute force … The defense of religion, of democracy, and of
good faith among nations is all the same fight. To save one we must now make up our minds to save all.”[27]

As it fosters tolerant societies, the NNSP pursues the same goals as democratic peace theory espoused by Kant,[28] Paine,[29] and
Tocqueville.[30] Because it would allow religious minorities to participate in public life and hold their representatives accountable,
the NNSP fosters the same environment of accountability that Nobel Laureate Amartya Sen considers central to a functioning
democracy.[31] Operating in the same manner as what University of Chicago Law School’s Professor Aziz Huq terms “social action,”
the NNSP can engender “ideological competition,” whereby a
marketplace of ideas “raises terrorism’s
propagandizing and recruitment costs.”[32] In the alternative scenario, population segregation
inhibits idea exchange and increases alienation; and a subsequent diaspora can spread this infection.[33] One
study notes that “80% of new recruits to the global Salafi jihad emerge from the diaspora” originally from Muslim majority nations.
[34] By
promoting ideological competition, religious pluralism can neutralize those authoritarian
regimes regulating the level of religious activity.[35] With terrorists taking a grassroots approach to recruitment,
[36] the NNSP offers an effective counterbalance. Indeed, the 2010 National Security Strategy noted, “[the] best defenses against
[the] threat [of terrorism within the United States] are well informed and equipped families, local communities, and
institutions.”[37]
1NC – DA
Single payer collapses rural health care---they’re rebounding now because of
telehealth, but single payer trades off.
Lin 18. (John Lin, Researcher, Baylor College of Medicine. Single-payer health insurance would
harm rural patients. January 17, 2018. https://www.tribtalk.org/2018/01/17/single-payer-health-insurance-would-harm-rural-patients/)
As Republicans repeal the individual health insurance mandate with their tax reforms, many of Obamacare’s defenders have focused on other
legislation to overhaul the healthcare industry. Six House representatives from Texas are cosponsoring a bill to implement single-payer in the United
States, banning private insurance and mandating universal coverage under Medicare. Though they are right to support further public involvement in
healthcare, Medicare
For All would ruin the lives of the medically underserved by reducing the
healthcare workforce, exacerbating hospital closures, and stifling technological innovation . While
many liberals promise instant reductions in medical expenses through single-payer plans, universal public healthcare systems have already led to
significant problems in other countries. The Fraser Institute notes that Canada’s single-payer system has brought them fewer doctors, less high-tech
equipment, older hospitals, longer wait times and less access to drugs than Americans have. The root problem is that single-payer decreases payments to healthcare providers. Many staunch Medicare For All supporters have ignored the
reason for the successful cost reduction of single-payer: the number of doctors. Although countries like
Sweden have effective healthcare systems, this success is due to their high densities of doctors (4.43 per one thousand people). The latest data from
the Organization for Economic Co-operation and Development shows that the United States only has 2.57 physicians per one thousand people.
Single-payer systems discourage potential physicians and nurses from joining the labor force by
decreasing payments. An analysis by CNNMoney in 2014 concluded that Medicare typically pays only 80 percent of what private insurers
do for the same procedures, probably because Medicare sets standard reimbursement rates. The prospect of a lower salary is likely part of the reason that a September 2017 survey by the Association of American Physicians and Surgeons found that 90 percent of
medical professionals oppose single payer. Without the approval of healthcare providers,
doctors may deny care to publicly insured patients or leave medicine entirely. With fewer
doctors for more patients, rural Americans might be forced to wait for hours to seek care miles
away at small, undermanned clinics, a major problem especially in Texas, where a 2015 report by Merritt Hawkins found that 62
percent of counties lack general surgeons. Politicians must reconsider their support for legislation that would aggravate issues in rural communities, which already bear the brunt of downsizing operations. Just last year, Becker’s Hospital
Review reported that the majority of Texas hospitals shut down were in small towns, cutting off care for thousands of Texans. Because doctors would
treat more patients with less compensation, rural clinics in impoverished areas would increasingly be forced to
close in the face of ballooning expenses and shrinking revenues. Policymakers cannot pass single-payer without
destroying millions of lives. Even worse, the lack of innovation resulting from single-payer would stifle progress. A 2016 Doctors Survey conducted by Accenture showed that the U.S. ranked first among eight developed nations in several forms of technology use including electronic prescribing and patient notetaking. In contrast, Canada ranked last in electronic patient notetaking, consultation notifications and medical recordkeeping. Though Accenture gave no reason for Canada’s failure, reduced funds from its single-payer model have undoubtedly restricted technological
development. This system would devastate American small towns because telemedicine has
recruited doctors and improved remote medical access for patients, as noted in a 2015 policy brief by the
National Advisory Committee on Rural Health and Human Services. Without large-scale access to technology for communication, notations and prescriptions, smaller hospitals cannot expand services for their
patients. Implementing single-payer reverses digitalization and cripples rural healthcare providers by
removing funds.
Hospitals are the pulse of rural economies---key to farms.
Searcey 15. (Dionne Searcey is the West Africa bureau chief for The New York Times. Hospitals
Provide a Pulse in Struggling Rural Towns. April 29, 2015.
https://www.nytimes.com/2015/04/30/business/economy/hospitals-provide-a-pulse-in-
struggling-rural-towns.html)
BEATRICE, Neb. — “This real estate to be auctioned,” reads a banner stretched across the abandoned warehouse of a store-shelving manufacturer that
once employed generations living in and around this town of about 12,000. This isolated rural community has lost a lot of
the energy of its heyday, when shoppers roamed downtown sidewalks, freight trains rumbled past the Big Blue River, and streets clogged
at quitting time as factory workers spilled out of their plants. But it has yet to lose its economic pulse, thanks in large measure to the Beatrice Community Hospital and Health Center, housed in a sprawling new building of concrete and green glimmering windows on the outskirts of town. The hospital has become an economic anchor for the area. Once home to vibrant downtowns, along with thriving local manufacturers and merchants, small towns were traditionally strongholds of the American middle class. In recent decades, many barely managed to hold on as young people migrated to cities and those who stayed behind had trouble even finding work. Now, however, those towns that have been able to attract hospitals and other health care facilities have emerged as oases of economic stability across the nation’s heartland. Rural
hospitals face huge challenges; nearly 50 of them have closed in the last four years, according to the North Carolina Rural Health Research Program. But
the many successful
hospitals, beyond providing an array of jobs from the bottom to the top of the
economic ladder, also stimulate local spending and help attract new businesses that offer a stable of insured
patients. “It’s feast or famine,” said Brock Slabach, senior vice president for member services at the National Rural Health Association, a group based in
Leawood, Kan., that advocates for rural health interests. “What these
providers do is offer not only access to health care, which is hugely
important, but they contribute to the economic viability of these rural areas.” Residents of Beatrice (pronounced bee-
AT-ress) do not have to look far for towns that have shriveled since the railroads pulled out and Walmart lured away hometown shoppers. Ghost towns
ring Beatrice, with nothing left beyond an old grain elevator or graveyard. Health
care has long provided a lift for Beatrice,
which, beyond serving as a commercial hub for surrounding farms, is home to two centers for the
developmentally disabled. Recently, several new nursing homes and assisted-living facilities have opened, reflecting the town’s aging population and
the numerous farmers who move there to retire. Its median age of 42 is six years higher than that of the state. The nonprofit Beatrice hospital, with a payroll of nearly $28 million, is an essential economic engine. After the Beatrice State Developmental Center, which employs
700, the 25-bed hospital and its network of clinics and home care services is the second-largest employer in town, with 512 workers. Revenue has
grown from $45 million in 2004 to more than $100 million last year, according to Thomas Sommers, the hospital’s chief executive. Patient visits
between 2009 and the end of last year have nearly doubled. The starting salary for a registered nurse is just over $40,000, far more than workers could
earn at the local dollar store or Dairy Queen, and more than many of the jobs at the town’s biggest remaining manufacturer, Exmark, a maker of lawn
mowers. That salary goes a long way in a community where movie tickets cost $6 and a recently remodeled three-bedroom, two-bath home with a
two-car garage is on the market for $72,500. Government support provides a crucial foundation for the health care industry, here and elsewhere.
Beatrice, for example, is one of a number of beneficiaries of a federal program that certifies rural hospitals as so-called critical access hospitals, allowing
for better Medicare reimbursement rates as long as the hospitals meet certain conditions. That formula has also worked for a hospital in Batesville,
Ind., roughly an hour’s drive from both Cincinnati and Indianapolis, a region where the manufacturing base has been eroded by waves of layoffs
through the years. The top employers in the town of about 6,500 are a maker of hospital beds and a coffin company. With 550 employees, the
Margaret Mary Community Hospital ranks third. “It’s about gaining faith,” said Tim Putnam, Margaret Mary’s chief executive. “We have to focus on
what we do well and partner with others who can provide care that we can’t.” Instead of offering specialty surgery, the hospital has expanded its
primary care access and rheumatology program to treat the county’s aging population. Mr. Putnam is working with a catch-a-ride service to arrange
cars for elderly patients with no access to public transportation. And he has made a deal with a local bank to help new doctors with student loan debt
qualify for home loans, a plan he hopes will keep trained and talented staff in place for years. Even in the poorest communities,
small hospitals can thrive. In Centreville, Miss., about 130 miles northwest of New Orleans, more than one-third of residents live below
the poverty line. But in May the hospital in the town of 1,600 plans to open a new, $21 million facility, said Chad Netterville, chief executive of the Field
Memorial Community Hospital, a quasi-public institution run by the two counties it serves.

That collapses food security.


Doering 13. (Christopher Doering, Gannett Washington Bureau. As more move to the city,
does rural America still matter? January 13, 2013.
https://www.usatoday.com/story/news/nation/2013/01/12/rural-decline-congress/1827407/)
It is the same in much of the country today. As
towns shrink, there is less demand for everyday services that
create jobs. A school with smaller class sizes might have to lay off teachers or a business could
abandon plans to open up a new plant, further siphoning off jobs and depriving the area of
additional tax dollars needed for things such as repaving roads or improving emergency services. In Iowa, where agriculture is the top
industry, 74% of the state's population was rural in 1900 but by 1990 that figure slipped to 39%. The shift away from rural living has likely accelerated
since then, according to those who study populations in America. The change is even more noticeable by county. In Kenneth Johnson's analysis of
Census Bureau data he found the agricultural county of Osceola in northwestern Iowa had 6,452 people in 2010 compared with 8,371 three decades
earlier and 8,956 in 1910. In contrast, Dallas County, Iowa, which is on the fringe of the Des Moines metro area, has experienced significant gains in the
past couple of decades. In 2010, its population was 66,135 compared with 29,513 in 1980 and 23,628 people in 1910. John Cromartie, a geographer
with USDA's Economic Research Service, participated in a University of Montana study that included visits to high school reunions in 21 rural
communities across the country during 2008 and 2009. Interviews with people who stayed in the community and others who had moved away showed
that the decline of a town had a direct impact on the psyche of people and affected whether they would want to return to raise their family or open a
business. " 'When I was a kid there was a bowling alley here, two drugs stores where you could go get a soda, and it was just a bustling active place that
was well kept up. Now look at it, ' " Cromartie often heard from people he interviewed. " 'Half the stores are closed up. No one comes down here
anymore. It's not a place you would send your kids to go hang out.' " To be sure, some rural parts of the United States
continue to prosper and many counties near metro areas have actually seen their ranks grow. In North Dakota, a decades-long streak of
population erosion was reversed after a boom in coal, oil and natural gas attracted thousands of workers to the area. For Vilsack and others, the struggle is finding a way to make rural America a place enticing enough for people — especially those in
their 20s, 30s and 40s — to stay, work and raise their families. Craig Hill, president of the Iowa Farm Bureau, said farm and other commodity groups
need to advocate for the interests of agriculture, especially as the food supply comes under attack. Last year, for example, the meat industry was
criticized for using trimmings, dubbed by some as "pink slime," that have been added to beef since the early 1990s to reduce both costs and fat
content. No deaths or illnesses have ever been linked to the additive. "We went about our business assuming that consumers knew where food came
from, and knew production practices and knew of the safety and the value that they are receiving," Hill said. "We were unaware of how much the
consumer didn't know about conventional agriculture." In Washington, some lawmakers have an equally difficult task convincing others in Congress
that rural America still matters. In order to boost their presence, Sen. Tom Harkin, D-Iowa, said those representing rural America need
to form coalitions to underscore the impact these forgotten areas have on the nation's well-being. "That is a challenge in this country, where food is abundant and, for most people, affordable and taken for granted," Harkin said. Larry Sabato, a University of Virginia political science professor, said representatives and senators overseeing large rural areas need to press upon urban and suburban lawmakers that "their fates are tied to rural America." Sabato pointed to the recently averted
crisis facing the dairy industry as an example. A deal reached as part of the "fiscal cliff" talks in Washington earlier this
month prevented dairy subsidies from reverting to 1949 levels, a move that could have caused milk prices to double to about $7 a gallon across the
country. The lawmakers need "to make the connection with their colleagues, to make the argument with their colleagues: here's how what's happening
in my district affects yours," Sabato said, noting it would take significant time for the push to have a meaningful impact in Congress. Farmers and
lawmakers representing agriculture-intensive states or districts strongly disagree with Vilsack's analysis of a
declining rural America. While their political power in Washington may have eroded, farmers
and ranchers point to the food, fuel and jobs they produce, along with their impact on the country's
economy, as evidence of their ongoing importance.

Nuke war.
FDI 12, Future Directions International, a research institute providing strategic analysis of
Australia’s global interests; citing Lindsay Falvey, PhD in Agricultural Science and former
Professor at the University of Melbourne’s Institute of Land and Environment, “Food and Water
Insecurity: International Conflict Triggers & Potential Conflict Points,”
http://www.futuredirections.org.au/workshop-papers/537-international-conflict-triggers-and-potential-conflict-points-resulting-from-food-and-water-insecurity.html

There is a growing appreciation that the conflicts in the next century will most likely be fought
over a lack of resources. Yet, in a sense, this is not new. Researchers point to the French and Russian
revolutions as conflicts induced by a lack of food. More recently, Germany’s World War Two efforts are
said to have been inspired, at least in part, by its perceived need to gain access to more food. Yet the
general sense among those that attended FDI’s recent workshops, was that the scale of the problem in the future could be
significantly greater as a result of population pressures, changing weather, urbanisation, migration, loss of arable land and other farm inputs, and increased affluence in the developing world. In his book, Small Farmers Secure Food, Lindsay Falvey, a participant in FDI’s March 2012 workshop on the issue of food and conflict, clearly expresses the problem and why countries across the globe are starting to take note. He writes (p.36), “…if people are hungry, especially in cities, the state is not stable – riots, violence, breakdown of law and order and migration result.”¶ “Hunger feeds anarchy.”¶ This view is also shared by Julian Cribb, who in his book, The Coming Famine,
writes that if “large regions of the world run short of food, land or water in the decades that lie ahead, then wholesale, bloody wars are liable to follow.”¶ He continues: “An increasingly credible scenario for World War 3 is not so much a confrontation of super powers and their allies, as a festering, self-perpetuating chain of
resource conflicts.” He also says: “The wars of the 21st Century are less likely to be global conflicts with sharply defined sides and huge
armies, than a scrappy mass of failed states, rebellions, civil strife, insurgencies, terrorism and genocides, sparked by bloody competition over
dwindling resources.Ӧ As another workshop participant put it, people do not go to war to kill; they go to war over resources, either to protect or to
gain the resources for themselves.¶ Another observed that hunger results in passivity not conflict. Conflict is over resources, not because people are going hungry.¶ A study by the International Peace Research Institute indicates that where food
security is an issue, it is more likely to result in some form of conflict. Darfur, Rwanda, Eritrea
and the Balkans experienced such wars. Governments, especially in developed countries, are increasingly aware of this
phenomenon.¶ The UK Ministry of Defence, the CIA, the US Center for Strategic and International Studies and the
Oslo Peace Research Institute, all identify famine as a potential trigger for conflicts and possibly even
nuclear war.

Extinction
Lynas 22 [Mark Lynas, 3-10-2022, What the science says: Could humans survive a nuclear war
between NATO and Russia?, Alliance for Science,
https://allianceforscience.cornell.edu/blog/2022/03/what-the-science-says-could-humans-survive-a-nuclear-war-between-nato-and-russia/ HKR-MK]

‘Limited’ nuclear conflict – 100 warheads between India and Pakistan


Prior to the Ukraine war it seemed very unlikely that the superpowers would confront each other again, so many researchers turned to studying the
impacts of more limited nuclear conflicts.

One study published two years ago looked at the likely impacts of a nuclear exchange of about 100 Hiroshima-sized detonations (15 kt
yield each) on the most-populated urban areas of India and Pakistan. Each detonation was estimated to incinerate an area of 13 square km, with this
scenario generating about 5 Tg (teragrams) of soot as smoke from wildfires and burning buildings entered the atmosphere.

Direct human deaths in this “limited” nuclear war scenario are not quantified in the study, but would presumably number in the tens to hundreds of
millions. The
planetary impacts are also severe: as the soot reaches the stratosphere it circulates
globally, blocking incoming solar radiation and dropping the Earth’s surface temperature by 1.8C
in the first five years.

This would be a greater cooling than caused by any recent volcanic eruption, and more than any
climate perturbation for at least the last 1,000 years. Rainfall patterns are drastically altered,
and total precipitation declines by about 8 percent. (These results come from widely-used
climate models of the same types used to project long-term impacts of greenhouse gas
emissions.)

Food exports collapse as stocks are depleted within a single year, and by year four a total of 1.3 billion people face
a loss of about a fifth of their current food supply. The researchers conclude that “a regional conflict using <1
percent of the worldwide nuclear arsenal could have adverse consequences for global food
security unmatched in modern history.”

A 2014 study of the same scenario (of a 100-weapon nuclear exchange between India and Pakistan) found that the soot
penetrating the stratosphere would cause severe damage to the Earth’s ozone layer, increasing
UV penetration by 30-80 percent over the mid-latitudes. This would cause “widespread damage
to human health, agriculture, and terrestrial and aquatic ecosystems,” the researchers wrote. “The
combined cooling and enhanced UV would put significant pressures on global food supplies and
could trigger a global nuclear famine.”

Full-scale nuclear exchange

If global nuclear famine could result from just 100 nuclear detonations, what might be the result
of a fuller exchange of the several thousand warheads held in current inventories by the US and Russia?
One 2008 study looked at a Russia-US nuclear war scenario, where Russia would target 2,200 weapons on Western countries and the US would target
1,100 weapons each on China and Russia. In total, therefore, 4,400 warheads detonate, equivalent to roughly half the current inventories held each by
Russia and the US.

Nuclear weapons held by other states were not used in this scenario, which has a 440-Mt explosive yield, equivalent to about 150 times all the bombs detonated in World War II. This full-scale nuclear war was estimated to cause 770 million direct deaths and generate 180 Tg of soot from burning cities and forests. In the US, about half the
population would be within 5km of a ground zero, and a fifth of the country’s citizens would be killed outright.

A subsequent study, published in 2019, looked at a comparable but slightly lower 150 Tg
atmospheric soot injection following an equivalent scale nuclear war. The devastation causes so
much smoke that only 30-40 percent of sunlight reaches the Earth’s surface for the subsequent
six months.

A massive drop in temperature follows, with the weather staying below freezing throughout the
subsequent Northern Hemisphere summer. In Iowa, for example, the model shows temperatures staying below 0°C for 730
days straight. There is no growing season. This is a true nuclear winter.

Nor is it just a short blip. Temperatures still drop below freezing in summer for several years
thereafter, and global precipitation falls by half by years three and four. It takes over a decade
for anything like climatic normality to return to the planet.

By this time, most of Earth’s human population will be long dead. The world’s food production
would crash by more than 90 percent, causing global famine that would kill billions by
starvation. In most countries less than a quarter of the population survives by the end of year two in this scenario. Global fish stocks
are decimated and the ozone layer collapses.

The models are eerily specific. In the 4,400 warhead/150 Tg soot nuclear war scenario, averaged over the subsequent five years,
China sees a reduction in food calories of 97.2 percent, France by 97.5 percent, Russia by 99.7
percent, the UK by 99.5 percent and the US by 98.9 percent. In all these countries, virtually
everyone who survived the initial blasts would subsequently starve.
1NC – Util
The standard is maximizing expected wellbeing
1. Uncertainty and social contract require that governments use util
Goodin, 1995 (Robert, philosopher at the Research School of Social Sciences, Utilitarianism as a Public
Philosophy, pp. 62-63)

Consider, first, the argument from necessity. Public officials are obliged to make their choices under
uncertainty, and uncertainty of a very special sort at that. All choices—public and private alike—are made
under some degree of uncertainty, of course. But in the nature of things, private individuals will
usually have more complete information on the peculiarities of their own circumstances and on the
ramifications that alternative possible choices might have on them. Public officials, in contrast, are relatively
poorly informed as to the effects that their choices will have on individuals, one by one. What
they typically do know are generalities: averages and aggregates. They know what will happen
most often to most people as a result of their various possible choices. But that is all. That is enough
to allow public policy-makers to use the utilitarian calculus—if they want to use it at all—to choose
general rules of conduct. Knowing aggregates and averages, they can proceed to calculate the
utility payoffs from adopting each alternative possible general rule.

2. Reducing existential risks is the top priority in any moral theory


Pummer, PhD, 15
(Theron, Philosophy @ St. Andrews, http://blog.practicalethics.ox.ac.uk/2015/05/moral-agreement-on-saving-the-world/)

There appears to be a lot of disagreement in moral philosophy. Whether or not these many apparent disagreements are deep and
irresolvable, I believe there is at least one thing it is reasonable to agree on right now, whatever general
moral view we adopt: that it is very important to reduce the risk that all intelligent beings on this planet
are eliminated by an enormous catastrophe, such as a nuclear war. How we might in fact try to reduce such existential
risks is discussed elsewhere. My claim here is only that we – whether we’re consequentialists, deontologists, or
virtue ethicists – should all agree that we should try to save the world. According to consequentialism, we
should maximize the good, where this is taken to be the goodness, from an impartial perspective, of outcomes. Clearly one thing
that makes an outcome good is that the people in it are doing well. There is little disagreement here. If the happiness or well-being
of possible future people is just as important as that of people who already exist, and if they would have good lives, it is not hard to
see how reducing existential risk is easily the most important thing in the whole world. This is for the familiar reason that there are
so many people who could exist in the future – there are trillions upon trillions… upon trillions. There are so many possible future people that reducing existential risk is arguably the most important thing in the world,
even if the well-being of these possible people were given only 0.001% as much weight as that of existing people. Even on a
wholly person-affecting view – according to which there’s nothing (apart from effects on existing people) to be said in
favor of creating happy people – the case for reducing existential risk is very strong. As noted in this seminal
paper, this case is strengthened by the fact that there’s a good chance that many existing people will, with the aid of life-extension
technology, live very long and very high quality lives. You
might think what I have just argued applies to
consequentialists only. There is a tendency to assume that, if an argument appeals to
consequentialist considerations (the goodness of outcomes), it is irrelevant to non-consequentialists.
But that is a huge mistake. Non-consequentialism is the view that there’s more that determines
rightness than the goodness of consequences or outcomes; it is not the view that the latter don’t
matter. Even John Rawls wrote, “All ethical doctrines worth our attention take consequences
into account in judging rightness. One which did not would simply be irrational, crazy.” Minimally
plausible versions of deontology and virtue ethics must be concerned in part with promoting
the good, from an impartial point of view. They’d thus imply very strong reasons to reduce
existential risk, at least when this doesn’t significantly involve doing harm to others or damaging one’s character. What’s even
more surprising, perhaps, is that even if our own good (or that of those near and dear to us) has much greater weight than goodness
from the impartial “point of view of the universe,” indeed even if the latter is entirely morally irrelevant, we may nonetheless have
very strong reasons to reduce existential risk. Even
egoism, the view that each agent should maximize her own good, might
imply strong reasons to reduce existential risk. It will depend, among other things, on what one’s own good
consists in. If well-being consisted in pleasure only, it is somewhat harder to argue that egoism would imply strong reasons to reduce
existential risk – perhaps we could argue that one would maximize her expected hedonic well-being by funding life extension
technology or by having herself cryogenically frozen at the time of her bodily death as well as giving money to reduce existential risk
(so that there is a world for her to live in!). I am not sure, however, how strong the reasons to do this would be. But views which
imply that, if I don’t care about other people, I have no or very little reason to help them are not even minimally plausible views (in
addition to hedonistic egoism, I here have in mind views that imply that one has no reason to perform an act unless one actually
desires to do that act). To be minimally plausible, egoism will need to be paired with a more sophisticated account of well-being. To
see this, it is enough to consider, as Plato did, the possibility of a ring of invisibility – suppose that, while wearing it, Ayn could derive
some pleasure by helping the poor, but instead could derive just a bit more by severely harming them. Hedonistic egoism would
absurdly imply she should do the latter. To avoid this implication, egoists would need to build something like the meaningfulness of a
life into well-being, in some robust way, where this would to a significant extent be a function of other-regarding concerns (see
chapter 12 of this classic intro to ethics). But once these elements are included, we can (roughly, as above) argue that this sort of
egoism will imply strong reasons to reduce existential risk. Add to all of this Samuel Scheffler’s recent intriguing arguments (quick
podcast version available here) that most of what makes our lives go well would be undermined if there
were no future generations of intelligent persons. On his view, my life would contain vastly less well-being if (say) a year
after my death the world came to an end. So obviously if Scheffler were right I’d have very strong reason to
reduce existential risk. We should also take into account moral uncertainty. What is it
reasonable for one to do, when one is uncertain not (only) about the empirical facts, but also about the moral
facts? I’ve just argued that there’s agreement among minimally plausible ethical views that we have strong reason to reduce
existential risk – not only consequentialists, but also deontologists, virtue ethicists, and sophisticated egoists should agree. But even
those (hedonistic egoists) who disagree should have a significant level of confidence that they are
mistaken, and that one of the above views is correct. Even if they were 90% sure that their view is the
correct one (and 10% sure that one of these other ones is correct), they would have pretty strong reason, from
the standpoint of moral uncertainty, to reduce existential risk. Perhaps most disturbingly still, even if
we are only 1% sure that the well-being of possible future people matters, it is at least arguable that,
from the standpoint of moral uncertainty, reducing existential risk is the most important thing in the
world. Again, this is largely for the reason that there are so many people who could exist in the future
– there are trillions upon trillions… upon trillions. (For more on this and other related issues, see this excellent dissertation.) Of course, it is uncertain whether these untold trillions would, in general, have good lives. It’s possible
they’ll be miserable. It is enough for my claim that there is moral agreement in the relevant sense if, at
least given certain empirical claims about what future lives would most likely be like, all
minimally plausible moral views would converge on the conclusion that we should try to save
the world. While there are some non-crazy views that place significantly greater moral weight on avoiding suffering than on
promoting happiness, for reasons others have offered (and for independent reasons I won’t get into here unless requested to), they
nonetheless seem to be fairly implausible views. And even
if things did not go well for our ancestors, I am
optimistic that they will overall go fantastically well for our descendants, if we allow them to. I
suspect that most of us alive today – at least those of us not suffering from extreme illness or poverty – have lives that
are well worth living, and that things will continue to improve. Derek Parfit, whose work has emphasized
future generations as well as agreement in ethics, described our situation clearly and accurately: “We live during the hinge of
history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall
soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the
next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere,
spreading through this galaxy…. Our descendants might, I believe, make the further future very good. But that good future may also
depend in part on us. If our selfish recklessness ends human history, we would be acting very wrongly.” (From chapter 36 of On
What Matters)
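
To make the moral-uncertainty arithmetic above concrete, here is a minimal expected-value sketch. The figures (a 1% credence that future well-being matters, 10^15 possible future lives against 10^10 present lives, and an equal value v per life) are illustrative assumptions for the sake of the math, not numbers drawn from the evidence:

\[
\underbrace{0.01}_{\text{credence}} \times \underbrace{10^{15}}_{\text{future lives}} \times v \;=\; 10^{13}\,v \;\gg\; \underbrace{1}_{\text{certainty}} \times \underbrace{10^{10}}_{\text{present lives}} \times v
\]

On these assumptions, the expected value of safeguarding the future exceeds the entire value of the present population by three orders of magnitude, which is the sense in which reducing existential risk can dominate even at very low credence.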

3. Phenomenal introspection bridges the gap from experiential natural facts to moral truths and necessitates hedonism: attending to subjective experience reveals the intrinsic properties of that experience. When I observe a lemon’s yellowness shift in my visual field from darker to lighter shades, I can introspect on that experience and identify brightness as an intrinsic property of seeing the lemon. Similarly, when I feel pleasure, I can introspect on the shift in hedonic tone and identify goodness as an intrinsic property of the pleasure that increased.
4. No act-omission distinction — governments must vote on bills, so declining to sign is itself an action; and because governments are responsible for the public sphere and must aggregate interests, omitting the Aff is an explicit act taken.
6. Governments aren’t singular rational agents, which makes theories built for individuals irrelevant – only consequentialism solves, by analyzing ends divorced from any particular actor
7. Degrees of wrongness - only pleasure and pain explains why telling someone their shirt looks nice when it doesn’t is less wrong than telling a slave owner where a runaway is hiding - alternative frameworks can’t resolve tradeoffs, so they fail for government action
8. The connection between pain and pleasure and phenomenal conceptions of
intrinsic value and disvalue is irrefutable
Blum et al. 18 [Blum K, Gondré-Lewis M, Steinberg B, Elman I, Baron D, Modestino EJ,
Badgaiyan RD, Gold MS. "Our evolved unique pleasure circuit makes humans different from
apes: Reconsideration of data derived from animal studies." U.S. Department of Veterans
Affairs. J Syst Integr Neurosci. 2018 Jun;4(1):10.15761/JSIN.1000191. doi:
10.15761/JSIN.1000191. Epub 2018 Feb 28. PMID: 30956812; PMCID: PMC6446569., accessed:
19 August 2020, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6446569/, R.S]

Pleasure is not only one of the three primary reward functions but it also defines reward. As homeostasis explains the functions of only a limited number of rewards, the principal reason why particular stimuli, objects, events, situations, and activities are rewarding may be due to pleasure. This applies first of all to sex and to the primary homeostatic rewards of food and liquid and extends to money, taste, beauty, social encounters and nonmaterial, internally set, and intrinsic rewards. Pleasure, as the primary effect of rewards, drives the prime reward functions of learning, approach behavior, and decision making and provides the basis for hedonic theories of reward function. We are attracted by most rewards and exert intense efforts to obtain them, just because they are enjoyable [10].
Pleasure is a passive reaction that derives from the experience or prediction of reward and may lead to a long-lasting state of happiness. The word
happiness is difficult to define. In fact, just obtaining physical pleasure may not be enough. One key to happiness involves a network of good friends.
However, it is not obvious how the higher forms of satisfaction and pleasure are related to an ice cream cone, or to your team winning a sporting event.
Recent multidisciplinary research, using
both humans and detailed invasive brain analysis of animals has
discovered some critical ways that the brain processes pleasure [14].
Pleasure as a hallmark of reward is sufficient for defining a reward, but it may not be necessary. A reward may generate positive learning and approach
behavior simply because it contains substances that are essential for body function. When we are hungry, we may eat bad and unpleasant meals. A
monkey who receives hundreds of small drops of water every morning in the laboratory is unlikely to feel a rush of pleasure every time it gets the 0.1
ml. Nevertheless, with these precautions in mind, we may define any stimulus, object, event, activity, or situation that has the potential to produce
pleasure as a reward. In the context of reward deficiency or for disorders of addiction, homeostasis pursues pharmacological treatments: drugs to treat
drug addiction, obesity, and other compulsive behaviors. The theory of allostasis suggests broader approaches - such as re-expanding the range of
possible pleasures and providing opportunities to expend effort in their pursuit. [15]. It is noteworthy, the first animal studies eliciting approach
behavior by electrical brain stimulation interpreted their findings as a discovery of the brain’s pleasure centers [16] which were later partly associated
with midbrain dopamine neurons [17–19] despite the notorious difficulties of identifying emotions in animals.

Evolutionary theories of pleasure: The love connection

Charles Darwin and other biological scientists that have examined the biological evolution and its basic principles found various
mechanisms that steer behavior and biological development. Besides their theory on natural selection, it was
particularly the sexual selection process that gained significance in the latter context over the last century, especially when it comes to the question of
what makes us “what we are,” i.e., human. However, the capacity to sexually select and evolve is not at all a human accomplishment alone or a sign of
our uniqueness; yet, we humans, as it seems, are ingenious in fooling ourselves and others–when we are in love or desperately search for it.

It is well established that modern biological theory conjectures that organisms are the result of evolutionary competition. In fact, Richard Dawkins stresses gene survival and propagation as the basic mechanism
of life [20]. Only genes that lead to the fittest phenotype will make it. It is noteworthy that the phenotype is selected based on
behavior that maximizes gene propagation. To do so, the phenotype must survive and generate offspring, and be better at it than its competitors. Thus,
the ultimate, distal function of rewards is to increase evolutionary fitness by ensuring the survival of the
organism and reproduction. It is agreed that learning, approach, economic decisions, and positive emotions are the proximal functions through which
phenotypes obtain other necessary nutrients for survival, mating, and care for offspring.

Behavioral reward functions have evolved to help individuals to survive and propagate their
genes. Apparently, people need to live well and long enough to reproduce. Most would agree that homo-sapiens do
so by ingesting the substances that make their bodies function properly. For this reason, foods and drinks are rewards. Additional rewards, including
those used for economic exchanges, ensure sufficient palatable food and drink supply. Mating and gene propagation is supported by powerful sexual
attraction. Additional properties, like body form, augment the chance to mate and nourish and defend offspring and are therefore also rewards. Care
for offspring until they can reproduce themselves helps gene propagation and is rewarding; otherwise, many believe mating is useless. According to
David E Comings, as any small edge will ultimately result in evolutionary advantage [21], additional reward
mechanisms like novelty seeking and exploration widen the spectrum of available rewards and thus enhance the chance for survival, reproduction, and
ultimate gene propagation. These functions may help us to obtain the benefits of distant rewards that are determined by our own interests and not
immediately available in the environment. Thus
the distal reward function in gene propagation and evolutionary
fitness defines the proximal reward functions that we see in everyday behavior. That is why foods,
drinks, mates, and offspring are rewarding.
There have been theories linking pleasure as a required component of health benefits, salutogenesis (salugenesis). In essence, under these terms,
pleasure is described as a state or feeling of happiness and satisfaction resulting from an
experience that one enjoys. Regarding pleasure, it is a double-edged sword: on the one hand, it promotes positive feelings (like
mindfulness) and even better cognition, possibly through the release of dopamine [22]. But on the other hand, pleasure simultaneously encourages
addiction and other negative behaviors, i.e., motivational toxicity. It is a complex neurobiological phenomenon, relying on reward circuitry or limbic
activity. It is important to realize that through the “Brain Reward Cascade” (BRC) endorphin and endogenous morphinergic mechanisms may play a role
[23]. While natural rewards are essential for survival and appetitive motivation leading to beneficial biological behaviors like eating, sex, and
reproduction, crucial social interactions seem to further facilitate the positive effects exerted by pleasurable experiences. Indeed, experimentation with
addictive drugs is capable of directly acting on reward pathways and causing deterioration of these systems promoting hypodopaminergia [24]. Most
would agree that pleasurable activities can stimulate personal growth and may help to induce healthy behavioral changes, including stress
management [25]. The work of Esch and Stefano [26] concerning the link between compassion and love implicate the brain reward system, and
pleasure induction suggests that social contact in general, i.e., love, attachment, and compassion, can be highly effective in stress reduction, survival,
and overall health.

Understanding the role of neurotransmission and pleasurable states both positive and negative have been adequately studied over many decades [26–
37], but comparative anatomical and neurobiological function between animals and homo sapiens appear to be required and seem to be in an infancy
stage.

Finding happiness is different between apes and humans

As stated earlier in this expert opinion one key to happiness involves a network of good friends [38]. However, it is not entirely clear exactly how the
higher forms of satisfaction and pleasure are related to a sugar rush, winning a sports event or even sky diving, all of which augment dopamine release
at the reward brain site. Recent multidisciplinary research, using both humans and detailed invasive brain analysis of animals has discovered some
critical ways that the brain processes pleasure.
Remarkably, there are pathways for ordinary liking and pleasure, which are limited in scope as described above in this commentary. However, there are many brain regions, often termed hot and cold spots, that significantly modulate (increase or decrease) our pleasure or even produce the opposite of pleasure—that is, disgust and fear [39]. One specific region of the nucleus accumbens is organized like a computer keyboard, with particular stimulus triggers in rows—producing an increase and decrease of pleasure and disgust. Moreover, the cortex has unique roles in the cognitive evaluation of our feelings of pleasure [40]. Importantly, the interplay of these multiple triggers and the higher brain centers in the prefrontal cortex are very intricate and are just being uncovered.

Desire and reward centers

It is surprising that many different sources of pleasure activate the same circuits between the mesocorticolimbic regions (Figure 1). Reward and desire are two aspects of pleasure induction and have a very widespread, large circuit. Some part of this circuit distinguishes between desire and dread. The so-called pleasure circuitry called “REWARD” involves a well-known dopamine pathway in the mesolimbic system that can influence both pleasure and motivation.

In simplest terms, the well-established mesolimbic system is a dopamine circuit for reward. It starts in the ventral tegmental area (VTA) of the midbrain
and travels to the nucleus accumbens (Figure 2). It is the cornerstone target to all addictions. The VTA is encompassed with neurons using glutamate,
GABA, and dopamine. The nucleus accumbens (NAc) is located within the ventral striatum and is divided into two sub-regions—the motor and limbic
regions associated with its core and shell, respectively. The NAc has spiny neurons that receive dopamine from the VTA and glutamate (a dopamine
driver) from the hippocampus, amygdala and medial prefrontal cortex. Subsequently, the NAc projects GABA signals to an area termed the ventral
pallidum (VP). The region is a relay station in the limbic loop of the basal ganglia, critical for motivation, behavior, emotions and the “Feel Good”
response. This defined system of the brain is involved in all addictions – substance and non-substance related. In 1995, our laboratory coined the term
“Reward Deficiency Syndrome” (RDS) to describe genetic and epigenetic induced hypodopaminergia in the “Brain Reward Cascade” that contribute to
addiction and compulsive behaviors [3,6,41].

Furthermore, ordinary “liking” of something, or pure pleasure, is represented by small regions mainly in the limbic system (old reptilian part of the brain). These may be part of larger neural circuits. In Latin, hedus is the term for
“sweet”; and in Greek, hodone is the term for “pleasure.” Thus, the word Hedonic is now referring to various subcomponents of pleasure: some
associated with purely sensory and others with more complex emotions involving morals, aesthetics, and social interactions. The capacity to have
pleasure is part of being healthy and may even extend life, especially if linked to optimism as a dopaminergic response [42].

Psychiatric illness often includes symptoms of an abnormal inability to experience pleasure, referred to as anhedonia. A negative feeling state is called
dysphoria, which can consist of many emotions such as pain, depression, anxiety, fear, and disgust. Previously many scientists used animal research to
uncover the complex mechanisms of pleasure, liking, motivation and even emotions like panic and fear, as discussed above [43]. However, as a
significant amount of related research about the specific brain regions of pleasure/reward circuitry has been derived from invasive studies of animals,
these cannot be directly compared with subjective states experienced by humans.

In an attempt to resolve the controversy regarding the causal contributions of mesolimbic dopamine systems to reward, we have previously evaluated
the three-main competing explanatory categories: “liking,” “learning,” and “wanting” [3]. That is, dopamine may mediate (a) liking: the hedonic impact
of reward, (b) learning: learned predictions about rewarding effects, or (c) wanting: the pursuit of rewards by attributing incentive salience to reward-
related stimuli [44]. We have evaluated these hypotheses, especially as they relate to the RDS, and we find that the incentive salience or “wanting”
hypothesis of dopaminergic functioning is supported by a majority of the scientific evidence. Various neuroimaging studies have shown that anticipated
behaviors such as sex and gaming, delicious foods and drugs of abuse all affect brain regions associated with reward networks, and may not be
unidirectional. Drugs of abuse enhance dopamine signaling which sensitizes mesolimbic brain mechanisms that apparently evolved explicitly to
attribute incentive salience to various rewards [45].

Addictive substances are voluntarily self-administered, and they enhance (directly or indirectly) dopaminergic synaptic function in the NAc. This activates the brain reward networks (producing the ecstatic “high” that users seek). Although these circuits were initially thought to encode a set
point of hedonic tone, it is now being considered to be far more complicated in function, also encoding attention, reward expectancy, disconfirmation
of reward expectancy, and incentive motivation [46]. The argument about addiction as a disease may be confused with a predisposition to substance
and nonsubstance rewards relative to the extreme effect of drugs of abuse on brain neurochemistry. The former sets up an individual to be at high risk
through both genetic polymorphisms in reward genes as well as harmful epigenetic insult. Some Psychologists, even with all the data, still infer that
addiction is not a disease [47]. Elevated stress levels, together with polymorphisms (genetic variations) of various dopaminergic genes and the genes related to other neurotransmitters (and their genetic variants), may have an additive effect on vulnerability to various addictions [48]. In this regard, Vanyukov et al. [48] suggested, based on review, that whereas the gateway hypothesis does not specify mechanistic connections between “stages” and does not extend to the risks for addictions, the concept of common liability to addictions may be more parsimonious. The latter theory is grounded in genetic theory and supported by data identifying common sources of variation in the risk for specific addictions (e.g., RDS). This
commonality has identifiable neurobiological substrate and plausible evolutionary explanations.

Over many years the controversy of dopamine involvement in especially “pleasure” has led to confusion concerning separating motivation from actual
pleasure (wanting versus liking) [49]. We take the position that animal studies cannot provide real clinical information as described by self-reports in
humans. As mentioned earlier and in the abstract, on November 23rd, 2017, evidence for our concerns was discovered [50].

In essence, although nonhuman primate brains are similar to our own, the disparity between the cognitive abilities of other primates and those of humans tells us that surface similarity is not the whole story. Sousa et al. [50], in a small case study, found various differentially expressed genes associated with pleasure-related systems. Furthermore, the dopaminergic interneurons located in the human neocortex were absent from the neocortex of nonhuman African apes. Such differences in neuronal transcriptional programs may underlie a variety of neurodevelopmental disorders.

In simpler terms, the system controls the production of dopamine, a chemical messenger that plays a significant role in pleasure and rewards. The
senior author, Dr. Nenad Sestan from Yale, stated: “Humans have evolved a dopamine system that is different than the one in chimpanzees.” This may
explain why the behavior of humans is so unique from that of non-human primates, even though our brains are so surprisingly similar, Sestan said: “It
might also shed light on why people are vulnerable to mental disorders such as autism (possibly even addiction).” Remarkably, this research finding
emerged from an extensive, multicenter collaboration to compare the brains across several species. These researchers
examined 247
specimens of neural tissue from six humans, five chimpanzees, and five macaque monkeys.
Moreover, these investigators analyzed which genes were turned on or off in 16 regions of the brain.

While the differences among species were subtle, there was a remarkable contrast in the neocortices, specifically in an area of the brain that is much more developed in humans than in chimpanzees. In fact, these researchers found that the gene tyrosine hydroxylase (TH), coding for the enzyme responsible for the production of dopamine, was expressed in the neocortex of humans, but not chimpanzees. As discussed earlier, dopamine is best known for its essential role within the brain’s reward system; the very system that responds to everything from sex, to gambling, to food, and to addictive drugs. However, dopamine also assists in regulating emotional responses, memory, and movement. Notably, abnormal dopamine levels have been linked to disorders including Parkinson’s, schizophrenia, and spectrum disorders such as autism and addiction or RDS.

Nora Volkow, the director of NIDA, pointed out that one alluring possibility is that the neurotransmitter dopamine
plays a substantial
role in humans’ ability to pursue various rewards that are perhaps months or even years away in
the future. This same idea has been suggested by Dr. Robert Sapolsky, a professor of biology and neurology at Stanford University. Dr. Sapolsky cited
evidence that dopamine levels rise dramatically in humans when we anticipate potential rewards that are uncertain and even far off in our futures,
such as retirement or even the possible afterlife. This
may explain what often motivates people to work for things
that have no apparent short-term benefit [51]. In similar work, Volkow and Bale [52] proposed a model in which dopamine can
favor NOW processes through phasic signaling in reward circuits or LATER processes through tonic signaling in control circuits. Specifically, they suggest
that through its modulation of the orbitofrontal cortex, which processes salience attribution, dopamine also enables shifting from NOW to LATER, while
its modulation of the insula, which processes interoceptive information, influences the probability of selecting NOW versus LATER actions based on an
individual’s physiological state. This hypothesis further supports the concept that disruptions along these circuits contribute to diverse pathologies,
including obesity and addiction or RDS.
Case
Underview
1AR theory is skewed towards the aff, which means err neg – A] the 2NR must cover substance and over-cover theory because of the aff’s 7-to-6 minute, two-speech advantage, and they get the collapse and persuasiveness advantage of a 3-minute 2AR B] their responses to my counter-interp will be new, which means 1AR theory necessitates intervention. Implications –
A] Evaluate the theory debate after the 2NR – if the aff didn’t include weighing
in the 1AR, that’s their fault B] dropping the argument minimizes the chance
the round is decided unfairly C] if intervention will happen on theory debates,
then judges should intervene in a way that decreases the asinine nature of LD
theory
Framework
Offense
1. Dependency is inherent to American healthcare — single-payer operates
through private healthcare providers that receive funding from the state.
There’s no brightline for how much dependency is required, so the DA comes
first on quantifiable solvency

2. Absent brightlines, the aff doesn’t solve, because any measure of dependency would still violate the universal will

3. Scaling impacts don’t matter for Kant OR Ripstein because moral rules are
binary, so don't let them make a “less is good enough” argument in the 1AR –
you picked a deontological framework, that means close IS no cigar

4. DA turns the case: the duty to support the poor means that if we win the neg better serves the poor, we win that the government is fulfilling its duty, which turns the 1AC.

5. Kant goes NEG – you won’t find a more specific card


Deutsch 4 (Stephanie Deutsch – Writing submitted to Georgetown University Journal of
Health Sciences, “Should the US Adopt a National Healthcare Plan?”,
https://blogs.commons.georgetown.edu/journal-of-health-sciences/issues-2/previous-
volumes/vol-1-issue-3-april-1-2004/should-the-us-adopt-a-national-health-care-plan/, April
2004, EmmieeM)

Americans have a traditional reliance on individual responsibility and a commitment to the ideal
of a limited national government, in accord with the principles of market justice. Under market justice,
health care is viewed as an economic good, health care services are based on individual purchasing
power, rationing is based on ability to pay, and there is individual responsibility for health. Americans
should not adopt a national health care plan because embedded in the nation’s culture are the
deontological values of individual responsibility, self-reliance, and capitalism, and the
market-justice dominant society supports private rather than government solutions to social
problems of health. Like all other economic goods, health care is a business. A single-payer system will only
offer “increasingly less benefits and frighteningly higher costs” for society (Oliphant, 2002).
Adv
Sustainable growth's an oxymoron - collapse is inevitable, but collapse now solves extinction
Cosper 21 [Christopher L. Cosper, Assistant professor in the Architecture and Facility
Management Program at Ferris State University. He is also the first graduate of the Harvard
Graduate School of Design Master of Design Studies Program to concentrate in Critical
Conservation, 1-25-2021, Endless “Sustainable” Growth is an Oxymoron,
https://commonedge.org/endless-sustainable-growth-is-an-oxymoron/ HKR-MK]

On a previous Common Edge article, I briefly discussed a concept that I call the “Triple Bottom Lie,” which
posits that more people, plus more consumption by each person, plus an economic system
completely dependent on the aforementioned items, can just keep working forever, without
consequences. Historically, the U.S. has accepted the economic shibboleth of endless growth because it reduced class conflict; a
rising tide (supposedly) lifted all boats, rafts and yachts included. We
are, however, approaching the limits of growth,
from both a resource standpoint (we’re running out of raw materials) and a technological standpoint
(our inventions are progressively less revolutionary).

But these realities have not dented the hegemony of the Triple Bottom Lie and its mantra of
“more, more, more”—more of our time, devoted to creating more stuff, so that we can achieve more. But more what,
exactly?

Efforts to create sustainable architecture and urbanism have been undercut by the Triple
Bottom Lie. Building new “high performing” LEED-rated buildings still means additional energy
consumption, and even if every new project were regenerative in nature, we have to ask if we
even need them. For example, the U.S. has 900 million square feet of empty retail space, at a time when we have
approximately 20 times the retail space per person of Germany and 30 times the retail space per person of Mexico. Do we need yet
another strip shopping center, even one that meets the Living Building Challenge?

While architects discuss the efficacy of LED light bulbs or the best insulation system for our buildings, the giant economic machine
that undergirds our industry continues to devour our collective home, Earth. But at long last, the question of economics has been
brought to the fore, by a rather unlikely source: Patrik Schumacher, principal at Zaha Hadid Architects.

Schumacher, whom controversy follows like a shadow, seemed less-than-committed to Architects Declare, the movement to which
his firm was an early signatory. Rather than follow the “draw down” plan suggested, Schumacher offered a different version. In a
2020 Architect’s Journal article, Schumacher said:

We need to allow prosperity and progress to continue, and that will also bring the resources to overcome [the climate crisis] through
investment, science and new technology.

That must be built on continued growth and cannot be built on a panicked shrinking of the economy, [which would] lead to massive
regressions and political upheavals.

Schumacher is both wrong and right.

He is wrong when he argues that “continued growth” is the solution. Indeed, continued growth—of population, of industry-based economies, of unequal wealth—is
the problem. The planet cannot sustain the current
population of humans at current consumption levels, never mind more people who consume
even more per person. As mathematics professor Andrew Hwang notes in Business Insider,
“[T]he Earth can support at most one-fifth of the present population, 1.5 billion people, at an
American standard of living.”
This is not a new idea. Written in the 1970s, The Limits to Growth used computer modeling to
assert that many of the world’s most important resources would run out in a matter of a few
generations, leading to a hellish period of famine and warfare. The modeling in the book was arguably primitive, and
some of the timelines unduly pessimistic, but the basic premise holds. As scientist John Scales Avery wrote in his 2012 book, Information Theory and
Evolution, “Although the specific predictions of resource availability in Limits to Growth lacked accuracy, its
basic thesis—that
unlimited economic growth on a finite planet is impossible—was indisputably correct.”
If, perhaps, Nixon-era predictions about the future are not entirely convincing, the charts in Tony Juniper’s 2018 book, How We’re
F***ing Up Our Planet (which has, perhaps, the most clear-eyed title of all time), reinforce the essential premise. In every case,
stressors are rapidly increasing while resources are diminishing. Schumacher is also wrong in seeing
technology as the panacea that will allow the carnival of conspicuous consumption to continue. Technology still leads to
the creation of stuff, and stuff requires material resources. Even the most incorrigible reprobate
cannot break the laws of physics, specifically the principle of entropy. Furthermore, the fantasy of
greater efficiencies will not save the day. In our economic system, greater efficiencies lead to lower
prices and, typically, increased—not decreased—consumption.

At this point, some creative types have suggested mining asteroids, or expanding humanity to the
stars. But assorted megalomaniac billionaires notwithstanding, this is the realm of science fiction—
or magical thinking—and not a solution to a problem that has already arrived.

In contrast to his arguments about growth and technology, Schumacher is absolutely correct when he argues that the move toward
sustainability has the potential to cause “massive regressions and political upheavals.” These “upheavals” were plainly evident
during the “Yellow Vest” protests in France, which were the result of a modest gasoline tax that proved to be one financial blow too
many for the working class, particularly in the less-affluent rural provinces where people rely on their automobiles for almost all
basic transportation.

That said, I suspect that Schumacher would not support the obvious answer: increasing the social safety net. Recognizing that people
will inevitably fight anything that threatens their livelihoods, the authors of the Green New Deal and other progressive plans to
address climate change stay equally focused on economic and social sustainability, in addition to environmental sustainability.

The Architects Declare steering group, in response to Schumacher’s advocacy for an unfettered free market approach, provided a
direct rebuttal to his positions and a concise, cogent argument against endless growth. The steering group wrote:

[W]e need a much more sophisticated discussion about growth that distinguishes between qualitative and quantitative growth.
There are some things we need to grow—such as ecosystems, human health, community cohesion, political unity, the vitality of the
commons—and some things we need to urgently shrink, such as hyper-consumption, luxury lifestyles and unconstrained aviation.

Given the richness of debate going on in…other fields, it is troubling to hear leading figures in architecture like Patrik Schumacher
talking about the need for continuous growth and progress. As Edward Abbey observed, “growth for growth’s sake is the ideology of
the cancer cell.”

Apparently, that was too much for Schumacher and ZHA, so the firm withdrew from Architects Declare. Considering that architects
must visualize an environment that does not yet exist, the lack of imagination among architects can sometimes be stunning. This
planet could be a Garden of Eden, if we—the citizens of the planet—bent our collective will toward the project. Although we do not
know everything about what Aldo Leopold calls the “biotic community,” we know enough that we should be able to avoid destroying
the soil, water, plants, and animals on which we rely. Architects and other designers could thrive in such an environment, if we
would simply frame the problem in a way that represents 21st century realities rather than 20th century nostalgia.

Ironically, New Frontier nostalgia is counterproductive, as architects and other designers have excelled when faced with limitations.
Jonny Campbell, a documentary filmmaker who was trained as an architect, made the following observation:

This brings to mind something that an architecture tutor said to us in the first days of architecture school, which was “constraint is
the catalyst of creativity.” As the design professions increasingly question how they operate in a world of finite resources, I think it is
important to carry forward a sense that constraint and scarcity, when coupled with the right approach to design, can result in
projects of great value and richness.

An economy and a culture completely devoted to more of everything cannot function indefinitely. The consequences of dedicating our lives to endless consumption will soon enough
be upon us, and the next can we kick might just be a bucket. The only question is whether the
landing will be hard or soft—whether we’ll stagger blindly toward an abrupt and violent
denouement, or walk confidently toward a different, but potentially richer and more secure
future.

Overconsumption ruins the phosphorus cycle - extinction


Rosen 21 [Julia Rosen, 02-08-2021, Humanity Is Flushing Away One of Life’s Essential
Elements, https://www.theatlantic.com/science/archive/2021/02/phosphorus-pollution-
fertilizer/617937/ HKR-MK]

Life as we know it is carbon based. But every organism requires other elements, too, including nitrogen and
phosphorus. Nitrogen is the basis of all proteins, from enzymes to muscles, and the nucleic acids that encode our genes.
Phosphorus forms the scaffolding of DNA, cell membranes, and our skeletons; it’s a key element
in tooth and bone minerals.

Too little of either nutrient will limit the productivity of organisms, and, by extension, entire
ecosystems. On short timescales, nitrogen often runs out first. But that scarcity never lasts long,
geologically speaking: The atmosphere—which is about 80 percent nitrogen—represents an almost infinite
reservoir. And early in the course of evolution, certain microbes developed ways to convert atmospheric nitrogen into
biologically available compounds.

Alas, there
is no analogous trick for phosphorus, which comes primarily from the Earth’s crust.
Organisms have generally had to wait for geologic forces to crush, dissolve, or otherwise abuse
the planet’s surface until it weeps phosphorus. This process of weathering can take thousands,
even millions, of years. And once phosphorus finally enters the ocean or the soil, where
organisms might make use of it, a large fraction reacts into inaccessible chemical forms.

For these reasons, the writer and chemist Isaac Asimov, in a 1959 essay, dubbed
phosphorus “life’s bottleneck.”
Noah Planavsky, a geochemist at Yale University, says scientists have reached the same conclusion: “It’s what really
limits the capacity of the biosphere.”
One of the lingering mysteries about the origin of life, in fact, is how the earliest organisms got hold of enough phosphorus to
assemble their primitive cellular machinery. Some scientists think they must have evolved in environments with abnormally high
concentrations of phosphorus, like closed-basin lakes. Others have suggested that bioavailable phosphorus came to Earth in comets
or meteorites—a celestial gift that helped kick-start life.

A chronic shortage of phosphorus might also explain why it took so long for oxygen to build up
in Earth’s atmosphere. Phytoplankton first began belching out the gas about 2.5 billion years ago, with the advent of
photosynthesis. But they might not have had enough phosphorus to ramp up production, according to research by Planavsky and
others, because the element kept getting bound up in iron minerals in the ocean, helping trap the world in a low-oxygen state for
more than a billion years longer.

That we breathe oxygen today—and exist at all—might be thanks to a series of climatic cataclysms that temporarily freed the planet
from phosphorus limitation. About 700 million years ago, the oceans repeatedly froze over and glaciers swallowed the continents,
chewing up the rock beneath them. When the ice finally thawed, vast quantities of glacial sediment washed into the seas, delivering
unprecedented amounts of phosphorus to the simple marine life forms that then populated the planet.

Planavsky and his colleagues propose that this influx of nutrients gave evolution an opening. Over the next 100 million years or so,
the first multicellular animals appeared and oxygen concentrations finally began to climb toward modern levels. Scientists still
debate exactly what happened, but phosphorus likely played a part. (To Planavsky, it’s “one of the most fascinating unresolved
questions about our planet’s history.”)
Another group of scientists, led by Jim Elser of Arizona State University, speculate that such a pulse of phosphorus could have had
other evolutionary consequences: Since too much phosphorus can be harmful, animals might have started building bones as a way
of tying up excess nutrients. “Mind-blowing, right?” Elser says. “If true.”

What’s clear is that after this explosion of life, the phosphorus vise clamped down again. Geologic
weathering kept doling out meager rations of the nutrient, and ecosystems developed ways to conserve and
recycle it. (In lakes, for instance, a phosphorus atom might get used thousands of times before reaching the sediment, Elser says.)
Together, these
geologic and biologic phosphorus cycles set the pace and productivity of life. Until
modern humans came along.
Over the course of several weeks in 1669, a German alchemist named Hennig Brand boiled away 1,500 gallons of urine in hopes of
finding the mythical philosopher’s stone. Instead, he ended up with a glowing white substance that he called phosphorus, meaning
“light bearer.” It became the 15th element in the periodic table, the incendiary material in matches and bombs, and—thanks to the
work of Liebig and others—a key element in fertilizer.

Long before phosphorus was discovered, however, humans had invented clever ways of managing their local supplies, says Dana
Cordell, who leads the food-systems research group at the University of Technology Sydney, in Australia. There and in the Americas,
for example, Indigenous people managed hunting and foraging grounds with fire, which effectively fertilized the landscape with the
biologically available phosphorus in ash, among other benefits. In agrarian societies, farmers learned to use compost and manure to
maintain the fertility of their fields. Even domestic pigeons played an important role in biblical times; their poop—containing
nutrients foraged far and wide—helped sustain the orchards and gardens of desert cities.

But human waste was perhaps the most prized fertilizer of all. Though we too need phosphorus (it accounts for about 1 percent of
our body mass), most of the phosphorus we eat passes through us untouched. Depending on diet, about two-thirds of it winds up in
urine and the rest in feces. For millennia, people collected these precious substances—often in the wee hours, giving rise to the term
night soil—and used them to grow food.

The sewage of the Aztec empire fed its famous floating gardens. Excreta became so valuable that authorities in 17th-century Edo,
Japan, outlawed toilets that emptied into waterways. And in China the industry of collecting night soil became known as “the
business of the golden juice.” In Shanghai in 1908, a visiting American soil scientist named Franklin Hiram King reported that the
“privilege” of gathering 78,000 tons of human by-products cost the equivalent of $31,000.

King, a forefather of the organic-farming movement who briefly worked at the U.S. Department of Agriculture, admired this careful
reuse of waste and lamented that he saw nothing like it at home. This, King wrote, was an unfortunate side effect of modern
sanitation, which “we esteem one of the great achievements of our civilization.”

The so-called Sanitation Revolution followed close on the heels of the Industrial Revolution. In the 1700s and 1800s, Europeans and
Americans moved to cities in unprecedented numbers, robbing the land of their waste and the phosphorus therein. This waste soon
became an urban scourge, unleashing tides of infectious disease that compelled leaders in places like London to devise ways to
shunt away the copious excretions of their residents.

Liebig and other Victorian thinkers argued that this sewage should be transported back to the countryside and sold to farmers as
fertilizer. But the volumes involved posed logistical challenges, and critics raised concerns about the safety of sewage farms—as well
as their smell. Thus, waste ultimately was sent to rudimentary treatment centers for disposal or, more often, dumped into rivers,
lakes, and oceans.

This created what Karl Marx described as the “metabolic rift”—a dangerous disconnect
between humans and the soils on which they depend —and effectively sundered the human
phosphorus cycle, reshaping its loop into a one-way pipe.

“That single disruption has caused global chaos, you could argue,” Cordell says. For one thing, it forced
farmers to find new sources of phosphorus to replace the nutrients lost every year to city sewers. To make matters worse,
agricultural research in the late 1800s suggested that plants required even more phosphorus than previously thought. And so
began a frantic race for fertilizer.
Spain and the United States laid claim to uninhabited islands in the Pacific Ocean, where workers harvested towering accumulations
of bird droppings. (Among them was Midway Atoll—later a U.S. naval station.) Back home on American soil, fertilizer companies
scoured bat caves for guano and processed the bones of the countless bison slaughtered by hide hunters on the Great Plains.
In the course of these exploits, humans reached across vast distances to secure phosphorus. The discovery of
coprolites in British fields allowed humans to reach back in time, too, seizing nutrients from another era and short-
circuiting the geologic phosphorus cycle altogether. We saw a way to turn the stubborn trickle
into a torrent, and that’s exactly what we did.

Until the late 1800s, the “stinking stones” that dotted the fields of South Carolina were considered a nuisance. But
as the cost
of imported guano soared and the Civil War reshaped southern agriculture, scientists discovered
that these nodules of phosphate rock could be processed into decent fertilizer. By 1870, the first
U.S. phosphate mines opened near Charleston and along the coast, tearing up fields, forests,
and swamps to reach the bedrock below.
A decade later, geologists discovered even larger deposits in Florida. (To this day, most of the phosphorus on American fields and
plates comes from the southeastern U.S.) Other massive formations of phosphate rock have since been identified in the American
West, China, the Middle East, and northern Africa.

These deposits became increasingly important in the 20th century, during the Green Revolution (the third revolution in agriculture, if
you’re keeping track). Plant breeders developed more productive crops to feed the world and farmers nourished them with nitrogen
fertilizer, which became readily available after scientists discovered a way of making it from the nitrogen in air. Now, the main
limit to crop growth was phosphorus—and as long as the phosphate mines hummed, that was no limit at all. Between
1950 and 2000, global phosphate-rock production increased sixfold, and helped the human
population more than double.

But for as long as scientists have understood the importance of phosphorus, people have
worried about running out of it. These fears sparked the fertilizer races of the 19th century as well as a series of anxious
reports in the 20th century, including one as early as 1939, after President Franklin D. Roosevelt asked Congress to assess the
country’s phosphate resources so that “continuous and adequate supplies be insured.”

There were also cautionary tales: Large deposits of phosphate rock on the tiny Pacific island of Nauru bolstered Australia and New
Zealand’s agricultural progress during the 20th century. But by the 1990s, Nauru’s
mines had run low, leaving its
10,000 residents destitute and the island in ecological ruins. (In recent years, Nauru has housed a
controversial immigrant detention center for Australia.)

These events raised a terrifying possibility: What if the phosphorus floodgates were to suddenly slam shut,
relegating humanity once more to the confines of their parochial phosphorus loops? What if our
liberation from the geologic phosphorus cycle is only temporary?

In recent years, Cordell has voiced concerns that we are fast consuming our richest and most
accessible reserves. U.S. phosphate production has fallen by about 50 percent since 1980, and
the country—once the world’s largest exporter—has become a net importer. According to some estimates, China,
now the leading producer, might have only a few decades of supply left. And under current
projections, global production of phosphate rock could start to decline well before the end of
the century. This represents an existential threat, Cordell says: “We now have a massive population
that is dependent on those phosphorus supplies.”
Many experts dispute these dire predictions. They argue that peak phosphorus—like peak oil—is a specter that always seems to
recede just before its prophecy is fulfilled. Humans will never extract all of the phosphorus from the Earth’s crust, they say, and
whenever we have needed more in the past, mining companies have found it. “I don’t think anybody really knows how much there
is,” says Achim Dobermann, the chief scientist at the International Fertilizer Association, an industry group. But Dobermann, whose
job involves forecasting phosphorus demand, is confident that “whatever it is is going to last several hundred more years.”

Simply extracting more phosphate rock might not solve all of our problems, Cordell says. Already, one
in six farmers worldwide can’t afford fertilizer, and phosphate prices have started to rise. Due to a
tragic quirk of geology, many tropical soils also lock away phosphorus efficiently, forcing farmers to
apply more fertilizer than their counterparts in other areas of the world.
The grossly unequal distribution of phosphate-rock resources adds an additional layer of geopolitical complexity. Morocco and its
disputed territory, Western Sahara, contain about three-quarters of the world’s known reserves of phosphate rock, while India, the
nations of the European Union, and many other countries depend largely on phosphorus imports. (In 2014, the EU added phosphate
rock to its list of critical raw materials with high supply risk and economic importance.) And as U.S. and Chinese deposits dwindle,
the world will increasingly rely on Morocco’s mines.

We have already glimpsed how the phosphorus supply chain can go haywire. In 2008, at the height
of a global food crisis, the cost of phosphate rock spiked by almost 800 percent before dropping again over the next
several months. The causes were numerous: a collapsing global economy, increased imports of phosphorus by India, and decreased
exports by China. But the lesson was clear: Practically speaking, phosphorus is an undeniably finite
resource.
I first heard about the potential for a phosphorus catastrophe a few years later, when a farmer friend mentioned casually that we
consume mined phosphorus every day and that those mines are running out. The more I learned, the more fascinated I became by
the story, not only because of its surprising and arcane details—eating rocks! mining poop!—but because of its universality.

Phosphorus is a classic natural-resource parable: Humans strain against some kind of scarcity for centuries,
then finally find a way to overcome it. We extract more and more of what we need—often in
the name of improving the human condition, sometimes transforming society through
celebrated revolutions. But eventually, and usually too late, we discover the cost of
overextraction. And the cost of breaking the phosphorus cycle is not just looming scarcity, but
also rampant pollution. “We have a too-little-too-much problem,” says Geneviève Metson, an environmental scientist at
Linköping University in Sweden, “which is what makes this conversation very difficult.”

At nearly every stage of its journey from mine to field to toilet, phosphorus seeps into the environment. This
leakage has more than doubled the pace of the global phosphorus cycle, devastating water
quality around the world. One 2017 study estimated that high phosphorus levels impair
watersheds covering roughly 40 percent of Earth’s land surface and housing about 90 percent of
its people. In more concrete terms, this pollution has a tendency to fill water bodies with slimy, stinking scum.

Too much phosphorus—or nitrogen—jolts aquatic ecosystems long accustomed to modest supplies, Elser
says, triggering algal blooms that turn the water green, cloudy, and odorous. The algae not only
discourage people from recreating in lakes and rivers (people “like to see their toes,” Elser observes) but also can produce
toxins that harm wildlife and disrupt drinking-water supplies. And when the algae die,
decomposition sucks oxygen out of the water, killing fish and creating devastating dead zones.

Indeed, pollution may be the strongest argument for reducing our dependence on mined phosphorus. “If
we take all the
phosphorus in the ground and move it into the system—ooh, we’re done,” Elser says. Some
researchers have calculated that unchecked human inputs of phosphorus, combined with
climate change, could eventually push much of the ocean into an anoxic state persisting for
millennia. “I’m pretty sure we don’t want to do that,” Elser says, chuckling. Such events have occurred
numerous times over Earth’s history and are thought to have caused several mass extinctions—
for instance, when land plants evolved and sent a pulse of newly weathered phosphorus into the
ocean.
Economic crisis transitions to degrowth society – pessimism about conflict is
wrong – only transition can avoid climate catastrophe
Read and Alexander '19 [Rupert and Samuel; June 2019; Associate Professor of Philosophy
at the University of East Anglia; lecturer at the University of Melbourne, co-director of the
Simplicity Institute, research fellow with the Melbourne Sustainable Society Institute, This
Civilization is Finished: Conversations on the end of Empire—and what lies beyond, Chapter 13,
p. 52-60]//GJ

SA: You alluded earlier to the saying that every crisis is an opportunity—from which the optimist infers that the more
crises there are, the more opportunities there are! Of course, this statement must not be seen to be romanticising or desiring crisis
like some dreamy-eyed fool. In fact, our entire dialogue seems to have been based on a deep pessimism about the prospects of
smoother and less disruptive modes of societal transformation. So perhaps crisis
might be our best hope for
disrupting the status quo and initiating the transition to something else.

When the crises of capitalism deepen, as they seem destined to do in coming years and decades, the
task will be to ensure that such destabilised conditions are used to advance progressive
humanitarian and ecological ends rather than exploited to further entrench the austerity politics
of neoliberalism. I recognise, of course, that the latter remains a real possibility, as did the arch-capitalist Milton Friedman,
who expressed the point in these terms:

Only a crisis—actual or perceived—produces real change. When that crisis occurs, the actions
that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop
alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.

It is not often that I am in agreement with Friedman. With reluctance I have come to the conclusion that it
is probably only
through deepening crisis that the comfortable global consumer class will become sufficiently
perturbed that the sedative and depoliticising effects of affluence might be overcome. In fact, I feel
it is better that citizens are not in fact protected from every crisis situation, given that the
encounter with crisis can play an essential consciousness raising role, if it triggers a desire for
and motivation toward learning about the structural underpinnings of the crisis situation itself.

RR: Yes, the danger, if we are protected from crisis for too long, is that we wait even longer than
we would have done otherwise before addressing it. This is why Jared Diamond and others have emphasised the grave
danger of highly unequal societies (such as, disastrously, the one we now inhabit): for the elite in such societies can fool
themselves into thinking that things are basically OK way past the point of no return, while the
masses suffer and start to experience collapse; and then it is surer that the society as a whole
will collapse.
SA: And yet, as I have noted, crisis can go in many directions—it might be the wake-up call we need… or it might simply hasten the
civilisational degeneration into barbarism. What role does crisis play in your views on transition? Is the world ready for the profound
challenges that, in one form or another, lie ahead?

RR: We are now committed to climate disasters, and they will worsen, for a long time to come. But
we do not yet know whether we are committed to climate catastrophe. It is just possible that
the former may help enable us to avoid the latter. Consider the literature on ‘Disaster Studies’, in particular
Rebecca Solnit’s amazing book A Paradise Built in Hell: The Extraordinary Communities that Arise in Disaster. Solnit observes that
disasters are often recalled by their survivors as periods of great joy and profoundly meaningful
experience.
She argues that this is because, at these moments, the social order is revealed to be ‘something akin to… artificial
light: another kind of power that fails in disaster’. Its failure reveals a truer light, that comes from within us, that we can
share and grow with one another. It unshackles moral resources which we had available to us all along —
within ourselves, and in community waiting to spring into being —allowing ‘a reversion to
improvised, collaborative, cooperative and local society’. Moments of crisis allow us to see and
to start to make, for the first time, a vision of a world we always sensed was possible, but had been
unable to articulate, let alone to instantiate.

This is one vitally important way in which the long crisis we are entering into is without doubt an opportunity. The widespread assumption that disasters always unleash a cruelty or indifference endemic to human nature is false. This is the meaning of the title of Solnit’s book: disasters often spontaneously produce not barbarity but generosity, community, something like a spontaneous non-dogmatic ‘communism’.
The coming ecological and climate disasters could yet yield an improvement in human goodness. And even a consciousness—a determination—that we have to stop such disasters from multiplying into catastrophe. It is perhaps unlikely that this will come into being (enough); it is probably likelier that, instead, people’s focus will too often stay narrowly present and local, and that the bigger picture will be ignored or even denied. But the possibility of a new consciousness and conscientiousness is one of the few great hopes we have at present of civilisational transformation.
In any case, even if it turns out that the best that we can hope for is the second of the three ‘options’ with which I greeted your
opening question to me—the option of seeding a successor-civilisation from the very-likely wreckage of this one—then it’s still
imperative to seek out the silver linings of disaster (and even of catastrophe). Learnings that will help us deeply adapt. Such as the way that the survivors of previous ecological collapses seem to have learnt humility with regard to nature. Our indigenous ancestors who decimated the world’s megafauna in Europe, Asia, and Australasia, and who in many cases suffered dire consequences from doing so, learned better how to live in harmony with and in natural systems. We will learn this lesson. The question is only whether we learn it as we die (1), or as we (or rather, probably, a few of us) survive collapse and start to construct a new way of living (2), or in order to transform ourselves and prevent collapse (3).

Similarly, we will go back to the land in pretty large numbers. The only live issue is whether we will do so in a part-planned and part-voluntary way sooner,50 or in a catastrophically desperate, forced way later.

The crisis we face is above all an opportunity to learn, and to imagine and hope and do better. But some of that learning has to be
pre-emptive. By the time collapse occurs, it may well be too late.

SA: The prospect of societal collapse is gradually getting discussed more regularly these days, even in some mainstream forums, like
prominent newspapers and ‘serious’ magazines. If it was once a fringe territory of ‘doomsayers’, today one might even say that collapse is the expected course of events. Slavoj Zizek would say this is functioning to ‘normalise the apocalypse’. But for all the
attention this notion of collapse is given, it is not always discussed with much rigour or definition. What do you mean when you use
the term collapse? Is there any prospect of a ‘prosperous descent’? Or will any collapse scenario necessarily be full of pain and
suffering?

RR: This is a crucial question. The way I have been talking about ‘this civilisation’ (as finished) has been shorthand. What for?
Basically, for what Joanna Macy calls ‘industrial
growth society’. That is what is finished. The fantasy of
endless ‘progress’ (aka endless economic growth) is dead. Every further bit of material ‘progress’
now takes us further over the cliff-edge, reduces even further our slim chances of clawing our
way to some safety. We are eating into our life-support systems.

Growthism, a central part of the ideology that rules this civilisation globally, is deadly because it always makes our
task harder. You and I, Sam, are among those who have shown that net green economic growth while
remaining within planetary boundaries is deeply implausible.51 But even if we were somehow
wrong about this, it would still be true that growthism tends toward deadliness; for, by making
our collective aim into GDP growth, and thus by endlessly increasing pressure upon those
boundaries, we provide a rod for our own backs.
Even if net (i.e., economy-wide, not sector-specific) green growth were possible, it’s a rod for our collective backs. The intelligent
thing to do, obviously, is to remove the rod!

As for industrialism, nearly everyone assumes that the industrial revolution was an inevitability
and obviously a good thing. But this evinces a lack of imagination. As the consequences of
industrial-growthism lead us steadily toward the white swan of climate catastrophe and
ecological breakdown, with the sixth mass extinction well underway at our hands, surely we have to
re-assess this assumption. Surely we have to take up a more critical and thoughtful stance toward it, as the Dark Mountain
Project has helpfully done. Surely we have to ask: couldn’t the whole thing have been done with more
precaution, more slowly? And couldn’t—mustn’t—we be more selective about which industries we choose to permit and to
develop now?

We need to rein in the reckless growth of industry, and to radically roll back the many industries
that are killing us and our other-than-human kin, and steadily eliminating our kids’ future. We need to choose which
products and processes of industrial society we want to seek to preserve. For example, I hope that, in our radically relocalised
future, we may be able to preserve some of the internet as a mode of communication, to help us share knowledge and wisdom, to
continue to tackle global issues (such as climate), and to help prevent a growth in xenophobia. But we’ll have to see. Without doubt,
much of what we are accustomed to will have to go.

The sheer enormity and audacity of this task, and the way that it contradicts our ruling ideas of the
allegedly endless technical ingenuity of humanity, the allegedly beneficent nature of technology,
the ideology of ‘progress’ and ‘development’, etc., mean that it is hard to see how we could possibly do
this. So what I am saying is: such a transformation, resulting in a society on a radically different footing, is not something that any
wise person would bet on us succeeding in. A prosperous descent—which is path (3) of the possible paths that I laid out earlier—
would be wonderful, and remains possible, and so it is painful (not to mention unbearably frustrating) to admit the fact that
humanity appears very unlikely to be capable of it.

This is why, as I argued earlier, we need the insurance policy not only of transformative adaptation but also of deep adaptation; to help prevent path (2)—that of a successor-civilisation after a collapse-event—itself collapsing into path (1) (total collapse; the default outcome, the white swan that probably awaits us, on even a
reformed business-as-usual path). Some kind of collapse, quite likely driven by the interaction of water shortage and consequent
food shortage, but quite possibly driven by other things instead or as well (e.g., by pollinator failure due to the insect-apocalypse, or
possibly by plague among a climatically-weakened population), has to be considered our likely fate. Not just in Africa, Asia, and the
Middle East, but in Australia, Europe, and North America.

Industrial-growth society is finished. We will rapidly transform it into something better, or it will collapse, either to
seed something different or simply to end us. And any collapse event will be chock-full of pain. It will be challenging to prevent it
from becoming a more or less total collapse; for instance, as we have already discussed, stopping nuclear waste—spent fuel rods,
not to mention live reactors—from becoming virtually endless drivers of death and suffering will require concerted effort at a time
when we will be ill-placed to make that effort. (In countries like England or the USA, do we even have the collective will to make the
sacrifices that may well be required under such circumstances? Is the combination of voluntary and forced heroism that prevented
the disaster of Chernobyl from becoming a catastrophe replicable in countries like ours that pride themselves on an ideology of
atomised individualism, countries which toy with the idea that there is no such thing as society?)

And yet, where the greatest danger lies, there too can be found the saving power. As we dare at
last to gaze into the abyss, as we find the courage to contemplate these matters that you and I are
discussing here, as we take the measure of the beauty of what we have and the folly of our squandering it, as we feel the heart-pain
of what we are committing our children to, so we can rise to the challenge. Rise up to meet it. The greatest challenge of
the entire history of our species is upon us. What an awesome and even thrilling responsibility—and, of course,
terrifying.

As I set out in answer to your previous question, one thing that in this great and terrible moment gives me very real hope is that,
when human beings are subject to the gravest of threats and the most unexpected of challenges, we really do tend spontaneously to become our best selves, selfless and creative of real community.
So it is
possible that the disasters which are definitely coming and the collapse which they are
likely to lead to may yet be the making of us.

SA: You are suggesting then that even in a collapse scenario, we might be surprised to discover that some tragic events have a silver
lining of sorts. Perhaps you could unpack that counter-intuitive idea a little further.

RR: Yes. We are living, nowadays, in ways that involve us in a virtually permanent absence of community. Disasters
enable
this to be overcome. They enable our small selves, our limited and limiting egos, to be
overcome. For such overcomings to be possible and to take place, there must be a full-scale
disaster, not merely an accident or something bad. Charles Fritz, who is a key influence on Rebecca Solnit’s work in this
area, emphasises this point.52 He writes that disasters need to be big enough not to leave behind ‘an undisturbed,
intact social system’. Only if that system is disrupted sufficiently can new and realer forms of
community emerge. ‘Disaster provides an unstructured social situation that enables persons and groups to perceive the
possibility of introducing desired innovations into the social system,’ according to Fritz.

When we picture collapse, we tend to imagine human beings at their worst. But what is
sometimes revealed in disaster is real community identity, which fills our modern lack; and this is
the very opposite of what the Hobbesian ‘script’ would have us imagine.

The etymology of the word ‘apocalypse’ is ‘to uncover, to reveal’. I am suggesting that, while any collapse will necessarily
involve much pain and indeed death, as we will no longer be able to support our artificially bloated
population53 and our decadent standard of living, it doesn’t have to reveal a human nature that
is red in tooth and fist. If we proceed from a place of love and fellowship rather than from a place of distrust, the human
nature that gets revealed even in collapse could be one of unexpected solidarity and care and sacrifice.

Writers such as Margarete Buber-Neumann, Viktor Frankl, and Primo Levi have made clear how, even in environments designed to
break the human spirit, unexpected possibilities of loving-kindness often flowered. So it won’t be beyond our wit (or our hearts),
when under stress, to foster such flowerings in the years of living dangerously to come.

In collapse, our
social system would of course get thoroughly, utterly perturbed. What I am saying is that,
in the less structured situation that emerges, there is a very real chance that we can find each
other and find some deeper togetherness. So yes, this is a potential silver lining even of
collapse, especially if we can turn a partial-collapse scenario into a breakthrough of the human spirit. A Blitz spirit for our times.
An arising of consciousness that could seed a successor-civilisation, a civilisation which someone like Gandhi would think a good
idea.
