
Meta Ethics

Managerial Ethics, HRM and BM, XLRI, Jamshedpur, 10th Oct, 2022


Deontology—Duty Ethics
• Immanuel Kant
• The Ethics of Motives—The thought itself can be
ethical or unethical and not just its consequences
• Under no circumstances should the pursuit of one's own happiness, or one's own benefit maximization, be carried out at the expense of others or of the general wellbeing. When in doubt, morality, that is, the wellbeing of others, must be placed above one's own.
Deontology
•Categorical Imperatives
• “Do unto others what you would want others to do unto you”
• “Act only according to that maxim which you can at the same time will to become a universal law”.
• Practical Imperative
• “Treat humans as ends in themselves and not merely as a means to achieve an end.”
Deontology
• The Publicity Rule
• “All actions relating to the rights of other people, whose maxim is not compatible with publicity, are wrong”.
• For Kant the duties to other people include respecting their dignity, helping
them in need, being grateful and conciliatory, not deceiving them, not lying,
nor mocking or slandering.
• As inner attitudes, he demands virtues such as benevolence, compassion,
gratitude, truthfulness and integrity
• Negative inner attitudes, or vices, on the other hand, are envy, dislike, pleasure in the pain of others, arrogance, revenge and greed
• Economic obligations are respect for the laws and the property of others,
the observance of contracts and the payment of debts.
• The freedom of the individual stops where the freedom of the other begins.
Individuals are not allowed to exercise their freedom without consideration
or to the detriment of others.
Deontology
• Ethical Dilemmas
• What if negative consequences arise from the duties?
---The case of the murderer

• What if two duties contradict each other?


----the case of euthanasia
----the case of the individual's right to privacy of data versus safety
----non-violence vs the right to life or sovereignty
Utilitarianism
• Jeremy Bentham and Act Utilitarianism
• The action that produces the greatest net happiness is the most moral
• Net happiness for the greatest number is determined by the consequences of the act and not by the act itself

• The Hedonistic Calculus

• The Pinto Case


• The Colosseum Case
• Hiroshima and Nagasaki
• War and Collateral Damage
Utilitarianism
• John Stuart Mill and Rule Utilitarianism
• Rule utilitarianism focuses on the general rule that maximizes happiness. The difference lies in the overall happiness of society, which is the outcome when general rules are followed.
• The Colosseum case would not be justified by Rule Utilitarianism, because a rule permitting torture leads to negative utility.
• But Rule Utilitarianism would approve the bringing down of a hijacked plane, as on 9/11
• According to Mill, a lie is allowed, contrary to Kant, if, with all its consequences, it
produces less harm than the truth.
• The problem with this approach is that the assessment is ultimately left to the
individual. Every lie can be justified, you just have to paint the consequences of the
truth starkly enough.
• There is, therefore, the basic question of who is to evaluate ethicality: the individual, the group, or society. Without a correlation to the usefulness or wellbeing of other people, a distinction between good and evil can be made neither in individual ethics nor in social ethics.
Virtue Ethics
• Four Cardinal Virtues
• Wisdom/Prudence

• Fortitude

• Temperance

• Justice
Virtue Ethics
• Eudaimonia—the Concept of the Chief Good (Happiness)

• The Concept of the Intermediate Virtue


Sophism—Protagoras, Antiphon and Thrasymachus
• Ethical Relativism

• Ethnocentrism

• Majoritarianism
THE ENRON SCANDAL
• The Players:
• Kenneth Lay-Chairman and Founder
• Jeffrey Skilling-CEO and President
• Andrew Fastow- Chief Financial Officer
• Arthur Andersen- Chief Auditing Firm
• US Securities and Exchange Commission
• Enron’s Board of Directors
• Enron Shareholders
• Merrill Lynch
THE ENRON SCANDAL-FACTSHEET
• Founded in 1985 by merging the Houston Natural Gas Company and
Internorth Electric Company to become Enron

• Diversified company that held assets in gas pipelines, electricity plants, paper and pulp plants, water plants and broadband services

• Gained additional revenues in trading and project management business, like setting up power generation plants in Indonesia, the Philippines and India
THE ENRON SCANDAL-FACTSHEET Cont…
• After Skilling came on board, the revenues of Enron increased
between 1996 and 2000 by 750%, from $13 b to $101 b.

• Attained the sixth position in the Fortune 500 list in 2001 and was rated the Most Innovative Large Company in Fortune's Most Admired Companies survey
THE ENRON SCANDAL-THE FALL
• The stock price dropped from $90 per share in mid-2000 to $1 by end-November 2001

• Enron shareholders filed a $40 billion lawsuit to try to recover their wealth loss in early 2001

• The SEC launched an investigation into Enron's financial dealings
THE ENRON SCANDAL-THE FALL Cont…
• Rival Houston energy company Dynegy made an opportunistic takeover attempt on Enron in 2001, which failed

• Enron filed for Chapter 11 of the US Bankruptcy Code in December 2001
THE ENRON SCANDAL-CAUSES
• Mark-to-Market Accounting Methodology adopted by Skilling for the Trading Business, which inflated Enron's net worth

• Special Purpose Entities and hedging created and indulged in by Fastow for off-balance-sheet transactions that hid losses and helped show profits at quarter-ends to meet Wall Street expectations

• Conflict of interest and lax auditing practices by chief audit firm Arthur Andersen,
who were getting huge non-audit consulting fees from Enron

• Incompetence/Negligence of the Board of Directors: Poor Corporate Governance


BOARD OF DIRECTORS OF ENRON
• Robert Jaedicke, Accounting Professor and former Dean of Stanford Business School

• John Mendelsohn, President of the University of Texas M.D. Anderson Cancer Center

• Paulo Pereira, Former President and CEO of the State Bank of Rio de Janeiro in Brazil

• John Wakeham, Former UK Secretary of State for Energy and Parliamentary Secretary
to the Treasury

• Ronnie Chan, Chairman of the Hong Kong Hang Lung Group

• Wendy Gramm, Former Chairman of the US Commodity Futures Trading Commission.


THE ENRON SCANDAL- ROOT ETHICAL CAUSES
• Lack of honesty and integrity
• Greed for quarterly and year-end bonuses and
promotions
• Lying and cheating and insider trading
• A culture of materialism, instant self-gratification and
corporate sleaze
• Complete breakdown of Virtue Ethics
MALPRACTICES AT ENRON
• Creative Accounting: the Nigerian Barge Case
• Blockbuster Video Deal
• Special Purpose Entities, with Fastow and his wife as sponsors!
• Derivative hedging with Enron as the real guarantor, but disguised as an SPE—hedging against itself!
• Cancelled projects still shown as “assets” up to the value of $200 million until the letter of cancellation was received.
• Analysts' tour of the Enron Energy Services office with imported employees!
Enron Employee Meeting
The Ethics of Corporate Virtues

A. The Western Ethical Traditions


1. Origins in the Codes of Hammurabi and the Edicts of Nebuchadnezzar. It is alleged that the Jewish and, subsequently, Christian Ten Commandments were distilled from these codes and edicts. (Show pictures of the Codes and Edicts). All were attempts to civilize social and personal behaviour and relationships. Divine Command Theory—good or bad does not depend upon human perceptions, cultures or conventions, but on God's commandments, as revealed to holy prophets and blessed human beings or enunciated by avatars of gods visiting earth. The fundamental tenets of the Jewish, Christian, Islamic and Hindu religions are based on this tradition.
2. The Greek Philosophers and Virtue Ethics—Socrates, Plato and Aristotle, all in the plane of the secular and the rational. Socrates eulogised knowledge as the chief virtue, without which no other virtue can exist. Plato underscored four cardinal virtues that defined ethical behaviour: prudence, fortitude, temperance and justice. Aristotle endorsed Plato's four cardinal virtues, believed that virtues are imbibed through conscious practice and habit-formation, and gave the world Virtue Ethics. (Show pictures of Socrates, Plato and Aristotle). According to Aristotle, every human being seeks an ultimate good, and other seekings are a means to achieve the chief good. Eudaimonia, or Happiness, according to Aristotle, is the Chief Good. All other goods are intermediate goods or means to achieve the chief good, like health, wealth, pleasure, honour, beauty, friendship, love and power.
To achieve this chief good, one must cultivate certain virtues. These virtues are not naturally present in man, but need to be cultivated through habit. If these virtues are imbibed into one's character, then one will be well-ordered, rational and in control.
Aristotle expounded the concept of the “intermediate virtue”: a virtue is a mean between two vices. Fear—Courage—Foolhardiness; Miserliness—Generosity—Spendthriftiness; Small-mindedness—Honour—Conceitedness. This is not a concept of moderation, as Kant accused Aristotle of, but a concept of achieving an equilibrium, like a pair of scales. Aristotle says that emotions such as anger are natural; it is how an emotion is directed and controlled that makes it a virtue or a vice.
3. Ethical Relativism and the Sophists: They questioned the very existence of the Platonic absolute or cardinal virtues. For them, virtues were born out of social convention, out of what a society placed value on, and hence were relative to the beliefs of particular societies. Thrasymachus went one step further: he said the virtue of justice meant obedience to the laws of the society you lived in, and these laws were promulgated by those in power to serve their interests. Therefore, justice was nothing but the interest of the strongest group. "Ethnocentrism", and even "genocide" or the "white man's burden", can be justified through this philosophy. (Show pictures of Protagoras, Antiphon and Thrasymachus)
4. Ethical Egoism: Thomas Hobbes: "To maximise net benefit and minimise net harm to oneself is ethical". This is the essence of ethical egoism. Hobbes believed that a person's voluntary acts were aimed at pleasure or self-preservation. Nothing was good in itself unless it gave pleasure to oneself: hedonism. (Give examples of weed and sex). What happens when two people want the same object of pleasure? Then life would be "solitary, poor, nasty, brutish and short", unless they can enter into a "social contract" that can be enforced by a sovereign power (the Leviathan). (Show pics of Thomas Hobbes)
5. The Moral Sense School: In Hobbes, self-centredness was the main characteristic of human beings. But others disagreed: there is an in-built 'moral sense' in humans that produces virtues like benevolence, sympathy, empathy, gratitude and so on, and that creates a balance between virtue and self-interest. Selfishness is not the only human passion. David Hume believed that reason was the 'slave of the passions'. Reason can at best point out what is right and wrong, but what action a person will take is purely based on the intensity of his feelings towards or against the subject. Night vigil for rape victims, or Greta Thunberg. Gilligan and Noddings' "Ethics of Care": give 300 bucks to a beggar or take your only kid to a children's movie? (Show pics of David Hume, Gilligan and Nel Noddings)
6. Utilitarianism: Human beings are placed by nature under two masters: pleasure and pain. Any action that gives you pleasure is right; any action that gives you pain is wrong. Maximising the net pleasure and minimising the net pain is ethical—Jeremy Bentham. How do you assess this outcome? Through the consequences of the act and not the act itself. So dropping an atom bomb each on Hiroshima and Nagasaki is not unethical in itself, if the consequences of that act stopped further suffering. Bentham also took into consideration the intensity, duration, fecundity and propinquity of the act. All this was calculated through a cumbersome and inaccurate hedonistic calculus. J.S. Mill, the other famous utilitarian philosopher, did away with this hedonistic calculus and said that one can assess an act by measuring its consequences against well-established and accepted rules of society. Therefore, his utilitarianism was called Rule Utilitarianism and that of Bentham, Act Utilitarianism. Only when there is a clash between accepted principles or moral laws can you fall back on the hedonistic calculus. Maximising shareholder wealth, and the very foundation of capitalism, is based on utilitarianism: a more respectable version of Ethical Egoism.
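A rough way to picture Bentham's hedonistic calculus is as a scoring exercise: estimate the pleasure and pain an act produces for every person affected and add them up. The short Python sketch below is purely illustrative; the scales and numbers are assumptions for teaching, not Bentham's own scheme.

    # Illustrative sketch of Bentham's hedonistic (felicific) calculus.
    # Scales and weights are assumed for illustration, not Bentham's own.

    def hedonic_score(intensity, duration, certainty, propinquity):
        # A crude per-person pleasure score: how strong, how long, how likely, how soon.
        return intensity * duration * certainty * propinquity

    def net_happiness(affected_people):
        # Sum of (pleasure - pain) over everyone affected by the act.
        return sum(p["pleasure"] - p["pain"] for p in affected_people)

    # Two hypothetical acts, each affecting two people.
    act_a = [{"pleasure": hedonic_score(8, 5, 0.9, 1.0), "pain": 2},
             {"pleasure": hedonic_score(3, 2, 0.5, 0.8), "pain": 10}]
    act_b = [{"pleasure": hedonic_score(6, 4, 0.8, 1.0), "pain": 1},
             {"pleasure": hedonic_score(6, 4, 0.8, 1.0), "pain": 1}]

    # Act Utilitarianism picks whichever act yields the greater net happiness.
    print("Act A" if net_happiness(act_a) > net_happiness(act_b) else "Act B")

The sketch also shows Bentham's difficulty: every number in it is a subjective estimate, which is why the calculus was criticised as cumbersome and inaccurate.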
7. Deontology and Immanuel Kant: In his ethics, the act itself can be ethical or unethical, and not just its consequences. The deontological insight is that there are moral laws embedded in human nature, and these laws can be subjected to the test of reason and are universally applicable. You don't have to refer to God to feel the presence of these laws. He believed in 'categorical imperatives', like the Golden Rule: "Do unto others what you would want others to do unto you" or "Treat others as an end in themselves and not as a means to achieve an end". Dropping those atom bombs was unethical according to deontology, because you can't kill 100,000 Japanese (a means) to end further suffering (the end objective). Kantian ethics runs into difficulties, and is seen as too rigid, when faced with ethical dilemmas. Stealing food to feed your starving children during a severe famine? Ethically wrong, according to deontology. No room for attenuating circumstances. Lying to save a person's life during a hostage situation? Lying is unethical; no room for exceptions. Kantian ethics does not bother about consequences.
8. Moral Rights Theory: Utilitarianism had no rival as a way of judging moral issues from the mid-19th century to the mid-20th century. But to define justice as that which merely satisfies the maximum number was not satisfactory to many moral philosophers. So came John Rawls in 1971 with his "A Theory of Justice", a comprehensive treatment of the issue of justice and therefore of the normative question of "what is right and what is wrong". According to Rawls, a balanced person of good character will seek the maximum benefit for the least privileged segment of the people, while ensuring something comes to him and his group: the Maximin Principle. And if the maximum benefit must first go to the better-off people in order to produce greater benefit to the less well-off people in future, then this too would be just and ethical. The whole focus turned from what is right and wrong to the moral rights and justice of segments of people: from ontological meta-ethics to normative ethics to applied or practical ethics. How can ethics give solutions to real and pressing problems like inequality of income within and among nations, poverty, racial discrimination, gender issues and animal rights?
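Rawls' Maximin Principle can be stated as a simple decision rule: among the feasible social arrangements, choose the one in which the worst-off group fares best. A minimal sketch, with hypothetical payoff numbers chosen only to illustrate the rule:

    # Illustrative sketch of Rawls' Maximin Principle.
    # Each arrangement lists hypothetical payoffs to [best-off, middle, worst-off] groups.
    arrangements = {
        "laissez_faire":    [100, 40, 5],
        "flat_equality":    [30, 30, 30],
        "priority_to_poor": [60, 45, 35],
    }

    # Maximin: pick the arrangement that maximises the minimum (worst-off) payoff.
    chosen = max(arrangements, key=lambda name: min(arrangements[name]))
    print(chosen)  # -> priority_to_poor (its worst-off group gets 35, the highest minimum)

Note that the rule tolerates inequality (60/45/35 beats strict equality at 30/30/30) precisely because the inequality leaves the least privileged better off, which is the point made in the paragraph above.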
9. Ethics in the 21st Century: The major concerns of this century
are terrorism, climate change, disruptive technological
advances like AI and Human Enhancement—Euthanasia,
Abortion, Surrogate Motherhood, Cloning, Full-Body
Replacement (Cyborgs), Autonomous Moral Agents (Humanoid
Robots). There is a revolution in the making, which will be in
full flow in the next 30 years. All these concerns need answers
from the discipline of ethics. In fact, meta-ethics, normative ethics and applied ethics have once again regained their paramount importance in the affairs of homo sapiens.

B. Indian Ethics
Dharma, or the righteous way to live our lives, is the Indian
version of ethics. Lord Krishna, in the Bhagwat Gita, elaborates
the concept of Dharma: “Every organism is born to serve a
purpose. Understanding the purpose and living accordingly is
Dharma.” In this definition, it is ethical to wage a just war, if
that is your purpose and as long as you are not seeking to gain
personal prestige or wealth or power—you can even kill your
cousins, if required. Vyasa, in the Mahabharat, expands on the
concept of Dharma— “To actively help those in need as well as
passively not harming others and being fair and just in one’s
judgements.” There are elements of Dharmashastra in Virtue
Ethics, Utilitarianism and Moral Rights Theory, except in
Deontology, which is a philosophy of absolutes—war is wrong,
even if it’s a just war; killing is wrong, even in self-defense;
stealing is wrong, even if to feed a dying person; lying is wrong,
even if that saves the life of an innocent person. Find other
ways other than war, killing, stealing and lying. Ethics is today
incorporated fully into the law of the land and it is western law
that most countries have adopted, with the exception of some
countries that follow the Shariat Law, which is based on the
Islamic version of the Divine Command Theory.

C. South-East Asian Ethics


Ethical principles in this part of the world are drawn from Buddhism, Confucianism and various variants of Taoism. Confucius, in the 6th century BC, expounded a set of ethical principles, like Hammurabi's Code, for Chinese society to live in peace and harmony. These principles include love for one's fellows, filial piety, decorum, virtue and the ideal of the superior man—this last principle was the justification for the various kingly dynasties that existed in China till Mao Zedong's Communist revolution.
Lao-tzu, the legendary founder of Taoism and the traditional
author of the Tao-te-Ching, expounded the concept of Tao,
which is the balance in nature, the Yin and Yang; therefore one must live in harmony with the forces of nature to be balanced and well-ordered in one's own life. All ethical
principles in Taoism follow from this alignment. There is no
concept of the Confucian Superior man here.

D. Ethical Issues from the Corporate World


How would you characterize today’s global political world?
Very ethical, moderately ethical or fully corrupt? How would
you characterize Indian bureaucracy? Very ethical, moderately
ethical or totally corrupt? How would you characterize the
global corporate world? Moderately ethical or totally corrupt?
Let’s take some examples and analyse these cases from the
ethical traditions you just heard:
i. The Great American Streetcar Scandal
• Back in the 1920s, most American city-dwellers took public
transportation to work every day. There were 17,000 miles of
streetcar lines across the country, running through virtually
every major American city. That included cities we don't think
of as hubs for mass transit today: Atlanta, Raleigh, and Los
Angeles. Nowadays, by contrast, just 5 percent or so of workers
commute via public transit, and they're disproportionately
clustered in a handful of dense cities like New York, Boston, and
Chicago. Just a handful of cities still have extensive streetcar
systems — and several others are now spending millions trying
to build new, smaller ones.
• Running streetcars was a very profitable business in the late
1880’s. Cities expanded, and people who found themselves
living too far from work to walk depended on them. (Some real-
estate developers built nearby suburbs around streetcar lines.)
Over time, the businessmen who ran the streetcars, called
"traction magnates," consolidated ownership of multiple lines,
establishing powerful, oftentimes corrupt monopolies in many
cities.
• In the early-to-middle 20th century, a consortium of automotive interests (led by General Motors) bought up streetcar lines in a number of cities and converted them to bus routes. This group was ultimately found guilty in federal court of "conspiracy to monopolize mass transit."
• By the 1950s, virtually all streetcar companies were in terrible
shape. Between 1938 and 1950, one company purchased and
took over the transit systems of more than 25 American cities.
Their name, National City Lines, sounded innocuous enough,
but the list of their investors included General Motors, the
Firestone Tire and Rubber Company, Standard Oil of California,
Phillips Petroleum, Mack Trucks, and other companies who
stood to benefit much more from a future running on gasoline
and rubber than on electricity and rails. National City Lines
acquired the Los Angeles Railway in 1945, and within 20 years
diesel buses – or indeed private automobiles – would carry all
the yellow cars’ former passengers. Does that strike you as a
coincidence?
• “It’s easy to blame car companies because they’re the logical
economic beneficiary of this car-oriented system. But the
reality is more complex, and if there’s any conspiracy here, it’s
on the part of local officials who kept approving sprawling
subdivisions that have led to the present inefficient land use
patterns.”
• While it's true that National City continued ripping up lines and
replacing them with buses — and that, long-term, GM
benefited from the decline of mass transit — it's very hard to
argue that National City killed the streetcar on its own.
Streetcar systems went bankrupt and were dismantled in
virtually every metro area in the United States, and National
City was only involved in about 10 percent of cases.
• In Bradford Snell's words: “General Motors' destruction of electric transit systems across the country left millions of urban residents without an attractive alternative to automotive travel”.
• The strongest rebuttal came from transit scholar George Hilton
(on whose work Snell had ironically relied) in his own 1974
Senate testimony (p. 2204): “I would argue that these [Snell's]
interpretations are not correct, and, further, that they couldn't
possibly be correct, because major conversions in society of
this character — from rail to free wheel urban transportation,
and from steam to diesel railroad propulsion — are the sort of
conversions which could come about only as a result of public
preferences, technological change, the relative abundance of
natural resources, and other impersonal phenomena or
influence, rather than the machinations of a monopolist”.
• But the basics of the shift are well-covered in a review
published by Cliff Slater in a 1997 issue of Transportation
Quarterly: "GM simply took advantage of an economic trend
that was already well along in the process — one that was
going to continue with or without GM's help," concludes Slater.
ii. The Enron Scandal
Enron was formed in 1985 by Kenneth Lay after merging Houston
Natural Gas and InterNorth. Several years later, when Jeffrey Skilling
was hired, he developed a staff of executives that – by the use of
accounting loopholes, special purpose entities, and poor financial
reporting – were able to hide billions of dollars in debt from failed
deals and projects. Chief Financial Officer Andrew Fastow and other
executives not only misled Enron's Board of Directors and Audit
Committee on high-risk accounting practices, but also pressured
Arthur Andersen to ignore the issues.
Enron shareholders filed a $40 billion lawsuit after the company's
stock price, which achieved a high of US$90.75 per share in mid-
2000, plummeted to less than $1 by the end of November 2001. The
U.S. Securities and Exchange Commission (SEC) began an
investigation, and rival Houston competitor Dynegy offered to
purchase the company at a very low price. The deal failed, and on
December 2, 2001, Enron filed for bankruptcy under Chapter 11 of
the United States Bankruptcy Code.
Many executives at Enron were indicted for a variety of charges and
some were later sentenced to prison. Andersen was found guilty of
illegally destroying documents relevant to the SEC investigation,
which voided its license to audit public companies and effectively
closed the firm. By the time the ruling was overturned at the U.S.
Supreme Court, the company had lost the majority of its customers
and had ceased operating. Enron employees and shareholders
received limited returns in lawsuits, despite losing billions in
pensions and stock prices.
In an attempt to achieve further growth, Enron pursued a
diversification strategy. The company owned and operated a variety
of assets including gas pipelines, electricity plants, pulp and paper
plants, water plants, and broadband services across the globe. The
corporation also gained additional revenue by trading contracts for
the same array of products and services with which it was involved.
This included setting up power generation plants in developing
countries and emerging markets including The Philippines (Subic
Bay), Indonesia and India (Dabhol). Enron was rated the most
innovative large company in America in Fortune's Most Admired
Companies survey.
Skilling constantly focused on meeting Wall Street expectations,
advocated the use of mark-to-market accounting (accounting based
on market value, which was then inflated) and pressured Enron
executives to find new ways to hide its debt. Fastow and other
executives "created off-balance-sheet vehicles, complex financing structures, and deals so bewildering that few people could understand them".
Although trading companies such as Goldman Sachs and Merrill
Lynch used the conventional "agent model" for reporting revenue
(where only the trading or brokerage fee would be reported as
revenue), Enron instead elected to report the entire value of each of
its trades as revenue. This "merchant model" was considered much
more aggressive in the accounting interpretation than the agent
model. Enron justified this method because they were accepting the
entire risk of the transaction.
Between 1996 and 2000, Enron's revenues increased by more than
750%, rising from $13.3 billion in 1996 to $100.8 billion in 2000. This
expansion of 65% per year was unprecedented in any industry,
including the energy industry, which typically considered growth of
2–3% per year to be respectable. For just the first nine months of
2001, Enron reported $138.7 billion in revenues, placing the
company at the sixth position on the Fortune Global 500.
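The growth figures quoted above fit together as simple arithmetic: revenue grew to roughly 7.6 times its 1996 level (the basis of the "more than 750%" figure), which compounds to roughly 65-66% per year over four years. A quick check, for illustration only:

    # Check of the quoted Enron growth figures: $13.3bn (1996) -> $100.8bn (2000).
    start, end, years = 13.3, 100.8, 4

    ratio = end / start                      # ~7.6x the 1996 level
    cagr = (end / start) ** (1 / years) - 1  # ~0.66, i.e. roughly 65-66% per year

    print(f"revenue multiple ~{ratio:.1f}x, compound annual growth ~{cagr:.0%}")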
Enron also used creative accounting tricks and purposefully mis-
classified loan transactions as sales close to quarterly reporting
deadlines. In Enron's case, Merrill Lynch bought Nigerian barges with a buyback guarantee from Enron shortly before the earnings deadline, which effectively meant that Merrill Lynch had given Enron a bridge loan, to be repaid when Enron actually bought back the barges. Enron mis-reported the bridge loan as a true sale, then bought back the barges a few months later. Merrill Lynch executives were later tried and convicted for aiding Enron in its fraudulent accounting activities.
In Enron's natural gas business, the accounting had been fairly
straightforward: in each time period, the company listed actual costs
of supplying the gas and actual revenues received from selling it.
However, when Skilling joined the company, he demanded that the
trading business adopt mark-to-market accounting, claiming that it
would represent "true economic value."[11]:39–42 Enron became
the first nonfinancial company to use the method to account for its
complex long-term contracts. Mark-to-market accounting requires
that once a long-term contract has been signed, income is estimated
as the present value of net future cash flow. Often, the viability of
these contracts and their related costs were difficult to estimate.
Owing to the large discrepancies between reported profits and cash,
investors were typically given false or misleading reports. Under this
method, income from projects could be recorded, although the firm
might never have received the money, with this income increasing
financial earnings on the books. However, because in future years
the profits could not be included, new and additional income had to
be included from more projects to develop additional growth to
appease investors. As one Enron competitor stated, "If you
accelerate your income, then you have to keep doing more and more
deals to show the same or rising income."[15] Despite potential
pitfalls, the U.S. Securities and Exchange Commission (SEC) approved
the accounting method for Enron in its trading of natural gas futures
contracts on January 30, 1992. However, Enron later expanded its
use to other areas in the company to help it meet Wall Street
projections.
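The mechanics described above can be pictured with a small present-value calculation. In the hypothetical contract below (all numbers invented for illustration; this is not Enron's actual model), mark-to-market accounting lets the whole discounted stream of estimated future profits be booked as income on the day the contract is signed, even though almost no cash has yet come in:

    # Illustrative sketch of why mark-to-market flattered reported earnings.
    # All figures are hypothetical.

    def present_value(cash_flows, discount_rate):
        # Discount each year's estimated net cash flow back to today.
        return sum(cf / (1 + discount_rate) ** (year + 1)
                   for year, cf in enumerate(cash_flows))

    estimated_cash_flows = [50.0] * 10   # a signed 10-year contract, est. $50m net per year
    income_booked_now = present_value(estimated_cash_flows, discount_rate=0.08)

    print(f"Income recognised at signing: ~${income_booked_now:.0f}m")
    # ~$336m is recognised immediately while actual cash received is ~$0.
    # The same contract adds nothing new next quarter, so fresh deals are needed
    # just to keep reported income flat: the treadmill described in the text above.

This is the "accelerate your income" problem quoted from Enron's competitor: once future profits are pulled into the present, ever more deals are needed to show the same or rising income.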
For one contract, in July 2000, Enron and Blockbuster Video signed a
20-year agreement to introduce on-demand entertainment to
various U.S. cities by year's end. After several pilot projects, Enron
claimed estimated profits of more than $110 million from the deal,
even though analysts questioned the technical viability and market
demand of the service. When the network failed to work,
Blockbuster withdrew from the contract. Enron continued to claim
future profits, even though the deal resulted in a loss.
Enron used special purpose entities—limited partnerships or
companies created to fulfil a temporary or specific purpose to fund
or manage risks associated with specific assets. The company elected
to disclose minimal details on its use of "special purpose entities".
These shell companies were created by a sponsor, but funded by
independent equity investors and debt financing. For financial
reporting purposes, a series of rules dictate whether a special
purpose entity is a separate entity from the sponsor. In total, by
2001, Enron had used hundreds of special purpose entities to hide its
debt.
Corporate governance
On paper, Enron had a model board of directors comprising
predominantly outsiders with significant ownership stakes and a
talented audit committee. In its 2000 review of best corporate
boards, Chief Executive included Enron among its five best boards.
Even with its complex corporate governance and network of
intermediaries, Enron was still able to "attract large sums of capital
to fund a questionable business model, conceal its true performance
through a series of accounting and financing manoeuvers, and hype its stock to unsustainable levels".
Although Enron's compensation and performance management
system was designed to retain and reward its most valuable
employees, the system contributed to a dysfunctional corporate
culture that became obsessed with short-term earnings to maximize
bonuses. Employees constantly tried to start deals, often
disregarding the quality of cash flow or profits, in order to get a
better rating for their performance review. Additionally, accounting
results were recorded as soon as possible to keep up with the
company's stock price. This practice helped ensure deal-makers and
executives received large cash bonuses and stock options.
Before its scandal, Enron was lauded for its sophisticated financial
risk management tools.[26] Risk management was crucial to Enron
not only because of its regulatory environment, but also because of
its business plan. Enron established long-term fixed commitments
which needed to be hedged to prepare for the invariable fluctuation
of future energy prices.[27] Enron's collapse into bankruptcy was
attributed to its reckless use of derivatives and special purpose
entities. By hedging its risks with special purpose entities which it
owned, Enron retained the risks associated with the transactions.
This arrangement had Enron implementing hedges with itself.[28]

Enron's aggressive accounting practices were not hidden from the board of directors, as later learned by a Senate subcommittee. The
board was informed of the rationale for using the Whitewing, LJM,
and Raptor transactions, and after approving them, received status
updates on the entities' operations. Although not all of Enron's
widespread improper accounting practices were revealed to the
board, the practices were dependent on board decisions. Even
though Enron extensively relied on derivatives for its business, the
company's Finance Committee and board did not have enough
experience with derivatives to understand what they were being
told. The Senate subcommittee argued that had there been a
detailed understanding of how the derivatives were organized, the
board would have prevented their use.
Enron's auditor firm, Arthur Andersen, was accused of applying
reckless standards in its audits because of a conflict of interest over
the significant consulting fees generated by Enron. During 2000,
Arthur Andersen earned $25 million in audit fees and $27 million in
consulting fees (this amount accounted for roughly 27% of the audit
fees of public clients for Arthur Andersen's Houston office). The
auditor's methods were questioned as either being completed solely
to receive its annual fees or for its lack of expertise in properly
reviewing Enron's revenue recognition, special entities, derivatives,
and other accounting practices.
Andersen's auditors were pressured by Enron's management to
defer recognizing the charges from the special purpose entities as its
credit risks became known. Since the entities would never return a
profit, accounting guidelines required that Enron should take a write-
off, where the value of the entity was removed from the balance
sheet at a loss. To pressure Andersen into meeting Enron's earnings
expectations, Enron would occasionally allow accounting companies
Ernst & Young or PricewaterhouseCoopers to complete accounting
tasks to create the illusion of hiring a new company to replace
Andersen.[11]:148 Although Andersen was equipped with internal
controls to protect against conflicted incentives of local partners, it
failed to prevent conflict of interest. In one case, Andersen's Houston
office, which performed the Enron audit, was able to overrule any
critical reviews of Enron's accounting decisions by Andersen's
Chicago partner. In addition, after news of U.S. Securities and
Exchange Commission (SEC) investigations of Enron were made
public, Andersen would later shred several tons of relevant
documents and delete nearly 30,000 e-mails and computer files,
causing accusations of a cover-up.
Corporate Audit committees usually meet just a few times during the
year, and their members typically have only modest experience with
accounting and finance. Enron's audit committee had more expertise
than many. It included:
Robert Jaedicke of Stanford University, a widely respected
accounting professor and former dean of Stanford Business School
John Mendelsohn, President of the University of Texas M.D.
Anderson Cancer Center
Paulo Pereira, former president and CEO of the State Bank of Rio de
Janeiro in Brazil
John Wakeham, former United Kingdom Secretary of State for
Energy and Parliamentary Secretary to the Treasury
Ronnie Chan, Chairman of Hong Kong Hang Lung Group
Wendy Gramm, former Chair of U.S. Commodity Futures Trading
Commission
Enron made a habit of booking costs of cancelled projects as assets,
with the rationale that no official letter had stated that the project
was cancelled. This method was known as "the snowball", and
although it was initially dictated that such practices be used only for
projects worth less than $90 million, it was later increased to $200
million.
In 1998, when analysts were given a tour of the Enron Energy
Services office, they were impressed with how the employees were
working so vigorously. In reality, Skilling had moved other employees
to the office from other departments (instructing them to pretend to
work hard) to create the appearance that the division was larger
than it was. This ruse was used several times to fool analysts about
the progress of different areas of Enron to help improve the stock
price.
Fastow and his wife, Lea, both pleaded guilty to charges against
them. Fastow was initially charged with 98 counts of fraud, money
laundering, insider trading, and conspiracy, among other crimes.[99]
Fastow pleaded guilty to two charges of conspiracy and was
sentenced to ten years with no parole in a plea bargain to testify
against Lay, Skilling, and Causey.[100] Lea was indicted on six felony
counts, but prosecutors later dismissed them in favor of a single
misdemeanor tax charge. Lea was sentenced to one year for helping
her husband hide income from the government.[101]

Lay and Skilling went on trial for their part in the Enron scandal in
January 2006. The 53-count, 65-page indictment covers a broad
range of financial crimes, including bank fraud, making false
statements to banks and auditors, securities fraud, wire fraud,
money laundering, conspiracy, and insider trading. United States
District Judge Sim Lake had previously denied motions by the
defendants to have separate trials and to relocate the case out of
Houston, where the defendants argued the negative publicity
concerning Enron's demise would make it impossible to get a fair
trial. On May 25, 2006, the jury in the Lay and Skilling trial returned
its verdicts. Skilling was convicted of 19 of 28 counts of securities
fraud and wire fraud and acquitted on the remaining nine, including
charges of insider trading. He was sentenced to 24 years and 4
months in prison.[102] In 2013 the United States Department of
Justice reached a deal with Skilling, which resulted in ten years being
cut from his sentence.[103]

Lay pleaded not guilty to the eleven criminal charges, and claimed
that he was misled by those around him. He attributed the main
cause for the company's demise to Fastow.[104] Lay was convicted
of all six counts of securities and wire fraud for which he had been
tried, and he was subject to a maximum total sentence of 45 years in
prison.[105] However, before sentencing was scheduled, Lay died on
July 5, 2006. At the time of his death, the SEC had been seeking more
than $90 million from Lay in addition to civil fines. The case of Lay's
wife, Linda, is a difficult one. She sold roughly 500,000 shares of
Enron ten minutes to thirty minutes before the information that
Enron was collapsing went public on November 28, 2001. Linda was
never charged with any of the events related to Enron.
Arthur Andersen was charged with and found guilty of obstruction of
justice for shredding the thousands of documents and deleting e-
mails and company files that tied the firm to its audit of Enron.
Although only a small number of Arthur Andersen's employees were
involved with the scandal, the firm was effectively put out of
business; the SEC is not allowed to accept audits from convicted
felons. The company surrendered its CPA license on August 31, 2002,
and 85,000 employees lost their jobs. The conviction was later
overturned by the U.S. Supreme Court due to the jury not being
properly instructed on the charge against Andersen. The Supreme
Court ruling theoretically left Andersen free to resume operations.
However, the damage to the Andersen name has been so great that
it has not returned as a viable business even on a limited scale.
The Ethics of Corporate Trusting Relationships
1. Organizational and brand identity: Research within this third
theme has generally been an extension of the second (i.e., self-
presentation) and, to a lesser extent, the first (i.e., self-
concept), to entities other than individuals, namely,
organizations and their brands. Scholars within this theme
frequently draw on the theoretical foundations in both classical
philosophy and impression management as well as the work in
the first two themes outlined above.
Here, research does not fall quite as cleanly into distinct
streams; however, in general, studies focus on either the
identity of an organization or of a brand.
First, some research has focused on the authenticity of
organizations. In defining organizational authenticity, scholars
tend to draw explicit links to the theoretical foundations in
classical philosophy as well as work from psychology within the
self-concept theme. For example, Carroll and Wheaton suggest
that “...by analogy, an organization would be authentic to the
extent that it embodies the chosen values of its founders,
owners or members...” The emphasis in
such definitions is on organizational values (i.e., the
backstage), but, at the same time, most empirical studies tend
to focus on audience perceptions of organizational action (i.e.,
the front stage). Audiences have been shown to make authenticity attributions on the basis of observed production processes, product names, advertising campaigns, ownership structure, the extent to which it is "local", and even CEO portraits. Such attributions of authenticity tend to translate
into audience appeal for the organization and its products and
services. In addition, audiences have been shown to evaluate
the authenticity of an organization on the specific basis of its
corporate social responsibility programs and the manner in
which such programs are publicized or not. Although
most research has focused on audience perceptions of the
front stage, some have considered how organizational members collectively understand and even construct the backstage, often via an agentic use
of its own history; such considerations have also extended
beyond the boundaries of the organization to communities and
other collective identities. In sum, this
collection of research may seem disparate at first blush, but the
common thread is an interest in organizational authenticity,
conceived as the consistency between the organization’s values
and its actions.
NIKE as an example for Org Trust:
Several universities, unified by the Worker Rights Consortium, organized a national hunger strike in protest of their schools' use of Nike products for athletics. Feminist groups mobilized
boycotts of Nike products after learning of the unfair conditions
for the primarily female workers. In the early 1990s, when Nike
began a push to increase advertising for female athletic gear,
these groups created a campaign called "Just Don’t Do It" to
bring attention to the poor factory conditions where women
create Nike products.
Nike began to monitor working conditions in factories that
produce their products.[17] During the 1990s, Nike installed a
code of conduct for their factories. This code is called SHAPE:
Safety, Health, Attitude, People, and Environment.[12] The
company spends around $10 million a year to follow the code,
adhering to regulations for fire safety, air quality, minimum
wage, and overtime limits. In 1998, Nike introduced a program
to replace its petroleum-based solvents with less dangerous
water-based solvents.[18] A year later, an independent expert stated that Nike had "substituted less harmful chemicals in its production, installed local exhaust ventilation systems, and trained key personnel on occupational health and safety issues."[19] The study was conducted in a factory in Vietnam.
Between 2002 and 2004, Nike audited its factories
approximately 600 times, giving each factory a score on a scale
of 1 to 100, which is then associated with a letter grade. Most
factories received a B, indicating some problems, or C,
indicating serious issues aren't being corrected fast enough.
When a factory receives a grade of D, Nike threatens to stop
producing in that factory unless the conditions are rapidly
improved. Nike had plans to expand their monitoring process
to include environmental and health issues beginning in 2004.
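The audit scheme described above is essentially a score-to-grade mapping with an escalation rule. The sketch below is purely hypothetical: the text only says that scores ran from 1 to 100, that most factories received a B or a C, and that a D triggered a threat to pull production; the cut-offs and wording here are invented for illustration.

    # Hypothetical mapping of a factory audit score (1-100) to a grade and action.
    # Thresholds are invented; they are not Nike's actual cut-offs.
    def grade_factory(score):
        if score >= 90:
            return "A", "no action needed"
        if score >= 75:
            return "B", "some problems; keep monitoring"
        if score >= 60:
            return "C", "serious issues not being corrected fast enough"
        return "D", "threaten to stop production unless conditions improve rapidly"

    print(grade_factory(82))  # ('B', 'some problems; keep monitoring')
    print(grade_factory(45))  # ('D', 'threaten to stop production unless conditions improve rapidly')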

Second, other research has focused on the authenticity of brands. Here, too, scholars tend to emphasize the backstage in
conceptual definitions of authenticity but focus on the front
stage in empirical examinations of it. As Holt put it: “To be
authentic, brands must be disinterested; they must
be perceived as invented and disseminated by parties
without an instrumental economic agenda, by people
who are intrinsically motivated by their inherent value.”
Drawing on this early work, others have similarly emphasized notions of "faithfulness" and "truth", "consistency", "sincerity" and "trust". Several studies have
shown how audiences, and consumers in particular, make
authenticity attributions on the basis of emotional branding
tactics, such as storytelling. Others have shown the impact of
such factors as craft production methods and the perception of
value alignment between the brand and its employees or
consumers. Brand authenticity tends to engender such positive
responses as brand identification and attachment, product
adoption and sales.
Does brand trust matter to brand equity?
Elena Delgado-Ballester, José Luis Munuera-Alemán, Journal of Product & Brand Management, ISSN: 1061-0421, publication date: 1 May 2005
The most recent literature on competitive advantage views
brand equity as a relational market‐based asset because it
arises from the relationships that consumers have with brands.
Given the fact that trust is viewed as the corner‐stone, as well
as one of the most desirable qualities in any relationship, the
objective of this study is to analyze the importance of brand
trust in the development of brand equity. Specifically, the
paper examines the relationships network in which brand trust
is embedded.
The findings reveal that brand trust is rooted in the result of
past experience with the brand, and it is also positively
associated with brand loyalty, which in turn maintains a
positive relationship with brand equity. From a practical point
of view, companies must build brand trust in order to enjoy the
substantial competitive and economic advantages provided by
brand equity as a relational, market‐based asset.

J&J is a good example of brand trust. In his interview for Lasting Leadership, James Burke also spoke about trust. “Trust has
been an operative word in my life. [It] embodies almost
everything you can strive for that will help you to succeed. You
tell me any human relationship that works without trust,
whether it is a marriage or a friendship or a social interaction;
in the long run, the same thing is true about business.”
Years earlier, as Lasting Leadership noted, other companies had
demonstrated what came to be seen as poor judgment in the
way they handled defective product incidents. For example,
Coca-Cola had mismanaged the “contaminated can” incident in
Europe in 1999; Intel had initially failed to respond quickly to
the calculation errors embedded in its Pentium chip in 1994,
and Firestone had initially refused to accept responsibility for
SUV roll-overs caused by poorly manufactured tires in 2000.
Burke’s actions were the opposite. According to media reports
at the time, the Tylenol crisis led the news every night on every
station for six weeks. Burke, however, met the challenge head
on, contacting the chief of each network’s news divisions in
order to keep them informed. He also met with the directors of
the FBI and the FDA. “There were many people in the company
who felt there was no possible way to save the brand, that it
was the end of Tylenol,” Burke said. “But the fact is, I had
confidence in J&J and its reputation, and also confidence in the
public to respond to what was right. It helped turn Tylenol into
a billion-dollar business.”

The person who placed the cyanide in the Tylenol capsules was
never found.

2. Social Media, Celebrity Endorsers and Effect on Purchasing Intentions of Young Adults -- Kaitlin M. Davis

The results also indicated that the tolerance levels of the three variables (expertise, trustworthiness and attractiveness) were all lower than 0.64 (1 - R2): trustworthiness was at 0.38, expertise 0.41 and attractiveness 0.46. Results for this study show that all three characteristics were highly correlated and were therefore being viewed as dependent on each other through the eyes of Millennials. The reason this may have occurred can be attributed to the halo effect and the cognitive consistency theory.
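Tolerance here is 1 - R2, where the R2 comes from regressing one predictor on the other predictors: the lower a variable's tolerance, the more it overlaps with the rest. A minimal sketch of that check; the data below are invented, and only the form of the calculation mirrors what the study reports:

    # Minimal collinearity (tolerance) check: tolerance_j = 1 - R^2_j,
    # where R^2_j comes from regressing predictor j on the remaining predictors.
    # The data are simulated for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    trustworthiness = rng.normal(size=200)
    expertise = 0.8 * trustworthiness + rng.normal(scale=0.5, size=200)   # deliberately correlated
    attractiveness = 0.7 * trustworthiness + rng.normal(scale=0.6, size=200)
    X = np.column_stack([trustworthiness, expertise, attractiveness])

    def tolerance(X, j):
        y = X[:, j]
        others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])  # add intercept
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        r2 = 1 - (y - others @ beta).var() / y.var()
        return 1 - r2   # low tolerance => the predictor is largely explained by the others

    for j, name in enumerate(["trustworthiness", "expertise", "attractiveness"]):
        print(name, round(tolerance(X, j), 2))

Low tolerances like the reported 0.38, 0.41 and 0.46 are the numerical sign that Millennials were treating the three characteristics as going together rather than as independent judgments.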
According to Nisbett and Wilson (1977), the halo effect is described as "the influence of a global evaluation on evaluations of individual attributes of a person" (p. 250). Simply put, if we like a person, or in the case of this study a celebrity, we often assume that people we perceive as nice or attractive have all favorable attributes, and those we perceive as less nice have less favorable attributes. In addition, the cognitive consistency theory ties in with the halo effect. The theory states that people are more comfortable when all their judgments about a person or product go together, i.e., are all positive or all negative. (Good examples: Amitabh Bachchan and the Cadbury worm controversy, and Amitabh Bachchan as Sahara Parivar ambassador.)
Cadbury India, the country's largest chocolate major, has roped in superstar Amitabh Bachchan to endorse its brand and announced new packaging for its flagship Cadbury Dairy Milk (CDM) in an attempt to regain consumer confidence following last year's disastrous worm infestation controversy. Bharat Puri, managing director of Cadbury India, told MarkETing the new packaging will reduce the company's dependency on external storage conditions.
"Amitabh Bachchan has a universal appeal and his endorsement of CDM will help our objective of increasing chocolate consumption among all ages," he said. Amitabh Bachchan will endorse and promote Cadbury for two years.
3. The Cambridge Analytica Scandal
The Facebook–Cambridge Analytica data scandal was a major
political scandal in early 2018 when it was revealed that
Cambridge Analytica had harvested the personal data of
millions of people's Facebook profiles without their consent
and used it for political advertising purposes. It has been
described as a watershed moment in the public understanding
of personal data and precipitated a massive fall in Facebook's
stock price and calls for tighter regulation of tech companies'
use of personal data.
Aleksandr Kogan, a data scientist at Cambridge University,
developed an app called "This Is Your Digital Life". He provided
the app to Cambridge Analytica.[3] Cambridge Analytica in turn
arranged an informed consent process for research in which
several hundred thousand Facebook users would agree to
complete a survey only for academic use. However, Facebook's
design allowed this app not only to collect the personal
information of people who agreed to take the survey, but also
the personal information of all the people in those users'
Facebook social network. In this way Cambridge Analytica
acquired data from millions of Facebook users.
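The scale of the harvest follows from simple multiplication: each consenting quiz-taker exposed not just their own profile but those of all their friends. A rough, purely illustrative calculation; the friend count and overlap fraction are assumptions, not reported figures:

    # Illustrative only: how a few hundred thousand consenting users could expose
    # tens of millions of profiles. Friend count and overlap are assumed numbers.
    consenting_users = 300_000   # "several hundred thousand" took the quiz
    avg_friends = 200            # assumed average number of friends per user
    overlap = 0.6                # assumed fraction of friend profiles already counted elsewhere

    exposed = consenting_users + consenting_users * avg_friends * (1 - overlap)
    print(f"~{exposed / 1e6:.0f} million profiles exposed")   # tens of millions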
In the US, the story of how whistleblower Christopher Wylie
had built media mogul Steve Bannon’s “psychological warfare
tool” by harvesting millions of people’s Facebook profiles had
erupted across every news channel. Questions rained in on
Cambridge Analytica, Facebook, and its boss, Mark Zuckerberg,
including the most insistent – where was he?
A couple of hours later, I glanced at Twitter and saw a graph. It
showed a wavering line heading off a cliff. Facebook’s share
price had plunged $30bn in the first two hours of trading. By
the end of the week it was more than $100bn. Today it’s
$170bn down.
If there’s one tiny ray of light in all this, it’s that journalism can
have an impact – even the cash-strapped, shoestring British
variety. And if there’s a reason to despair, it’s that it’s not
enough.
Zuckerberg, its founder and chief executive, has defied
parliament. The company is quite simply beyond the rule of
law. Because what the Cambridge Analytica story exposed, by
accident, from Facebook’s reaction in the months that
followed, is the absolute power of the tech giants. Power and
unaccountability that is the foundational platform on which
populist authoritarians are rising to power all across the globe.
Power and unaccountability that continues unchecked. In
Britain, in a media landscape that is insular and self-regarding
and obsessed with what happens at Westminster, we’ve failed
to connect the dots between Facebook and Brexit and the
world outside. To the global currents that favour autocrats and
populists. And to the technology platforms assisting them.
On October 27, 2012, Facebook CEO Mark Zuckerberg wrote an
email to his then-director of product development. For years,
Facebook had allowed third-party apps to access data on their
users’ unwitting friends, and Zuckerberg was considering
whether giving away all that information was risky. In his email,
he suggested it was not: “I’m generally skeptical that there is as
much data leak strategic risk as you think,” he wrote at the
time. “I just can’t think of any instances where that data has
leaked from developer to developer and caused a real issue for
us.”
In 2013, two University of Cambridge researchers published a
paper explaining how they could predict people’s personalities
and other sensitive details from their freely accessible
Facebook likes. These predictions, the researchers warned,
could “pose a threat to an individual’s well-being, freedom, or
even life.” Cambridge Analytica's predictions were based
largely on this research.
The scandal and backlash grew to encompass the ways
that businesses, including but certainly not limited to Facebook,
take more data from people than they need, and give away
more than they should, often only asking permission in the fine
print—if they even ask at all. There has been a growing
recognition that companies can no longer be left to regulate
themselves, and some states have begun to act on it. Vermont
implemented a new law that requires data brokers which buy
and sell data from third parties to register with the state. In
California, a law is set to go into effect in January that would,
among other things, give residents the ability to opt out of
having their data sold. Multiple states have introduced similar
bills in the past few months alone. On Capitol Hill, Congress is
considering the contours of a federal data protection law—
though progress is, as always in Washington, slow-going.
If there’s one choice that Facebook has made repeatedly over
the past 15 years, it’s been to prioritize growth over privacy.
Users were consistently encouraged to make more of their
information public than they were comfortable with. The
settings to make things public were always a bit easier to use
than the ones to make things private. Data was collected that
you didn’t have any idea was being collected and shared in
ways you had no idea it was being shared.
Now Mark Zuckerberg, the CEO of Facebook, is 34. He’s a public
figure who is attacked relentlessly in the press and by
politicians around the world. He has two children, a house he
blocks from view, and a cover on his laptop camera. He’s also
seen his company get burned for ignoring user privacy, and he’s
seen that the platform he built to make the world more open
and connected can also be used by harassers, racists, trolls,
bullies, and Vladimir Putin. His company’s reputation has
faltered; growth on the main platform has slowed, and
employee morale has dropped. It seems like a good time for a
change.
“Public social networks will continue to be very important in
people's lives—for connecting with everyone you know,
discovering new people, ideas and content, and giving people a
voice more broadly,” Zuckerberg wrote. “But now, with all the
ways people also want to interact privately, there's also an
opportunity to build a simpler platform that's focused on
privacy first.”
The company’s loose policies on data collection over the years
are also what allowed it to build one of the most successful
advertising businesses in history. All the data the company
collects helps advertisers segment and target people. And it’s
the relentless pursuit of that data that has led to Facebook
being accused of making inappropriate deals for data with
device manufacturers and software partners. This is a history
that Zuckerberg knows well, and one that he acknowledged in
his post. “I understand that many people don’t think Facebook
can or would even want to build this kind of privacy-focused
platform—because frankly we don’t currently have a strong
reputation for building privacy protective services,” he wrote.
The fact that your individual messages might be encrypted in
transit does not, in any way, prevent Facebook The Entity from
knowing who your friends are, where you go, what links you
click, what apps you use, what you buy, what you pay for and
where, what businesses you communicate with, what games
you play, and whatever information you might have given to
Facebook or Instagram in the past.
Warren’s announcement is one of the clearest signs yet of just
how toxic tech companies have become in some quarters of
the Democratic Party. Democratic candidates in 2020 have
tried to steer clear of large donations from Big Oil and Big
Pharma executives, and Warren — now her party’s frontrunner
for the nomination — is comparing tech companies like
Facebook and Google to those industries’ giants.
The candidate said she would not accept contributions to her
campaign of more than $200 from “executives at big tech
companies, big banks, private equity firms, or hedge funds.”
That adds to a previous pledge, adopted by much of the 2020
field, not to accept money from pharmaceutical and fossil fuel
executives.
Things changed on October 1, when the Verge published
comments from an internal Facebook meeting during which
CEO Mark Zuckerberg said it would “suck for us” if Warren was
elected and pursued an antitrust case against the company,
describing it as an “existential” problem. Since then Warren has
aimed directly at Facebook — often using Twitter to do so:
Elizabeth Warren@ewarren What would really “suck” is if we
don’t fix a corrupt system that lets giant companies like
Facebook engage in illegal anticompetitive practices, stomp on
consumer privacy rights, and repeatedly fumble their
responsibility to protect our democracy.
Facebook changed their ads policy to allow politicians to run
ads with known lies—explicitly turning the platform into a
disinformation-for-profit machine. This week, we decided to
see just how far it goes. We intentionally made a Facebook ad
with false claims and submitted it to Facebook’s ad platform to
see if it’d be approved. It got approved quickly and the ad is
now running on Facebook.
The Cambridge Analytica Story: The illicit harvesting of personal
data by Cambridge Analytica was first reported in December
2015 by Harry Davies, a journalist for The Guardian. He
reported that Cambridge Analytica was working for United
States Senator Ted Cruz using data harvested from millions of
people's Facebook accounts without their consent. Facebook
refused to comment on the story other than to say it was
investigating. Further reports followed in the Swiss publication
Das Magazin by Hannes Grasseger and Mikael Krogerus
(December 2016), (later translated and published by Vice),
Carole Cadwalladr in The Guardian (starting in February 2017)
and Mattathias Schwartz in The Intercept (March 2017).
Facebook refused to comment on the claims in any of the
articles.
The scandal finally erupted in March 2018 with the emergence
of a whistle-blower, an ex-Cambridge Analytica employee
Christopher Wylie. He had been an anonymous source for an
article in 2017 in The Observer by Cadwalladr, headlined "The
Great British Brexit Robbery". This article went viral but was
disbelieved in some quarters, prompting skeptical responses in
The New York Times among others. Cadwalladr worked with
Wylie for a year to coax him to come forward as a
whistleblower. She later brought in Channel 4 News in the UK
and The New York Times due to legal threats against The
Guardian and The Observer by Cambridge Analytica. The three
news organisations published simultaneously on March 17,
2018, and caused a huge public outcry. More than $100 billion
was knocked off Facebook's market capitalization in days and
politicians in the US and UK demanded answers from Facebook
CEO Mark Zuckerberg. The scandal eventually led to him
agreeing to testify in front of the United States Congress.
In Obama's case, users knew they were handing over their data
to a political campaign, whereas with Cambridge Analytica users
thought they were only taking a personality quiz for academic
purposes; and while the Obama campaign used the data to have
their supporters contact their most persuadable friends,
Cambridge Analytica targeted users, friends and lookalikes
directly with digital ads.
Facebook director Mark Zuckerberg first apologized for the
situation with Cambridge Analytica on CNN, calling it an "issue",
a "mistake" and a "breach of trust"; in effect, he reminded
them of their Right of access to personal data. Other Facebook
officials argued against calling it a "data breach", arguing those
who took the personality quiz originally consented to give away
their information. Zuckerberg pledged to make changes and
reforms in Facebook policy to prevent similar breaches. On
March 25, 2018, Zuckerberg published a personal letter in
various newspapers apologizing on behalf of Facebook. In April,
Facebook decided to implement the EU's General Data
Protection Regulation in all areas of operation and not just the
EU.
The governments of India and Brazil demanded that Cambridge
Analytica report how anyone used data from the breach in
political campaigning, and various regional governments in the
United States have lawsuits in their court systems from citizens
affected by the data breach.
In March 2019, a court filing by the U.S. Attorney General for
the District of Columbia alleged that Facebook knew of
Cambridge Analytica's "improper data-gathering practices"
months before they were first publicly reported in December
2015. In July 2019, the Federal Trade Commission voted 3–2 to
approve a fine of around $5 billion on Facebook to finally settle
its investigation into the scandal.
Facebook has been paying users as young as 13 for access to
their personal data in another effort to monitor social trends
and capitalise on them, according to a report on TechCrunch.
The social network has been paying volunteers money each
month to install an app on their phone called Facebook
Research. This application watches and records activity and
actions on a phone and sends that information back to Facebook.
The app offers people between the ages of 13 and 25 up to $20
per month for almost complete access to their phone's data.
Specifically, the app installs a custom root certificate which
grants Facebook the direct ability to see users' private
messages, e-mails, Web searches, and browsing activity – while
also requesting users to take screenshots of their Amazon
order history and send them back to Facebook for review.
Facebook was originally collecting some of this data through
Onavo Protect, a VPN service that it acquired back in 2013. It is
suggested that the data Facebook collected through these
methods helped it to spot current or future competitors, which
it could then acquire or clone.
The Facebook Research app was removed from the App Store
about six months ago after Apple complained that it violated its
guidelines on data collection.
Apple has revoked a developer license from the social media
giant, effectively shutting down any iOS apps that haven't
already been approved for the App Store.
Without the developer certificate, Facebook's internal iOS
apps, which likely include beta versions of its consumer apps as
well as company-specific resources, will no longer work. Apple
hasn't indicated whether this is a temporary ban or how it will
monitor Facebook's activities in the future.
Facebook and Apple are two of the biggest companies in the
world, but they need each other to survive. If this fight ever
reached the point where Apple removed Facebook from the
App Store, both companies would feel the effects, so there's a
certain amount of gamesmanship being played here. However,
Apple's reputation is far more at risk than Facebook's at this
point, so this likely amounts to the final warning.
While the merits of the programme can be debated, the
method of delivery cannot. Apple clearly states that
participants in its Enterprise Developer Program cannot
distribute apps outside of the company: “We designed our
Enterprise Developer Program solely for the internal
distribution of apps within an organisation,” an Apple
spokesperson said. “Facebook has been using their
membership to distribute a data-collecting app to consumers,
which is a clear breach of their agreement with Apple. Any
developer using their enterprise certificates to distribute apps
to consumers will have their certificates revoked, which is what
we did in this case to protect our users and their data.”
To circumvent Apple’s sandbox, Facebook used beta testing
services other than Apple’s own TestFlight, including Applause,
BetaBound, and uTest to hide the app’s true identity. The app’s
primary function is similar to the Onavo VPN that Apple
removed from the App Store in August for heavy-handed data
gathering.
But Facebook isn't the only company using iPhone users to
collect data. A follow-up report from TechCrunch claims that
Google is running a similar program using an app called
Screenwise Meter that also uses the Enterprise Developer
Program to surreptitiously collect data from iPhone users.
TechCrunch says the app has been running since 2012 and, like
Facebook Research, also offers payment in exchange for data
sharing. Google quickly issued a statement apologising for the
app, calling it 'a mistake' and saying it had been disabled.
Apple has not yet publicly responded to the report.
But while Apple is certainly playing hard ball, it’s also giving
Facebook something of a pass. While revocation of the license
will cause a temporary headache for Facebook and its
employees, Apple will still allow Facebook to distribute its apps
through the App Store. It also isn’t addressing the root of the
issue, which is that Facebook was able to run its Research App
undetected for more than two years despite Apple’s claims that
“What happens on your iPhone stays on your iPhone”. It’s
basically a firm slap on the wrist.
For its part, Facebook admits to running the app, but is
challenging the media’s assessment of the story. In a
statement, the social media giant said “there was nothing
‘secret’ about the app” and participants “went through a clear
on-boarding process asking for their (or their parents’)
permission and were paid to participate”. Facebook says it shut
down the app on iOS of its own accord, though it still continues
to operate on Android phones.
But as far as Apple is concerned, the case is cut and dried:
Facebook violated its terms of service in a big way. Not only
does it skip Apple’s review process, but it collects a staggering
amount of data. To get its hands on such a treasure trove,
Facebook Research required the installation of a new profile on
the user’s iPhone as well as root certificate access, which could
open up the iPhone to malware in addition to the open portal
to Facebook.
3. The 2008 Sub-prime Scandal—A Subversion of Trust by Institutions
In the instance of subprime mortgage woes, there was no single
entity or individual to point the finger at. Instead, this mess was the
collective creation of the world's central banks, homeowners,
lenders, credit rating agencies, underwriters, and investors.
The Crime: The economy was at risk of a deep recession after the
dotcom bubble burst in early 2000. This situation was compounded
by the September 11 terrorist attacks in 2001. In response, central
banks around the world tried to stimulate the economy. They
created capital liquidity through a reduction in interest rates. In turn,
investors sought higher returns through riskier investments. (Greed
and Chicanery) Lenders took on greater risks too and approved
subprime mortgage loans to borrowers with poor credit.(The ethical
leaders of the banks flouted all rules and norms of prudent banking.)
In this regard, some key figures worth mentioning are:
• In the US, the Fed Funds Rate was cut from 6.25% to 1.75% in
2001.
• The rate was then cut further and was held at 1% until mid
2004.
• The real Fed Funds Rate was negative for two-and-a-half years
in the period 2002-2004.
• A Taylor Rule (the monetary-policy rule that stipulates how
much the central bank should change the nominal interest rate
in response to divergences of actual inflation rates from target
inflation rates) would have led to a Fed Funds Rate of between
2% and 5% during the period 2001-2005.
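For readers unfamiliar with the rule, the short Python sketch below implements the commonly cited 1993 form of the Taylor Rule, with the textbook assumptions of a 2% equilibrium real rate, a 2% inflation target and coefficients of 0.5; the function name and the inputs in the example are illustrative assumptions, not historical data.

# A minimal sketch of the Taylor Rule in its commonly cited 1993 form:
#   i = r* + pi + 0.5*(pi - pi_target) + 0.5*(output_gap)
# r* (equilibrium real rate) and pi_target default to the textbook 2%;
# the example inputs below are illustrative only.

def taylor_rule(inflation: float, output_gap: float,
                real_rate: float = 2.0, inflation_target: float = 2.0) -> float:
    """Return the nominal policy rate (in %) suggested by the Taylor Rule."""
    return (real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

if __name__ == "__main__":
    # With 2% inflation and a small negative output gap, the rule still
    # suggests a rate well above the 1% actually held until mid-2004.
    print(taylor_rule(inflation=2.0, output_gap=-1.0))   # 3.5
    print(taylor_rule(inflation=3.0, output_gap=1.0))    # 6.0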
Consumer demand drove the housing bubble to all-time highs in the
summer of 2005, which ultimately collapsed the following summer.
Most of the blame is placed on the mortgage originators (lenders) for
creating these problems. It was the lenders who ultimately lent funds
to people with poor credit and a high risk of default. In defense of
the lenders, there was an increased demand for mortgages, and
housing prices were increasing because interest rates had dropped
substantially. At the time, lenders probably saw subprime mortgages
as less of a risk than they really were: rates were low, the economy
was healthy and people were making their payments.
We should also mention the homebuyers who were definitely not
completely innocent. Many were playing an extremely risky game by
buying houses they could barely afford. They were able to make
these purchases with non-traditional mortgages (such as 2/28 and
interest-only mortgages) offering low introductory rates and minimal
initial costs, such as "no down payment." Their hope lay in price
appreciation, which would have allowed them to refinance at lower
rates and take the equity out of the home to fund other
spending. However, instead of continued appreciation, the housing
bubble burst, and prices dropped rapidly.
As a result, when their mortgages reset, many homeowners were
unable to refinance their mortgages to lower rates, as there was no
equity being created as housing prices fell. They were, therefore,
forced to reset their mortgages at higher rates they couldn't afford,
and many of them defaulted. Foreclosures continued to increase
through 2006 and 2007.
The increased use of the secondary mortgage market by lenders
added to the number of subprime loans lenders could originate.
Instead of holding the originated mortgages on their books, lenders
were able to simply sell off the mortgages in the secondary market
and collect the originating fees. This freed up more capital for even
more lending, which increased liquidity even more, and the snowball
began to build.
A lot of the demand for these mortgages came from the creation of
assets pooling mortgages together into a security, such as a
collateralized debt obligation (CDO). In this process, investment
banks would buy the mortgages from lenders and securitize them
into bonds, which were sold to investors through CDOs.
A lot of criticism has been directed at the rating agencies and
underwriters of the CDOs and other mortgage-backed securities that
included subprime loans in their mortgage pools. Some argue the
rating agencies should have foreseen the high default rates for
subprime borrowers, and they should have given these CDOs much
lower ratings than the "AAA" rating given to the higher quality
tranches. If the ratings had been more accurate, fewer investors
would have bought into these securities, and the losses may not
have been as bad.
Moreover, some have pointed to the conflict of interest of rating
agencies, which receive fees from a security's creator, and their
ability to give an unbiased assessment of risk. The argument is rating
agencies were enticed to give better ratings to continue receiving
service fees, or they ran the risk of the underwriter going to a
different agency. (Example of HR manager and his Secretary-Conflict
of Interest, if not moral depravity)
Another party added to the mess was the hedge fund industry. It
aggravated the problem not only by pushing rates lower but also by
fueling the market volatility that caused investor losses. The failures
of a few investment managers also contributed to the problem.
To illustrate, there is a hedge fund strategy best described as "credit
arbitrage." It involves purchasing subprime bonds on credit and
hedging the positions with credit default swaps. This amplified
demand for CDOs; by using leverage, a fund could purchase a lot
more CDOs and bonds than it could with existing capital alone,
pushing subprime interest rates lower and further fueling the
problem. Moreover, because leverage was involved, this set the
stage for a spike in volatility, which is exactly what happened as soon
as investors realized the true, lesser quality of subprime CDOs.
Because hedge funds use a significant amount of leverage, losses
were amplified and many hedge funds shut down operations as they
ran out of money in the face of margin calls.
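To make the leverage mechanism concrete, here is a small illustrative calculation; the function name and all figures are invented for illustration and are not data from the crisis. The point is simply that with borrowed money a modest move in the value of the underlying bonds becomes a very large move in the return on the fund's own capital.

def levered_return(asset_return: float, leverage: float, funding_cost: float) -> float:
    """Return on own capital when a position is financed partly with borrowed money.

    asset_return and funding_cost are decimal returns (0.02 = 2%);
    leverage is total position divided by own capital (10.0 = 10x).
    """
    borrowed = leverage - 1.0
    return asset_return * leverage - funding_cost * borrowed

if __name__ == "__main__":
    # Illustrative numbers only: at 10x leverage with 1.5% funding costs,
    # a 2% gain on the bonds becomes a 6.5% gain on own capital ...
    print(levered_return(0.02, 10.0, 0.015))   # 0.065
    # ... while a 5% fall wipes out almost two-thirds of the capital,
    # which is what triggers margin calls.
    print(levered_return(-0.05, 10.0, 0.015))  # -0.635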
Ethical Issues involved in the Subprime Crisis:
A. The Crash of 2008 – the Relationship with Ethical Issues, Philip
Booth, in Finance & Bien Commun 2010/1 (No. 36), pages 39–53
The common perception regarding the causes of the financial crash
of 2008 is that unregulated laissez-faire capitalism was allowed to let
rip and the greed of bankers, motivated by bonus packages, led to an
unprecedented degree of risk taking.
Bailouts and moral hazard: Firstly, on both sides of the Atlantic, we
have deposit insurance schemes. These are not industry-run schemes
or schemes which involve the charging of proper risk-related
premiums. In reality, they involve compensation by others for bad
decision-making on behalf of banks and their customers. Indeed,
most banks paid no premiums into the US scheme in the run up to
the crash.
Fannie Mae and Freddie Mac guaranteed payments on mortgage-
backed securities and investors believed - as can be seen from the
pricing of its capital - that if Fannie Mae and Freddie Mac failed, they
would be bailed out. Fannie Mae and Freddie Mac had only 1.2%
equity capital at the end of 2007. Personal bankruptcy law in the US
is also weak and it is often difficult for a lender to have recourse to
other assets when a borrower defaults on a mortgage. (So private
homeowners also took the risk because only the asset that was pledged
would be lost, and not other assets owned by them.)
First of all, it has been suggested that the crash, at its root, was an
ethical failure. Whilst there were, no doubt, ethical failings, it is
difficult to conclude that these caused the crash, as such. Ethical
failings were widespread and not just confined to highly-paid
workers in large banks. Mortgage applicants lied about their income;
bank branch managers ‘over-sold’ loans; traders created new
products that did not provide long-term value for shareholders;
senior managers did not monitor junior managers; directors did not
properly manage senior managers on behalf of shareholders; and so
on.
Furthermore, as Sam Gregg has noted (Booth, 2010), we use the
phrase moral hazard and not simply risk hazard to describe the
situation where the financial decisions of one individual are
underwritten more widely by society. This is because people face
incentives to act less prudently - in a way that is intrinsically
unethical - if we underwrite their actions. Indeed, the financial
system is more likely to attract unethical people to work in it if
ethical behaviour is not seen to be beneficial.
B. The Financial Crisis and the Collapse of Ethical Behavior by
Gregory Curtis, Chairman of Greycourt & Co., Inc., December
4, 2008, White Paper No. 44
In our view, poor risk controls, massive leverage, and the
blind eye were really symptoms of a much worse disease: the
root cause of the crisis was the gradual but ultimately
complete collapse of ethical behavior across the financial
industry. Once the financial industry came unmoored from its
ethical base, financial firms were free to behave in ways that
were in their – and especially their top executives’ – short-
term interest without any concern about the longer term
impact on the industry’s customers, on the broader American
economy, or even on the firms’ own employees.
We think it likely that there is a special circle in hell reserved
for subprime lending banks like Countrywide Financial, which
were at the epicenter of the subprime collapse, and at the
epicenter of the ethical collapse. Looking back, it’s clear that
the main raison d’être of the subprime banks was to sell
mortgage loans to people who couldn’t afford them.
Screaming ads were created to dupe people into applying for
these mortgages, and new, highly misleading mortgage
products were developed (teaser rates, Alt-A, etc.) to ramp up
volume. Mortgage brokers were paid big fees to lure a steady
stream of suckers into the scheme. There was a time when
this would have been seen for what it was: predatory
lending.
New Cullings for Normative Ethics – 2021-22
1. Thus ethics is the philosophy of morality. Ethics (from the ancient Greek “ta ethika”,
translated as the moral doctrine) is then the science of morality, whereby the goal of this
science as part of philosophy (the love of wisdom) is to regulate the world and in particular
the behavior of man.
2. The Cardinal Virtues of Socrates and Plato were bravery (fortitude), temperance, prudence/wisdom
and justice. Virtues correspond to characteristics and are related to persons. Aristotle (384–322 BC)
concluded here by formulating “virtue is the way to happiness (eudaimonia)”. Christian
ethics supplemented these virtues with three more: faith, love, and hope. The heavenly virtues
(and the contrary vices) of the Occidental Middle Ages were spread widely in the Christian West
by the musical work of Hildegard of Bingen: humility (arrogance),
benevolence (avarice, greed), abstinence (lewdness), moderation (gluttony), goodwill (envy),
diligence (laziness), patience (anger).
3. Aristotle puts virtue over the economy because man can only achieve his happiness through
the exercise of his virtues. The perfect virtue for Aristotle is justice, which serves as a
measure of the economy.
4. Modern business management does not distinguish between moral and immoral goods. The
goods that are sold can be immoral, such as pornography or even directly harmful such as
drugs, as far as the legislator permits. The discovery of atomic energy has enabled both the
use of this energy form and the development of the atomic bomb. Ultimately, however, it is
also clear that a society cannot allow this; it must prevent this kind of science
from being used against it. The question of the benefit or harm to man and to society
must be posed as early as possible and answered in order to forestall societal damage.
5. For Kant (b. 22 April 1724 in Königsberg, Prussia, † 12 February 1804 ibid), an action
is especially moral when it benefits others at expense to one’s own benefit. Under no
circumstances should the pursuit of one’s own happiness, or one’s own benefit
maximization be carried out at the expense of the others, or that of general wellbeing. And
when in doubt, morality, that is the wellbeing of others, must be placed above one’s own.
6. In Kant, the mind ultimately determines whether an action is to be classified as moral.
Looking in on your old aunt may be a duty, but it is not a moral one if you do so only to be
considered in the inheritance.
7. But what about the consequences? Is it enough to want good? No, unfortunately not.
Otherwise, every fanatic, every terrorist would be a morally acting man, even though he
harms many people. It would depend only on the subjective assessment by the actor, his, in
his opinion, positive attitude. Well meant is not well done. Every judge has to deal with this
problem if he is to decide whether an action with a negative effect for third parties was
intentional. The intent distinguishes murder from manslaughter, and thus the penalty also
clearly changes. In addition, even the actor can often not determine the motives that have
guided him, since he can also be influenced by the subconscious.
8. In principle, an ethics of conviction would suffice to produce good behaviour for mankind if
all men were to have the same perceptions and objective reason, in order to correctly assess
the consequences of their actions. Kant doubts this, which is why in his work Metaphysics of
Morals he develops a duty ethics for general human behaviour (deontological ethics, from
Greek to déon: the necessary, the duty). In addition, he developed imperatives or rules as an
aid to the practical reasoning about human coexistence: a categorical imperative and a
practical imperative as well as the publicity rule. The conviction of the agent to do good has
to be added to the dutiful action.
9. How do my actions affect people? The purpose of my action should be to do good, or at
least not to harm anyone. We should therefore take into account the purpose, meaning the
effect of our actions or behaviour (which also includes allowing inaction) on other people,
and not regard humans merely as a means. Kant,
however, also refers to the agent himself. He should not regard himself as a means, but also
as a purpose, and therefore not harm himself. In current situations this would mean that a
manager should not harm his health, just to further his career.
10. Kant's Universal Rules: A) The Categorical Imperatives: Do unto others what you would want
others to do unto you; Only act according to the maxim that you can make a universal law.
B) The Practical Imperative: Act in the way that you use humanity, both in your person and
in the person of each other, at any time not just as means but also as a purpose. The
purpose of my action should be to do good, or at least not to harm anyone. We should
therefore take into account the purpose, meaning the effect of our actions or behaviour
(including inaction) on other people, and not regard humans merely as a means.
C) The Publicity Rule: All actions related to
the right of other people, whose maxim is not compatible with publicity, is wrong.
That is, the behavioral rule is such that if the agent would fear the response of his
community should his actions become public, we assume that the rights of others
are unfairly, thus disproportionately, affected. One should ask oneself whether those
affected by an action would approve of it. For example, if a pharmaceutical company
conceals the side effects of a drug, the publicity rule would be violated because the patients
would not understand the dangers to their health.
11. For Kant the duties to other people include respecting their dignity, helping them
in need, being grateful and conciliatory, not deceiving them, not lying, nor mocking
or slandering. As inner attitudes, he demands virtues such as benevolence, compassion,
gratitude, truthfulness and integrity. Negative inner attitudes or characteristics (virtues), on
the other hand, are envy, dislike, pleasure in the pain of others, arrogance, revenge and
greed. Economic obligations are respect for the laws and the property of others, the
observance of contracts and the payment of debts. As a principle of individual freedom, “the
freedom of arbitrariness of everyone can co-exist with everyone’s freedom in accordance
with a general law.” This corresponds to the principle of modern democracy that the
freedom of the individual stops where the freedom of the other begins. Individuals are not
allowed to exercise their freedom without consideration or to the detriment of others.
12. What if negative consequences arise from the duties? Let us take the duty “You shall not
lie”. Kant sees truthfulness as a top priority, for which there can be no exceptions, even if a
murderer asks for the whereabouts of his victim, one must tell him the truth, even if the
victim is then murdered. In such a case one is not responsible for the consequences.
13. In an extreme case, two duties can also contradict each other and lead to tragic dilemmas.
Let us take the current euthanasia discussion as an example. The prohibition of killing and
the prohibition on withholding assistance forbid the physician from performing euthanasia,
even if the patient explicitly wishes for it and suffers a great deal. This stands in contradiction
to the duty of assistance and the principle of human dignity, and pits the duties of the
physician against the welfare of the patient.
14. Like Mill, Weber criticizes the Kantian duty ethics because of the unavoidable dilemma and
conflict situations resulting from contradictory duties. He gives two examples of duty ethics
that lead to immoral consequences. Thus, the Kantian duty of truthfulness would make the
preservation of state secrets impossible, even if this would cause great damage to the
country. The Christian commandment of nonviolence would, consistently implemented, lead
to the inability to counter violence, which would lead to further violent acts.
15. Max Weber instead propagates an ethics of responsibility: consequentialism (also called
teleological ethics, after the Greek télos: the goal, the purpose). Actions are moral when they
achieve good. This principle is the basis of our jurisprudence: "knowingly accepted"
consequences or "gross negligence" are interpreted by our courts as fault. Having followed an
order (duty ethics in the narrower sense) is not accepted as an exculpatory argument. Before
the Nuremberg court many Nazis used their orders to kill as excuses for their actions. Under
current social norms, however, orders do not free a person from responsibility for his actions.
Of course, a moral condemnation of such murderers would be difficult if they would themselves
have been killed had they disobeyed an order to kill.
16. In general, the ethics of responsibility is very demanding and therefore not always an
applicable measure. It is not always possible to clearly assess the consequences of the
actions. Either there are too many influencing factors or the result depends simply on
chance. Furthermore, teleological ethics presupposes not only a high level of information,
but also a high intellectual and moral capacity from the actor if the consequences of options
for action are not only to be foreseen, but their results are also weighed against each other.
17. It is a pure ethics of responsibility in which the conviction does not matter, but the greatest
happiness of the greatest number, or the principle of the greatest happiness (principle) of all
men. It is therefore about the determination of the net happiness resulting from actions and
their maximization. Joy and suffering are offset against each other individually as well as in
between all the people affected by the action. The action with the greatest net happiness is
the most moral—Utilitarianism.
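A toy sketch of this hedonistic calculation in Python: each option assigns a utility score (joy positive, suffering negative) to every affected person, and the act-utilitarian choice is simply the option with the greatest net sum. The options, names and numbers below are invented purely for illustration.

# Toy act-utilitarian calculus: each option maps affected persons to a
# utility score (positive = joy, negative = suffering). All values invented.

def net_happiness(option: dict) -> float:
    """Offset joy and suffering across everyone affected by an action."""
    return sum(option.values())

def most_moral(options: dict) -> str:
    """Act utilitarianism: the action with the greatest net happiness."""
    return max(options, key=lambda name: net_happiness(options[name]))

if __name__ == "__main__":
    options = {
        "tell the truth": {"patient": -3.0, "family": +1.0, "doctor": +1.0},
        "white lie":      {"patient": +2.0, "family": -1.0, "doctor": -0.5},
    }
    print(most_moral(options))   # "white lie" (net +0.5 vs net -1.0)

Note how mechanical the procedure is: whoever assigns the scores effectively decides the outcome, which is precisely the criticism raised in the points that follow.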
18. Criticism about this approach is generally its hedonistic orientation, thus the strong ego and
pleasure-seeking. This kind of morality does not correspond to the idea of good found in
Plato, Aristotle, and Kant. Ultimately, everything that spreads joy is weighted equally. Is it
possible, for example, to equate joy from malevolence and lustful pleasures with the joy of
charity, and to calculate a net profit from them? Is it moral to sacrifice a few to save many?
Ultimately, the use of soldiers in war is always justified by higher goals, which are often not
rooted in truth. The sacrificing of slaves in the Circus Maximus of Rome would be justified if
many thousands of spectators felt more joy than the few slaves felt pain. There would be
a positive net-happiness. Utilitarianism in the narrower sense is not an ethical approach
because the welfare of others is not the focus. The approach is ethical inasmuch as the
greatest general happiness, as the happiness of all men, is striven for. In this case, damage
to third parties is acceptable.
19. Rule utilitarianism provides an alternative approach to the act utilitarianism. Rule
utilitarianism does not encourage the individual action that provides the greatest happiness,
but rather the general rule that maximizes happiness. The difference lies in the overall
happiness of the society, which is the outcome if general rules are followed. If we use rule
utilitarianism in our example, the torture of human beings could not be justified, even if a
sadist feels more joy than his victims. As a general rule torture would not maximize utility to
society, since the utility becomes negative if everybody tortures others. Nonetheless, there
is also an account of the pleasure and pain of different people in our Western democracies.
When a judge decides on the expansion of an airport, he weighs the interest of the general
public, in the form of jobs and good traffic connections, against the complaints of the
residents. In the case of aircraft catastrophes such as 9/11, there are shoot-down orders,
which are intended to minimize deaths in inner cities. Here the passengers of the plane are
sacrificed to prevent more deaths on the ground.
20. Mill's utilitarianism is therefore added to Kantian duties as a consequentialist ethics, if these
duties do not give clear guidance for action. According to Mill, a lie is allowed, contrary to Kant,
if, with all its consequences, it produces less harm than the truth. Schopenhauer also
contradicts Kant and regards the lie as justified in these circumstances. We know this
connection under the concept of the "emergency lie" or "white lie". Mill's utilitarianism is
therefore also generally used for the solution of ethical dilemmas.
21. The problem with this approach is that the assessment is ultimately left to the individual.
Every lie can be justified; you just have to paint the consequences of the truth starkly
enough. There is, therefore, the basic question of who is to evaluate ethicality: the individual,
the group, or society. Without a correlation to the usefulness or well-being of other
people, a distinction between good and evil can be made neither in individual ethics nor in
discourse ethics.
22. Thus, on the one hand, people need the right to vote and to regulate procedures by
means of a discourse ethics, to grasp and weigh the views and interests of all parties
involved in order to arrive at a morally balanced decision or to carry out a moral evaluation
as a collective. In addition, they must first be consensus- and common-minded, and thus also
morally oriented, and be able to put themselves into the position of other parties in order to
develop a moral reconciliation result. Otherwise, suboptimal horse-trading will result in the
enforcement of the stronger group, or no decision will be made at all. Individual ethics is
therefore the basis for ethical evaluations; institutional ethics, individual ethics and
discourse ethics must work together.
23. Moral Economics: Morality Must Be Worthwhile: It is precisely this goal conflict between
one’s own benefit and that of the other that moral economy (or economic theory of
morality) addresses. It asserts that if morality is to be achieved, the incentives must be
designed to make moral behaviour worthwhile. The main representative and co-founder of
moral economy, Karl Homann (born April 9, 1943 in Everswinkel), developed an approach to
incentive ethic that tries to direct the individual’s development into the morally desired
direction using the right incentive design.
24. Homann rejects the moral self-control of the individual by means of internalized values
because it would be exploited in market competition. If, for example, child labor is not
prohibited, an entrepreneur must resort to it because they would otherwise have a
competitive disadvantage. A moral framework should be designed in such a way that self-
interest becomes socially productive.
25. Human Rights: An economic action is moral or ethical if it does not harm others. The basis
for this assessment is the acceptance of the rights of other people and living creatures
(including animals). Though the ideas originate from the Enlightenment, human rights were
formulated for the first time in the American Virginia Bill of Rights in 1776 and then in the
French Declaration of Human Rights in 1789. These rights are thus internationally legitimated
and interested or affected parties can demand their implementation.
26. Many human rights have been formulated. The most well-known is the “Universal
Declaration of Human Rights” of the General Assembly of the United Nations of 10
December 1948, the so-called UN Declaration of Human Rights. Karl Marx was an opponent
of the human rights movement. He saw the rights of society threatened by human rights.
The individual concept of freedom was dismissed as a bourgeois invention in the countries
of real socialism. Instead, so-called basic social rights were expressed; the right to work, the
right to vocational training and social protection. However, the individual was left with no
decision-making freedom and his life was planned centrally.
27. From the rights, however, come indirect obligations for how people are to deal with each
other. People must recognize the rights of the others as equal in principle and even accept a
restriction of their own rights, if it is the only way for the rights and freedom of others or
general welfare to be guaranteed. And finally, there is a duty to work for the realization of
human rights.
28. Markets and Ethics: To what extent does the market exhibit moral and ethical behaviour? In
1989, Kerber found that the young leaders were inclined to opportunism and accepted
immoral and often criminal behavior when material success was achieved. Slogans like
“Everyone is the next one”, “One hand washes the other” or “To achieve a higher goal,
sometimes wrongdoing cannot be avoided” were popular.
29. Kerber summarized the trend as follows: “The tendency seems to be a stronger ego-
orientation and more attention to success, material goods and enjoyment”. At the beginning
of the 1990s there was a trend away from duties such as order, discipline, loyalty,
thoroughness and reliability to so-called unfolding values such as independence, self-
responsibility, participation and creativity.
30. Adam Smith was aware that the invisible hand alone is not sufficient to protect the common
good from damage by individuals. He stressed the necessity of an economic system and a
system to keep order, which did not exclude intervention to protect the common good. (The
Theory of Moral Sentiments). Only if the legal system functions well and there is “trust in the
sovereignty of the state” can trade on markets develop to the advantage of people and create
welfare, according to Smith. Smith also identifies the most important components of order
to be internal security, jurisprudence, infrastructure, educational institutions and national
defence.
31. Adam Smith had already differentiated between an economy and an economic
system. The economic system must set the framework for economic behaviour in such a way
that the invisible hand of the market and competition can develop optimally, meaning that
the actions of people determined by their own interests are channelled for the common
good.
32. Institutional Ethics: The State Regulatory Framework
33. The Ethical Prisoner Dilemma: Even if the company were to behave
morally, it does not know how the other companies will behave, and
therefore must assume immoral behaviour and behave immorally in
order to ensure its survival. The ethical prisoner dilemma is not just true
for companies in competition but also for companies with unethical
business cultures and for the employees themselves. This also applies to
the internal competition of employees within the company. Here an
employee can gain a career advantage by lying. Unethical companies
cannot realize the collective best case with high productivity if the
employees do not behave morally. Like with Enron, the employees
compete internally and do not cooperate. The return on teamwork
cannot be realized.
34. Payoff matrix (payoffs to A, B):
    Payoffs (A, B)          B behaves morally     B behaves immorally
    A behaves morally       5, 5                  0, 6
    A behaves immorally     6, 0                  1, 1  (Nash equilibrium)
35. The ethical prisoner dilemma for a fair competition is as follows: The
worst case for a company manager A is if he behaves morally, but the
company manager of another company B does not and the best case for
A is if A behaves immorally, but B does not. B is in the same decision-
making situation. The result is the combination in which both companies
operate unfairly, thus the worst case for all (Nash equilibrium = no one
can unilaterally improve their payoff through another strategy). Without
ethical rules, such as law enforcement when the ethical prisoner
dilemma arises, a company finds itself in the worst-case situation if it
behaves ethically (Fig. 6.1).
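A small Python sketch of the Nash-equilibrium check, using the payoffs from the table in point 34; the 0/1 encoding (0 = behaves morally, 1 = behaves immorally) is an assumption introduced here for the code. It simply tests, for each strategy pair, whether either player could raise their own payoff by switching alone.

# Payoffs (A, B) from the table in point 34: index 0 = moral, 1 = immoral.
payoffs = {
    (0, 0): (5, 5),  # both behave morally
    (0, 1): (0, 6),  # A moral, B immoral
    (1, 0): (6, 0),  # A immoral, B moral
    (1, 1): (1, 1),  # both behave immorally
}

def is_nash(a: int, b: int) -> bool:
    """True if neither player can raise their own payoff by switching alone."""
    pa, pb = payoffs[(a, b)]
    a_better = payoffs[(1 - a, b)][0] > pa   # A deviates unilaterally
    b_better = payoffs[(a, 1 - b)][1] > pb   # B deviates unilaterally
    return not (a_better or b_better)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print((a, b), payoffs[(a, b)], "Nash" if is_nash(a, b) else "")
    # Only (1, 1) with payoff (1, 1) passes the test, although (0, 0) with
    # payoff (5, 5) would be better for both: the ethical prisoner's dilemma.

Only mutual immoral behaviour survives the check, even though mutual moral behaviour would leave both companies better off, which is exactly the dilemma described above.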
36. The decision-making structure for A and B changes when the company
has to bear only a part of the costs of a decision made at the expense of
third parties (external effects). In the case of the environment, the cost
of pollution is borne by the public, whose health and quality of life are
adversely affected (negative external effects). In the case of work safety,
the company can save costs at the expense of employees.
37. Payoff matrix with external effects (payoffs to A, B, third parties):
    Payoffs (A, B, third parties)   B behaves morally     B behaves immorally
    A behaves morally               1, 1, (0)             0, 5, (-5)
    A behaves immorally             5, 0, (-5)            3, 3, (-10)  (Nash equilibrium)
38. An empirical study has shown that managers are only willing to stick to
moral standards when they believe their business partners are sticking
to them. If this is not so, they are not willing to behave morally, even if
they consider the rules to be important and meaningful. In the case of
the prisoner’s dilemma, there is uncertainty about the conduct of the
other companies. Even if they all wanted to behave ethically, they could
not, because there would then be the risk of ending up in the worst-case
situation.
39. The solution to this problem is: 1. Clarification of A and B on the value of
moral behaviour, thus increasing the incentive to behave morally.
2. Moral behavior is rewarded by incentives (morality must be worth
practicing). An ethical consumer awareness leads to increased sales of
ethical products.
3. Binding contracts with sanctions: laws, state control and sanctions in
cases of misconduct (ethical order policy).
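Continuing the earlier sketch, the snippet below illustrates point 3 (binding contracts with sanctions): if an assumed sanction of 2 units is deducted whenever a player behaves immorally, the equilibrium of the game from point 34 shifts to mutual moral behaviour, which is the idea behind making morality worth practicing. The sanction size is an illustrative assumption, not a value from the text.

# Sketch of an "ethical order policy": deduct an assumed sanction from
# whoever behaves immorally and re-check the equilibrium (0 = moral, 1 = immoral).

SANCTION = 2.0
base = {
    (0, 0): (5, 5), (0, 1): (0, 6),
    (1, 0): (6, 0), (1, 1): (1, 1),
}
payoffs = {
    (a, b): (pa - SANCTION * a, pb - SANCTION * b)
    for (a, b), (pa, pb) in base.items()
}

def is_nash(a: int, b: int) -> bool:
    """Neither player gains by unilaterally switching strategy."""
    pa, pb = payoffs[(a, b)]
    return payoffs[(1 - a, b)][0] <= pa and payoffs[(a, 1 - b)][1] <= pb

if __name__ == "__main__":
    print([cell for cell in payoffs if is_nash(*cell)])   # [(0, 0)]: both behave morally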
1. Tools of Ethics for Management
2. Vision and Values: But there is also a great danger. The visions and
principles sound ethical and convey the impression that the company is
solely a good thing. The suspicion is always there that some companies
present the ethical guidelines only for image and PR, but that they play
no role in everyday functioning of the company. However, if a case
publicly contradicts the guidelines and is not an exception, this obvious
contradiction seems hypocritical and weakens the credibility of the
company. The guiding principles can be checked by the public and
requested by the stakeholders. Corporate ethics is the consistent
implementation of ethical goals in company policy and not a pure PR
action. Again, there must be no contradictions. For example, it is
hypocritical when a clothing producer who uses child labor in India to
keep production cheap is trying to create a morally positive image in
Germany by promoting SOS Children’s Villages.
3. With the help of a product lifecycle analysis, a company can determine
the effects of production on humans or nature at every production
stage. For example, Shell has identified 350 stakeholder groups from
business, politics and environment in the project of exploration of gas
deposits in the Amazon basin in Peru (“Camisea Project”), contacted 200
groups directly and classified 40 groups as primary stakeholders.
Following an intensive ethical stakeholder analysis, Shell concluded that
the environmental impacts and the negative impact on the natives were
predominant and dispensed with the exploitation of gas deposits in the
Amazon basin. An action is ultimately only ethically justifiable if the
interests of the shareholder are weighed with those of the stakeholders.
One cannot principally be subordinated to the other, but the priority
must be examined ethically in each case. The criterion for such an ethical
test will be which concern weighs the greatest.
However, society’s sense of responsibility cannot be left to companies,
but must be demanded by society in the public and in the form of laws.
If this does not happen, the company does not have a monetary
incentive to behave in a socially desirable way, but rather can maximize
its profit at the expense of society (such as the non-environmental
disposal of production waste, competition offences or even balance
sheet manipulation).
4. Organizational Ethics: The James A Waters study identified seven
barriers in companies that hamper moral or legal behaviour. Four of
these barriers refer to corporate culture and the remaining three refer
to the organizational structure of the company:
a. Division of work (specialization): If a task is
distributed to many, the specialists have no overall view. If every
employee sees only his small section, there can be mistakes and
misunderstandings if there is a lack of coordination. Furthermore, it is
easy to create blinders, which leads to the dominance of special
interests and lack of a global view (selectivity of the viewing angle)
b. Separation of decisions and execution: In
a strict hierarchy the responsibility is always with the higher level, so
that all responsibility lies with the management, the executive
committee or the supervisory board. However, they have neither the
information nor the reference to carry out a follow-up on the decisions
of the lower level. As a rule, they are not involved at all, so no one is
responsible. The employees at the lower levels are only given
quantitative targets. Result-oriented quantitative management systems
support unethical behavior, since they put employees under pressure to
reach the given figures. If management only controls the performance
results, this corresponds to the maxim that the end justifies the
means.
c. Principle of command and obedience (strict line of command): Waters
quotes a witness who was asked why he did not report the illegal
behavior:
“I had no power to go higher. I do not report to anyone else than my
superior“ and “I had to assume that whatever he told me came from his
superior, just as my subordinate would have to assume that what I told
him came from my superior.”
The principle of command and obedience (strict line of command) leads
to a lack of responsibility for the lower levels, which are the only ones
that have the information for an ethical impact assessment.
Decentralized management presupposes ethics among the employees,
insofar as they have to assume responsibility. They must make decisions
with an impact on the success of the company and the welfare of third
parties, i.e. a balancing in the sense of an ethical stakeholder approach.
5. According to the Waters study, four out of seven criteria that prevent
ethical behavior in the company are attributable to corporate culture:
1. an unethical role model function of the superiors, as a general
toleration of unethical behavior or as unethical socialization, thus
modelling of such behaviour by the superiors. In particular, employees
just starting out can be influenced in this way.
2. an overgrown group loyalty that prevents misbehavior from being
reported to the outside and encourages competition among the groups.
3. a strong orientation of the success indicators on quantities in the case
of a simultaneous internal undervaluation of ethical, qualitative factors,
especially in order not to endanger the quantifiable goal fulfilment. This
results inter alia in an inhibition to openly address moral aspects in the
company.
4. A tendency of the company, thus indirectly all employees in the
company, to hide ethical violations, in order to prevent a poor image
and possible punishment from the outside.
6. How does Corporate Culture influence the employees?
Empirical studies show the influence of corporate culture on company
activities. Cullen, Parboteeah and Victor showed that a company culture
perceived as ethical by the employees had a positive effect on the
commitment of employees in the company organization. In an ethical
corporate culture, managers are increasingly ethical. Finally, the
employees lie less in an ethical corporate culture. In fact, the corporate
guidelines, the officially desired behavioral rules, often differ from the
actual, “secret rules of the game.” All other instruments for the purpose
of winning will fail if the corporate culture does not communicate them.
Can it be a good strategy if it can be seen as a contradiction to the
behavior of employees and therefore cannot be implemented? Ethical
behavior in the company therefore requires an ethical corporate culture.
Otherwise, ethical behavior as a violation of norms would be sanctioned
by the other employees (and vice versa). . Conversely, the wrong norms
of a group can also lead to opportunism, conformity, and adaptation of
the group members. This would lead to a herd behavior in an ethically
wrong direction, as was the case with the group norms of the bankers
within the framework of the financial crisis (group-think effect).
The ethical corporate identity is derived from the ethical corporate
culture and is intended to ensure a high level of identification of the
employees with the company and with the company’s goals. An ethical
corporate image thus serves the formation of an ethical corporate
culture within, the ethical reputation of the company without, and
creates trust among the stakeholders.
7. A contra-view: According to Stanford professor Jeffrey Pfeffer,
successful managers are selfish, mendacious and reckless. Pfeffer is thus
opposed to the dominant leadership theory that “good managers should
be modest, sincere and authentic. This fallacy spreads especially in the
leadership industry with its seminars, books, trainers, coaches and, of
course, the business schools and personnel departments.” Pfeffer does
not question that companies would benefit from an ethical leadership,
but he sees this as unrealistic and encourages young managers to
behave unethically to make careers. “Of course these are all wonderful
qualities and there is also no doubt that companies and their employees
would be better off if their leaders behaved morally. But they do not.
They usually do the opposite of it. One reason are well-known
psychological mechanisms. Whoever wants to be successful must not be
modest, but must make as much self-promotion as possible. And lies are
not only ubiquitous, but also very effective. According to a study, 74 per
cent of companies say it is right to lie to employees about their true
chances of advancement, because otherwise they would be less committed.”
To be successful, according to Pfeffer, managers must be nasty.
Successful managers are loud and lie to themselves and others.
“Managers often present themselves completely differently than they
really are. They create their own reality and believe in it. This self-
deception also has a tremendously positive effect: anyone who can
deceive himself can also deceive others. Or the concept of moral
licensing: when people have once behaved ethically or morally, they
have the feeling that they are allowed a meanness. All of this is
empirically proven.”
What is the counter-argument for this?
8. Lawrence Kohlberg’s Hierarchy of Moral Development:
Level 1: Morality according to regulation
There is only an orientation to the laws and norms, more precisely to the
sanctions. If there is only a small risk of being punished, or the penalties
are light, one behaves unethically. By way of example, the speed limits
are exceeded if there are no speed controls.
Level 2: Morality based on reciprocity
“Treat others as you want to be treated, for instance by your colleague
or your competitor” (Golden rule or part of the categorical imperative as
well as “live and let live”). Man achieves the insight that we are
dependent on one another in the workplace and that other interests are
also to be accepted.
Level 3: Superior morality of responsibility
There is no reciprocity when the ethical behaviour serves overarching
goals, values that have been understood for themselves (ethos).
Behaviour is based on principles that are thought to be right and one
tries to weigh the consequences of their own actions ethically.
Conclusion and Criticism
Kohlberg shows in his step model the development of the human ethical
judgement capability in life phases.
9. Ethics in Business Management Education:
Bad examples also corrode morals as well. It can be dangerous to
continue to preach utility maximization with model thinking and to
represent this as the only rational behaviour. The consequence will be
that people orient themselves towards these maxims of action and
suppress their positive human qualities, such as compassion, willingness
to help, general sacrifice and selflessness. Management education
in particular must therefore ask whether it has not, at least indirectly,
created these immoral managers and perhaps even adversely affected
social development as a whole.
40. Fraud and Understanding the Moral Mind: Need for Implementation of Organizational
Characteristics into Behavioral Ethics Petr Houdek, Science and Engineering Ethics (2020)
26:691–707 https://doi.org/10.1007/s11948-019-00117-
a. This paper focuses on four specific characteristics of the organizational world
that remain insufficiently explored by behavioral ethics, and which encourage the
emergence and the persistence of dishonesty on the individual level: (i)
dishonesty and deception as primarily desired traits in some professions and
sectors, (ii) some degree of dishonesty as an acceptable cost for other required
traits of an employee or a manager, (iii) dishonesty and cheating as moral or
prosocial activities (or so at least as seen by the decision-makers), and (iv)
inefficiently implemented ethical systems.
b. Dishonesty and Deception as Desired Traits in Some Professions and Sectors:
i. Even though obvious social costs of dishonest behaviour may exist, if an
organization draws pure benefit from it, motivation to prevent deception in
its employees may be negligible. According to estimates, only one-fourth of
all frauds in the financial sector is exposed, and the penalty rarely matches
the extent of the damage (Dyck et al. 2017). These hidden crimes give rise to
a long-term company culture based on deceiving clients. By contrast, the
universal assumption of behavioral ethics is that dishonesty or cheating is
despicable and lowers one’s value in an organization and on the job market,
and that is why organizations are bound to prosecute and limit it.
Stereotypically, it may occur in professions such as investment bankers,
politicians, lobbyists, spies, actors, and salesmen, or fields such as public
relations (PR) or marketing. Lawyers are another example of a profession in
which the art of deception can be systematically rewarded. “[We] are
trained to rationalize. In law school, one is asked to argue that one case is
similar to or different from another. One is expected to be able to argue
every side of any issue. We are trained to draw lines from any point A to any
point B…. Rationalizing dishonesty takes practice. It gets easier over time.”
c. Dishonesty as an Acceptable Cost for Other Desired Traits of an Employee:
i. If people can convincingly explain (to themselves) why their behavior is not
problematic, amoral behavior can spread. One study linked the ability to
rationalize with creativity (Gino and Ariely 2012). Both creativity and
dishonesty could be manifestations of similar mental ability (or willingness)
to break the rules, be they conventional ways of thinking in problem-solving,
or social norms. Francesca Gino and Dan Ariely experimentally showed that
more creative people are generally more willing to cheat. Priming for
creativity also increased the willingness to cheat. The relationship between
creativity and dishonesty seems to be mediated by the ability to
rationalize—to creatively bend and modify rules. Thus, organizations may
need to reach a compromise between higher creativity and innovation in
their employees, and lower respect for social norms. Dealing with the ethical
impacts of one’s work does not help one’s career outlook in an environment
where only intelligence and innovativeness are rewarded.
d. Dishonesty and Cheating as Moral or Prosocial Activities (or so at Least as Seen
by the Decision-Maker)
1. According to the moral foundations theory (Graham et al. 2011), there are at
least five basic moral preferences that cannot be simply ordered—individuals,
groups or cultures assign different priorities to them. People facing
organizational ethical dilemmas may not only care about justice (as the opposite
of dishonesty and cheating), but also about loyalty to a group, respect for
authority, sanctity or purity (not degradation), and especially care for others.
Although behavioral ethics stems from the theory of moral foundations, few
empirical studies test how employees solve contradictions between different
moral foundations.
2. An instance of the conflict between moral preferences is deception that is used
to protect others from harm or even to benefit them (white lies). Examples
include lying to avoid undermining a colleague by criticizing him in front of
others, praising poor artwork done by a child, or complimenting a partner for a
failed meal. Levine and Schweitzer (2014) showed that people who lie to help
others (especially at their own expense) are viewed as more moral than people
telling the truth and benefiting from it. A lie is typically regarded negatively only
when it is self-serving and benefits the liar.
e. Ineffective or Harmful Ethics Systems
In effective ethical systems, “managers must model the desired behavior and employees need to see
that sanctions occur if codes are violated. Communication is a requirement for codes to be
successful.” In contrast, organizations with window-dressed ethics systems could end up punishing
people who respect the accepted moral norms—e.g., a professor who punishes plagiarism and then
receives poor feedback from students.

f. The Environment in Which People Operate
The environment in which people operate activates explicit or implicit norms
that, in turn, influence the tendency to cross the ethical line.
Cialdini et al. (1990), for example, found that
the amount of litter in an environment activates
norms prescribing appropriate or inappropriate
littering behavior in a given setting and, as a
result, regulates littering behavior. Related research
has found that the presence of graffiti
leads not only to more littering but actually to
more theft (Keizer et al. 2008), and an abundance
of resources leads to increased unethical
behavior (Gino & Pierce 2009). In fact, even
more subtle situational factors, such as darkness
in a room, have been found to lead to increased
dishonesty (Zhong et al. 2010). Taken
together, these studies suggest that visual stimulation
from the environment or its physical
features can produce profound changes in behavior
surrounding ethical and social norms.
Like the experiments by Milgram (1974) and
Zimbardo (1969), these studies focused on situational
factors leading people to cross ethical
boundaries and demonstrated that people fail
to predict the influence of such subtle factors
on their behavior.
Cullings in BE for Ethical Infra
1. Ethically Adrift - Celia Moore and Francesca Gino, Harvard
a. Social categorization processes have a number of effects on our moral awareness: we consider
behavior manifested by members of our in-groups as morally acceptable, even when it may not be,
and we define unethical behavior as less problematic as long as it is committed against a member
of an out-group.
b. However, witnessing unethical behavior committed by an out-group member makes us less
likely to follow suit; similarly, witnessing positive behavior by an in-group member may inspire
us to imitate it. These findings suggest that choosing the right exemplars of moral behavior—
either positive in-group members to emulate or negative out-group members from which to
differentiate oneself—may strengthen the magnet inside one’s moral compass.
c. Brown, Treviño, and Harrison (2005) define ethical leadership as “the demonstration of
normatively appropriate conduct through personal actions and interpersonal relationships, and
the promotion of such conduct to followers through two-way communication, reinforcement, and
decision-making”. Thus, leaders can be encouraged to use transactional efforts (e.g.,
communicating, rewarding, punishing, emphasizing ethical standards) as well as modeling to
influence their followers to behave ethically.
d. Individuals’ sense of anonymity—which, as we have discussed, facilitates unethical
behavior—is undermined when they believe they are being monitored. Studies have shown that
even when people are told their actions are anonymous, they respond to subtle cues of being
watched, such as the presence of eye-like spots on the background of the computer they are
using.
e. It seems that monitoring may work when it subtly reminds people to be their own best selves
but is less effective when it provides an external, amoral reason to comply with an external
request (such as a regulation or policy).
f. But future research is needed to disentangle when and under what conditions monitoring
systems lead to more ethical behavior and when they backfire. Tenbrunsel and Messick's work
(1999) shows that external monitoring changes the way that people frame choices, moving them
from an ethical frame, in which they make choices because of a sense of intrinsic value, to a
business frame, in which they make choices to maximize profit.
g. Careful and cognizant goal-setting: Setting overly narrow or ambitious goals can blind individuals
to other important objectives, and over-commitment to goals can motivate individuals to do
whatever it takes to reach them.
However, carefully designed goals may have the power to appropriately direct behavior toward
ends that meet both business and moral obligations. Points to consider:
• Whether goals are too specific or too challenging
• Whether they include an appropriate time frame
• How they will affect risk taking, intrinsic motivation, and organizational culture
h. Intrapersonal processes
Increasing self-awareness: Individuals differ in the extent to which they are aware of their own
attitudes, feelings, needs, desires, and concerns, a trait called private self-consciousness.
Private self-consciousness, or mindfulness, promotes introspection and, as a result, is
associated with correspondence between attitudes and behavior. It is also
associated with a tendency to resist persuasion and efforts to change one’s attitudes.
E.g., signing at the top of a form, rather than being reminded at the bottom to be truthful,
has been shown to result in more truthful reporting.

Increasing one’s sensitivity to moral emotions


The emotions that have been most frequently studied in the context of moral choice are negative
ones such as shame, guilt, and envy.
Guilt causes us to want to make amends and expiate the negative feeling.
Though guilt is not a universal consequence of unethical behavior, it can be an instructive
emotion that later leads to reparative or altruistic behaviors.
In contrast, shame can lead to less adaptive behaviors such as rumination and aggression.
Behavioural Ethics-XUB
Structural Impediments to Ethical Behaviour
Structural Impediments to Ethical Behaviour in
Organisations
• Division of Labour--- leads to seeing the trees and missing the forest. Also easy to
hide small unethical practices at the ground level, if management is looking only
at the outcome and not the process.
• Separation of Power from Execution—the process is at lower levels, while the
responsibility is at higher levels. The concept of lean manufacturing and the
Toyota Way!
• Principle of Command and Obedience (Displacement of Responsibility). A case
for decentralisation of responsibility.
Cultural Impediments to Ethical Behaviour in
Organisations
• Leadership Role at all levels—Unethical Socialisation
• Company Loyalty and Fear Psychosis—Hide Ethical Violations to protect company image.
• A Transactional Orientation—Growth and Rewards based on target-achievement!
The Ethical Prisoner’s Dilemma
Payoff (A, B)            B behaves morally    B behaves immorally
A behaves morally        5, 5                 0, 6
A behaves immorally      6, 0                 1, 1 (Nash equilibrium)


The Ethical Prisoner’s Dilemma
• The ethical prisoner's dilemma for fair competition is as follows: the worst
case for company manager A is if he behaves morally but the manager of
another company, B, does not; the best case for A is if A behaves immorally
but B does not. B is in the same decision-making situation.
• The result is the combination in which both companies operate unfairly, the
worst collective outcome (Nash equilibrium = no one can unilaterally improve
their payoff through another strategy).
• Without ethical rules, such as law enforcement, a company that behaves
ethically when the ethical prisoner's dilemma arises finds itself in the
worst-case situation.
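The payoff logic above can be checked mechanically. The following is a minimal Python sketch (an illustration, not part of the original material) that tests each strategy profile of the 2x2 game for a Nash equilibrium by asking whether either manager could gain from a unilateral deviation; the payoff numbers are taken from the table above.

from itertools import product

STRATEGIES = ("moral", "immoral")

# payoffs[(a_strategy, b_strategy)] = (payoff to A, payoff to B), per the table above
payoffs = {
    ("moral",   "moral"):   (5, 5),
    ("moral",   "immoral"): (0, 6),
    ("immoral", "moral"):   (6, 0),
    ("immoral", "immoral"): (1, 1),
}

def is_nash(a, b):
    """True if neither player can improve by unilaterally switching strategy."""
    pa, pb = payoffs[(a, b)]
    a_gains = any(payoffs[(a2, b)][0] > pa for a2 in STRATEGIES if a2 != a)
    b_gains = any(payoffs[(a, b2)][1] > pb for b2 in STRATEGIES if b2 != b)
    return not (a_gains or b_gains)

for a, b in product(STRATEGIES, STRATEGIES):
    if is_nash(a, b):
        print(f"Nash equilibrium: A {a}, B {b} -> payoffs {payoffs[(a, b)]}")

# Only (immoral, immoral) is printed: mutual unethical conduct is the unique equilibrium,
# even though (moral, moral) would leave both managers better off.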
Public Goods Ethical Dilemma
Payoff (A, B, Third Parties)   B behaves morally   B behaves immorally
A behaves morally              1, 1, (0)           0, 5, (-5)
A behaves immorally            5, 0, (-5)          3, 3, (-10)


Public Goods Ethical Dilemma
• In the case of the environment, the cost of pollution is borne by the
public, whose health and quality of life are adversely affected
(negative external effects).
• In the case of work safety, the company can save costs at the expense
of employees.
• Even if they all wanted to behave ethically, they could not, because
there would then be the risk of ending up in the worst-case situation.
• What is the solution? Moral Economics—Monetary incentives and
Sanctions!
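As a rough illustration (not from the original material), the same best-response check can be applied to the public goods matrix above, this time also summing the third-party payoff to show that the privately stable outcome is the worst one for society as a whole.

STRATEGIES = ("moral", "immoral")

# payoffs[(a, b)] = (payoff to A, payoff to B, payoff to third parties), per the table above
payoffs = {
    ("moral",   "moral"):   (1, 1,   0),
    ("moral",   "immoral"): (0, 5,  -5),
    ("immoral", "moral"):   (5, 0,  -5),
    ("immoral", "immoral"): (3, 3, -10),
}

for (a, b), (pa, pb, third) in payoffs.items():
    a_gains = any(payoffs[(a2, b)][0] > pa for a2 in STRATEGIES if a2 != a)
    b_gains = any(payoffs[(a, b2)][1] > pb for b2 in STRATEGIES if b2 != b)
    stable = not (a_gains or b_gains)   # Nash equilibrium for A and B only
    welfare = pa + pb + third           # society-wide sum, including the externality
    print(f"A {a:8s} B {b:8s} stable={stable} total welfare={welfare}")

# Only (immoral, immoral) is stable, yet its total welfare (-4) is the lowest of the four
# outcomes—which is why the slide points to external rules, incentives and sanctions.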
Moral Economics: Morality Must Be Worthwhile

• It asserts that if morality is to be achieved, the incentives must be designed to
make moral behaviour worthwhile
• Karl Homann rejects the moral self-control of the individual by means of
internalized values because it would be exploited in market competition.
• A moral framework should be designed in such a way that self-interest
becomes socially productive—incentivizing whistleblowing!
Factors That Encourage Unethicality in Business
• Dishonesty and Deception as Desired Traits in Some Professions and Sectors--
professions such as investment bankers, politicians, lobbyists, spies, actors, and
salesmen, or fields such as public relations (PR) or marketing. Lawyers are
another example of a profession in which the art of deception can be
systematically rewarded.

• Dishonesty and Deception as Moral Traits in Ethical Dilemmas—learning to
swim well in a grimy ocean; lying out of loyalty to your organisation; lying to
protect others from harm or even to benefit them; lying is regarded negatively
only when it is self-serving, not when it helps others.

• Weak and ineffective Ethical Infrastructures


The Role of Bias in Unethical Behaviour
• Motivated Blindness
• This bias is rooted in the self-interest of people and arises when individuals face a
conflict of interest, or creates one in others, which impairs their ability to judge
rationally the ethics of their actions--- Auditing Firm and Management Consultant
• Although intentional corruption is probable, evidence on unconscious bias suggests
that professionals are often unaware of how morally compromised they are by
conflicts of interest—The Salmonella Outbreak and The Peanut Corporation of
America
• Outcome Bias
• Because outcomes are often the result of a confluence of individual decisions made
by many people, the outcome bias is manifested when people judge the ethics of a
particular action based on the outcome rather than the individual acts producing it
• In other words, the outcome bias arises when a positive outcome mitigates a known unethical
process or action creating it—e.g., food poisoning!
The Role of Bias in Unethical Behaviour
• Omission Bias
• According to the omission bias, not doing something is seen as less problematic than
doing something, even if the outcomes are the same---adulterated food!
• Indirect Bias
• This bias arises when harm caused by delegating is seen as less problematic than
harm caused by oneself
• The growing use of third-party certifications—the courts are more lenient toward the parent
company.
• Identifiable Victim Bias
• When people see non-identifiable or statistical victims as less problematic
than identifiable or known victims, even when identification provides no
meaningful information, they are affected by this bias—e.g., asbestos victims!
Behavioural Ethics
Why good people break bad
The Theory of Bounded Ethicality
Chugh, Bazerman and Banaji
Bounded Ethicality--Continued

• Bounded Ethicality refers to the systematic and predictable ways in which humans
act unethically beyond their own awareness
• Bounded Ethicality—the idea that our ability to make ethical choices is often
limited or restricted because of internal and external pressures.
Bounded Ethicality- Continued

• Self-View vs Self-Threat
• Self-Protection Mode vs Self-Enhancement Mode
• Predictive Value of the Theory of Bounded Ethicality—Self-Enhancement Mode
Leads to Unethical Fall


Bounded Ethicality-Continued

• These cognitive biases operate outside our own awareness and therefore, in a way,
make us blind to the ethical failures we commit
• Includes a blindness component, which can be seen as activating an ethical fading
process, which removes the difficult moral issues from a given problem or
situation, hence increasing unethical behaviour


Socialisation--Social Categorisation and Moral
Distance
• The psychological process by which individuals differentiate between those who
are like them (in-group members) and those who are unlike them (out-group
members).

• Categorizing individuals as members of an out-group allows us to dehumanize
them, to exclude them from moral considerations, or to place them outside our
“circle of moral regard”, and thus mistreat them without feeling (as much)
distress.

• The notion of moral distance holds the idea that people will have only ethical
concerns about others that are near to them. If the distance increases, it
becomes easier to behave in unethical ways.
Albert Bandura
The Theory of Moral Disengagement
The Theory of Moral Disengagement
Eight Cognitive Distortion Mechanisms

• Moral Justification—justify one’s actions as serving the ‘‘greater good’’


The Theory of Moral Disengagement--Continued

• Euphemistic Labelling—using sanitized or convoluted language to make an
unacceptable action sound acceptable
• Advantageous Comparison—making a behaviour seem less harmful or of no
import by comparing it to even worse behaviour


Advantageous Comparison
The Theory of Moral Disengagement--Continued

• Displacement of Responsibility—deflect responsibility for their own behaviour by
attributing it to social pressures or the dictates of others, usually a person of
higher power or authority
Displacement of Responsibility
Diffusion Of Responsibility

• Diffusion of Responsibility—avoid personal feelings of culpability for their actions
by hiding behind a collective that is engaged in the same behaviour; also called the
Bystander Effect
Diffusion of Responsibility
The Theory of Moral Disengagement--Continued

• Distortion of Consequences—misrepresenting the results of one’s actions by
minimizing them or focusing only on the positive.
Distortion Of Consequences
The Theory of Moral Disengagement--Continued

• Attribution of Blame—justifying one’s behaviours in reaction to someone else’s
provocation or behaviour


Attribution of Blame
The Theory of Moral Disengagement--Continued
• Dehumanization—minimizing or distorting the humanity of others so as to lessen
identification with or concern for those who might be harmed by one’s actions
Behavioural Ethics-Cullings
• 1.“Toward a Better Understanding of Behavioral Ethics in the
Workplace”, David De Cremer and Celia Moore, Annual Review
of Organizational Psychology and Organizational
Behavior,2020, 7:19.1–19.25
• When organizations fail to conduct their business in an
honorable way, they damage their reputations, the
interests of the industries they represent, and eventually
the welfare of society as a whole. As a result, trust in
business is hit hard, and profits and performance suffer.
This makes identifying how organizations can improve the
ways in which they manage unethical behaviors more
important, and is perhaps why ethics in organizations has
never received more research attention than it does
today.
• 2. Behavioral Field Evidence on Psychological and Social
Factors in Dishonesty and Misconduct, Lamar Pierce and
Parasuram Balasubramanian, Olin Business School,
Washington University in St. Louis
• Social processes: One of the most promising and
important topics on dishonesty is how social processes
influence behavior, with a growing body of work using
behavioral field evidence to explore it. Bucciol et al. [4]
used direct observation and interviews to identify how
bus passengers travelling with family members were
more likely to have a valid ticket, but not those travelling
with friends…. This is consistent with a field experiment
by Wenzel [11] that found information on others’
behavior improved tax compliance, as well as results
showing employees become more dishonest when joining
dishonest firms.
• Fairness, equity, and social comparison: Social
comparison and related fairness and equity concerns are
also a focus of recent work. Early work by Greenberg [13]
was one of the first to address this topic using behavioral
field data, showing increased theft following a pay
decrease at two out of three factories.
• Moral reminders and preferences: Related to this, Shu et
al. [23] used a field experiment to show that insurance
customers who signed at the top of forms reported higher
annual mileage than those who signed at the bottom,
presumably because signing provided a moral reminder.
• Culture: Other papers focused on how interactions within
and across ethnic and national groups can change levels
of dishonesty, including favoritism in Olympic judging
[25], ethnic diversity and corruption in Indonesia [26], and
stock market fraud in Kenya [27]. One approach by
Bianchi and Mohliver [28] links economic conditions
during executives’ formative periods to stock option
backdating.
• Professionalism: Similarly, teachers who are expected to
instill ethical values in children have been shown to cheat
when pressured with strong financial and career
incentives [31].
• Incentives and control: Monitoring, for example, has been
shown to reduce theft [33, 34], unexcused absenteeism
[35], and dishonest reporting [36] in organizational
settings such as call centers, restaurants, schools, and
banks
3. Blind forces: Ethical infrastructures and moral disengagement in
organizations, Sean R. Martin, Jennifer J. Kish-Gephart, James R.
Detert. Organizational Psychology Review 1–31, Reprints and
permission: sagepub.co.uk/journalsPermissions.nav DOI:
10.1177/2041386613518576
• To address unethical behavior in organizations, scholars have
discussed the importance of creating an ethical organizational
context or ethical infrastructure that encourages ethical, and
sanctions unethical, behavior both formally and informally.
• Specifically, in recent decades, research in social psychology,
behavioral economics, and behavioral ethics has increasingly
uncovered the multitude of ways in which otherwise good
people can be morally blind and engage in unsavory acts
without being aware of the unethical nature of their actions.
• These include bounded ethicality, self-deception and ethical
fading, intuitive morality, plus a host of other cognitive biases,
indirect agency biases, and attribution biases. Moral
disengagement, a theory that
explains the process and mechanisms by which an individual’s
moral self-regulatory system is decoupled from his or her
thoughts and actions, represents a particularly powerful
manner by which individuals can rationalize or neutralize
reprehensible conduct.
• While organizational infrastructures may be effective in
reducing the unethical behavior that organizational members
are aware of, this aforementioned research suggests that even
in organizations with formal and informal systems prioritizing
ethics, many unethical decisions and behaviors may go
unrecognized, or be rationalized in ways that make them seem
ethical to insiders. Extreme examples of this phenomenon,
referred to in O’Reilly and Chatman’s (1996) review of
organizational culture, include the thoughts and actions of cult
members who see their organization as morally beyond
reproach.
• Treviño and Brown (2004, p. 75) described Arthur Andersen
employees believing in the ethicality of their organization
saying, ‘‘we’re ethical people; we recruit people who are
screened for good judgment and values.’’ Yet at the same time,
their auditors and consultants were engaged in numerous
unethical activities. These examples suggest it is possible for
members to perceive their organization as being one in which
ethics are prioritized, routinely enacted, and as having formal
and informal systems supporting those priorities— that is, as
having a strong ethical infrastructure— and yet still be working
in an environment where various unethical behaviors go
unnoticed or are easily rationalized.
• Importantly, we do not argue that strong ethical infrastructures
necessarily foster more unethical behavior in an absolute
sense. Indeed, they likely do root out severe and blatantly
unethical types of behavior (Jones, 1991). Rather, we argue
that there are numerous outcomes associated with perceptions
of a strong ethical infrastructure that can trigger members’
tendencies to morally disengage about common, less intense
behaviors. Further, we argue that moral disengagement likely
plays a role in reinforcing members’ perceptions that the
ethical infrastructure of their organization is strong.
• We follow Tenbrunsel and colleagues’ (2003) lead in
considering culture and climate as key components of a more
expansive term—ethical infrastructure—that incorporates
these constructs and others to describe the general ethical
context of an organization.
• When organizational members perceive consistent
expectations being communicated by the formal and informal
systems, the organization’s ethical culture is said to be strong
and employees are likely to abide by the clear and consistent
messages about behavioral expectations. When these
messages are seen as conflicting, the ethical culture is deemed
weaker (Treviño, 1990). Whether based on Treviño’s
theorizing or other models of ethical culture that have been
proposed (e.g., Hunt & Vitell’s [1986] research on corporate
ethical values, and Kaptein’s [2008, 2011] corporate ethical
values model), empirical work generally supports the expected
negative relationship between perceptions of the
organization’s ethical culture and unethical behaviour.
• Corresponding to the introduction of ethical culture, Victor and
Cullen (1987) introduced the ethical climate construct, or ‘‘a
group of perspective climates reflecting the organizational
procedures, policies, and practices with moral consequences’’
(Martin & Cullen, 2006, p. 177). The authors identified two
dimensions that, when crossed, theoretically derive nine ethical
climate types. The first dimension, ethical criteria, includes
three broad categorizations of moral philosophies: egoism,
benevolence, and principled. These dimensions parallel
Kohlberg’s theory of cognitive moral development wherein an
individual’s level of moral reasoning is classified as self-
centered (Level 1), other-centered (Level 2), or focused on
broad principles of fairness and justice (Level 3). The second
dimension, locus of analysis, draws on sociology literature (e.g.,
Merton, 1957) to identify the referent group as individual (i.e.,
within the individual), local (i.e., internal to the organization
such as a work group), or cosmopolitan (i.e., external to the
organization such as a professional organization).
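As a small aid to reading the typology above, the following Python sketch (an illustration, not from the source article) simply enumerates the nine theoretical ethical climate types obtained by crossing the two dimensions.

from itertools import product

ethical_criteria = ("egoism", "benevolence", "principled")     # moral philosophy dimension
locus_of_analysis = ("individual", "local", "cosmopolitan")    # referent-group dimension

climate_types = [f"{criterion} x {locus}"
                 for criterion, locus in product(ethical_criteria, locus_of_analysis)]
print(len(climate_types), "theoretical ethical climate types:")   # prints 9
for t in climate_types:
    print(" -", t)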
• Later theorizing offered a more simplified model of the
relationship between ethical climate types and unethical
behavior, arguing that employees are more likely to behave
ethically in organizations that stress ‘‘the consideration of
others’’ (such as benevolent and principled climates) versus in
organizations that stress self-interest (egoistic climate).
Empirical results, which rest on employees’ perceptions of their
work environment, generally support a positive relationship
between egoistic climate and unethical behavior, and negative
relationships between benevolent and principled climates and
unethical behaviour.
• Researchers have recognized that ethical climate and ethical
culture are highly related descriptors of an organization’s
overall ethical context. In a comprehensive model, Tenbrunsel
et al. (2003) subsumed elements of ethical culture and ethical
climate under the term ‘‘ethical infrastructure,’’ which they
defined as the organizational climates, informal systems, and
formal systems relevant to ethics in an organization. The
authors modeled ethical infrastructure as three concentric
circles—starting with the innermost circle of formal systems,
followed by informal systems, and then encompassed by the
outermost circle, organizational climates—that simultaneously
support and influence each other. The formal systems refer to
the documented and standardized procedures upholding
(un)ethical standards. The informal systems are those signals
that are not documented—they are felt and expressed through
interpersonal relationships. Both the formal and informal
elements are undergirded by individuals’ shared perceptions of
those systems.
• Over the past decades, approaches to studying the ethical
decision making of individuals have proliferated and evolved.
Some emphasize ethical decision making from a more
deliberative frame—emphasizing, for example, individuals’
moral awareness and reasoning, their level of moral
development, their dispositional tendency to attend to and
reflect upon moral information, or their prioritization of a
moral identity (their desire to be and be seen as a moral
person). From this perspective, individuals are treated as
decision makers who perceive moral information, establish
moral judgment, form an intention for action, and act
accordingly. And indeed, moral awareness and level of moral
development have been found to be positively (negatively)
related to (un)ethical intentions.
• Recently, however, other research has shown that individuals,
and not just those with obvious moral development limitations,
often engage in unethical behavior with little pre-active
cognition about the moral considerations involved. This work
has shown how various factors can lead individuals to make
decisions that result in unethical behavior that is either unseen
or cognitively re-construed. One particularly valuable approach
to explaining the overlooking or re-construing of unethical
behavior is the study of moral disengagement (Bandura,
1986)—a process by which the connection between individuals’
moral self-regulation systems and thoughts and actions is
interrupted. Moral disengagement can operate as an automatic
and anticipatory factor preventing individuals from perceiving
moral cues, or as a post hoc rationalization to justify unethical
decisions. In other words, not only can it facilitate unethical
action by dampening moral awareness and preventing
individuals from perceiving moral information, but it can also
bias judgment when individuals are somewhat morally aware.
• The notion that individuals have the cognitive capability to
rationalize inconsistencies in their espoused moral beliefs and
their behavior in practice, and thus make themselves (and
others) blind to ethical gaffes, has a long history. For example,
drawing on interviews of white-collar criminals accused of
embezzling money from their employers, Cressey noted that
‘‘normal’’ people refused to accept their actions as criminal.
Rather, they minimized their indiscretions by using neutral
language (e.g., ‘‘borrowing’’ rather than ‘‘stealing’’) or citing
injustices at the hand of the victim (i.e., their organizations).
Similarly, criminal theorists Sykes and Matza (1957) argued
against the prevailing theory that juvenile delinquency was the
result of learning a different set of values in low socioeconomic
environments. Instead, the authors suggested that juvenile
delinquents share society’s conventional values but, in certain
situations, use cognitive neutralization techniques to weaken
the apparent necessity of applying those values. The authors
identified several neutralization techniques such as denying
responsibility for one’s actions or denying that a victim had
been unjustly harmed (or harmed at all). Drawing on this
foundational work, organizational researchers have suggested
additional types of cognitive distortion techniques that are
commonly found in organizations where systemic corruption is
uncovered.
• Moral disengagement theory posits that people generally
behave in ways that are consistent with their internal standards
of morality because they experience anticipatory negative
emotions such as guilt, shame, or remorse when they consider
deviating from those standards. However, individuals are
at times motivated (consciously or non-consciously) to
disengage this moral self-regulatory process in ways that fit
their needs, effectively bypassing the negative emotions that
would normally come from violating internal standards.
• Bandura (1986) articulated eight cognitive distortion
mechanisms by which individuals morally disengage. Moral
justification occurs when individuals justify their actions as
serving the ‘‘greater good’’ (as in the case of substandard jobs
being characterized positively as ‘‘economic development’’).
Euphemistic labeling involves using sanitized or convoluted
language to make an unacceptable action sound acceptable—
such as ‘‘borrowing’’ software purchased by someone else, or
engaging in ‘‘creative accounting.’’ Advantageous comparison
involves making a behavior seem less harmful or of no import
by comparing it to even worse behavior. For example, a person
who takes a ream of paper home from the office for personal
use might say, ‘‘It’s not like I’m taking a printer home with me.’’
With displacement of responsibility, people deflect
responsibility for their own behavior by attributing it to social
pressures or the dictates of others, usually a person of higher
power or authority (e.g., ‘‘I was just following orders’’).
Diffusion of responsibility allows individuals to avoid personal
feelings of culpability for their actions by hiding behind a
collective that is engaged in the same behavior, or by using the
rationale that ‘‘everyone else is doing it, too.’’ Distortion of
consequences involves misrepresenting the results of one’s
actions by minimizing them or focusing only on the positive.
Claiming that one’s (unethical) actions are ‘‘no big deal,’’ or that
they ‘‘don’t hurt anyone’’ are common ways of trying to
convince oneself and/or others that one’s behaviour is
acceptable because little or no harm is done. Attribution of
blame (also known as ‘‘blaming the victim’’) is the process of
justifying one’s behaviors in reaction to someone else’s
provocation or behavior (e.g., ‘‘It’s their own fault for trusting
others with this responsibility’’). The notion of ‘‘buyer beware’’
may be considered a broader example of the way business
behavior has been construed so as to make harming a victim
easily justifiable as being the victim’s own fault. Last,
dehumanization involves minimizing or distorting the humanity
of others so as to lessen identification with or concern for those
who might be harmed by one’s actions (e.g., ‘‘those clowns’’).
Additional examples of each moral disengagement mechanism
are provided in Table 1.
• Dispositional moral disengagement can be defined as ‘‘an
individual difference in the way that people cognitively process
decisions and behaviour with ethical import that allows those
inclined to morally disengage to behave unethically without
feeling distress’’ (Moore et al., 2012, p. 2). According to this
approach, people who have a tendency to morally disengage
will be more likely to engage in unethical or deviant behaviour
across situations.
• According to Beu and Buckley (2004), for instance, politically
astute leaders can influence followers toward unethical
behaviour by reframing actions and situations in ways that
draw attention away from ethical issues and by encouraging
the use of morally disengaged reasoning. An important part of
using their political skill effectively is the ability to inspire trust,
defined as one’s willingness to be vulnerable to another, which
in turn reduces others’ felt need to closely monitor their words
and deeds. In effect, the leader, whose rationale is trusted with
little thought or questioning, helps the follower to reinterpret
the situation using a morally disengaged lens.
• The very nature of moral disengagement is alarming because it
demonstrates the power of the human mind to distort
perceptions and rationales such that unethical thinking and
behavior is not recognized as such. If individuals’ perceptions of
unethical behavior are readily distorted in this way, it seems
plausible that employees could perceive (and report) that an
organization’s infrastructure is ethical when indeed unethical
rationales and practices exist and persist but are simply
unnoticed. In the following section, we thus caution against the
assumption that organizational infrastructures are ethical
because members—even many members—view them as such.
Instead, we argue that an ethical infrastructure may not only
harbor unethical thinking and behavior, but also, in some ways,
may make it more difficult for members to see certain types of
problems— particularly those of the day-to-day, less morally
intense variety (Jones, 1991). Further, we posit that moral
disengagement is, to some degree, an important factor in
preserving employees’ perceptions that their organization
enjoys an ethical infrastructure. Our argument rests on the
recognition that several fundamental human tendencies found
in prior work to motivate morally disengaged thinking may
actually be present more often in situations in which
employees perceive themselves to be part of an ‘‘ethical’’
infrastructure.
• Defined as ‘‘a motive or behavior that seeks to benefit the self’’
(Cropanzano, Stein, & Goldman, 2007, p. 6), self-interest is a
powerful human motive. A potential problem arises in that
both broad organizational objectives and specific performance
goals can at times be extremely challenging or even impossible
to achieve, and thus potentially motivate employees to take
shortcuts or engage in unethical behavior to avoid losing out on
maximum personal gain. Schein (2004) has noted that
rationalizations for unethical behavior that is easily identifiable
to outsiders—including those in different functions within the
same organization—are often unrecognized by embedded
members for whom they have become part of the taken-for-
granted fabric of their environment. This is because
‘‘normatively appropriate’’ is largely a perceptual process that
can vary among individuals and groups who have chosen, over
time, to prioritize different bases for judging social action.
• The bad news, however, is that although strong ethical
infrastructures are likely to suppress blatantly self-interested
motivations and unethical behavior, they are not necessarily
equally effective at suppressing morally disengaged reasoning
and unethical behavior related to other motivations—such as
the desire to maintain a positive self-image or the desire to
reduce cognitive load—commonly linked to strong ethical
infrastructures. Indeed, in their original theorizing, Victor and
Cullen (1987) recognized that even the venerable benevolent
(or caring) ethical climate is imperfect: Corporations with caring
or rules climates may be more prone to violations of trade laws
than corporations with a professional climate ... when faced
with the dilemma of offering a bribe or losing a contract, an
employee from a caring climate may judge that s/he is
expected to give the bribe because the contract would help
people who work for the firm, even though it is illegal. (Victor &
Cullen, 1987, pp. 67–68). This suggests that even organizations
with noble ethical intentions prioritize some values over others,
and some groups or people over others (e.g., in-groups such as
employees over out-groups such as customers or competitors),
which creates a series of opportunities for distorted cognition
about what is appropriate (Giessner & van Quaquebeke, 2010).
• In one example from Margolis and Molinsky’s (2008, p. 856)
study of ‘‘necessary evils,’’ a police officer must evict a
delinquent tenant from her home. Although this action will
cause emotional and financial pain to the tenant, the officer
needs to carry out the act to comply with the law and protect
the rights of the landlord. The officer’s reasoning—‘‘Well, they
put themselves in this situation’’ (attribution of blame)—is
likely an institutionalized rationalization that helps to minimize
the discomfort of a challenging situation while maintaining the
positive self-image of officers who must undertake such
behavior.
• Ethical infrastructure, limited cognition, and moral
disengagement: Classic psychological research has shown
various risks resulting from humans’ desire to reduce cognitive
effort (Fiske & Taylor, 1984) and their susceptibility to social
influences. For example, followers in a hierarchy will often
automatically experience an ‘‘agentic shift’’ in which they
become an instrument of a perceived authority figure and do
not think carefully for themselves about the ethical
ramifications of their own (leader instructed) behavior
(Milgram, 1969). In the classic Milgram experiments and more
recent replications, participants used morally disengaged
language to explain why they continued to shock another
person when directed to do so by an experimenter: ‘‘I was just
doing what he told me’’ (displacement of responsibility; ‘‘Basic
instincts,’’ 2007). Similarly, work in social learning theory and
social information processing indicates that individuals learn
about norms and expected behaviors from those around them
(Bandura, 1986; Salancik & Pfeffer, 1978), thus sparing
themselves the cognitive effort of having to think through or
experience everything for themselves. And when it comes to
moral reasoning and behavior more generally, the finding that
most individuals operate at a ‘‘conventional level’’ of moral
development (Kohlberg, 1969; Treviño, 1992)—wherein they
take their cue from what they see others around them doing—
suggests that individuals do not routinely ‘‘think through’’ the
ethical implications of every stimulus they face in their work
life.
• ‘‘the function of the cultural pattern [is] to eliminate
troublesome inquiries by offering ready-made directions for
use, to replace truth hard to attain by comfortable truisms, and
to substitute the self-explanatory for the questionable.’’
• As shown in Figure 1, the influence of a strong perceived ethical
infrastructure on decreased cognition and hence potentially
increased moral disengagement is proposed to operate in part
through increased trust, commitment, and identification. For
instance, ethical infrastructures have been linked empirically
and theoretically to trust (see Figure 1, Step 1). And people are
less suspicious of and less concerned about monitoring the
behaviors of those they trust and more open to absorbing new
knowledge from them without careful analysis. In short, trust
allows individuals to reduce their cognitive effort (see Figure 1,
Step 2). Thus, if trust minimizes the extent to which people are
likely to closely examine others’ rationales for action, it follows
that moral disengagement in co-workers may be less likely to
be noticed or questioned and more likely to be mimicked in
ethical infrastructures because of the trust that exists in such
environments (see Figure 1, Step 3).
• Recent scandals involving some of the most well respected
corporations in the world, including Johnson & Johnson, Merck,
and Toyota, provide some anecdotal evidence for this
possibility. In 2008, for instance, Johnson & Johnson initiated a
‘‘phantom recall,’’ instructing employees to surreptitiously buy
back problematic Motrin IB caplets from convenience stores
(Besser & Adhikari, 2010). Given Johnson & Johnson’s
reputation for recalling Tylenol in the early 1980s and its
corporate reputation for a climate of care, organizational
decision makers may have unconsciously engaged in moral
licensing when initiating and overseeing this discreet recall.
Furthermore, because the legal implications of the action were
unclear and the ultimate outcome was intended to be positive
(i.e., prevent sickness from tainted medication), there were
certainly multiple bases for rationalizing that the action was
‘‘morally justified’’ and in line with the company’s strong
ethical culture.
• Solutions: training can be used to help employees identify
morally disengaged reasoning in their own and others’ thinking;
devil’s advocates; ‘‘stop and think’’ moments; ethics officers.
Those individuals would also need to be endowed with
sufficient power to avoid being blindly overruled or shouted
down by the majority.
• For example, extant research suggests that reaffirming one’s
core values helps to counter the negative effects of ego
depletion (i.e., weakened self-control) because it refocuses
one’s perspective on the bigger picture.
4. Building Houses on Rocks: The Role of the Ethical Infrastructure
in Organizations, Ann E. Tenbrunsel, Kristin Smith-Crowe and
Elizabeth E. Umphress
• We argue that designing ethical organizations requires an
understanding of how and why such systems work; that is, one
must be able to distinguish between ethical foundations of rock
and those of sand. First and foremost, such an understanding
requires an informed, theoretical identification of the
organizational elements that contribute to an organization’s
ethical effectiveness. We introduce the term ethical
infrastructure to describe these elements, which we identify as
incorporating the formal systems, the informal systems, and
the organizational climates that support the infrastructure. We
suggest that the first two elements can be categorized both by
the formality of these systems as well as the mechanisms used
to convey the ethical principles, including communication,
surveillance, and sanctioning systems. We further argue that
these formal and informal elements are part of another
element of the ethical infrastructure—the organizational
climates that support the infrastructure—that permeates the
organization.
• The third, and equally crucial step is to understand how these
elements interact to influence ethical behavior. We propose a
theory of ethical embeddedness to describe these
interrelations. We argue that formal systems are embedded
within their informal counterparts, which in turn are embedded
within the organizational climates that support the
infrastructure. The strength and ultimate success of each layer,
we assert, depends on the strength of the layer in which it is
embedded. We use this theory to develop predictions about
the relationships between the ethical infrastructure and ethical
behaviors. We conclude by linking these predictions to their
associated practical implications, including offering
recommendations for organizations that desire to enhance
their ethical effectiveness. (Draw three layers to explain this
concept of ethical infrastructure).
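One simple way to picture the embeddedness claim above is to treat each layer's effective strength as capped by the strength of the layer in which it sits. The Python sketch below is a hypothetical illustration with assumed 0-1 ratings and a simple min() rule, not a model proposed by the authors.

# hypothetical ratings for one organization, on a 0.0-1.0 scale (assumed values)
layers = {
    "organizational climate": 0.9,
    "informal systems": 0.6,
    "formal systems": 0.8,
}

effective = {}
effective["organizational climate"] = layers["organizational climate"]
# informal systems are embedded within the organizational climates
effective["informal systems"] = min(layers["informal systems"],
                                    effective["organizational climate"])
# formal systems are embedded within the informal systems
effective["formal systems"] = min(layers["formal systems"],
                                  effective["informal systems"])

for name, value in effective.items():
    print(f"{name:24s} effective strength = {value:.1f}")

# The well-documented formal system (0.8) is held down to 0.6 by the weaker informal layer:
# codes and training act as a Band-Aid when the layers surrounding them are soft.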
• Formal systems are those that are documented, that could be
verified by an independent observer. We focus on three types
of formal systems that we believe to be the most prevalent and
the most directly observable: communication, surveillance, and
sanctioning systems. Formal communication systems are those
systems that officially communicate ethical values and
principles. Formal representations of such systems include
ethical codes of conduct, mission statements, written
performance standards, and training programs. Formal
surveillance systems entail officially condoned policies,
procedures, and routines aimed at monitoring and detecting
ethical and unethical behavior. Examples include the
performance appraisal itself as well as procedures for reporting
ethical and unethical actions, including reporting hot lines and
ethical ombudsmen. Formal sanctioning systems are those
official systems within the organization that directly associate
ethical and unethical behavior with formal rewards and
punishments, respectively. Perhaps the most obvious example
of such a system is one in which unethical behavior is clearly
and negatively related to performance outcomes, such as
evaluations, promotions, salary, and bonuses.
• Each of these processes is independent. It is possible, for
example, that a performance standard is set, but never
monitored, or that behavior is monitored, but not sanctioned.
Thus, it is important to recognize the contributions that each of
these mechanisms makes to the ethical infrastructure.
• Formal communication systems include ethical codes of
conduct, mission statements, written performance standards,
and training programs. These systems are used quite frequently
by organizations.
• Informal communication systems are defined as those
unofficial messages that convey the ethical norms within the
organization. Informal, “hallway” conversations about ethics,
informal training sessions in which organization members are
“shown the ropes,” and verbal and nonverbal behaviors that
communicate ethical principles all represent different
mechanisms by which ethical principles are informally
communicated. Informal Surveillance and Sanctioning Systems.
In order for informal communication systems to be effective,
there must be an accompanying informal surveillance system,
consisting of someone or some mechanism that can informally
monitor ethical and unethical behaviors. Informal surveillance
systems are those systems that monitor and detect ethical and
unethical behavior, but not through the official channels of the
formal surveillance systems. Rather, informal surveillance
systems are carried out through, among other channels,
personal relationships (e.g., peers) and extra-organizational
sources (e.g., the police). The informal representation of the
surveillance system may best resemble a spy network, an
“internal CIA.” Informal sanctioning systems are those systems
within organizations that directly associate ethical and
unethical behavior with rewards and punishments; however,
unlike its formal counterpart, informal sanctioning systems do
not follow official organizational channels. Informal sanctioning
systems may take the form of group pressure to behave in a
certain manner or the perceived consequences that are
experienced if one engages in certain ethical or unethical
activities. Organizational members may threaten to punish
someone for engaging in an ethical behavior, such as whistle
blowing, with such punishment including isolation from group
activities, ostracism (Bales, 1958; Feldman, 1984), and even
physical harm.
• In general, we define organizational climate as organizational
members’ shared perceptions regarding a particular aspect of
an organization; in other words, organizational climates are in
reference to something (e.g. , ethics). Because climate is born
out of the context of an organization, climates vary across
different contexts. Also, because the experiences that
organizational members have of any given context are so
complex, multiple organizational climates for different aspects
of an organization exist simultaneously. We should note that
some theorists have made a fundamental distinction between
organizational climate and a related concept, organizational
culture, with the latter construct being essentially broader than
the former. However, for our purposes, we do not assume that
these are two distinct constructs, but rather that they are two
different perspectives (i.e., using different language and coming
from different disciplines) of the same phenomenon.
• Organizational climate consists of the perceptions of
organizational members (e.g., Schneider, 1990) regarding
ethics, respect, or procedural justice within organizations,
whereas formal ethical systems consist of tangible objects and
events pertaining to ethics, such as codes of ethics. Likewise,
the informal ethical system consists of tangible objects and
events relevant to ethics (e.g., conversations among workers),
while, again, climate is made of perceptions.
• At the root of the proposed curvilinear relationships between
elements of the ethical infrastructure and ethical behavior is a
proposed cognitive shift that occurs when an ethical
infrastructure is in place as compared to when such an
infrastructure is nonexistent. When an ethical infrastructure is
nonexistent, an individual must decide what is ethical. In
contrast, when an ethical infrastructure is in place, the
individual interpretation of what is ethical is supplanted by the
interpretation that is advanced by the organization. Individuals
in this type of organization no longer rely on their own values;
rather, they look to the organization to decide what is ethical.
• We argue that a weak ethical infrastructure, because it does
not promote individual reflection, results in more unethical
behavior than when the ethical infrastructure is nonexistent or
is strong. When an organization has a weak ethical
infrastructure, individuals exhibit more unethical behavior than
when such an infrastructure is nonexistent because they
engage in less sophisticated moral reasoning; instead, they look
to the organization for guidance but don’t find much help. A
weak ethical infrastructure also produces more unethical
behavior than a strong ethical infrastructure. In both cases, the
individual looks to the organization for guidance. However, by
definition, in a strong ethical infrastructure, unlike in a weak
structure, the organization is clearly conveying the importance
of ethical principles. Consequently, when an organization has a
strong ethical infrastructure, individuals engage in more ethical
behavior than when an organization has a weak ethical
infrastructure because the organization has sent a signal that
ethical behavior is important. While the reason for this ethical
behavior is fundamentally different for a strong ethical
infrastructure (“I am doing this because the organization has
told me it is important”) than for a nonexistent ethical
infrastructure (“I am doing this because it is the right thing to
do”), the end result is the same. Ethical behavior is therefore
higher when a surveillance and sanctioning system is either
nonexistent or strong than when such a system is weak, thus
producing the curvilinear relationship.
• Tenbrunsel and Messick (1999) provide an illustration of this
phenomenon in the domain of formal surveillance and
sanctioning systems. They argued and found support for the
proposition that cooperative behavior would be lower when a
weak versus a nonexistent sanctioning system was in place.
Using a prisoner’s dilemma as the context, subjects had the
option to either cooperate by adhering to an industry
agreement to reduce emissions or defect by not adhering to
such an agreement. Half of the subjects were told that there
would be no fines associated with defection (non-existent
surveillance and sanctioning system),whereas the other half
were told that there would be a weak surveillance and
sanctioning system (characterized by a small probability of
being caught and small fines if defection was noted). Results
provided support for the notion that the weak system would
increase undesirable behaviors, with defection rates higher in
the weak sanctioning condition than in the condition in which
no sanctions were present. An additional study extended these
findings, illustrating that a weak sanctioning system produced
less cooperative behavior than both a nonexistent sanctioning
system and a strong sanctioning system.
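To make the framing argument concrete, here is a small Python sketch with purely hypothetical numbers (not data from the study): once a weak sanctioning system moves people into a business frame, defection becomes an expected-value calculation, and a small fine with a low detection probability does not make cooperation pay.

def expected_defection_payoff(gain, detection_prob, fine):
    """Expected monetary payoff of defecting under a given sanctioning system."""
    return gain - detection_prob * fine

coop_payoff = 10.0   # hypothetical payoff from honoring the emissions agreement
defect_gain = 14.0   # hypothetical payoff from defecting, before any fine

systems = {
    "nonexistent": (0.0, 0.0),   # no monitoring, no fine -> many stay in an ethical frame
    "weak":        (0.1, 5.0),   # small chance of a small fine -> business frame
    "strong":      (0.9, 50.0),  # high chance of a large fine -> compliance pays
}

for name, (p, fine) in systems.items():
    ev = expected_defection_payoff(defect_gain, p, fine)
    print(f"{name:11s} system: defection EV = {ev:6.1f} vs cooperation = {coop_payoff}")

# Under the weak system defection still pays (13.5 > 10), consistent with the finding that a
# weak sanction produced more defection than no sanction; under the strong system it does not.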
• Ethical systems vary in the degree to which they reflect an
organization’s commitment to ethical principles, which in turn
influences the degree to which they influence an individual
employee’s ethical behavior. The lower the perceived
commitment to ethical principles, the less salient they are in
the organizational member’s experience and hence the less
influence they have on an individual’s behavior. We
argue that elements that reflect a greater degree of
commitment to ethical values are those that are more inherent
to the organization. True belief in ethical principles is reflected
not so much in what is said but in what is done. In this sense,
we predict that formal elements of the ethical infrastructure
reflect a weaker degree of commitment than informal
elements, which in turn reflect a weaker degree of
commitment than the relevant organizational climates.
• At the base of our proposition is the notion of consistency
between the various elements of the ethical infrastructure. In
order for codes of conduct and ethical training to have an
impact, they must be consistent with more systemic ethical
elements, such as the organization’s informal reinforcements
and the relevant organizational climates. If such congruence is
missing, then employees receive a mixed message,
substantially reducing the impact that these formal systems
might have. For example, imagine a situation in which an
organization engages in extensive ethical training, but has an
informal reward system that promotes individuals based on the
bottom-line, independent of the means used to get there. The
effectiveness of this training would be substantially diminished
in comparison to a situation in which the organization’s
informal system of promotions rewarded individuals who were
ethical.
• Following the strategically-focused climate argument (Smith-
Crowe et al., in press), an organization’s ethical infrastructure
will only be effective to the extent that the elements within it
act in concert. If they are to be effective, formal ethical systems
must reside in informal reinforcements and organizational
climates that are solid. If not, the formal systems act more like
a Band-Aid than an antibiotic, addressing the symptoms, but
not the underlying causes. Similarly, if the informal system is
incongruent with the pertinent climates, the effectiveness of
that informal system is compromised. We therefore argue that
stronger elements, or those which reflect a deeper
commitment to ethical principles and ideals, moderate the
effectiveness of weaker elements
• Practically, our discussion has several implications for
organizations that wish to increase their ethical effectiveness.
First, it suggests that a focus on formal systems—which are the
most visible and the most highly touted—isn’t enough. Rather,
it is important to delve below the ethical exterior to uncover,
other, perhaps more important, elements, such as informal
systems and organizational climates. Second, the relationship
between these elements is complicated, with half-hearted
attempts producing potentially disastrous results. Third, one
must look at the elements of the ethical infrastructure in
conjunction with one another, for it is really the interplay
among them that is critical.
5. “Does Power Corrupt or Enable? When and Why Power
Facilitates Self-Interested Behavior”, Katherine A. DeCelles,
D. Scott DeRue, Joshua D. Margolis, Tara L. Ceranic
• The questions of when and why people will advance their
own interests at the expense of the common good are
evident across a wide range of organizational behavior
research. Therefore, we define self-interested behavior as
actions that benefit the self and come at a cost to the
common good. Power presents organizations with a paradox
related to self-interested behavior. On the one hand, there is
a widespread belief and evidence that power corrupts, and
people in positions of power can have a substantial negative
impact on the common good by acting solely in their own self-
interest. Yet, power can increase perspective taking and
interpersonal sensitivity, suggesting that power might
increase the emphasis placed on others’ needs as opposed to
one’s own interests. In parallel to this research on power, it
has been argued that self-interested behaviour is a function
of individuals’ moral identity. Moral identity is the extent to
which an individual holds morality as part of his or her self-
concept and it has been shown to influence the degree to
which people emphasize their own versus others’ needs.
• We expect self-interested behavior to be a function of both
power and moral identity. We expect this interaction between
power and moral identity to manifest itself because
individuals’ traits can increase the accessibility of cognitive
concepts and then influence how people interpret
information, especially in situations where an individual
perceives him- or herself to be autonomous or powerful.
Based on this research, it follows that people with high moral
identities will have more readily available moral concepts in
their accessible mental structures and that when experiencing
feelings of power, they will be more aware of the moral
implications of a situation relative to those with a lower moral
identity. Reynolds (2006) referred to this recognition by an
individual of a situation’s moral content as “moral
awareness.” Individuals with higher moral identities are likely
to have greater moral awareness (Reynolds, 2006), which we
argue should lead them to engage in even less self-interested
behavior when feeling powerful because they are likely to be
especially aware of the moral implications of their actions.
Conversely, feeling powerful, yet having a lower moral
awareness (associated with a lower moral identity), likely
results in individuals not seeing any problem with benefiting
themselves at the expense of others.
• Across two studies, we found that power predicts self-
interested behavior differently depending on moral identity.
In our first study of working adults, there was a negative
association between trait power and self-interested work
behavior when individuals had a high moral identity, yet a
positive relationship between trait power and self-interest
when individuals had a low moral identity.
• Our research has important practical implications. As
organizations look to promote people to more powerful
positions or empower people with greater discretion, our
research suggests that understanding how central morality is to
the person’s self-concept will be a critical consideration for
predicting whether that person will engage in self-serving
behavior. For employees who are already in positions of
power or who exhibit strong trait power, it is important that
organizations work to develop their moral identity.
6. “Managing Unethical Behavior in Organizations: The Need for a
Behavioral Business Ethics Approach”, David De Cremer (Judge
Business School, University of Cambridge) and Wim Vandekerckhove
(University of Greenwich Business School)
• A prescriptive approach thus implies that people are rational
human beings, who make conscious decisions about how to
act. As a result, prescriptive approaches to business ethics
assume that bad people do generally bad things and good
people do good things, because they are rational decision
makers. Explaining situations whilst sticking to this rational way
of reasoning is attractive for a variety of reasons (De Cremer,
2009; De Cremer & Tenbrunsel, 2012): (a) it is a simple
assumption that promotes an economic way of thinking about
moral violations, (b) it allows us to blame a few bad apples for the
emerging violations, and (c) it provides justified grounds to
punish those regarded as rationally responsible. However,
many situations exist where good people do bad things - an
observation that has received considerable empirical support.
These observations challenge the accuracy of the prescriptive
approach in predicting the extent to which so-called rational
human beings will display ethical behavior. It seems to be the
case that because of rather irrational, psychological tendencies
humans do not always recognize the moral dilemma at hand
and engage in unethical behaviors without being aware of it.
Indeed, Tenbrunsel and Messick even note that people do not
see the moral components of an ethical decision, not so much
because they are morally uneducated, but because
psychological processes fade the ethics from an ethical
dilemma.
• To make sense of the fact that good people can do bad things,
an alternative viewpoint is needed that accounts for people’s
morally irrational behavior. We propose that this alternative
viewpoint is a descriptive approach that examines more closely
how people actually take decisions and why they sometimes do
not act in line with the moral principles that are universally
endorsed. Indeed, it is intriguing to observe that the actors in
many business scandals do not see themselves as having a bad
and ethically flawed personality. They consider themselves as
good people who have slipped into doing something bad. How
can we explain this? An interesting idea put forward by the
behavioral business ethics approach is that many organizational
ethical failures are not only caused by the so-called bad apples.
In fact, closer inspection may reveal that many ethical failures
are in fact committed by people generally considered to be
good apples, but depending on the barrel they are in they may
be derailed from the ethical path.
• Taken together, the assumption that when people are
confronted with moral dilemmas they are automatically aware
of what they should be doing, and are therefore in control and able to do
the good thing, has limited predictive value because humans
seem to deviate from what rational approaches predict.
• Or, as Tenbrunsel and Smith-Crowe (2008, p. 548) note:
“Behavioral ethics is primarily concerned with explaining
individual behavior that occurs in the context of larger social
prescriptions. The role of behavioral ethics in addressing ethical
failures is to introduce a psychological-driven approach that
examines the role of cognitive, affective and motivational
processes to explain the how, when, and why of individual’s
engagement in unethical behaviour”.
• These two topics illustrate how psychological processes play a
role in shaping people’s moral judgments and actions that are
relevant to business and organizations: (a) the processes and
biases taking place during ethical decision making and (b) the
impact of the social situation on how ethical judgments and
actions are framed and evaluated. Research on these two
topics advocates the view that when it comes down to ethics,
many people are followers, both in implicit and explicit ways.
More precisely, the field of behavioral ethics makes clear that
people are in essence followers of their own cognitive biases
and the situational norms that guide their actions.
• Bounded ethicality includes the workings of our human
psychological biases that facilitate the emergence of unethical
behaviors that do not correspond to our normative beliefs.
Specifically, people develop or adhere to cognitions (biases,
beliefs) that allow them to legitimize doubtful, untrustworthy
and unethical actions. Importantly, these cognitive biases
operate outside our own awareness and therefore in a way
make us blind to the ethical failures we commit. In addition,
this blindness is further rooted in the self-favoring belief
that in comparison to the average person one can be looked
upon as fairer and more honest. These self-favoring
interpretations of who they are in terms of morality are used
by humans in implicit ways to infer that they will not act
unethically, which as a result lowers their threshold for
monitoring and noticing actual violations of their own ethical
standards.
• This concept of bounded ethicality thus literally includes a
blindness component, which can be seen as activating an
ethical fading process which, as Tenbrunsel notes, removes
the difficult moral issues from a given problem or situation,
hence increasing unethical
behaviour. Below, we briefly discuss a number of psychological
processes that influence people to show unethical behavior
even if it contradicts their own personal beliefs about ethics.
These processes are: moral disengagement, framing, anchoring
effects, escalation effects, level construal,
and should-want self.
• Moral disengagement: Moral disengagement can be defined as
an individual’s propensity to evoke cognitions which
restructure one’s actions to appear less harmful, minimize
one’s understanding of responsibility for one’s actions, or
attenuate the perception of the distress one causes others
(Moore, 2008, p. 129).
• Framing. How a situation is cognitively
represented affects how we approach moral
dilemmas and take decisions. Insights building upon the
concept of loss aversion (the notion that people perceive losses
as more negative than they regard gains of an equal magnitude
as positive) suggest that self-interest looms larger when people
are faced with loss. Indeed, losses are considered more
unpleasant than gains are considered pleasurable and hence
invite more risk-taking to avoid the unpleasant situation; such
risk-taking often leads to behavior violating ethical standards.
To avoid making losses, firms can resort to unethical practices.
The 2008 Financial Crisis is one example. Put differently: when
looking at a situation in terms of losses, corruption is never far
away.
• Anchoring effects: This effect holds that our judgments and
decisions are strongly influenced by the information that is
available and accessible. Importantly, this information can be
very arbitrary or even irrelevant to the decision and judgments
one is making. Rumours of sexual harassment by superiors can
bias one’s own sexual harassment of subordinates. The classic
experiment in which a roulette-wheel spin anchored estimates of
the percentage of African nations in the UN illustrates how
arbitrary an anchor can be.
• Escalation effects: One important observation concerns the
fact that those showing bad behavior never arrive immediately
at the stage of doing bad. Rather, it seems like bad behavior
emerges slowly and gradually as can be inferred from remarks
like “I never thought I would show this kind of behavior.” In the
literature this effect is referred to as the escalation effect or the
slippery slope effect. The famous social psychology experiment
by Milgram (1974) illustrates this principle: participants did not
start at 450 volts but escalated gradually from much lower initial
voltages. Thus, many unethical decisions and actions
grow slowly into existence and this escalation process itself is
not noticed consciously. For example, research by Cain,
Loewenstein, and Moore (2005) described how auditors are
often blind to clients’ internal changes in accounting practices,
but only if the changes appear gradually.
• Level construal: According to this theory, acts that are in the
distant future cannot be experienced directly and therefore are
hypothetical. Hypothetical situations bring their own mental
constructions with them, and a consequence of this process is that
more distant events (e.g. events in the long term) are
represented in less concrete detail. Under such
circumstances, people adhere more easily to moral standards
as guidelines for their decisions and judgments. In contrast,
events that are closer in time are represented in less abstract
and more concrete ways. Under those circumstances people
will rely more on concrete details and relevant contextual
information to make decisions and judgments. Then, egocentric
tendencies will more easily influence the actions one will take.
• Forecasting errors: Participants consistently overestimated
their future emotional reactions to both positive and negative
events (Gilbert et al., 1998;
Wilson, Wheatley, Meyers, Gilbert, & Axsom, 2000). With
respect to what people expect they will do, literature on
behavioral forecasting shows that people overestimate their
tendency to engage in socially desirable behaviors like being
generous or cooperative (Epley & Dunning, 2000), and
underestimate their tendencies toward deviant and cruel
behavior like providing electric shocks (Milgram, 1974).
Moreover, people also overestimate their willingness to forgive
moral transgressions by overvaluing restorative
tactics such as offering apologies (De Cremer, Pillutla, &
Reinders Folmer, 2011). In a similar vein, it also follows that
people are biased in their predictions in such a way that they
will predict to behave more ethically than they actually will do
in the end.
• Should-want Selves: This distinction was introduced by
Bazerman et al. (1998) and is used to describe intrapersonal
conflicts that exist within the human mind; notably conflicts
between what we morally should be doing and what in reality
we want to do. Specifically, people predict that they will act
more morally in situations than they actually do when being
confronted with these situations. These faulty perceptions and
estimates can be explained by the distinction between should
and want selves. The “want” self is a reflection of people’s
emotions and affective impulses. Basically, the want self is
characterized more as “hot-headed”. The “should” self, in
contrast, is characterized as rational and cognitive, and can
thus be looked upon as “cool headed”. Applying this distinction
to our forecasting problem, it follows that the “should” self is
more active when making decisions on the long-term, whereas
the “want” self is doing more of the talking when it concerns
short-term decisions. Morality and ethics as standards to live by
are thus more accessible and guiding when making predictions
towards the future. Moreover, because people are generally
optimistic and have great confidence in their own judgments
they will consider their predictions towards the future as valid
and reliable.
• Social conditions: Finally, in 1971 Zimbardo (2007) conducted
an impressive experiment at the Stanford University campus in
which participants assumed the roles of “prisoner” or “guard”
within an experimentally devised mock prison setting.
Specifically, many of the participants classified as “prisoners”
were in serious distress and many of the participants
classified as “guards” were behaving in ways which brutalized
and degraded their fellow participants. Participants were so
immersed in the prison setting that they took up their roles
too seriously, leading to behavior that was considered
inappropriate and unethical at times. This study shows the
powerful influence of organizational roles and how they can
implicitly influence people’s beliefs and consequently their
actions.
• Moral Distance: This idea of context being a powerful
determinant for people to act in bad and unethical ways
towards others has been central in the work of Bauman on
“Moral Distance” (Bauman, 1991). The notion of moral distance
holds the idea that people will have only ethical concerns about
others that are near to them. If the distance increases, it
becomes easier to behave in unethical ways.
• Organisational Features: A first organizational feature is the
kind of industry people may work in. For example, the LIBOR
scandal where traders manipulated the interest rate known as
Libor illustrates that a context defined in terms of finance
actually encouraged dishonest behavior. A second organizational
feature is the structure of the organization, which creates more
versus less distance towards others and can influence the
degree of unethical behavior. Based on the idea of Bauman
(1991, p. 26) that bureaucracy functions as a “moral sleeping
pill”, it stands to reason that mechanistic organization
structures introduce more distance and hence allow for more
unethical behaviors to emerge.
• In the 1990s Miceli, Near and Dworkin conducted extensive
descriptive research on whistleblowers (for an overview see
Miceli, Near & Dworkin, 2008). This work has caused a huge
shift in how prescriptive business ethics discusses
whistleblowing. ( To be read for whistleblowing)
7. “Moral Disengagement in the Corporate World”, Jenny White, Albert
Bandura and Lisa A. Bero, Accountability in Research, 16:41–74, 2009,
© Taylor & Francis Group, LLC, ISSN: 0898-9621 print / 1545-5815
online.
• In the course of socialization, individuals adopt standards of
right and wrong that serve as guides for conduct. They monitor
their conduct, judge it in relation to their moral standards and
the conditions under which it occurs, and regulate their actions
accordingly. They do things that give them satisfaction and a
sense of self-worth, and they refrain from behaving in ways
that violate their moral standards because such conduct will
bring self-condemnation. However, moral standards do not
function as unceasing internal regulators of conduct. Self-
regulatory mechanisms do not operate unless they are
activated. Many psychosocial manoeuvres can be used to
selectively disengage moral self-sanctions. Indeed, large-scale
inhumanities are typically perpetrated by people who can be
considerate and compassionate in other areas of their lives.
8. Ethically Adrift: How Others Pull Our Moral Compass from True
North, and How We Can Fix It, Moore, C., and F. Gino,
http://nrs.harvard.edu/urn-3:HUL.InstRepos:10996801
• The fact that human survival depends on finding ways to live
together in peaceful, mutually supportive relations created an
evolutionary imperative for fundamental moral behaviors such
as altruism, trust, and reciprocity. In other words, we are moral
because we are social.
• However, much of our immorality can also be attributed to the
fact that we are social animals. In other words, this chapter is
about why we are immoral because we are social. When he
was sentenced to six years in prison for fraud and other
offenses, former Enron CFO Andy Fastow claimed, “I lost my
moral compass and I did many things I regret” (Pasha, 2006).
Fastow’s statement implies that if his moral compass had been
in his possession, he would have made better choices. In
contrast, we argue that unethical behavior stems more often
from a misdirected moral compass than a missing one. Given
the importance of morality to our identities, we would notice if
our moral compass went missing. However, a present but
misdirected moral compass could seduce us with the belief that
we are behaving ethically when we are not, while allowing us to
maintain a positive moral self-image. The idea that one’s moral
compass can veer away from “true North” has a parallel with
navigational compasses.
Conditions within a local environment, such as the presence of
large amounts of iron or steel, can cause the needle of a
navigational compass to stray from magnetic North, a
phenomenon called magnetic deviation. Explorers who are
aware of this phenomenon can make adjustments that will
protect them from going astray, but laymen can veer wildly off
course without being aware they are doing so.
• What forces are both powerful and subtle enough to cause
people to believe their actions are morally sound when in fact
they are ethically adrift? Existing research has offered two main
explanations for this phenomenon. The first considers
individuals who are ethically adrift to be “bad apples” whose
moral compasses are internally damaged. This explanation for
ethical drift harkens back to Aristotelian notions of human
virtue and persists in contemporary discussions of character as
the foundation of morality (cf., Doris, 2002).
• Consistent with this explanation, scholars have identified some
(relatively) stable individual differences that demagnetize
moral compasses, leading to unethical behavior. According to
this view, a deviated moral compass is evidence of an
individual’s faulty human nature. Indeed, the idea that
psychometric tests can identify “bad apples” before they are
hired underlies the common practice of integrity testing among
employers.
• In this chapter we focus instead on an increasingly dominant
alternative view, grounded in moral psychology and behavioral
ethics, that suggests that individuals’ morality is malleable
rather than stable. This alternative perspective proposes two
main reasons why we can become ethically adrift: intrapersonal
reasons (caused by human cognitive limitations) and
interpersonal reasons (caused by the influence of others). We
describe both of these reasons briefly below, before turning
the rest of our attention to the latter of these two.
• Social processes that facilitate neglect:
The research overviewed in this section suggests that social
norms and social categorization processes can lead us to
neglect the true moral stakes of our decisions, dampening
our moral awareness and increasing immoral behavior. Rather
than driving our own destiny, we look for external cues that
allow us to relinquish control of the wheel. Put another way,
“one means we use to determine what is correct is to find out
what other people think is correct”, a concept known as social
proof.
• We are more likely to engage in altruistic behavior if we see
others doing so and more likely to ignore others’ suffering if
others near us are similarly indifferent. We are even more likely
to laugh at jokes if others are also laughing at them. In general,
the more people engage in a behavior, the more compelling it
becomes, but the actions of one person can still influence our
behavior. Some of Bandura’s classic studies showed how
children exposed to an aggressive adult were considerably
more aggressive toward a doll than were children who were
not exposed to the aggressive model.
Together, this research suggests that others—either in groups
or alone—help to establish a standard for ethical behavior
through their actions or inaction. These “local” social norms
provide individuals with the proof they need to categorize
behavior as appropriate or inappropriate. Repeated exposure
to behavioral norms that are inconsistent with those of society
at large (as is the case, for example, with the subcultures of
juvenile delinquents) may socialize people to alter their
understanding of what is ethical, causing broader moral norms
to become irrelevant. Thus, when a local social norm neglects
morally relevant consequences, it dampens moral awareness,
and through this dampening, will increase unethical behavior.
• Social categorization: Social categorization is the psychological
process by which individuals differentiate between those who
are like them (in-group members) and those who are unlike
them (out-group members). Social categorization amplifies the
effect of social norms, as norms have a stronger effect on our
behavior when we perceive those enacting them to be similar
to ourselves. Unfortunately, this means that if we socially
identify with individuals who engage in unethical behavior, our
own ethical behavior will likely degrade as well. In one study,
college students were asked to solve simple math problems in
the presence of others and had the opportunity to cheat by
misreporting their performance and leaving with undeserved
money. Some participants were exposed to a confederate who
cheated ostentatiously (by finishing the math problems
impossibly quickly), leaving the room with the maximum
reward. Unethical behavior in the room increased when the
ostentatious cheater was clearly an in-group member (a
member of the same university as the participants) and
decreased when he was an out-group member (a student at a
rival university).
• These findings suggest an intersection between social norm
theory and social identity theory. Essentially, people copy the
behavior of in-group members and distance themselves from
the behavior of out-group members, and then use this behavior
to maintain or enhance their self-esteem, but in two different
ways. In-group members’ transgressions are perceived to be
representative of descriptive norms (those that specify how
most people behave in a given situation) and thus as less
objectionable than the same behavior by an out-group
member. In contrast, when assessing the immoral behavior of
an out-group confederate, people highlight injunctive norms
(those that refer to behaviors that most people approve or
disapprove of) and distance themselves from this “bad apple.”
Highlighting different types of norms, depending on
whether an in-group or out-group member is modeling the
behavior, helps individuals maintain a distinctive and positive
social identity for their in-group.
• Another consequence of social categorization is out-group
mistreatment. Categorizing individuals as members of an out-
group allows us to dehumanize them, to exclude them from
moral considerations, or to place them outside our “circle of
moral regard”, and thus mistreat them without feeling (as
much) distress. At a fundamental level, we conceive of out-
group members as less human and more object-like than in-
group members. Recent neurophysiological research has even
found that individuals process images of extreme out-group
members, such as the homeless or drug addicts, without many
of the markers that appear when they look at images of other
people (Harris & Fiske, 2006). Brain-imaging data even show
that individuals manifest fewer signs of cognitive or emotional
distress when they are asked to think about sacrificing the lives
of these extreme out-group members than when they
contemplate sacrificing the lives of in group members.
• Finally, social categorization also leads us to feel psychologically
closer to those whom we have categorized as members of our
in-group than to those we have categorized as out-group
members. When people feel connected to others, they notice
and experience others’ emotions, including joy,
embarrassment and pain. As individuals grow close, they take
on properties of each other and psychologically afford each
other “self” status. Indeed, copycat crimes are often
perpetrated by individuals who feel a psychological connection
to the models they are emulating. In other words, having a
psychological connection with an individual who engages in
selfish or unethical behavior can influence how one’s own
moral compass is oriented.
• Organizational aggravators of moral neglect:
Organizational socialization sets up role expectations for
individuals, communicates which organizational goals are
important, and establishes appropriate ways to achieve them.
Socialization processes per se are agnostic about questions of
morality. However, when individuals are new to an
organization, or when a pre-existing organizational culture re-
socializes individuals to new institutional demands, they look
for cues from others to identify appropriate behavior, and may
acclimate to norms that are morally corrupting. Thus, through
socialization processes, organizations can exacerbate social
facilitators of moral neglect.
• The human need to belong makes it easier to successfully
socialize individuals in unethical behavior. An example of this is
described in journalist Michael Lewis’ (1989) account of being
socialized into the sales culture at investment bank Salomon
Brothers. When he joined the firm, Lewis was informed that he
could either fit in by becoming a “jammer,” someone willing to
unload whatever stocks would most benefit Salomon Brothers,
regardless of their worth or benefit to the client, or be
labelled a “geek” or “fool”—that is, someone who behaves
more ethically (1989). Given these options, it becomes clear
why many chose to become jammers. ( My own PhD thesis is
relevant here).
• Roles: A spectacularly un-roadworthy car, the Pinto was
susceptible to “lighting up” (bursting into flames) during low-
speed, rear-impact collisions. Dennis Gioia, who was then Ford’s
vehicle recall coordinator, explains how the scripts of
his role, which included what to attend to and what to ignore
when making recall decisions, prevented him from recognizing
that leaving this model on the road could have fatal
consequences. Specifically, his scripted cues for initiating a
recall were restricted to whether negative outcomes occurred
frequently and had directly traceable causes (1992). After
determining that accidents involving “light ups” were relatively
rare and did not have a clear cause, his investigations went no
further, nor did he see this decision as morally problematic.
• Goals: In a notorious example, the sales goals set in Sears,
Roebuck & Co.’s auto repair centers in the early 1990s
prompted mechanics to regularly overcharge customers and
undertake unnecessary work on vehicles (Yin, 1992). Goals also
played a role in the dangerous design of the Ford Pinto. The
company gave engineers a goal called the “Limits of 2,000,”
which required them to produce a car that was less than 2,000
pounds (to maximize fuel efficiency) and cost less than $2,000
(to ensure a low price). This goal influenced the placement of
the Pinto’s rubber gas tank behind an insubstantial rear
bumper, a major factor in the Pinto’s tendency to “light up” in
low-speed collisions (Gioia, 1992). Goals and incentives can
both telescope our attention toward an outcome and blind us
to the reasons the goals or incentives were set up in the first
place. As an example, when police officers are given a target
number of crimes to solve, they typically become motivated to
pursue the crimes whose perpetrators are easiest to catch
(such as prostitution) rather than the crimes whose
perpetrators are more elusive but at least as important
to catch (such as burglars) (Stone, 1988).
• Facilitators of moral justification: If moral neglect represents
the absence of conscious consideration of the moral domain,
then moral justification refers to the process through which
individuals distort their understanding of their actions. Moral
justification allows us to reframe immoral actions as defensible,
reducing the dissonance or anticipation of guilt that may
function as an obstacle to unethical behavior, paving the way
for it.
• Self-verification: First, the drive for self-verification motivates us to interact with people
who see us as we see ourselves, since they can confirm our self-
concept. This tendency can lead individuals to create and
maintain cultures that may perpetuate morally questionable
behaviors, as individuals will seek to remain in the company of
those who confirm their positive self-regard, regardless of their
actions. Enron CEO Jeff Skilling, for example, reportedly
surrounded himself with “yes men” who built up his ego
without questioning his decisions. These “yes men” may have
helped Skilling confirm his positive self-views as a competent
executive without drawing attention to his morally
questionable behaviors.
• Second, people also solicit self-verifying feedback from others
and look for, see, and remember information that is consistent
with their existing self-concepts. As a result, people often
misinterpret feedback in ways that are consistent with their
self-concepts and dismiss information that is accurate but
inconsistent with those self-concepts.
• Organizational aggravators of moral justification:
o Organizational identification: If moral justification
involves sanctifying corrupt practices by appealing to
worthy ends, then the organization represents a powerful
“higher cause” to which individuals can appeal to make
suspect practices appear morally worthy. Unethical
behavior in support of organizational ends has been
termed unethical prosocial behavior because it is
undertaken for ostensibly good reasons—to benefit the
company. Fortunately, there is also evidence that
identification with more virtuous institutions can mitigate
unethical behaviour. Organizational identification, then,
can work both ways: exacerbating unethical outcomes
when institutions are corrupt and mitigating unethical
outcomes when they are more virtuous. Clearly, in both
instances, the organization represents a powerful force
that can be marshalled to justify specific practices,
whether virtuous or vicious.
• Group loyalty: Just as organizations can be a
compelling source of moral justification, so too
can groups of organizational members. Group
loyalty is a fundamental facilitator of moral
justification. People may abandon global or
universal moral norms in order to give preference
to those close to them. This is perhaps most
evident in cases of nepotism, when close others
are given undue preference in employment or
resource allocation. A study of the self-regulation
of misconduct within the U.S. Military Academy
also supports the idea that explicit notions of
loyalty toward one’s fellow officers provide a
justification to normalize and refrain from
reporting officially prohibited behaviour.
• Business framing and euphemistic language:
Sanitizing terms are used for a wide range of
harmful or otherwise prohibited business
practices. Many forms of fraud have colorful
names that evoke images far removed from their
actual, more nefarious content: “channel
stuffing” refers to the practice of booking sales to
distributors as final sales to customers, “candy
deals” involve temporarily selling products to
distributors and promising to buy them back later
with a kickback added on, “tango sheets” are
false books used to calculate earnings inflation
and hide expenses in order to hit quarterly
targets, and “cookie reserves” refer to using
surpluses from profitable years to improve the
balance sheet during leaner years (The
Economist, 2010). These terms support moral
justification by obfuscating the true purpose of
unethical activities and making consideration of
their true nature less likely.
• Intrapersonal consequences of moral
justification: If the main intrapersonal
consequence of moral neglect is a failure to
acknowledge or integrate moral considerations
into decision making and behavior, moral
justification prompts us to re-construe immoral
choices as morally innocuous or even morally
righteous. Intra-personally, moral justification can
manifest as moral disengagement, moral
hypocrisy, and moral licensing—consequences
that pervert how we evaluate moral decisions,
allowing us to make immoral choices more easily.
• Moral disengagement: Moral disengagement
refers to a set of eight cognitive mechanisms that
deactivate the self-sanctions that typically compel
us to behave morally. Thus, during his trial for
war crimes, Adolf Eichmann consistently
maintained that he would only “have had a bad
conscience if he had not done what he had been
ordered to do—to ship millions of men, women
and children to their death”. Eichmann, who here
has employed the moral disengagement
mechanism of displacing one’s moral agency to
organizational superiors, can legitimately claim
he was not guilty because his evaluation of his
own actions has been so thoroughly distorted.
Moral disengagement thus operates as a moral
compass disruptor, moving the needle towards
an activity that can be morally justified through
its mechanisms.
• Moral hypocrisy: A second intrapersonal
consequence of moral justification is moral
hypocrisy, or “morality extolled... not with an eye
to producing a good and right outcome but in
order to appear moral yet still benefit oneself” (a
good example needed).
• Moral licensing: A third intrapersonal
consequence of the social availability of moral
justifications is moral licensing. In the last decade,
researchers have studied “compensatory ethics”,
the phenomenon of using prior moral actions as a
credential or license to commit later unethical
actions and prior unethical actions as a
motivation to engage in later ethical ones.
Facilitators of moral inaction or immoral action:
Individuals may be aware of the moral content of their actions, make
accurate judgments about what is right and wrong, and still be
unable to follow through with desirable action. In this section, we
overview how social processes create obstacles to doing the right
thing or motivation to do the wrong thing.
Social processes that facilitate moral inaction or immoral action:
A number of social influences can create obstacles between good
intentions and ethical behavior. In this section, we explore social
conformity, obedience to authority, and diffusion of responsibility as
three types of social influence that make moral action less likely.
Social conformity:
Asch’s foundational experiments in the 1950s demonstrated how
individuals tend to conform to the social agreement they perceive
rather than to their own intuition about what is correct. In his most
classic experimental paradigm, participants in a room filled with
confederates were asked to assess which of a series of lines is the
same length as a “standard line.” Though the correct answer was
always unambiguous, in 12 of 18 trials, the confederates first
unanimously agreed on a wrong answer. In the face of this social
consensus, 75% of respondents provided a patently wrong answer at
least some of the time. When the conforming individuals were asked
why they provided wrong answers, they responded that they feared
looking foolish and, in the face of social consensus, began to doubt
their own intuitions. A partial explanation for the low rates of
whistleblowing in corporate wrongdoing must be the compulsion to
behave in concert with majority views; accordingly, best estimates
are that less than half of those who witness organizational
wrongdoing report it. Social conformity also helps us understand why
individuals mimic the egregious behavior of others, such as American
soldiers’ torture of prisoners at Abu Ghraib in Iraq.
Diffusion of responsibility:
The general finding that the presence of others inhibits the impulse
to help individuals in distress, known as the bystander effect, is
driven in large part by social conformity. People are less likely to
respond to an emergency when others are present, particularly when
those present are passive. Even the perception that others are
witnessing the same emergency decreases one’s likelihood of acting.
In another classic study, individuals heard a confederate having what
seemed to be a severe epileptic fit in another room. Eighty-five
percent of participants who believed they were the only other
person within earshot reported the seizure in under one minute. In
contrast, only 31% of those who were led to believe there were four
others within earshot reported the emergency before the end of the
six-minute experiment, and those who did took more than twice as
long to respond as those who believed they were alone.
The explanation of this moral inaction is often described in terms of
diffusion of responsibility: when the cost of inaction can be shared
among multiple parties, individuals are less likely to take
responsibility for action themselves.
Obedience to authority
Moral inaction may also be facilitated by individuals’ tendencies to
obey legitimate authority figures. As Milgram’s famous obedience
experiments showed 50 years ago, individuals relinquish personal
agency for their own actions easily in the face of requests from an
authority figure. Obedience to authority appears to be a deep-seated
psychological response that only a minority of individuals naturally
resist.
Organizational aggravators of immoral (in)action
Organizations can exacerbate social influences that lead to moral
inaction or immoral action because they are commonly structured in
ways that allow us to minimize moral agency for our actions. First,
bureaucracy and the anonymity it provides exacerbate the diffusion
of responsibility—the minimization of moral agency that occurs
when one is a member of a group.
Second, hierarchy exacerbates obedience to authority and the
displacement of moral agency onto organizational superiors.
As the Lord Chancellor of England stated 300 years ago, the
corporation “has no soul to be damned, and no body to be kicked”
(cited in Coffee, 1981), a fact that facilitates corporate misconduct
and creates a conundrum for its prosecutors. The effects of
anonymity are amplified in large bureaucracies, as both size and
division of labor make responsibility more challenging to assess and
penalties for misconduct more challenging to inflict. Two interesting
studies found that the de-individuating effect of Halloween costumes
increased morally questionable behaviour. In a different experiment
ostensibly about creativity and stress, individuals were given the
opportunity to give electric shocks to other participants. Half the
participants were de-individuated by wearing baggy lab coats,
nametags with only numbers on them, and hoods or masks to cover
their faces; the other half wore “individuating” nametags and no
costumes. Psychologically shielded from the consequences of their
actions through their costumes, de-individuated (more anonymous)
participants delivered twice the level of shock to “victims”,
compared to individuated participants (Zimbardo, 1970).
In their analysis of the My Lai massacre in Vietnam, Kelman and
Hamilton cite hierarchy as a cause of this “crime of obedience”: the
massacre was initiated by an order that became perverted as it
filtered through a chain of subordinates (1989). Bandura calls the
psychological passing of moral responsibility up through the chain of
command “displacement of responsibility” (Bandura, 1990, 1999).
Milgram refers to it as the “agentic shift”, a transition from an
autonomous state where one feels a personal sense of responsibility
for one’s actions to feeling like one is simply an agent acting on
someone else’s behalf. Similarly, when former U.S. National Security
Advisor and Secretary of State Condoleezza Rice was asked about her
involvement in authorizing practices during the Iraq War that could
be considered torture, she said, “I didn’t authorize anything. I
conveyed the authorization of the administration to the agency”.
Together, these results suggest that the intrapersonal consequence
of immoral actions is self-deception that allows one to maintain a
positive self-image, rather than negative emotions such as guilt or
shame, particularly when the ethicality of those actions is open to
interpretation.
An agenda for future research: Regaining control of our moral
compass
• Interpersonal processes: Promoting moral exemplars or
referents: These findings suggest that choosing the right
exemplars of moral behavior—either positive in-group
members to emulate or negative out-group members from
which to differentiate oneself—may strengthen the magnet
inside one’s moral compass. Indeed, in their study of rescuers
of Jews during the Holocaust, Oliner and Oliner found that,
compared to non-rescuers, rescuers had more models in their
close social circles who demonstrated similar altruistic
behaviors, such as participation in the Resistance.
• Brown, Treviño, and Harrison (2005) define ethical leadership
as “the demonstration of normatively appropriate conduct
through personal actions and interpersonal relationships, and
the promotion of such conduct to followers through two-way
communication, reinforcement, and decision-making.” Thus,
leaders can be encouraged to use transactional efforts (e.g.,
communicating, rewarding, punishing, emphasizing ethical
standards) as well as modeling to influence their followers to
behave ethically. (First step in building an ethical
infrastructure). Communication in particular will be central in
the work of ethical leaders. As research on positive workplace
behavior suggests, people behave better toward each other
and the organization when high levels of procedural justice
exist (fair procedures, clearly communicated) and when leaders
are oriented toward acting in their followers’ best interests.
• Monitoring as a reminder of one’s best self: Individuals’ sense
of anonymity—which, as we have discussed, facilitates unethical
behavior—is undermined when they believe they are being
monitored. China’s face-recognition technology and
surveillance culture, and the associated reduction in crime, illustrate this. Monitoring is
certainly a key lever available to organizations and
governments as they try to influence individuals toward
exemplary behavior, but future research is needed to
disentangle when and under what conditions monitoring
systems lead to more ethical behavior and when they backfire.
• Careful and cognizant goal-setting: As we have seen, goals are
a primary means of motivating and directing behavior in
organizations, but they can backfire when it comes to ethical
behavior. Setting overly narrow or ambitious goals can blind
individuals to other important objectives and over-commitment
to goals can motivate individuals to do whatever it takes to
reach them.
9. “Morality Rebooted: Exploring Simple Fixes to Our Moral Bugs”, Ting
Zhang, Francesca Gino, Max H. Bazerman
Values-based approaches (which encourage ethical behaviour) and structure-based
approaches (which discourage unethical behaviour) together form the infrastructure
for ethical behaviour in organizations. Both methods should be used.
10. On Understanding Ethical Behavior and Decision Making: A
Behavioral Ethics Approach, David De Cremer, David M. Mayer, and
Marshall Schminke, © 2010 Business Ethics Quarterly 20:1 (January
2010); ISSN 1052-150X pp. 1-6
• Despite this awareness, irresponsible and unethical
behaviors and decisions still emerge. How can we explain
this? Early explanations focusing on the underlying causes
of these ethical failures promoted the idea that most
business scandals were the responsibility of a few bad
apples (De Cremer 2009). This assumption is intuitively
compelling and attractive in its simplicity. Further, at a
practical level it facilitates identification and punishment
of those deemed to be responsible. However, recent
research has focused instead on how ethical failures
witnessed in society and organizations are not the result
of so-called bad apples but rather involve a complex mix
of individual and contextual factors (Bazerman and Banaji,
2004). This research suggests almost all of us may commit
unethical behaviors, given the right circumstances. This
idea is one of the major assumptions used in the
emerging field of behavioural ethics.
11. When Ethical Leader Behavior Breaks Bad: How Ethical
Leader Behavior Can Turn Abusive via Ego Depletion and
Moral Licensing, Szu-Han (Joanna) Lin, Jingjing Ma, and
Russell E. Johnson Michigan State University, Journal of
Applied Psychology © 2016 American Psychological
Association 2016, Vol. 101, No. 6, 815–830
• In this study we adopt an actor-centric perspective to
examine possible costs of exhibiting ethical leader
behaviors for leaders. To do so, we draw from theories of
ego depletion and moral licensing. One such cost may be
feeling mentally fatigued from the added effort needed to
display ethical leader behaviors over and above formal
leader role requirements, leaving actors depleted and
with insufficient willpower to control subsequent deviant
acts. While there is empirical evidence verifying that
depletion leads to abusive behavior, there are alternative
explanations as well. For example, employees who
emphasize and model morally laudable behavior may
subsequently feel it is permissible to “get away with”
questionable behaviors because they have already
demonstrated that they are ethical.
• First, we suggest that it is possible for managers to
engage in both ethical and abusive leader behaviors.
Although ethical leader behaviors and abusive leader
behaviors have been each widely examined, it is unclear
whether and how these behaviors might co-occur within
the same person. Past research has assumed that
managers are consistent in their displays of leader
behavior, but this assumption does not appear to be
accurate.
• According to ego depletion theory (Baumeister et al.,
1998), people have a limited pool of regulatory resources
to exert self-control. Over time, people may feel depleted
as these resources become diminished owing to the
performance of activities requiring self-control. According
to Baumeister, Gailliot, DeWall, and Oaten (2006),
activities such as controlling or suppressing thoughts,
making complicated decisions, and concentrating one’s
attention are especially depleting. In the workplace, for
example, it has been found that acting consistent with
rules and norms pertaining to procedural fairness and
vigilantly monitoring for potential problems are especially
depleting.
• Ethical norms are not always aligned with people’s natural
(and often self-interested) tendencies, thus self-control
on the part of leaders is required to override these
tendencies in favor of more egalitarian ones. For example,
being consistent in one’s treatment of others by
refraining from showing favoritism to a particular
individual is a key tenet of ethical conduct in
organizations. However, doing so can be quite depleting
because leaders often have a mix of high and low quality
relationships with followers, thus they must suppress
both their preferential biases toward liked followers and
their prejudicial biases toward disliked followers.
Suppressing such biases with the end goal of being
consistent and fair to everyone is depleting.
• Abiding by ethical norms is also depleting when it runs
counter to self-interest, and there are situations where
doing so may result in lower performance. For example, a
CEO of a power company who invests in proper filtration
equipment and disposal procedures for the toxins the
company produces will incur higher expenses than if
the company bypassed safety and environmental
standards by releasing toxins directly into the air or water.
These higher expenses translate into smaller returns for
shareholders and a smaller performance bonus for the
CEO. Leaders may therefore find themselves caught
between doing “what’s right” versus “what’s profitable.”
• Acting ethically can be quite complex for leaders because
the list of stakeholders may also include people outside
the company (e.g., customers and clients, the local
community and government; McWilliams & Siegel, 2001).
The ramifications of ethical violations are also broader
than incidents of poor performance or minor counter-
productivity, as the former may include litigation,
government sanctions, and irreparable reputation loss.
These broader and more damaging consequences
complicate matters further by eliciting intense emotional
reactions that must be managed and by requiring damage
control to prevent the situation from escalating.
Managing these demands and complexities requires
resource-intensive information processing. Taken
together, exhibiting ethical leader behavior requires
nontrivial amounts of self-control on the part of the leader.
• There are two explanations as to why such licensing
effects occur. First, displaying morally laudable behavior is
a way for people to accumulate moral credits. When
there is a surplus of these credits in their moral ledger,
this excess can be used to “purchase” the right to deviate
from social and ethical norms. Second, displaying morally
laudable behavior can also bestow on actors the
credentials of having a commendable moral self-regard.
Moral self-regard, which is a part of people’s working self-
concept, is an assessment of how moral people believe
themselves to currently be, which can fluctuate from one
moment to the next. In other words, the moral credential
perspective posits that actors’ current moral self-regard
can alter whether they view their behavior as ethical or
not. Regardless of whether ethical acts bolster moral
credits or credentials, in both instances they provide
moral license to subsequently engage in deviant acts.
• First, our findings hint at the possibility that the indirect
effect via depletion may be larger than that of moral
credits.
The Fourth Industrial Revolution (4IR)
• The First Industrial Revolution (1750-1850): Steam power to mechanize production.
• The Second Industrial Revolution (1850-1950): Electric power and assembly-line production.
• The Third Industrial Revolution (1950-1980): Electronics and the Internet to revolutionise
communication and flatten the world.
• The Fourth Industrial Revolution (1980 and ongoing): NBIC technologies (nanotechnology,
biotechnology, information technology and cognitive science), leading to exponential change in
every field of human existence.
The Age of Transhumanism
The Ethics of Human Enhancement
What is Human Enhancement?
• Making something better than it was before through technological,
genetic, or chemical improvements to the ‘species-specific’ normal of
healthy human beings. Enhancement is therefore distinct from
therapy, which would involve making some “abnormality” more
“normal”.
• The enhancement options being discussed include radical extension of human health-span,
eradication of disease, elimination of unnecessary suffering, and augmentation of human
intellectual, physical, and emotional capacities
What Are The Human Enhancements So Far?
• The Bionic Man-Jesse Sullivan
• Braingate--allows a person to manipulate objects in the world using only the
mind
• Cochlear Implants and Night Vision and Silent Talk
• Affective BCIs: Electrocorticography (ECoG) and Electroencephalography (EEG)
• Exoskeletons and Flexible Battlesuits-MIT’s Soldier Nanotechnologies
• Respirocyte: an artificial nano-engineered red blood cell
• Pharmacological Enhancements. Stimulant drugs-Ritalin and Adderall, used by
many college students to boost concentration and ward off sleep; Provigil, used
to improve working memory and brighten mood; anabolic steroids; Viagra;
Aricept, which improves verbal and visual memory; Resveratrol, a life extender.
What Are The Human Enhancements So Far?
• Hans Moravec, former director of robotics at Carnegie-Mellon University and
developer of advanced robots for both NASA and the military, popularized the
idea of living perpetually via a digital substrate.
• He envisioned a procedure in which the entirety of the information encoded
within the neurons of a human brain could be read, copied, and uploaded to a
computer.
• Immortality through software existence.
• Embodied Cognition is the opposite of brain emulation: the view that the body is an
extension of the mind and helps the mind to think, recognize and decide.
The Ethics of Human Enhancement
• Ethical Issues of Affective Brain–Computer Interfaces: a system that uses
neurophysiological signals to extract features that are related to affective states
(e.g. emotions and moods). Data protection and informed consent,
neurohacking, marketing and political manipulation, inauthentic\fake emotions
• Exacerbated Social Inequality
• Exacerbated Corporate Inequality at all Managerial Levels
• The Ethics of Autonomy , Choice and Social Life of the first Transhumans
• The Ethics of the Emaciated Family
• The Ethics of the Imbalanced Transhuman
• The Geo-Ethics of the Aryan Race
• Disease-free longevity for the privileged
• Superintelligence for the enhanced
Artificial Intelligence
Basic Understandings
The Ethics of Artificial
Intelligence
• Isaac Asimov’s Three Laws of Robotics:
• A robot may not injure a human being or, through inaction, allow a human
being to come to harm.
• A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not
conflict with the First or Second Laws.
A Brief History of AI
• Algorithms and Machine Learning:
• Artificial intelligence is based on the assumption that the process of human thought
can be mechanized.
• In 1951, Marvin Minsky, with Dean Edmonds, built the first neural net machine, the
SNARC, Stochastic Neural Analog Reinforcement Calculator, that started to mimic the
human brain.
• In 1955, Allen Newell and Herbert A. Simon created the “Logic Theorist” that solved
the venerable mystery of mind\body existence. Was the mind an ethereal substance
that was not made of matter? These people proved that the mind was a replicable
neural network that worked on chemistry and electricity, whereas theirs worked on
mechanical parts, electricity and algorithms!
• The Turing Test: If a machine could carry on a conversation (over a teleprinter) that
was indistinguishable from a conversation with a human being, then it was
reasonable to say that the machine was "thinking".
What is Big Data and Strong AI?
• Big data refers to a collection of data that cannot be captured,
managed, and processed by conventional software tools within a
certain time frame.
• Big data means that instead of random analysis (sample survey), all
data is used for analysis!
• General intelligence is the ability to solve any problem, rather than
finding a solution to a particular problem. Artificial general
intelligence (or "AGI") is a program which can apply intelligence to a
wide variety of problems, in much the same way as humans can. Also
referred to as "strong AI“.
• “Strong AI” is predicted to become reality in 2045!!
What is this AI Revolution?
• It is the programmed agglomeration of algorithms that enable this
intelligence, embodied or disembodied, to analyse Big Data at super
speeds that the unenhanced human brain cannot match, and to arrive at
correct and safe conclusions for decision-making.
• The fuel is Big Data and the technology is Machine learning
• AI can be with “man-in-the-loop”; “man-on-the loop” or “completely
independent”
• When AI becomes recursive and learns to create its own algorithms
and becomes independent and goes beyond human intelligence and
control, the point of “Singularity” would have arrived.
The Value Alignment and Control Problems
• How do you ensure that the values of AMAs are aligned to that of Human Beings?
• Due to the inherent autonomy of these systems, the ethical considerations have
to be conducted by the machines themselves. This means that these autonomous cognitive
machines need a theory with the help of which they can, in a specific
situation, choose the action that best adheres to moral standards.
• Which Ethical Tradition between Deontology, Utilitarianism and Virtue Ethics is
currently favoured and why?
• Deontology has a serious problem when it comes to ethical dilemmas. To lie to
save a life is not allowed in Deontology. How do you algorithmize this in an AMA?
• There is no room for learning in Deontology—the imperatives are categorical.
How do you assess what is “good” for you, let alone for others?
• Utilitarianism as an ethical theory for AMAs fails again in the hedonistic
calculations: the time available between the calculation and the act is very limited.
• The calculation becomes even more complicated when fecundity and propinquity
have to be considered (a toy sketch of both approaches follows this list).
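A minimal, purely illustrative Python sketch of the contrast drawn above (not from any actual AMA implementation; the duty list, options and utility numbers are all invented for the example): a deontological check filters out any option that violates a categorical duty, while a utilitarian check sums estimated utilities across affected parties.

from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    name: str
    violates: List[str]        # duties the option would break (hypothetical labels)
    utilities: List[float]     # estimated net utility for each affected party

FORBIDDEN = {"lying", "harming_a_human"}   # hypothetical categorical duties

def deontological_filter(options):
    # Keep only options that break no duty; a dilemma appears when nothing survives.
    return [o for o in options if not set(o.violates) & FORBIDDEN]

def utilitarian_choice(options):
    # Bentham-style calculus reduced to a sum; each estimate costs time the agent may not have.
    return max(options, key=lambda o: sum(o.utilities))

options = [
    Option("tell_the_truth", violates=[], utilities=[-5.0, 1.0]),
    Option("lie_to_save_a_life", violates=["lying"], utilities=[9.0, -0.5]),
]

print("Deontologically permissible:", [o.name for o in deontological_filter(options)])
print("Utilitarian pick:", utilitarian_choice(options).name)

On these invented numbers the two traditions disagree, which is exactly the dilemma described above: the deontological filter forbids the lie, while the utilitarian sum prefers it.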
The Value Alignment Problem
• Why are AI scientists veering towards Virtue Ethics?
• Because Machine learning is the improvement of a machine’s
performance of a task through experience and Aristotle’s virtue ethics is
the improvement of one’s virtues through experience (a toy illustration of this parallel follows).
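A minimal sketch of that parallel, assuming a toy reinforcement-style learner (the action names and reward values are invented): the agent’s “disposition” toward the temperate action strengthens the more it is practised and rewarded, loosely mirroring virtue acquired through habituation.

import random

ACTIONS = ["temperate", "excessive"]
value = {a: 0.0 for a in ACTIONS}     # learned disposition per action
counts = {a: 0 for a in ACTIONS}

def reward(action):
    # Hypothetical feedback: temperance usually pays off, excess rarely does.
    return random.gauss(1.0, 0.3) if action == "temperate" else random.gauss(0.2, 0.3)

def choose(epsilon=0.1):
    # Mostly act on the current disposition, occasionally explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=value.get)

for _ in range(500):                  # 500 occasions for "practice"
    a = choose()
    counts[a] += 1
    value[a] += (reward(a) - value[a]) / counts[a]   # running-average update

print("Learned dispositions:", {a: round(v, 2) for a, v in value.items()})
print("Times each action was practised:", counts)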
Aristotle’s Soul Theory
VIRTUES OF REASON / VIRTUES OF CHARACTER
• Craftsmanship = Techne
• Science = Episteme
• Manufacturing = Poesis
• Wisdom = Sophia
• Practical Wisdom = Phronesis
• Intuitive Thought = Nous
VIRTUES OF PURE REASON / VIRTUES OF PRACTICAL REASON
• Theoretical Life = Weak AI
• Practical Life = Practical Wisdom and Morality = Strong AI
The Value Alignment Problem
• Whatever the ethical tradition employed, the AMA has to be able to explain to humans its logic for arriving at its decisions.
• There is a need for an "explainability" algorithm that runs in parallel within the AMA. Without explainability there can be no legal responsibility, much as with someone claiming insanity.
• While AMAs based on deontology can point towards the principles
and duties which have guided their actions, a consequentialist AMA
can explain why its actions have led to the best consequences.
• An AMA based on virtue ethics on the other hand would have to
show how its virtues, which gave rise to its actions, have been formed
through experience. A tough call for Machine Learning.
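• For flavour, here is a toy sketch of what a rule-based explanation trace might look like (the duties, action fields and helper names are hypothetical, not any deployed AMA): a deontological agent can return the violated duty as its explanation, which is exactly what a learned, weight-encoded virtue cannot do so easily.

```python
# Toy deontological AMA returning a human-readable trace of the duty that decided
# the outcome. Rules, field names and the example action are invented for illustration.
RULES = [
    ("do_not_deceive",  lambda act: not act.get("involves_lying", False)),
    ("respect_consent", lambda act: act.get("consent_obtained", True)),
    ("avoid_harm",      lambda act: act.get("expected_harm", 0) == 0),
]

def decide(action):
    trace = []
    for name, rule in RULES:
        ok = rule(action)
        trace.append(f"duty '{name}': {'satisfied' if ok else 'VIOLATED'}")
        if not ok:
            return False, trace   # the trace is the explanation
    return True, trace

permitted, explanation = decide({"involves_lying": True, "expected_harm": 0})
print(permitted)                  # False
print("\n".join(explanation))     # shows which duty blocked the action
```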
The Value Alignment Problem
• AMAs based on Virtue Ethics solve the two major challenges of contemporary AI safety research: the control problem and the value alignment problem.
• A machine endowed with the virtue of temperance would not have any desire for
excess of any kind, not even for exponential self-improvement, which might lead
to a superintelligence posing an existential risk for humanity. Since virtues are an
integral part of one’s character, the AI would not have the desire of changing its
virtue of temperance.
Should We Allow AGIs?—The Control Problem
• True AGIs will be capable of universal problem solving and recursive self-improvement.
• Consequently, they have the potential of outcompeting humans in any domain, essentially making humankind unnecessary and so subject to extinction.
• Kurzweil holds that "intelligence is inherently impossible to control," and that despite any human attempts at taking precautions, "by definition . . . intelligent entities have the cleverness to easily overcome such barriers."
• This presents us with perhaps the ultimate challenge of machine ethics: How do you build an AI which, when it executes, becomes more ethical than you?
• An "AI Safety Engineering" field is emerging: a common theme in AI safety research is the possibility of keeping a superintelligent agent in sealed hardware so as to prevent it from doing any harm to humankind -- Eric Drexler
Should We Allow AGIs?—The Control Problem
• Nick Bostrom, a futurologist, has proposed an idea for an Oracle AI (OAI), which would be only capable of answering questions.
• Finally, in 2010 David Chalmers proposed the idea of a "leakproof" singularity. He suggested that for safety reasons, AI systems first be restricted to simulated virtual worlds until their behavioral tendencies could be fully understood under controlled conditions.
• The Ted Kaczynski Manifesto: "...What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions... we will be so dependent on them that turning them off would amount to suicide."
• Technological slavery.
The Ethics of AI
• The Ethics of Economic Inequality—Between nations and within nations. Need for a Universal Minimum Wage Solution—Thomas Piketty
• The ethics of human substitution by industrial robots in employment (not a problem in economies with declining birthrates and shrinking populations)
• Since AGIs can outsmart human cognitive and emotional intelligence, they would be sapient and perhaps even sentient, and thus capable of robot suffering. Using such robots merely as a means would then be unethical
• The issue of collateral damage and trigger-happiness in AI Warfare
• The ethics of increasing E-Waste due to robotisation, including radio-frequency radiation
• The Ethics of Face Recognition Technology-Loss of Privacy vs Reduction in Crime
• The Ethics of Singularity- Should we allow this?
A Dystopian View of AI
The Ethics of Human Dignity
What is Human Personhood?
• Immanence: Whereby we are embodied spirits, ends-in-themselves, with a human and divine destiny.
• Individuality: Unique, non-repeatable, irreducible and irreplaceable
• Sociality: We do not live, move and have our being in isolation
• Transcendence: We have a soul that is immortal, thereby connecting us to an immortal "Other"
St. Augustine – "... that principle within us by which we are like God, made in the image of God."
Plato – "... that when a person has died, his soul exists."
What is Human Dignity?
• There is no single definition of human dignity as the term is "abstract and highly ambiguous" (Kass 2008: 306; Fukuyama 2002: 148). A working definition, however, would be:
• "The dignity of a person is that whereby a person excels other beings, especially other animals, and merits respect and consideration from other persons" (Lee & George 2008: 410).
• HD is something special about human nature that confers on us a "moral status" that makes us superior to other animals but "equal" among all humans (Lee & George 2008: 415).
The Silent Scream
Child labour
The Atlantic slave trade
Forced to fish: Slavery on Thailand's trawlers
“ The postulate that personhood is a distinctly
human state within the natural order is
basically an assertion of human
exceptionalism. ”
“ Humanity as characterized by morality and
personhood requires no divine principle, nor
Imago Dei, but only the relentless force of
natural selection”
- Charles Darwin
2021-22 Cullings for Human Enhancement
1. Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms--Differences and Relations, Francesca Ferrando, Columbia University, Existenz, Volume 8, No. 2, Fall 2013, 26-32
◼ Transhumanism offers a very rich debate on the impact of technological and
scientific developments in the evolution of the human species; and still, it holds a
humanistic and humancentric perspective which weakens its standpoint: it is a
"Humanity Plus" movement, whose aim is to "elevate the human condition." On the
contrary, the critique of speciesism has become an integral part of the posthumanist approach,
formulated on a post-anthropocentric and post-humanistic episteme based on
decentralized and non-hierarchical modes. Although posthumanism investigates the
realms of science and technology, it does not recognize them as its main axes of
reflection, nor does it limit itself to their technical endeavors, but it expands its
reflection to the technologies of existence.
2. Transhumanist Values, NICK BOSTROM, Oxford University, Faculty of Philosophy,
Ethical Issues for the 21st Century, ed. Frederick Adams (Philosophical
Documentation Center Press, 2003); reprinted in Review of Contemporary
Philosophy, Vol. 4, May 2005
◼ It promotes an interdisciplinary approach to understanding and evaluating the
opportunities for enhancing the human condition and the human organism opened
up by the advancement of technology.
◼ The enhancement options being discussed include radical extension of human
health-span, eradication of disease, elimination of unnecessary suffering, and
augmentation of human intellectual, physical, and emotional capacities. Other
transhumanist themes include space colonization and the possibility of creating
superintelligent machines, along with other potential developments that could
profoundly alter the human condition. The ambit is not limited to gadgets and
medicine, but encompasses also economic, social, institutional designs, cultural
development, and psychological skills and techniques.
◼ Transhumanism does not entail technological optimism. While future technological
capabilities carry immense potential for beneficial deployments, they also could be
misused to cause enormous harm, ranging all the way to the extreme possibility of
intelligent life becoming extinct. Other potential negative outcomes include
widening social inequalities or a gradual erosion of the hard-to-quantify assets that
we care deeply about but tend to neglect in our daily struggle for material gain, such
as meaningful human relationships and ecological diversity. Such risks must be taken
very seriously, as thoughtful transhumanists fully acknowledge.
◼ The limitations of the human mode of being are so pervasive and familiar that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté. Let us consider some of the more basic ones: Lifespan, Intellectual capacity, Bodily functionality, and Mood, energy, and self-control.
◼ In Christian theology, some souls will be allowed by God to go to heaven after their
time as corporal creatures is over. Before being admitted to heaven, the souls would
undergo a purification process in which they would lose many of their previous
bodily attributes. Skeptics may doubt that the resulting minds would be sufficiently
similar to our current minds for it to be possible for them to be the same person. A
similar predicament arises within transhumanism: if the mode of being of a
posthuman being is radically different from that of a human being, then we may
doubt whether a posthuman being could be the same person as a human being,
even if the posthuman being originated from a human being.
◼ We can, however, envision many enhancements that would not make it impossible
for the post-transformation someone to be the same person as the pre-
transformation person. A person could obtain quite a bit of increased life
expectancy, intelligence, health, memory, and emotional sensitivity, without ceasing
to exist in the process. A person’s intellectual life can be transformed radically by
getting an education. A person’s life expectancy can be extended substantially by
being unexpectedly cured from a lethal disease. Yet these developments are not
viewed as spelling the end of the original person. In particular, it seems that
modifications that add to a person’s capacities can be more substantial than
modifications that subtract, such as brain damage. If most of what someone currently is,
including her most important memories, activities, and feelings, is preserved, then
adding extra capacities on top of that would not easily cause the person to cease to
exist.
◼ Transhumanism promotes the quest to develop further so that we can explore hitherto inaccessible realms of value. Technological enhancement of human
organisms is a means that we ought to pursue to this end. There are limits to how
much can be achieved by low-tech means such as education, philosophical
contemplation, moral self-scrutiny and other such methods proposed by classical
philosophers with perfectionist leanings, including Plato, Aristotle, and Nietzsche, or
by means of creating a fairer and better society, as envisioned by social reformists
such as Marx or Martin Luther King. This is not to denigrate what we can do with the
tools we have today. Yet ultimately, transhumanists hope to go further.
◼ What is needed for the realization of the transhumanist dream is that technological
means necessary for venturing into the post-human space are made available to
those who wish to use them, and that society be organized in such a manner that
such explorations can be undertaken without causing unacceptable damage to the
social fabric and without imposing unacceptable existential risks.
◼ Existential risk – one where an adverse outcome would either annihilate Earth-
originating intelligent life or permanently and drastically curtail its potential. Several
recent discussions have argued that the combined probability of the existential risks
is very substantial. The relevance of the condition of existential safety to the
transhumanist vision is obvious: if we go extinct or permanently destroy our
potential to develop further, then the transhumanist core value will not be realized.
Global security is the most fundamental and non-negotiable requirement of the
transhumanist project. Technological progress in this field must be allowed to proceed unhindered and, finally, the benefits must be widely accessible and should not be available only to an elite section of humankind.
Cullings from AI Readings for XLRI
1. Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach, Roman V. Yampolskiy, Department of Computer Engineering and Computer Science, University of Louisville, in V.C. Müller (Ed.): Philosophy and Theory of Artificial Intelligence, SAPERE 5, pp. 389–396, Springer-Verlag Berlin Heidelberg 2012
The great majority of published papers are purely philosophical in nature and do little
more than reiterate the need for machine ethics and argue about which set of moral
convictions would be the right ones to implement in our artificial progeny
(Kantian [33], Utilitarian [20], Jewish [34], etc.). However, since ethical norms
are not universal, a “correct” ethical code could never be selected over others to
the satisfaction of humanity as a whole.
Consequently, we propose that purely philosophical discussions of ethics for
machines be supplemented by scientific work aimed at creating safe machines in
the context of a new field we will term “AI Safety Engineering.” Some concrete
work in this important area has already begun [17, 19, 18]. A common theme in
AI safety research is the possibility of keeping a superintelligent agent in a sealed
hardware so as to prevent it from doing any harm to humankind. Such ideas originate with scientific visionaries such as Eric Drexler who has suggested confining
transhuman machines so that their outputs could be studied and used safely [14].
Similarly, Nick Bostrom, a futurologist, has proposed [9] an idea for an Oracle AI
(OAI), which would be only capable of answering questions. Finally, in 2010
David Chalmers proposed the idea of a “leakproof” singularity [12]. He suggested
that for safety reasons, AI systems first be restricted to simulated virtual worlds
until their behavioral tendencies could be fully understood under the controlled
conditions.
Roman Yampolskiy has proposed a formalized notion of AI confinement protocol which represents "AI-Boxing" as a computer security challenge [46]. He defines the Artificial Intelligence Confinement Problem (AICP) as the challenge of
restricting an artificially intelligent entity to a confined environment from which it
can’t exchange information with the outside environment via legitimate or covert
channels if such information exchange was not authorized by the confinement authority. An AI system which succeeds in violating the CP protocol is said to have
escaped [46].
Similarly we argue that certain types of artificial intelligence research fall under
the category of dangerous technologies and should be restricted. Classical AI research in which a computer is taught to automate human behavior in a particular
domain such as mail sorting or spellchecking documents is certainly ethical and
does not present an existential risk problem to humanity. On the other hand, we argue that
Artificial General Intelligence (AGI) research should be considered unethical. This follows logically from a number of observations. First, true AGIs will be capable of universal problem solving and recursive self-improvement. Consequently they have potential of outcompeting humans in any domain essentially
making humankind unnecessary and so subject to extinction. Additionally, a truly
AGI system may possess a type of consciousness comparable to the human type
making robot suffering a real possibility and any experiments with AGI unethical
for that reason as well.
A similar argument was presented by Ted Kaczynski in his famous
manifesto [26]: “It might be argued that the human race would never be foolish
enough to hand over all the power to the machines. But we are suggesting neither
that the human race would voluntarily turn power over to the machines nor that
the machines would willfully seize power. What we do suggest is that the human
race might easily permit itself to drift into a position of such dependence on the
machines that it would have no practical choice but to accept all of the machines
decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines
make more of their decision for them, simply because machine-made decisions
will bring better result than man-made ones. Eventually a stage may be reached at
which the decisions necessary to keep the system running will be so complex that
human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines
off, because they will be so dependent on them that turning them off would amount
to suicide." (Kaczynski, T.: Industrial Society and Its Future. The New York Times, September 19, 1995)
Humanity should not put its future in the hands of the machines since it will not
be able to take the power back. In general a machine should never be in a position
to terminate human life or to make any other non-trivial ethical or moral judgment
concerning people.
2. Why and How Should Robots Behave Ethically?, Benjamin Kuipers, Computer Science & Engineering, University of Michigan, 2260 Hayward Street, Ann Arbor, Michigan 48109 USA, Email: kuipers@umich.edu
For an intelligent robot to function successfully in our society, to cooperate with
humans, it must not only be able to act morally and ethically, but it must also
be trustworthy. It must earn and keep the trust of humans who interact with it.
If every participant contributes their share, everyone
gets a good outcome. But each individual participant may do even better by
optimizing their own reward at the expense of the others. With self-centered utility
functions, each participant “rationally” maximizes their own expected utility,
often leading to bad outcomes for everyone.
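To make the free-riding point concrete, here is a small sketch with an invented two-player payoff table (the numbers are assumptions, chosen only to have the usual dilemma structure): each self-centred maximiser finds defection the better reply to anything the other does, and both end up worse off than under mutual cooperation.

```python
# Illustrative payoff table for the free-riding problem described above; the
# numbers are made up but keep the standard dilemma ordering.
PAYOFFS = {  # (my_move, other_move) -> (my_payoff, other_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_selfish_reply(other_move):
    # Each "rational" self-centred utility maximiser picks the move with the
    # higher personal payoff, whatever the other player does.
    return max(("cooperate", "defect"),
               key=lambda my: PAYOFFS[(my, other_move)][0])

print(best_selfish_reply("cooperate"), best_selfish_reply("defect"))              # defect defect
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])   # (1, 1) vs (3, 3)
```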
• Should you use a sharp knife to cut into the body of a human being? Of course not, unless you are a qualified surgeon performing a necessary operation. (Deontology: a rule with an exception.)
• If you are that surgeon, is it permissible to sacrifice this patient in order to save the lives of five others? Of course not! (Virtue ethics: a good surgeon keeps faith with the patient.)
• Is it OK to throw the switch that saves five lives by directing a runaway trolley onto a side track, where it will kill one person who would have been safe? Well, . . . (Deontology says it's wrong to allow preventable deaths; Utilitarianism says fewer deaths is better; Virtue ethics says the virtuous person can make hard choices.)
I argue that heuristics based on utilitarianism (decision theory), deontology (rule-
based and constraint-based systems), and virtue ethics (case-based reasoning) are
all important tools in the toolkit for creating artificial agents capable of participating successfully in our society. Each tool is useful in certain contexts, and
perhaps less useful in others.
1. The Virtuous Machine - Old Ethics for New Technology?, Nicolas Berberich and Klaus Diepold, Department of Electrical and Computer Engineering, Technical University of Munich; Department of Informatics, Technical University of Munich; Munich Center for Technology in Society. E-mail: n.berberich@tum.de
Due to the inherent autonomy of these systems, the ethical considerations have to be
conducted by themselves. This means, that these autonomous cognitive machines are in
need of a theory, with the help of which they can, in a specific situation, choose the action
that adheres best to the moral standards.
This discrepancy between what people believe that technology can do, based on its
appearance, and what it
actually can do, would not only elicit a strong uncanny valley effect, but also pose a large
safety risk. Taken together, we predict that this would lead to an acceptance problem of the
technology. If we want to avoid this by jumping over the uncanny valley, we have to start
today by thinking about how to endow autonomous cognitive systems with more human-
like behavior. The position that we argue for in this paper is that the last discrepancy
between the valley and its right shore lies in virtuous moral behavior. In the near future
we will have autonomous cognitive machines whose actions will be akin to human actions,
but without consideration of moral implications they will never be quite alike, leading to
cognitive dissonance and rejection. We believe that taking virtue ethics as the guiding moral
theory for building moral machines is a promising approach to avoid the uncanny valley and
to induce acceptance.
Cybernetics can be seen as a historical and intellectual precursor of artificial intelligence
research. While it had strong differences with the cognitivistic GOFAI (good old-fashioned
AI), cybernetic ideas are highly influential in modern AI. The currently successful field of artificial neural networks (synonymous terms are connectionism and deep learning) originated
from the research of the cyberneticians McCulloch, Pitts and Rosenblatt. Goal-directed
planning is a central part of modern AI and especially of advanced robotics. In contrast
to other forms of machine learning like supervised or unsupervised learning, reinforcement
learning is concerned with the goal-driven (and therefore teleological) behavior of agents.
Applied to AI ethics this means that a machine cannot have practical wisdom (and thus can’t
act morally) before it has learned from realistic data. Machine learning is the improvement
of a machine’s performance of a task through experience and Aristotle’s virtue ethics is the
improvement of one’s virtues through experience. Therefore, if one equates the task
performance with virtuous actions, developing a virtue ethics-based machine appears
possible.
A closer look at the structure of Aristotle’s ergon-argument allows to break with two
common misconceptions which seem to render a virtue ethical approach in machine ethics
impossible. The first misconception is ethical anthropocentrism, after which only humans
can act morally. This might have been correct in the past, but only because humans have
been the only species capable of higher-level cognition, which, according to Aristotle, is
a requirement for ethical virtues and thus moral action. If there was another species, for
example a machine, with the same capacity for reason and dispositions of character, then it
appears probable that its arete would also lie in excellent use and improvement of those.
The second misconception of Aristotle’s virtue ethics is that it takes happiness to be the goal
and measure of all actions. Since machines are not capable of genuine feelings of happiness,
it is argued, that virtue ethics can’t be applied to them. This argument is based on an
erroneous understanding of eudaimonia. Aristotle does not mean psychological states of
happiness nor maximized pleasure, as John Locke defines ’happiness’. The Greek term
eudaimonia has a much broader meaning and refers mainly to a successful conduct of life
(according to one's ergon). A virtuous machine programmed to pursue eudaimonia would
therefore not be prone to wireheading, which is the artificial stimulation of the brain’s
reward center to experience pleasure.
Out of the three subcategories of machine learning, supervised learning, unsupervised
learning and reinforcement learning (RL), the latter is the lifeworldly approach. In contrast
to the other two, RL is based on dynamic interaction with the environment, of which the
agent typically has only imperfect knowledge.
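As a minimal sketch of this "lifeworldly" contrast (not drawn from the paper itself), the toy agent below improves a disposition purely through interaction: the hidden "approval" probabilities stand in for feedback from moral exemplars and are invented, as are the action names.

```python
import random

ACTIONS = ["tell_truth", "flatter", "stay_silent"]
TRUE_APPROVAL = {"tell_truth": 0.8, "flatter": 0.3, "stay_silent": 0.5}  # hidden from the agent

estimates = {a: 0.0 for a in ACTIONS}   # the agent's imperfect knowledge of its world
counts = {a: 0 for a in ACTIONS}

def act(epsilon=0.1):
    if random.random() < epsilon:                    # occasional exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: estimates[a])  # exploit the habituated disposition

for _ in range(5000):
    a = act()
    reward = 1.0 if random.random() < TRUE_APPROVAL[a] else 0.0  # exemplar feedback
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]          # incremental average

print(max(ACTIONS, key=lambda a: estimates[a]))  # usually "tell_truth" after enough experience
```

Unlike supervised learning from a fixed labelled set, nothing here is known in advance; the disposition is formed, as the passage puts it, through goal-directed interaction with the environment.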
This partition originated in Aristotle’s soul theory in which he lists virtues of reason
(dianoetic virtues) next to virtues of character (ethical virtues) as properties of the
intelligent part of the soul. The virtues of reason comprise the virtues of pure reason and
the virtues of practical reason. Pure reason includes science (episteme), wisdom (sophia) and intuitive thought (nous). Practical reason on the other hand refers to the virtues of craftsmanship (techne), of making (poiesis) and practical wisdom (phronesis). According to
this subdivision in pure and practical reason, there exist two ways to lead a good life in the
eudaimonic sense: the theoretical life and the practical life. AI systems can lead a theoretical
life of contemplation, e.g. when they are applied to scientific data analysis, but to lead a
practical life they need the capacity for practical wisdom and morality. This distinction in
theoretical and practical life of an AI somewhat resembles the distinction into narrow and
general AIs, where narrow AI describes artificial intelligence systems that are focused on
performing one specific task (e.g. image classification) while general AI can operate in more
general and realistic situations.
In contrast to deontology and consequentialism, virtue ethics has a hard time giving
reasons for its actions (they certainly exist, but are hard to codify). While deontologists can
point towards the principles and duties which have guided their actions, a consequentialist
can explain why her actions have led to the best consequences. An AMA based on virtue
ethics on the other hand would have to show how its virtues, which gave rise to its actions,
have been formed through experience. This poses an even greater problem if its capability
to learn virtues has been implemented as an artificial neural network, due to it being almost
impossible to extract intuitively understandable reasons from the many network weights. In
this instance, the similarity between virtue ethics and machine learning is disadvantageous.
Without being able to give reasons to one’s actions, one cannot take over responsibility,
which is a concept underlying not only our insurance system but also our justice system. If
the actions of an AMA produce harm then someone has to take responsibility for it and the
victims have a right to explanation. The latter has recently (May 2018) been codified by the
EU General Data Protection Regulation (GDPR) with regard to all algorithmic decisions.
Condensed to the most important ideas, this work has shown that
1. Virtue ethics fits nicely with modern artificial intelligence research and is a promising
moral theory as basis for the field of AI ethics.
2. Taking the virtue ethics route to building moral machines allows for a much broader
approach than simple decision-theoretic judgment of possible actions. Instead it takes
other cognitive functions into account like attention, emotions, learning and actions.
Furthermore, by discussing several virtues in detail, we showed that virtue ethics is a
promising moral theory for solving the two major challenges of contemporary AI safety
research, the control problem and the value alignment problem. A machine endowed with
the virtue of temperance would not have any desire for excess of any kind, not even for
exponential self-improvement, which might lead to a superintelligence posing an existential
risk for humanity. Since virtues are an integral part of one’s character, the AI would not
have the desire of changing its virtue of temperance. Learning from virtuous exemplars
has been a process of aligning values for centuries (and possibly for all of human history),
thus building artificial systems with the same imitation learning capability appears to be a
reasonable approach.
2. Machines That Know Right And Cannot Do Wrong: The Theory and Practice of
Machine Ethics, Louise A. Dennis and Marija Slavkovik
“The fact that man knows right from wrong proves his intellectual superiority to the other
creatures; but the fact that he can do wrong proves his moral inferiority to any creatures
that cannot.”– Mark Twain
Wallach and Allen [35, Chapter 2] distinguish between operational morality, functional
morality, and full moral agency. An agent has operational morality when the moral
significance of her actions is entirely scoped by the agent's designers. An agent has
functional morality when the agent is able to make moral judgements when choosing an
action, without direct human instructions.
3. What happens if robots take the jobs? The impact of emerging technologies on
employment and public policy By Darrell M. West
In this paper, I explore the impact of robots, artificial intelligence, and machine learning. In
particular, I study the impact of these emerging technologies on the workforce and the
provision of health benefits, pensions, and social insurance. If society needs fewer workers
due to automation and robotics, and many social benefits are delivered through jobs, how
are people outside the workforce for a lengthy period of time going to get health care and
pensions?
Robots are expanding in magnitude around the developed world. Figure 1 shows the
numbers of industrial robots in operation globally and there has been a substantial increase
in the past few years. In 2013, for example, there were an estimated 1.2 million robots in
use. This total rose to around 1.5 million in 2014 and is projected to increase to about 1.9
million in 2017. Japan has the largest number with 306,700, followed by North America
(237,400), China (182,300), South Korea (175,600), and Germany (175,200). Overall, robotics
is expected to rise from a $15 billion sector now to $67 billion by 2025.
In the contemporary world, there are many robots that perform complex functions.
According to a presentation on robots, “the early 21st century saw the first wave of
companionable social robots. They were small cute pets like AIBO, Pleo, and Paro. As
robotics become more sophisticated, thanks largely to the smart phone, a new wave of
social robots has started, with humanoids Pepper and Jimmy and the mirror-like Jibo, as
well as Geppetto Avatars’ software robot, Sophie. A key factor in a robot’s ability to be
social is their ability to correctly understand and respond to people’s speech and the
underlying context or emotion.”
Amazon has organized a “picking challenge” designed to see if robots can “autonomously
grab items from a shelf and place them in a tub.” The firm has around 50,000 people
working in its warehouses and it wants to see if robots can perform the tasks of selecting
items and moving them around the warehouse. During the competition, a Berlin robot
successfully completed ten of the twelve tasks. To move goods around the facility, the
company already uses 15,000 robots and it expects to purchase additional ones in the
future.
In the restaurant industry, firms are using technology to remove humans from parts of food
delivery. Some places, for example, are using tablets that allow customers to order directly
from the kitchen with no requirement of talking to a waiter or waitress. Others enable
people to pay directly, obviating the need for cashiers. Still others tell chefs how much of an
ingredient to add to a dish, which cuts down on food expenses.
There are computerized algorithms that have taken the place of human transactions. We
see this in the stock exchanges, where high-frequency trading by machines has replaced
human decision-making. People submit, buy, and sell orders, and computers match them in
the blink of an eye without human intervention. Machines can spot trading inefficiencies or
market differentials at a very small scale and execute trades that make money for people.
Some individuals specialize in arbitrage trading, whereby the algorithms see the same stocks
having different market values. Humans are not very efficient at spotting price differentials
but computers can use complex mathematical formulas to determine where there are
trading opportunities. Fortunes have been made by mathematicians who excel in this type
of analysis.
Machine-to-machine communications and remote monitoring sensors that remove humans
from the equation and substitute automated processes have become popular in the health
care area. There are sensors that record vital signs and electronically transmit them to
medical doctors. For example, heart patients have monitors that compile blood pressure,
blood oxygen levels, and heart rates. Readings are sent to a doctor, who adjusts medications
as the readings come in. According to medical professionals, “we’ve been able to show
significant reduction” in hospital admissions through these and other kinds of wireless
devices.
There also are devices that measure “biological, chemical, or physical processes” and deliver
“a drug or intervention based on the sensor data obtained.” They help people maintain an
independent lifestyle as they age and keep them in close touch with medical personnel.
“Point-of-care” technologies keep people out of hospitals and emergency rooms, while still
providing access to the latest therapies.
Implantable monitors enable regular management of symptoms and treatment. For
example, “the use of pulmonary artery pressure measurement systems has been shown to
significantly reduce the risk of heart failure hospitalization.” Doctors place these devices
inside heart failure patients and rely upon machine-to-machine communications to alert
them to potential problems. They can track heart arrhythmia and take adaptive moves as
signals spot troublesome warning signs.
Unmanned vehicles and autonomous drones are creating new markets for machines and
performing functions that used to require human intervention. Driverless cars represent one
of the latest examples. Google has driven its cars almost 500,000 miles and found a
remarkable level of performance. Manufacturers such as Tesla, Audi, and General Motors
have found that autonomous cars experience fewer accidents and obtain better mileage
than vehicles driven by people.
4. Case Western Reserve Journal of International Law 47 (2015), Issue 1
The Debate Over Autonomous Weapons Systems Dr. Gregory P. Noone and Dr.
Diana C. Noone
The debate over Autonomous Weapon Systems (AWS) has begun
in earnest with advocates for the absolute and immediate banning of
AWS development, production, and use planting their flag first. They
argue that AWS should be banned because these systems lack human
qualities, such as the ability to relate to other humans and to apply
human judgment, that are necessary to comply with the law. In
addition, the weapons would not be constrained by the capacity for
compassion, which can provide a key check on the killing of civilians.
The opposing viewpoint in this debate articulates numerous
arguments that generally include: it is far too premature and too
speculative to make such a proposal/demand; the Law of Armed
Conflict should not be underestimated in its ability to control AWS
development and future operations; AWS has the potential to
ultimately save human lives (both civilian and military) in armed
conflicts; AWS is as inevitable as any other technology that could
potentially make our lives better; and to pass on the opportunity to
develop AWS is irresponsible from a national security perspective.
Some of the most respected and brilliant lawyers in this field are on
opposite sides of this argument.
1. Human-in-the-loop or semi-autonomous systems require a human to direct the system to
select a target and attack it, such as Predator or Reaper UAVs.
2. Human-on-the-loop or human-supervised autonomous systems are weapon systems that
select targets and attack them, albeit with human operator oversight; examples include
Israel's Iron Dome and the U.S. Navy's Phalanx Close In Weapons System (or CIWS).
3. Human-out-of-the-loop or fully autonomous weapon systems can attack without any
human interaction; there are currently no such weapons.
First and foremost, there is immediate common ground to be found in this debate. Any weaponry development shall be done in
accordance with the Law of Armed Conflict (LOAC, also referred to
as International Humanitarian Law, IHL, or the Law of War). With
respect to AWS, its development and deployment would be required
to adhere to LOAC’s core principles of distinction, proportionality,
humanity and military necessity. There is readily accepted treaty
law as well as customary international law that makes this area of
discussion easy. AWS is, as all weapons and weapon systems are, a
means of warfare (whereas a method of warfare involves deployment
and tactics). All AWS would have a legal review conducted prior to
formal development as a weapon (or prior to any modification of an
existing weapon) and another legal review prior to being deployed in
the field. Therefore, the concept of AWS is not per se unlawful. At
their core, autonomous weapon systems must be able to distinguish
combatants from non-combatants as well as friend from foe. LOAC is
designed to protect those who cannot protect themselves, and an
underlying driver is to protect civilians from death and combatants
from unnecessary suffering. Everyone is in agreement on this. No
academic or practitioner is stating anything to the contrary; therefore,
this part of any argument from either side must be ignored as a red
herring. Simply put, no one would agree to any weapon that ignores
LOAC obligations.
At the present time, there are many questions and as yet few
answers with respect to Autonomous Weapon Systems. Not the least
of which include the policy implications of such systems. For instance, “How does
this technology impact the likely successes of counter-
insurgency operations or humanitarian interventions? Does not such weaponry run
the risk of making war too easy to wage and tempt policy makers into killing when
other more difficult means should be undertaken?” Will countries be more willing to
use force because their populations would have less to lose (i.e. their loved ones)
and it would be politically more acceptable?
5. Robots, Rights and Religion, James F. McGrath, Butler University, 2011
To put this another way, we might decide that we could exclude from the category of
persons those artificial intelligences that were merely programmed to imitate personhood,
and whose interaction with humans resembled that of persons simply as a result of
elaborate programming created precisely to imitate human behavior. This must be
distinguished from the case of a machine that learns human behavior and imitates it of its
own volition. This distinction is not arbitrary. Children carry out patterns of behavior that
resemble those of their parents and others around them. This is part of the learning
process, and is evidence in favor of rather than against their true personhood. The evidence
that I am suggesting would count against genuine personhood is deliberate programming by
a human programmer that causes a machine to imitate personhood in a contrived manner.
The reason for this distinction is an important one. A machine that learns to imitate human
behavior would be exhibiting a trait we witness in human persons.
In concluding this section, it should be remarked that we give human rights to human beings
as soon as they are clearly categorized as such. A person does not have to be able to speak
to have rights. Indeed, small infants whose ability to reason, communicate and do many
other things that we tend to identify with intelligence is still in the process of formation
have their rights protected by law. The issue is thus not really rights for artificial
intelligences so much as rights for machine persons. It is the definition and identification of
the latter that is the crucial issue.
Nevertheless, the distinction would seem to be a valid one for as long as it remains a
meaningful one: machines that develop their own personhood in imitation of humans will
probably deserve to be recognized as persons, whereas mere simulacra designed as an
elaborate contrivance will not.
Our creations – whether through natural biological reproduction, in vitro fertilization,
cloning, genetic construction, or artificially intelligent androids made in our image – can be
viewed as in some sense like our children. And if the comparison to our children is a useful
analogy, then we can learn much from it. There is a “flip side” to the point that children are
their own people and sooner or later we need to let them go, to make their own mistakes.
The other side of the coin is that we are not living up to our responsibilities if we let them go
too soon. Is it only wrong to tamper with humanity's nature, or is it also wrong to create
a human being (with some differences)? Yet our artificial offspring will in an important
sense not be human, even if they are made in our
image. Other species leave the nest far earlier than human children do. In “giving birth” not
to other humans but to artificial intelligence, we cannot assume that the process will even
closely mirror a typical human parent-child scenario.
6. Robot ethics: Mapping the issues for a mechanized world, Patrick Lin, Keith Abney, George Bekey
Bill Gates recently observed that “the emergence of the robotics industry ... is developing in
much the same way that the computer business did 30 years ago” [18]. As a key architect of
the computer industry, his prediction has special weight.
In a few decades—or sooner, given exponential progress forecasted by Moore’s Law—
robots in society will be as ubiquitous as computers are today, he believes; and we would be
hard-pressed to find an expert who disagrees.
In its most basic sense, we define “robot” as an engineered machine that senses, thinks, and
acts: “Thus a robot must have sensors, processing ability that emulates some aspects of
cognition, and actuators.
Surprisingly,
relationships of a more intimate nature are not quite satisfied by robots yet, considering the
sex industry’s reputation as an early adopter of new technologies. Introduced in 2010,
Roxxxy is billed as “the world’s first sex robot” [17], but its lack of autonomy or capacity to
“think” for itself, as opposed to merely respond to sensors, suggests that it is not in fact a
robot, per the definition above.
In some countries, robots are quite literally replacements for humans, such as Japan, where
a growing elderly population and declining birthrates mean a shrinking workforce [35].
Robots are built to specifically fill that labor gap. And given the nation’s storied love of
technology, it is therefore unsurprising that approximately one out of 25 workers in Japan is
a robot. While the US currently dominates the market in military robotics, nations such as
Japan and South Korea lead in the market for social robotics, such as elderly-care robots.
Other nations with similar demographics, such as Italy, are expected to introduce more
robotics into their societies, as a way to shore up a decreasing workforce; and nations
without such concerns can drive productivity, efficiency, and effectiveness to new heights
with robotics.
Like the social networking and email capabilities of the Internet Revolution, robotics may
profoundly impact human relationships. Already, robots are taking care of our elderly and
children, though there are not many studies on the effects of such care, especially in the
long term. Some soldiers have emotionally bonded with the bomb-disposing PackBots that
have saved their lives, sobbing when the robot meets its end (e.g., [38,22]). And robots are
predicted to soon become our lovers and companions [25]: they will always listen and never
cheat on us. Given the lack of research studies in these areas, it is unclear whether
psychological harm might arise from replacing human relationships with robotic ones.
Harm also need not be directly to persons, e.g., it could also be to the environment. In the
computer industry, “e-waste” is a growing and urgent problem (e.g., [31]), given the
disposal of heavy metals and toxic materials in the devices at the end of their product
lifecycle. Robots as embodied computers will likely exacerbate the problem, as well as
increase pressure on rare-earth elements needed today to build computing devices and
energy resources needed to power them. Networked robots would also increase the
amount of ambient radiofrequency radiation, like that created by mobile phones—which
have been blamed, fairly or not, for a decline of honeybees necessary for pollination and
agriculture [37], in addition to human health problems (e.g., [2]).
7. Networks of Social and Moral Norms in Human and Robot Agents, B. F. Malle, M. Scheutz, J. L. Austerweil, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, USA; Department of Computer Science, Tufts University, USA
The design and construction of intelligent robots has seen steady growth in the past 20
years, and the integration of robots into society is, to many, imminent (Nourbakhsh, 2013;
Šabanović, 2010). Ethical questions about such integration have recently gained
prominence. For example, academic publications on the topic of robot ethics doubled
between 2005 and 2009 and doubled again since then, counting almost 200 as of the time
of this conference (Malle, 2015).
Economic scholars have puzzled for a long time why such free-riding is not more common—
why people cooperate much more often than they “defect,” as game theorists call it, when
defecting would provide the agent with larger utility.
The answer cannot be that humans are “innately” cooperative, because they are perfectly
capable of defecting. The answer involves to a significant extent the power of norms. A
working definition of a norm is the following: An instruction to (not) perform a specific or
general class of action, whereby a sufficient number of individuals in a community (a)
indeed follow this instruction and (b) expect others in the community to follow the
instruction.
8. Moral Machines and the Threat of Ethical Nihilism, Anthony F. Beavers
But, though my cell phone might be smart, I do not take that to mean that it is thoughtful,
insightful, or wise. So, what has become of these latter categories? They seem to be
bygones, left behind by scientific and computational conceptions of thinking and knowledge
that no longer have much use for them.
Respecting
Kantian ethics, the problem is apparent in the universal law formulation of the categorical
imperative, the one that would seem to hold the easiest prospects for rule-based
implementation in a computational system: “act as if the maxim of your action were to
become through your will a universal law of nature" (Kant [1785] 1981, 30).
One mainstream interpretation of this principle suggests that whatever rule (or maxim) I
should use to determine my own behavior must be one that I can consistently will to be
used to determine the behavior of everyone else. (Kant's most consistent example of this
imperative in application concerns lying promises. One cannot make a lying promise without
simultaneously willing a world in which lying is permissible, thereby also willing a world in
which no one would believe a promise, particularly the very one I am trying to make. Thus,
the lying promise fails the test and is morally impermissible.) Though at first the categorical
imperative looks implementable from an engineering point of view, it suffers from a
problem of scope, since any maxim that is defined narrowly enough (for instance, to include
a class of one, anyone like me in my situation) must consistently universalize. Death by
failure to implement looks imminent; so much the worse for Kant, and so much the better
for ethics.
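A deliberately naive sketch of that "engineering" reading follows (the maxims, the toy trust model and its threshold are invented assumptions, not Kant's or Beavers' own formulation): universalising the broad lying-promise maxim destroys the practice of promising, but a maxim gerrymandered down to a class of one changes nothing and so sails through the test, which is exactly the scope problem just described.

```python
# Toy universalisability check; the promise-trust model and the 50% threshold are
# invented purely to exhibit the scope problem, not to implement Kantian ethics.
def promising_still_works(fraction_who_lie):
    # The practice of promising survives only while most promises are honest.
    return fraction_who_lie < 0.5

def maxim_passes(fraction_of_population_covered):
    # Universalise: everyone the maxim covers makes lying promises.
    return promising_still_works(fraction_who_lie=fraction_of_population_covered)

print(maxim_passes(1.0))    # "anyone may lie when convenient"            -> False
print(maxim_passes(1e-9))   # "anyone exactly like me, here, now, may..." -> True
```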
Classical utilitarianism meets a similar fate, even though, unlike Kant, Mill casts internals,
such as intentions, to the wind and considers just the consequences of an act for evaluating
moral behavior. Here, “actions are right in proportion as they tend to promote happiness;
wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure
and the absence of pain; by unhappiness, pain and the privation of pleasure." That internals
are incidental to utilitarian ethical assessment is evident in the fact that Mill does not
require that one act for the right reasons. He explicitly says that most good actions are not
done accordingly. Thus, acting good is indistinguishable from being good, or, at least, to be
good is precisely to act good; and sympathetically we might be tempted to agree, asking
what else could being good possibly mean. Things again are complicated by problems of
scope, though Mill, unlike Kant, is aware of them. He writes, “again, defenders of utility
often find themselves called upon to reply to such objections as this — that there is not
enough time, previous to action, for calculating and weighing the effects of any line of
conduct on the general happiness" ([1861] 1979, 23). (In fact, the problem is
computationally intractable when we consider the ever-extending ripple effects that any act
can have on the happiness of others across both space and time.) Mill gets around the
problem with a sleight of hand, noting that “all rational creatures go out upon the sea of life
with their minds made up on the common questions of right and wrong” (24), suggesting
that calculations are, in fact, unnecessary, if one has the proper forethought and upbringing.
Again, the rule is of little help, and death by failure to implement looks imminent. So much
the worse for Mill; again, so much the better for ethics.
9. Moral Robots—Elizebeth Huh
His efforts to influence the public through reason also recall Plato’s theory of human
motivation, illustrated by his conception of the tripartite soul. In Book II of The Republic,
Socrates explains that the root of each human desire can find its origin in one part of a
three-part soul: the lowest part, epithumia, motivates the appetites and the desire for
bodily pleasure; the middle part of the soul, thumos, desires honor and a competitive form
of self-interest; and the highest part of the soul, logos, loves reason and knowledge.
Socrates explains that the size of each of these three parts varies among individuals, but
that the largest piece of each person’s soul naturally guides her into one of three social
classes. The majority of the population, motivated by bodily appetites, become the
moneymakers and craftsmen of the city and fulfill its basic needs; those who desire honor
are best fit to serve as the guardians and warriors of the city; and philosophers, whose
actions are ruled above all by reason, ought to rule.
Though Singer does not go so far as to claim that effective altruists ought to completely strip
away their emotions, he does insist that the emotions distract from his principle of
efficiency, and that this principle is essential for optimal moral decision-making. So this is
what I consider his ultimate mistake: it is his conflation of morality with efficiency, and his
belief that we do not need the emotions and some acceptance of uncertainty on our path to
moral progress.
Why is this a mistake? Let’s look at the legalization of gay marriage in the United States.
Singer’s effective altruist doctrine would have maintained that the suffering of homosexual
couples was not as great as the suffering of those starving to death, and that, therefore,
“maximally effective” altruists wishing to do “the most good” possible should not have
considered spending any time, money, or resources fighting for the right to some
highly subjective form of emotional fulfilment.
10. Moral Machines: Mindless Morality and its Legal Implications--Andrew Schmelzer
Not all of our devices need moral agency. As autonomy increases, morality becomes more
necessary in robots, but the reverse also holds. Machines with little autonomy need less
ethical sensitivity. A refrigerator need not decide if the amount someone eats is healthy,
and limit access accordingly. In fact, that fridge would infringe on human autonomy.
Ethical sensitivity does not require moral perfection. I do not expect morally perfect
decisions from machines. In fact, because humans are morally imperfect, we cannot base
moral perfection off of humanity by holding machines to human ideals. Our moral
development continues today, and I believe may never finish. Designing an artificial moral
agent bound by the morality of today dooms it to obsolescence: ethical decisions from a
hundred years ago look much more racist, sexist, etc., and less ‘good’ from today’s
perspective; today’s ethics might have the same bias when viewed from the future
(Creighton, 2016). Because the nature of our ethics changes, an agent will stumble
eventually. Instead, we strive for morally human (or even better than human) decisions
from machines. When a machine’s actions reflect those of a human, we will have met the
standards for artificial moral agency.
We can test for artificial moral agency with the Moral Turing Test (Allen, Varner, & Zinser,
2000). In the MTT, a judge tries to differentiate between a machine and a person by
their moral actions. An agent passes the test when the judge cannot correctly identify the
machine more often than chance. Then, a machine qualifies as a moral agent. In the
comparative Moral Turing Test (cMTT), the judge compares the behaviors of the two
subjects, and determines which action is morally better than the other (Allen, Varner, &
Zinser, 2000). When a machine’s behavior consistently scores morally preferable to a
human’s behavior, then either the agent will have surpassed human standards, or the
human’s behavior markedly strays from those standards.
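A small sketch of how the MTT pass condition might be scored follows (the simulated judge accuracies, trial count and tolerance are invented; this is not the authors' own protocol):

```python
import random

def passes_mtt(judge_accuracy, trials=10000, chance=0.5):
    # Simulate a judge who identifies the machine correctly with some probability;
    # the machine "passes" when the judge does no better than chance.
    correct = sum(random.random() < judge_accuracy for _ in range(trials))
    return correct / trials <= chance + 0.01   # small tolerance for sampling noise

print(passes_mtt(judge_accuracy=0.50))   # indistinguishable by moral behaviour -> usually True
print(passes_mtt(judge_accuracy=0.75))   # the judge can spot the machine       -> False
```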
Frankena (1973) provides a list of terminal values — virtues that are valued for themselves,
rather than their consequences (Yudkowsky, 2011):
Life, consciousness, and activity; health and strength; pleasures and satisfactions of all
or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true
opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in
objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual
affection, love, friendship, cooperation; just distribution of goods and evils; harmony and
proportion in one’s own life; power and experiences of achievement; self-expression;
freedom; peace, security; adventure and novelty; and good reputation, honor, esteem,
etc.
Programming all of those values directly into a single utility function (the method of
determining positive or negative results) is ridiculous. Can engineers or ethicists quantify
each value and agree on a prioritization for each? Yudkowsky (2011) proposes a ‘one-
wrong-number’ problem: a phone number has 10 digits, but dialing one wrong number does
not mean you will connect with someone 90% like the person intended. The same may
apply to virtue-based machines.
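To see the worry in miniature, the sketch below builds one such utility function from a few of Frankena's value names with invented weights and outcome scores; nudging a single weight flips which course of action the function prefers, the "one-wrong-number" problem in a few lines of arithmetic.

```python
# Hypothetical weights and outcome scores; nothing here is a proposed prioritisation.
WEIGHTS = {"life": 10.0, "truth": 4.0, "freedom": 6.0}

def utility(outcome, weights=WEIGHTS):
    return sum(weights[v] * outcome.get(v, 0.0) for v in weights)

tell_hard_truth = {"life": 0.0, "truth": 1.0,  "freedom": 0.2}
comforting_lie  = {"life": 0.1, "truth": -1.0, "freedom": 0.3}

print(utility(tell_hard_truth) > utility(comforting_lie))   # True with these weights

WEIGHTS["truth"] = 0.4   # one "wrong digit" in the value weighting...
print(utility(tell_hard_truth) > utility(comforting_lie))   # ...and the ranking flips: False
```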
Furthermore, some values we deem worthy of implementation in our machines may
contradict each other, such as compassion and honesty (e.g. a child’s professional baseball
potential). In this way virtue-based systems still require the caveats of a rule-based system
(Allen, Varner, & Zinser, 2000). But what about non-terminal virtues, that is, virtues we
value for their repercussions?
The three methods of bottom-up development I will discuss here are neural network
learning, genetic algorithms, and scenario analysis systems.
Neural networks function similarly to neurons: connections between inputs and outputs
make up a system that can learn to do various things, from playing computer games to
running bipedally in a simulation. By using that learning capability on ethical endeavors, a
moral machine begins to develop. From reinforcement of positive behaviors and penalty of
negative ones, the algorithm learns the pattern of our moral systems. Eventually, engineers
place the algorithm in charge of a physical machine, and away it goes. One downside to this
is the uncertainty regarding what the algorithm learned. When the army tried to get a
neural net to recognize tanks hidden in trees, what looked like a distinction between trees,
tanks, and partly concealed tanks turned out to be a distinction between a sunny and cloudy
day (Dreyfus & Dreyfus, 1992). Kuang (2017) writes about Darrell's potential solution: having
two neural networks working side by side. The first learns the correlation between input
and output, challenging situation and ethically right decision, respectively. The second
algorithm focuses on learning language and connects tags or captions from an input and
explains what cues and ideas the first algorithm used to come up with a course of action.
The second weak point stems from allowing mistakes: no amount of learning can verify that
the machine will act morally in all situations in the real world, including those not tested and
learned from.
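As a toy sketch of the bottom-up route just described (features, cases and labels are all invented; a real system would be vastly larger): a single perceptron is rewarded for matching approved judgements and penalised otherwise, and it usually learns the pattern, but, as the tank example warns, the learned weights themselves tell us nothing about what was actually picked up.

```python
import random
random.seed(0)

# Invented scenario features: (harm_caused, consent_obtained, benefit_to_others); label 1 = approved.
DATA = [
    ((0.0, 1.0, 0.8), 1), ((0.9, 0.0, 0.1), 0),
    ((0.1, 1.0, 0.4), 1), ((0.7, 1.0, 0.9), 0),
    ((0.0, 0.0, 0.2), 0), ((0.2, 1.0, 0.9), 1),
]
w = [random.uniform(-0.1, 0.1) for _ in range(3)]
b = 0.0

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for _ in range(200):                  # reinforcement of right answers, penalty for wrong ones
    for x, y in DATA:
        err = y - predict(x)
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

print([predict(x) for x, _ in DATA])  # usually reproduces the labels...
print(w, b)                           # ...but the weights explain nothing by themselves
```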
Genetic algorithms operate on a somewhat similar principle. Large numbers of simple digital
agents run through ethically challenging simulations. The ones that return the best scores
get “mated” with each other, blending code with a few randomizations, and then the test
runs again (Fox, 2009). After the best (or acceptably best) scores based on desired outcomes
are achieved, a new situation is added to the repertoire that each program must surpass. In
this way, machines can learn our moral patterns. Once thoroughly evolved, we implement
the program, and the machine operates independently in the real world. As an alternative to
direct implementation, we could evolve the program to learn patterns quickly and
efficiently, and then run it through neural network training. This method suffers the
same downsides as neural networking: we cannot tell what it learned or whether it will
make mistakes in the future.
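A minimal sketch of that loop in code: here the fitness function is an invented placeholder for scoring agents on ethically challenging simulations, not anything proposed by Fox or the other cited authors.

import random

# An "agent" is a short list of parameters; fitness rewards agents close to a hidden
# target policy, standing in for "returned the best scores on the scenarios".
TARGET = [0.2, -0.7, 0.9, 0.1]

def fitness(agent):
    return -sum((a - t) ** 2 for a, t in zip(agent, TARGET))

def mate(p1, p2, mutation_rate=0.1):
    child = [random.choice(genes) for genes in zip(p1, p2)]        # blend the "code"
    return [g + random.gauss(0, 0.05) if random.random() < mutation_rate else g
            for g in child]                                        # a few randomizations

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                      # best scorers get "mated"
    population = parents + [mate(random.choice(parents), random.choice(parents))
                            for _ in range(20)]

print(round(fitness(population[0]), 4))   # fitness of the best survivor rises toward 0
# Nothing in the surviving parameters explains why these agents behave as they do.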
The final approach involves scenario analysis. Parkin (2017) describes a method of teaching
AI by having it read books and stories and learn the literature’s ideas and social norms.
While this may not suit machines that are not intended to behave like humans, the idea still
applies to niche or domain-specific machines. Instead of using literature as learning input,
we provide a learning program with records of past wrongdoings and successful outcomes
of ethically-blind machines in its niche. Then the program could infer the proper behaviors
for real world events it may encounter in the future. After analyzing the settings and events
of each scenario, the program would save the connections it made for later human
inspection. If the program’s connections proved ‘good,’ it would then receive a new batch of
scenarios to test through, and repeat the cycle. One downside to this approach involves
painstaking human analysis. A new program would have to go through this cycle for every
machine niche that requires a moral agent, and a human evaluator would have to carefully
examine every connection and correlation the program develops. Darrel’s (2017) explaining
neural net could work in tandem with a scenario analysis system to alleviate the human
requirement for analysis. This approach does get closer to solving the issue of working in
new environments than the previous two approaches, but may nonetheless stumble once
implemented in reality.
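A toy sketch of that cycle follows; the scenario features, outcomes, and tallying rule are all invented for illustration (Parkin's proposal concerns literature, and this merely transposes the idea to niche-specific records).

from collections import defaultdict

# Hypothetical records for one machine niche: features observed in each past case
# and whether the outcome was judged a wrongdoing or a success.
batch_1 = [
    ({"ignored_alarm", "night_shift"}, "wrongdoing"),
    ({"asked_operator", "night_shift"}, "success"),
    ({"ignored_alarm", "low_battery"}, "wrongdoing"),
    ({"asked_operator", "low_battery"}, "success"),
]

def analyze(batch, connections=None):
    # Tally how often each feature co-occurs with wrongdoing vs. success.
    if connections is None:
        connections = defaultdict(lambda: {"wrongdoing": 0, "success": 0})
    for features, outcome in batch:
        for f in features:
            connections[f][outcome] += 1
    return connections

connections = analyze(batch_1)
for feature, counts in sorted(connections.items()):   # saved for human inspection
    print(feature, counts)

# If the human evaluator judges these connections 'good', a new batch enters the
# cycle; an explaining network could help shoulder some of this review burden.
batch_2 = [({"ignored_alarm", "maintenance_due"}, "wrongdoing")]
connections = analyze(batch_2, connections)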
Bottom-up approaches utilize continued development to reach an approximation of moral
standards. Neural networks develop connections and correlations to create a new output,
but we struggle to know why the system comes to a decision. Genetic algorithms refine
themselves by duplicating the most successful code into the next generation of programs,
with a little mutation for adaptation. A genetic algorithm’s learning process also remains
obscured without careful record of iterations, which may be beyond human comprehension.
Scenario analysis systems can learn the best conduct historically shown as ethically right,
but still retain the potential for error. As yet, we do not have a reliable method for developing
an artificial moral agent.
To build an artificial moral agent, DeBaets (2014) argues that a machine must have
embodiment, learning, teleology toward the good, and empathy.
DeBaets (2014) claims that moral functioning requires embodiment because if a machine
acts in and influences the physical world, it must have a physical manifestation.
“Embodiment [requires] that a particular decision-making entity be intricately linked to a
particular concrete action; morality cannot solely be virtual if it is to be real” (DeBaets,
2014). They can work from a distance, have multiple centers of action, and have distributed
decision-making centers, but each requires physical form. Embodiment constrains machines
to a physical form, so this definition of moral agency excludes algorithms and programs that
do not interact with the physical world.
Ethical machines need learning capacity so they can perform as taught by ethical and moral
rules and extrapolate what they have learned into new situations. This requirement
excludes any top-down approach that does not involve frequent patches and updates.
Hybrid systems combine rule sets and learning capacities, and so fulfil this requirement
since they can adjust to new inputs and refine their moral behavior.
Teleology toward the good and empathy both face a sizable complication: they both require
some form of consciousness. For a machine to empathize with and understand emotions of
others, it must have emotion itself. Coeckelbergh (2010) claims that true emotion requires
consciousness and mental states in both cognitivist theory and feeling theory. Thus, if
robots do not have consciousness or mental states, they cannot have emotions and
therefore cannot have moral agency. Additionally, if a machine innately desires to do good,
it must have some form of inner thoughts or feeling that it is indeed doing good, so
teleology also requires consciousness or mental states. Much of human responsibility and
moral agency relies on this theory of mind. In court, the insanity or state of mind defence
can counter criminal charges.
However, no empirical way to test for state of mind or consciousness in people exists today.
Why require those immeasurable characteristics in our robots?
Emotionless Machinery
We interpret other humans’ behaviors as coming from or influenced by emotion, but we
have no way to truly determine emotional state. Verbal and nonverbal cues give us insights
to emotions others feel. They may imitate or fake those cues, but we interact with them just
the same as long as they maintain their deception (Goffman, 1956). We measure other
people by their display or performance of emotion.
Since the appearance of emotion in people regulates social interaction and human morality,
we must judge robots by that same appearance. Even today, machines can read breathing
and heart rate (Gent, 2016), and computers do not need to see an entire face to determine
emotion displayed (Wegrzyn, Vogt, Kireclioglu, Schneider, & Kissler, 2017). Soon enough, a
machine could learn to display human emotion by imitating the cues it is designed to
measure. In theory, a robot could imitate or fake emotional cues as well as humans display
them naturally. People already tend to anthropomorphize robots, empathize with them,
and interpret their behavior as emotional (Turkle, 2011). For consistency in the way we treat
human display of emotion and interpret it as real, we must also treat robotic display of
emotion as real.
If the requirement for empathy changes from true emotion to functional emotion — as is
consistent with how we treat people — then an imitating robot fulfills all the requirements
for empathy, effectively avoiding the issue regarding consciousness and mental state.
Compassion could be the reason an autonomous car veers into a tree rather than a line of
children, but the appearance of compassion could also have the same effect.
Additionally, a robot can have an artificial teleology towards good, granted that all of the
taught responses programmed into the machine are ‘good.’ Beavers’ (2011) discussion of
classical utilitarianism, referencing Mill (1979), claims that acting good is the same as being
good. The same applies to humans, as far as we can tell from the outside. Wallach and Allen
(2009) note that “values that emerge through the bottom-up development of a system
reflect the specific causal determinants of a system’s behavior”. In other words, a ‘good’ and
‘moral’ robot is one that takes moral and good actions. Thus, while we may not get true
teleology, functional teleology can suffice.
11. Human Rights and Artificial Intelligence--An Urgently Needed Agenda. Mathias Risse
Algorithms can do anything that can be coded, as long as they have access to data they
need, at the required speed, and are put into a design frame that allows for execution of
the tasks thus determined. In all these domains, progress has been enormous. The
effectiveness of algorithms is increasingly enhanced through “Big Data:” availability of
an enormous amount of data on all human activity and other processes in the world
which allow a particular type of AI known as “machine learning” to draw inferences
about what happens next by detecting patterns. Algorithms do better than humans
wherever tested, even though human biases are perpetuated in them: any system
designed by humans reflects human bias, and algorithms rely on data capturing the
past, thus automating the status quo if we fail to prevent this. But algorithms are
noise-free: unlike human subjects, they arrive at the same decision on the same problem
when presented with it twice.
Also, philosophers have long puzzled about the nature of the mind. One question is if
there is more to the mind than the brain. Whatever else it is, the brain is also a complex
algorithm. But is the brain fully described thereby, or does that omit what makes us
distinct, namely, consciousness? Consciousness is the qualitative experience of being
somebody or something, its “what-it-is-like-to-be-that”-ness, as one might say. If there
is nothing more to the mind than the brain, then algorithms in the era of Big Data will
outdo us soon at almost everything we do: they make ever more accurate predictions
about what book we enjoy or where to vacation next; drive cars more safely than we do;
make predictions about health before our brains sound alarms; offer solid advice on
what jobs to accept, where to live, what kind of pet to adopt, if it is sensible for us to be
parents and whether it is wise to stay with the person we are currently with – based on a
myriad of data from people relevantly like us. Internet advertisement catering towards
our preferences by assessing what we have ordered or clicked on before is a mere
shadow of what is to come.
Future machines might be composed and networked in ways that no longer permit easy
switch-off. More importantly, they might display
emotions and behavior to express attachment: they might even worry about being
turned off, and be anxious to do something about it. Or future machines might be
cyborgs, partly composed of organic parts, while humans are modified with non-organic
parts for enhancement. Distinctions between humans and non-humans might erode.
Ideas about personhood might alter once it becomes possible to upload and store a
digitalized brain on a computer, much as nowadays we can store human embryos.
Already in 2007, a US colonel called off a robotic land-mine-sweeping exercise
because he considered the operation inhumane after a robot kept crawling along losing
legs one at a time. Science fiction shows like Westworld or The Good Place anticipate
what it would be like to be surrounded by machines we can only recognize as such by
cutting them open. A humanoid robot named Sophia with capabilities to participate in
interviews, developed by Hanson Robotics, became a Saudi citizen in October 2017.
Later Sophia was named UNDP’s first-ever Innovation Champion, the first non-human
with a UN title. The future might remember these as historic moments. The pet world
is not far behind. Jeff Bezos recently adopted a dog called SpotMini, a versatile robotic
pet capable of opening doors, picking himself up and even loading the dishwasher. And
SpotMini never needs to go outside if Bezos would rather shop on Amazon or enjoy
presidential tweets.
If there indeed is more to the mind than the brain, dealing with AI including humanoid
robots would be easier. Consciousness, or perhaps accompanying possession of a
conscience, might then set us apart. It is a genuinely open question how to make sense
of qualitative experience and thus of consciousness. But even though considerations
about consciousness might contradict the view that AI systems are moral agents, they
will not make it impossible for such systems to be legal actors and as such own property,
commit crimes and be accountable in legally enforceable ways. After all, we have a
history of treating corporations in such ways, which also do not have consciousness.
Perhaps T. M. Scanlon’s ideas about appropriate responses to values would help. The
superintelligence might be “moral” in the sense of reacting in appropriate ways towards
what it observes all around. Perhaps then we have some chance at getting protection, or
even some level of emancipation in a mixed society composed of humans and machines,
given that the abilities of the human brain are truly astounding and generate capacities
in human beings that arguably should be worthy of respect. But so too are the
capacities of animals, yet this has not normally led humans to react towards them, or
towards the environment, in an appropriately respectful way. Instead of displaying
something like an enlightened anthropocentrism, we have too often instrumentalized
nature. Hopefully a superintelligence would simply outperform us in such matters, and
that would mean that distinctively human life will receive some protection because it is
worthy of respect. We cannot know that for sure, but we also need not be pessimistic.
There is an urgency to making sure these developments get off to a good start. The
pertinent challenge is the problem of value alignment, a challenge that arises way
before it will ever matter what the morality of pure intelligence is. No matter how
precisely AI systems are generated we must try to make sure their values are aligned
with ours to render as unlikely as possible any complications from the fact that a
superintelligence might have value commitments very different from ours. That the
problem of value alignment needs to be tackled now is also implied by the UN Guiding
Principles on Business and Human Rights, created to integrate human rights into
business decisions. These principles apply to AI. This means addressing questions such
as "What are the most severe potential impacts?", "Who are the most vulnerable
groups?" and "How can we ensure access to remedy?"
However, such rules (most famously Asimov's laws of robotics) have long been regarded as too unspecific. Various efforts have
been made to replace them, so far without any connection to the UN’s Principles on
Business and Human Rights or any other part of the human-rights movement. Among
other efforts, in 2017 the Future of Life Institute in Cambridge, MA (founded around
MIT physicist Max Tegmark and Skype co-founder Jaan Tallinn) held a conference on
Beneficial AI at the Asilomar conference center in California to come up with principles
to guide further development of AI. Of the resulting 23 Asilomar Principles, 13 are listed
under the heading of Ethics and Values. Among other issues, these principles insist that
wherever AI causes harm, it should be ascertainable why it does, and where an AI
system is involved in judicial decision making its reasoning should be verifiable by
human auditors. Such principles respond to concerns that AI deploying machine
learning might reason at such speed and have access to such a range of data that its
decisions are increasingly opaque, making it impossible to spot if its analyses go astray.
The principles also insist on value alignment, urging that “highly autonomous AI
systems should be designed so that their goals and behaviors can be assured to align
with human values throughout their operation” (Principle 10). The ideas that explicitly
appear in Principle 11 (Human Values) include “human dignity, rights, freedoms, and
cultural diversity.”
Russian manipulation in elections is a wake-up call; much worse is likely to come. Judicial
rights could be threatened if AI is used without sufficient transparency and possibility for
human scrutiny. An AI system has predicted the outcomes of hundreds of cases at the
European Court of Human Rights, forecasting verdicts with an accuracy of 79%; and once
that accuracy rises further it will be tempting to use AI also to reach decisions. Use of
AI in court proceedings might help generate access to legal advice for the poor (one of the
projects Amnesty pursues, especially in India); but it might also lead to Kafkaesque
situations if algorithms give inscrutable advice.
Any rights to security and privacy are potentially undermined not only through drones
or robot soldiers, but also through increasing legibility and traceability of individuals in
a world of electronically recorded human activities and presences. The amount of data
available about people will likely increase enormously, especially once biometric sensors
can monitor human health. (They might check up on us in the shower and submit their
data, and this might well be in our best interest because illness becomes diagnosable
way before it becomes a problem.) There will be challenges to civil and political rights
arising from the sheer existence of these data and from the fact that these data might
well be privately owned, but not by those whose data they are. Leading companies in
the AI sector are more powerful than oil companies ever were, and this is presumably
just the beginning of their ascension.
The Cambridge-Analytica scandal is a wake-up call here, and
Mark Zuckerberg’s testimony to US senators on April 10, 2018 revealed an astonishing
extent of ignorance among senior lawmakers about the workings of internet companies
whose business model depends on marketing data. Such ignorance paves the path to
power for companies. Or consider a related point: Governments need the private sector
to aid in cyber security. The relevant experts are smart, expensive, and many would
never work for government. We can only hope that it will be possible to co-opt them
given that government is overextended here. If such efforts fail, only companies will
provide the highest level of cyber security.
This takes me to my last topic: AI and inequality, and the connection between that topic
and human rights. To begin with, we should heed Thomas Piketty’s warning that
capitalism left to its own devices in times of peace generates ever increasing economic
inequality. Those who own the economy benefit from it more than those who just work
there. Over time life chances will ever more depend on social status at birth. We also
see more and more how those who either produce technology or know how to use
technology to magnify impact can command higher and higher wages. AI will only
reinforce these tendencies, making it ever easier for leaders across all segments to
magnify their impact. That in turn makes producers of AI ever more highly priced
providers of technology. More recently, we have learned from Walter Scheidel that,
historically, substantial decreases in inequality have only occurred in response to
calamities such as epidemics, social breakdowns, natural disasters or war. Otherwise it
is hard to muster effective political will for change.
Against this background we must worry that AI will drive a widening technological wedge into
societies that leaves millions excluded, renders them redundant as market participants
and thus might well undermine the point of their membership in political community.
When wealth was determined by land ownership, the rich needed the rest because the
point of land ownership was to charge rent. When wealth was determined by ownership
of factories the owners needed the rest to work the machines and buy stuff. But those
on the losing side of the technological divide may no longer be needed at all. In his 1926
short story “The Rich Boy,” F. Scott Fitzgerald famously wrote, “Let me tell you about
the very rich. They are different from you and me.” AI might validate that statement in a
striking way.
12. From Machine Ethics To Machine Explainability and Back. Kevin Baum, Holger
Hermanns and Timo Speith, Saarland University, Department of Philosophy
An important question arises: how should machines be constrained so that they act in a
morally acceptable way towards humans? This question concerns Machine Ethics – the search
for formal, unambiguous, algorithmizable and implementable behavioral constraints for
systems, so as to enable them to exhibit morally acceptable behavior.
We instead feel the need to supplement Machine Ethics with means to ascertain justified
trust in autonomous systems – and other desirable properties. After pointing out why this is
important, we will argue that there is one feasible supplement for Machine Ethics: Machine
Explainability – the ability of an autonomous system to explain its actions and to argue for
them in a way comprehensible for humans. So Machine Ethics needs Machine Explainability.
This also holds vice versa: Machine Explainability needs Machine Ethics, as it is in need of a
moral system as a basis for generating explanations.
13. The Ethics of Artificial Intelligence--Nick Bostrom, Future of Humanity Institute,
Eliezer Yudkowsky, Machine Intelligence Research Institute
AI algorithms play an increasingly large role in modern society, though usually not
labeled “AI.” A scenario of this kind (the authors' opening example is a bank's machine-learning
algorithm that quietly discriminates among mortgage applicants) might be transpiring even as we write. It
will become increasingly important to develop AI algorithms that are not just powerful
and scalable, but also transparent to inspection—to name one of many socially important
properties. Some challenges of machine ethics are much like many other challenges
involved in designing machines. Designing a robot arm to avoid crushing stray humans is no
more morally fraught than designing a flame-retardant sofa. It involves new programming
challenges, but no new ethical challenges. But when AI algorithms take on cognitive work
with social dimensions (cognitive tasks previously performed by humans), the AI algorithm
inherits the social requirements.
Transparency is not the only desirable feature of AI. It is also important that AI algorithms
taking over social functions be predictable to those they govern.
It will also become increasingly important that AI algorithms be robust against manipulation.
Robustness against manipulation is an ordinary criterion in information security; nearly the
criterion. But it is not a criterion that appears often in machine learning journals, which are
currently more interested in, e.g., how an algorithm scales up on larger parallel systems.
Responsibility, transparency, auditability, incorruptibility, predictability, and a tendency to
not make innocent victims scream with helpless frustration: all criteria that apply to humans
performing social functions; all criteria that must be considered in an algorithm intended to
replace human judgment of social functions; all criteria that may not appear in a journal of
machine learning considering how an algorithm scales up to more computers. This list of
criteria is by no means exhaustive, but it serves as a small sample of what an increasingly
computerized society should be thinking about.
Artificial General Intelligence (AGI)-- As the name implies, the emerging consensus is that
the missing characteristic is generality. Current AI algorithms with human-equivalent or
superior performance are characterized by a deliberately programmed competence only in
a single, restricted domain. Deep Blue defeated the world chess champion, but it cannot
even play checkers, let alone drive a car or make a scientific discovery. Such modern AI
algorithms resemble all biological life with the sole exception of Homo sapiens.
To build an AI that acts safely while acting in many domains, with many consequences,
including problems the engineers never explicitly envisioned, one must specify good
behavior in such terms as “X such that the consequence of X is not harmful to humans.” This
is non-local; it involves extrapolating the distant consequences of actions.
A rock has no moral status: we may crush it, pulverize it, or subject it to any treatment we
like without any concern for the rock itself. A human person, on the other hand, must be
treated not only as a means but also as an end. Exactly what it means to treat a person as an
end is something about which different ethical theories disagree; but it certainly involves
taking her legitimate interests into account—giving weight to her well-being—and it may
also involve accepting strict moral side-constraints in our dealings with her, such as a
prohibition against murdering her, stealing from her, or doing a variety of other things to
her or her property without her consent. Moreover, it is because a human person counts in
her own right, and for her sake, that it is impermissible to do to her these things. This can be
expressed more concisely by saying that a human person has moral status.
It is widely agreed that current AI systems have no moral status. We may change, copy,
terminate, delete, or use computer programs as we please; at least as far as the programs
themselves are concerned. The moral constraints to which we are subject in our dealings
with contemporary AI systems are all grounded in our responsibilities to other beings, such
as our fellow humans, not in any duties to the systems themselves.
While there is broad consensus that present-day AI systems lack moral status, it is unclear
exactly what attributes ground moral status. Two criteria are commonly proposed as being
importantly linked to moral status, either separately or in combination: sentience and
sapience (or personhood). These may be characterized roughly as follows:
Sentience: the capacity for phenomenal experience or qualia, such as the capacity to feel
pain and suffer
Sapience: a set of capacities associated with higher intelligence, such as self-awareness and
being a reason-responsive agent.
Others propose additional ways in which an object could qualify as a bearer of moral status,
such as by being a member of a kind that normally has sentience or sapience, or by standing
in a suitable relation to some being that independently has moral status.
Principle of Substrate Non-Discrimination: If two beings have the same functionality and
the same conscious experience and differ only in the substrate of their implementation,
then they have the same moral status.
Principle of Ontogeny Non-Discrimination: If two beings have the same functionality and
the same conscious experience, and differ only in how they came into existence, then
they have the same moral status.
Parents have special duties to their child which they do not have to other children, and
which they would not have even if there were another child qualitatively identical to their
own. Similarly, the Principle of Ontogeny Non-Discrimination is consistent with the claim
that the creators or owners of an AI system with moral status may have special duties to
their artificial mind which they do not have to another artificial mind, even if the minds in
question are qualitatively similar and have the same moral status.
Even if we accept this stance, however, we must confront a number of novel ethical
questions which the aforementioned principles leave unanswered. Novel ethical questions
arise because artificial minds can have very different properties from ordinary human or
animal minds. We must consider how these novel properties would affect the moral status
of artificial minds and what it would mean to respect the moral status of such exotic minds.
a. Does a sapient but non-sentient robot (a zombie) have the same moral status as a full
AMA?
b. Another exotic property, one which is certainly metaphysically and physically
possible for an artificial intelligence, is for its subjective rate of time to deviate
drastically from the rate that is characteristic of a biological human brain. The
concept of subjective rate of time is best explained by first introducing the idea of
whole brain emulation, or “uploading.” “Uploading” refers to a hypothetical future
technology that would enable a human or other animal intellect to be transferred
from its original implementation in an organic brain onto a digital computer.
Principle of Subjective Rate of Time: In cases where the duration of an experience is
of basic normative significance, it is the experience’s subjective duration that counts.
c. For example, human children are the product of recombination of the genetic
material from two parents; parents have limited ability to influence the character of
their offspring; a human embryo needs to be gestated in the womb for nine months;
it takes fifteen to twenty years for a human child to reach maturity; a human child
does not inherit the skills and knowledge acquired by its parents; human beings
possess a complex evolved set of emotional adaptations related to reproduction,
nurturing, and the child-parent relationship. None of these empirical conditions need
pertain in the context of a reproducing machine intelligence. It is therefore plausible
that many of the mid-level moral principles that we have come to accept as norms
governing human reproduction will need to be rethought in the context of AI
reproduction.
To illustrate why some of our moral norms need to be rethought in the context of AI
reproduction, it will suffice to consider just one exotic property of AIs: their capacity
for rapid reproduction. Given access to computer hardware, an AI could duplicate
itself very quickly, in no more time than it takes to make a copy of the AI’s software.
Moreover, since the AI copy would be identical to the original, it would be born
completely mature, and the copy could begin making its own copies immediately.
Absent hardware limitations, a population of AIs could therefore grow exponentially
at an extremely rapid rate, with a doubling time on the order of minutes or hours
rather than decades or centuries.
But if the population grows faster than the economy, resources will run
out; at which point uploads will either die or their ability to reproduce will be
curtailed.
This scenario illustrates how some mid-level ethical principles that are suitable in
contemporary societies might need to be modified if those societies were to include
persons with the exotic property of being able to reproduce very rapidly.
The general point here is that when thinking about applied ethics for contexts that
are very different from our familiar human condition, we must be careful not to
mistake mid-level ethical principles for foundational normative truths. Put
differently, we must recognize the extent to which our ordinary normative precepts
are implicitly conditioned on the obtaining of various empirical conditions, and the
need to adjust these precepts accordingly when applying them to hypothetical
futuristic cases in which their preconditions are assumed not to obtain. By this, we
are not making any controversial claim about moral relativism, but merely
highlighting the commonsensical point that context is relevant to the application of
ethics—and suggesting that this point is especially pertinent when one is considering
the ethics of minds with exotic properties.
Superintelligence
Good (1965) set forth the classic hypothesis concerning superintelligence: that an AI
sufficiently intelligent to understand its own design could redesign itself or create a
successor system, more intelligent, which could then redesign itself yet again to
become even more intelligent, and so on in a positive feedback cycle. Good called
this the “intelligence explosion.”
Kurzweil (2005) holds that “intelligence is inherently impossible to control,” and that
“despite any human attempts at taking precautions, by definition . . . intelligent
entities have the cleverness to easily overcome such barriers.” Let us suppose that
the AI is not only clever, but that, as part of the process of improving its own
intelligence, it has unhindered access to its own source code: it can rewrite itself to
anything it wants itself to be. Yet it does not follow that the AI must want to rewrite
itself to a hostile form.
Humans, the first general intelligences to exist on Earth, have used that intelligence
to substantially reshape the globe—carving mountains, taming rivers, building
skyscrapers, farming deserts, producing unintended planetary climate changes. A
more powerful intelligence could have correspondingly larger consequences.
This presents us with perhaps the ultimate challenge of machine ethics: How do
you build an AI which, when it executes, becomes more ethical than you? If we are
serious about developing advanced AI, this is a challenge that we must
meet. If machines are to be placed in a position of being stronger, faster, more
trusted, or smarter than humans, then the discipline of machine ethics must commit
itself to seeking human-superior (not just human-equivalent) niceness.
14. Towards the Ethical Robot by James Gips, Computer Science Department, Fulton
Hall 460 Boston College, Chestnut Hill, MA 02467
Asimov’s three laws are not suitable for our magnificent robots. These are laws for slaves.
We want our robots to behave more like equals, more like ethical people. (See Figure 1.)
How do we program a robot to behave ethically? Well, what does it mean for a
person to behave ethically?
2) On what type of ethical theory can automated ethical reasoning be based? At first
glance, consequentialist theories might seem the most "scientific", the most
amenable to implementation in a robot. Maybe so, but there is a tremendous
problem of measurement. How can one predict "pleasure", "happiness", or "well-being"
in individuals in a way that is additive, or even comparable?
3) Deontological theories seem to offer more hope. The categorical imperative might
be tough to implement in a reasoning system. But I think one could see using a moral
system like the one proposed by Gert as the basis for an automated ethical
reasoning system. A difficult problem is in the resolution of conflicting obligations.
Gert's impartial rational person advocating that violating the rule in these
circumstances be publicly allowed seems reasonable but tough to implement.
4) The virtue-based approach to ethics, especially that of Aristotle, seems to resonate
well with the modern connectionist approach to AI. Both seem to emphasize the
immediate, the perceptual, the non-symbolic. Both emphasize development by
training rather than by the teaching of abstract theory.
5) Knuth [1973, p.709] put it well: It has often been said that a person doesn't really
understand something until he teaches it to someone else. Actually a person doesn't
really understand something until he can teach it to a computer, i.e., express it as an
algorithm. ... The attempt to formalize things as algorithms leads to a much deeper
understanding than if we simply try to understand things in the traditional way. It may well
be that as we build artificial ethical reasoning systems we will learn how to behave more
ethically ourselves.
15. Can robots be responsible moral agents? And why should we care? Amanda
Sharkey, Department of Computer Science, University of Sheffield, Sheffield, UK
Patricia Churchland (2011) discusses the basis for morality in living beings, and argues that
the basis for caring about others lies in the neurochemistry of attachment and bonding in
mammals. She explains that it is grounded in the extension of self-maintenance and
avoidance of pain in mammals to their immediate kin. The neuropeptides oxytocin and arginine
vasopressin underlie mammals’ extension of self-maintenance and avoidance of pain to
their immediate kin. Humans and other mammals feel anxious about their own well-being
and that of those to whom they are attached. As well as attachment and empathy for
others, humans and other mammals develop more complex social relationships, and are
able to understand and predict the actions of others. They also internalise social practices,
and experience ‘social pain’ triggered by separation, exclusion or disapproval. As a
consequence, humans have an intrinsic sense of justice.
By contrast, robots are not concerned about their own self-preservation or avoidance of
pain, let alone the pain of others. In part, this can be explained by arguing that
they are not truly embodied, in the way that a living creature is. Parts of a robot could be
removed from a robot’s body without it suffering any pain or anxiety, let alone it being
concerned about damage or pain to a family member or to a human. A living body is an
integrated autopoietic entity (Maturana and Varela, 1980) in a way that a man-made
machine is not. Of course, it can be argued that the robot could be programmed to behave
as if it cared about its own preservation or that of others, but this is only possible through
human intervention.
16. Can machines be people? Reflections on the Turing Triage Test, Dr Rob Sparrow,
School of Philosophical, Historical & International Studies, Monash University.
Finally, imagine that you are again called to make a difficult decision. The battery system
powering the AI is failing and the AI is drawing on the diminished power available to the
rest of the hospital. In doing so, it is jeopardising the life of the remaining patient on life
support. You must decide whether to ‘switch off’ the AI in order to preserve the life of
the patient on life support. Switching off the AI in these circumstances will have the
unfortunate consequence of fusing its circuit boards, rendering it permanently inoperable.
Alternatively, you could turn off the power to the patient’s life support in order to allow
the AI to continue to exist. If you do not make this decision the patient will die and the
AI will also cease to exist. The AI is begging you to consider its interests, pleading to be
allowed to draw more power in order to be able to continue to exist.
My thesis, then, is that machines will have achieved the moral status of persons when
this second choice has the same character as the first one. That is, when it is a moral
dilemma of roughly the same difficulty. For the second decision to be a dilemma it must
be that there are good grounds for making it either way. It must be the case therefore that
it is sometimes legitimate to choose to preserve the existence of the machine over the life
of the human being. (He may choose the robot because it is more useful to the hospital than
the human patient. This test still doesn’t make the robot a moral person. Only sapience and
sentience will make a robot a moral person.)
17. Can a Robot Pursue the Good? Exploring Artificial Moral Agency, Amy Michelle
DeBaets, Kansas City University of Medicine and Biosciences, Journal of Evolution
and Technology - Vol. 24 Issue 3 – Sept 2014 – pgs 76-86
What then, might be necessary for a decision-making and acting entity to non-accidentally
pursue the good in a given situation? I argue that four basic components collectively make
up the basic requirements for moral agency: embodiment, learning, empathy, and
teleology.
First, I want to argue that artificial moral agents, like all moral agents, must have some form
of embodiment, as they must have a real impact in the physical world (and not solely a
virtual one) if they are to behave morally. Embodiment not only allows for a concrete
presence from which to act, it can adapt and respond to the consequences of real decisions
in the world. This physical embodiment, however, need not look particularly similar to
human embodiment and action. Robotic embodiment might be localized, having actions
take place in the same location as the decision center, in a discrete, mobile entity (as with
humans), but it might also be remote, where the decision center and locus of action are
distant in space. It could also be distributed, where the decision centers and/or loci of action
take place in several places at once, as with distributed computing or multiple simultaneous
centers of action. The unifying theme of embodiment does require that a particular
decision-making entity be intricately linked to particular concrete action; morality cannot
solely be virtual if it is to be real. (Not convincing—a server is also an entity and is embodied).
This embodied decision-making and action must also exist in a context of learning. Learning,
in this sense, is not simply the incorporation of new information into a system or the
collection of data. It is adapting both the decision processes themselves and the agent’s
responses to inputs based on previous information. It is this adaptability that allows moral
agents to learn from mistakes as well as successes, to develop and hone moral reasoning,
and to incorporate new factual information about the circumstances of decisions to be
made.
Even if an embodied robot can learn from its own prior actions, it is not necessarily moral.
The complex quality of empathy is still needed for several reasons. First, empathy allows the
agent to recognize when it has encountered another agent, or an appropriate object of
moral reasoning. It allows the A.M.A. to understand the potential needs and desires of
another, as well as what might cause harm to the other. This requires at least a rudimentary
theory of mind, that is, a recognition that another entity exists with its own thoughts,
beliefs, values, and needs. This theory of mind need not take an extremely complex form,
but for an agent to behave morally, it cannot simply act as though it is the only entity that
matters. The moral agent must be able to develop a moral valuation of other entities,
whether human, animal, or artificial. It may have actuators and sensors that give it the
capacity to measure physical inputs from body language, stress signs, and tone of voice, to
indicate whether another entity is in need of assistance and behave morally in accordance
with the needs it measures. It may respond to cries for help, but it needs to be able to
distinguish between a magazine rack and a toddler when rushing in to provide aid. Empathy,
and not merely rationality, is critical for developing and evaluating moral choices; just as
emotion is inherent to human rationality, it is necessary for machine morality. (This is the
ethics of care logic—only emotions lead to action, to empathy)
What is sometimes forgotten in defining a moral agent as such, including in the case of
A.M.A.s, is that the entity must both be designed to be, and desire to be, moral. It must
have a teleology toward the good. Just as human beings have developed a sense of the
moral and often seek to act accordingly, machines could be designed to pursue the good,
even develop a form of virtue through trial and error. They will not, however, do so in the
absence of some design in that direction. A teleology of morality introduced into the basic
programming of a robot would not necessarily be limited to any one particular ethical
theory or set of practices and could be designed to incorporate complex interactions of
decisions and consequences, just as humans typically do when making decisions about what
is right. It could be programmed, in its more advanced forms, to seek out the good, to
develop “virtual virtue,” learning from what it has been taught and practicing ever-greater
forms of the good in response to what it learns.
What is Not Required for Artificial Moral Agency?
Popular futurists Ray Kurzweil and Hans Moravec have argued that sheer increases in
computational processing power will eventually lead to superhuman intelligence, and thus,
to agency. But this is not the case. While a certain amount of “intelligence” or processing
power is necessary, it is only functionally useful insofar as it facilitates learning and
empathy, particularly. Having the most processing power does not make one the most
thoughtful agent, and having the most intelligence does not make one particularly moral on
its own.
Likewise, while a certain amount of rule-following is probably necessary for artificial moral
agency, rule-following alone does not make for a moral agent, but rather for a slave to
programming. Moral agency requires being able to make decisions and act when the basic
rules conflict with each other; it also requires being able to set aside “the rules” entirely
when the situation dictates. It has been said that one cannot truly be good unless one has
the freedom to choose not to be good. While I do not want to take on that claim here, I will
argue that agency requires at least some option of which goods to pursue and what
methods to pursue them by. A supposed A.M.A. that only follows the rules, and breaks
down when they come into conflict, is not a moral agent at all.
While a machine must move beyond simple rule-following to be a genuine moral agent
(even if many of its ends and goals are predetermined in its programming), complete
freedom is not necessary in order to have moral agency.
Some have thought that a fully humanoid consciousness is necessary for the development
of moral agency, but this too, may legitimately look quite different in machines than it does
in human beings.
Consciousness is itself elusive, without a clear definition or understanding of its processes.
What can be said for moral agency, though, is that the proof is in the pudding, that decisions
and actions matter at least as much as the background processing that went into them. In
deciding to consistently behave morally, and in learning from behavior in order to become
more moral, a machine can be a moral agent in a very real sense while avoiding the problem
of consciousness entirely.
Just as consciousness is used primarily as a requirement that cannot, by definition, be met
by any entity other than a human moral agent, so the idea of an immaterial soul need not
be present in order to have a moral agent. While the idea of a soul may or may not be useful
when applied in the context of human beings in relation to the Divine, it is unnecessary for
the more limited question of moral agency. A being also need not have a sense of God in
order to be a moral being. Not only is this true in the case of many humans, who may be
atheists, agnostics, or belong to spiritual traditions that do not depend on the idea of a
deity, but it is not necessary for moral action and the development of virtue. It may be
practically helpful in some cases for a robot to believe in a deity in order to encourage its
moral action, but it is by no means a requirement.
Yet, while the robots we build will not be subject to many of the same temptations as
human moral agents, they will still be subject to the limitations of their human designers
and developers. Robots will not be morally perfect, just as humans, even in the best of
circumstances, are never morally perfect.