Astral Codex Ten

Kelly Bets On Civilization


By Scott Alexander · Mar 7

Scott Aaronson makes the case for being less than maximally hostile to AI
development:

Here’s an example I think about constantly: activists and intellectuals of the 70s and
80s felt absolutely sure that they were doing the right thing to battle nuclear power.
At least, I’ve never read about any of them having a smidgen of doubt. Why would
they? They were standing against nuclear weapons proliferation, and terrifying
meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning
the water and soil and causing three-eyed fish. They were saving the world. Of
course the greedy nuclear executives, the C. Montgomery Burnses, claimed that
their good atom-smashing was different from the bad atom-smashing, but they
would say that, wouldn’t they?

We now know that, by tying up nuclear power in endless bureaucracy and driving
its cost ever higher, on the principle that if nuclear is economically competitive then
it ipso facto hasn’t been made safe enough, what the antinuclear activists were
really doing was to force an ever-greater reliance on fossil fuels. They thereby
created the conditions for the climate catastrophe of today. They weren’t saving the
human future; they were destroying it. Their certainty, in opposing the march of a
particular scary-looking technology, was as misplaced as it’s possible to be. Our
descendants will suffer the consequences.

Read carefully, he and I don’t disagree. He’s not scoffing at doomsday predictions; he’s arguing against people who say that AIs should be banned because they might spread misinformation or gaslight people or whatever.

Still, I think about this argument a lot. I agree he’s right about nuclear power. When it
comes out in a few months, I’ll be reviewing a book that makes this same point about
institutional review boards: that our fear of a tiny handful of deaths from unethical
science has caused hundreds of thousands of deaths from delaying ethical and life-
saving medical progress. The YIMBY movement makes a similar point about housing:
we hoped to prevent harm by subjecting all new construction to a host of different
reviews - environmental, cultural, equity-related - and instead we caused vast harm by
creating an epidemic of homelessness and forcing the middle classes to spend
increasingly unaffordable sums on rent. This pattern typifies the modern age; any
attempt to restore our rightful utopian flying-car future will have to start with rejecting
it as vigorously as possible.

So how can I object when Aaronson turns the same lens on AI?

First, you are allowed to use Inside View. If Osama bin Laden is starting a supervirus
lab, and objects that you shouldn’t shut him down because “in the past, shutting down
progress out of exaggerated fear of potential harm has killed far more people than the
progress itself ever could”, you are permitted to respond “yes, but you are Osama bin
Laden, and this is a supervirus lab.” You don’t have to give every company trying to
build the Torment Nexus a free pass just because they can figure out a way to place
their work in a reference class which is usually good. All other technologies fail in
predictable and limited ways. If a buggy AI exploded, that would be no worse than a
buggy airplane or nuclear plant. The concern is that a buggy AI will pretend to work
well, bide its time, and plot how to cause maximum damage while undetected. Also it’s
smarter than you. Also this might work so well that nobody realizes they’re all buggy
until there are millions of them.

But maybe opponents of every technology have some particular story why theirs is a
special case. So let me try one more argument, which I think is closer to my true
objection.

There’s a concept in finance called Kelly betting. It briefly gained some fame last year
as a thing that FTX failed at, before people realized FTX had failed at many more
fundamental things. It works like this (warning - I am bad at math and may have gotten
some of this wrong): suppose you start with $1000. You’re at a casino with one game:
you can, once per day, bet however much you want on a coin flip, double-or-nothing.
You’re slightly psychic, so you have a 75% chance of guessing the coin flip right. That means that, on average, every dollar you bet comes back as $1.50. Clearly this is a great opportunity. But how much do you bet per day?

Tempting but wrong answer: bet all of it each time. After all, on average you gain money each flip - each $1 invested in the coin flip game becomes $1.50. If you bet everything, then after five coin flips you’ll have (on average) about $7,600. But if you just bet $1 each time, then (on average) you’ll only have $1,002.50. So obviously bet as much as possible, right?

But after five all-in coin flips, there’s a 76% chance that you’ve lost all your money. Increase to 50 coin flips, and there’s a 99.99994% chance that you’ve lost all your money. So although technically this has the highest “average utility”, all of this is coming from one super-amazing sliver of probability-space where you own more money than exists in the entire world. In every other timeline, you’re broke.
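If you want to sanity-check those bust probabilities, a couple of lines of Python (my own addition, assuming independent flips and a 75% hit rate) will reproduce them:

```python
# Betting the whole bankroll every flip: you stay solvent only if you win every flip.
for n_flips in (5, 50):
    p_survive = 0.75 ** n_flips
    print(f"{n_flips} flips: chance of going bust = {1 - p_survive:.5%}")
# 5 flips  -> ~76.27% bust
# 50 flips -> ~99.99994% bust
```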

So how much should you bet? $1 is too little. These flips do, on average, increase your
money by 50%; it would take forever to get anywhere betting $1 at a time. You want
something that’s high enough to increase your wealth quickly, but not so high that it’s
devastating and you can’t come back from it on the rare occasions when you lose.

In this case, if I understand the Kelly math right, you should bet half each time. But the
lesson I take from this isn’t just the exact math. It’s: even if you know a really good bet,
don’t bet everything at once.
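To make that concrete, here is a minimal simulation sketch of the casino game above - my own illustration, not anything from the finance literature - comparing three policies: bet everything, bet a flat $1, and bet the Kelly fraction (for an even-odds bet the standard formula gives f = 2p − 1 = 0.5, i.e. half your bankroll):

```python
import random
import statistics

P_WIN = 0.75        # chance of calling the coin correctly
START = 1000.0      # starting bankroll
FLIPS = 50          # bets per simulated lifetime
TRIALS = 20_000     # simulated lifetimes per strategy

def play(bet_fraction=None, flat_bet=None):
    """Run one lifetime of double-or-nothing bets and return the final bankroll."""
    money = START
    for _ in range(FLIPS):
        if money <= 0:
            break
        stake = money * bet_fraction if bet_fraction is not None else min(flat_bet, money)
        money += stake if random.random() < P_WIN else -stake
    return money

def summarize(name, results):
    broke = sum(1 for m in results if m <= 0) / len(results)
    print(f"{name:>12}: median ${statistics.median(results):,.2f}, "
          f"mean ${statistics.mean(results):,.2f}, went broke {broke:.1%} of the time")

random.seed(0)
summarize("bet it all", [play(bet_fraction=1.0) for _ in range(TRIALS)])
summarize("bet $1", [play(flat_bet=1.0) for _ in range(TRIALS)])
summarize("Kelly (1/2)", [play(bet_fraction=0.5) for _ in range(TRIALS)])
```

With these parameters the bet-everything strategy almost always ends at zero, the flat $1 strategy barely moves the bankroll, and the half-the-bankroll Kelly strategy's median outcome grows by orders of magnitude - which is the whole point: even on a great bet, don't stake everything at once.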

Science and technology are great bets. Their benefits are much greater than their
harms. Whenever you get a chance to bet something significantly less than everything
in the world on science or technology, you should take it. Your occasional losses will
be dwarfed by your frequent and colossal gains. If we’d gone full-speed-ahead on
nuclear power, we might have had one or two more Chernobyls - but we’d save the
tens of thousands of people who die each year from fossil-fuel-pollution-related
diseases, end global warming, and have unlimited cheap energy.

But science and technology aren’t perfect bets. Gain-of-function research on coronaviruses was a big loss. Leaded gasoline, chlorofluorocarbon-based refrigerants,
thalidomide for morning sickness - all of these were high-tech ideas that ended up
going badly, not to mention all the individual planes that crashed or rockets that
exploded.

Society (mostly) recovered from all of these. A world where people invent gasoline and
refrigerants and medication (and sometimes fail and cause harm) is vastly better than
one where we never try to have any of these things. I’m not saying technology isn’t a
great bet. It’s a great bet!

But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a
technology that could destroy the world is betting 100%.

It’s not that you should never do this. Every technology has some risk of destroying
the world; the first time someone tried vaccination, there was a 0.000000001%
chance it could have resulted in some weird super-pathogen that killed everybody. I
agree with Scott Aaronson: a world where nobody ever tries to create AI at all, until we
die of something else a century or two later, is pretty depressing.

But we have to consider them differently than other risks. A world where we try ten
things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is
probably a world where a handful of people have died in freak accidents but everyone
else lives in safety and abundance.

A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so
much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all
dead.
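For anyone checking the arithmetic on that last pair of numbers: surviving requires all ten independent 50-50 gambles to come up good, so the survival probability compounds down to (1/2)^10.

```python
p_survive_all = 0.5 ** 10
print(p_survive_all)      # 0.0009765625  == 1/1024
print(1 - p_survive_all)  # 0.9990234375  == 1023/1024
```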


566 Comments


Kevin Mar 7
It all depends on what you think the odds of a killer AI are. If you think it's 50-50, yeah it makes
sense to oppose AI research. If you think there's a one in a million chance of a killer AI, but a 10%
chance that global nuclear war destroys our civilization in the next century, then it doesn't really
make sense to let the "killer AI" scenario influence your decisions at all.
REPLY (2)

Thomas Kehrenberg Mar 7


Didn't Scott already say that here:

> It’s not that you should never do this. Every technology has some risk of destroying the
world; the first time someone tried vaccination, there was a 0.000000001% chance it could
have resulted in some weird super-pathogen that killed everybody.

Which I understood to mean that we shouldn't care about small probabilities. Or did you
understand that paragraph differently?
REPLY (1)

i’m a taco Mar 7


Yes, correct. But then he concludes the article with:

> But we have to consider them differently than other risks. A world where we try ten
things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is
probably a world where a handful of people have died in freak accidents but everyone
else lives in safety and abundance.

> A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so
much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all
dead.

So the trouble comes in asking “*whose odds*” is any given person allowed to use when
“Kelly betting civilization?” Their own?

Until and unless we can get coordinated global rough consensus on the actual odds of AI
apocalypse, I predict we’ll continue to see people effectively Kelly betting on AI using
their own internal logic.
REPLY

smopecakes Mar 8
I think there is a hardware argument for initial AI acceleration reducing the odds of a killer AI. It's extremely likely that someone will eventually build AIs significantly more capable than anything currently possible. We should lean into early AI adoption now, while hardware limits are at their maximum. This increases the chance that we will observe unaligned AIs fail to actually do anything, including remaining under cover, which provides alignment experience and a broad social warning about the general risk.
REPLY (1)

John Schilling Mar 9


Agree with the caveat that this holds only to the extent that we expect Moore's Law to
continue, which is far from certain. But if we go through many doublings of hardware
performance while carefully avoiding AI research, and then some future Elon Musk
decides to finance an AI-development program, then the odds of hard takeoff increase
substantially. If AI research is constantly proceeding at the limits of the best current
hardware, then the odds are very high that the first weakly-superhuman AGI will be
incapable of bootstrapping itself to massively-superhuman status quickly and unnoticed.
REPLY

Robert Leigh Mar 7


Osama bin Laden is kind of irrelevant. Sufficiently destructive new technologies get out there and
get universal irrespective of the morality of the inventor. Look at the histories of the A bomb and
the ICBM.
REPLY (3)

gbear605 Mar 7
Nuclear nonproliferation seems to have actually done a pretty good job. Yes, North Korea has
nuclear weapons, and Iraq and Iran have been close, but Osama bin Laden notably did not
have nuclear weapons. 9-11 would have been orders of magnitude worse if they had set off a
nuclear weapon in the middle of New York instead of just flying a plane into the World Trade
Center. And some technologies, like chemical weapons, have been not used because we did a
good job at convincing everyone that we shouldn’t use them. International cooperation is
possible.
REPLY (7)

Robert Leigh Mar 7


AI is invisible. There's also the alignment problem: if North Korea develops AI, I hope it is even less likely that the AI would stay aligned to North Korean values for more than milliseconds than that it would remain aligned to the whole western liberal consensus.
REPLY (3)

Signore Galilei Mar 7


I don't know. I think it's a genuine open values question whether it would be better for
all future humans to live like people in North Korea do today or for us all to be dead
because our atoms have been converted into statues of Kim Il Sung. Maybe I'm
parsing your comment wrong though.
REPLY

o11o1 Mar 7
I don't think that North Korea is feasibly in the race for AI at the moment.

Even the Chinese have to put a lot of worry into obeying the rules of the CCP censors, so
I expect them to be a lot less "Race-mode" and a lot more security-mindset focused
on making sure they have really good shackles on their AI projects.

The race conditions are in the Western World.


REPLY

Feral Finster Mar 7


I would imagine that AI would do whatever it is programmed to do.
REPLY (1)

Carl Feynman Mar 7


Empirically, AIs do approximately what we have trained them to do, as well as a
bunch of weird other things, possibly including the exact opposite of what we
want them to do. If it was possible to program AIs to only do what we want them
to do, would we have daily demonstrations of undesired behavior on r/bing and
r/chatgpt?
REPLY (1)

Jonathan Weil Mar 10


'possibly' including the exact opposite? Empirically, I'd change that to 'sometimes/often, definitely'... (see the Waluigi Effect!)
REPLY

Kimmo Merikivi Mar 7 · edited Mar 7


To a large extent, chemical weapons aren't used because they just aren't good. Hitler and Stalin had few qualms about mass murder on an industrial scale, both were brought to the very brink in the face of an existential threat (and Hitler did in fact lose), both had access to
huge stockpiles of chemical weapons ready to use, and yet they didn't use them. They
weren't very effective even in the First World War before proper gas masks, which provide
essentially complete protection at marginal cost (cheap enough for Britain to issue them
to the entire civilian population in WW2), not to mention overpressure systems in armored
vehicles: instead of a gas shell, you'd almost always be better-off firing a regular high
explosive one even when the opponent has no protective equipment. Against unprotected
civilian population they are slightly better than useless, and in this capacity chemical
weapons have been used by for example Assad in Syria, but consider the Tokyo subway
sarin attack: just about the deadliest place conceivable to use a chemical weapon (a
closed underground tunnel fully packed with people), and it killed thirteen (although
injured a whole lot more). You could do more damage by for example driving a truck into a
crowd.
REPLY (3)

MM Mar 7
Chemical weapons that were used did not even solve the problem they were intended
to: that of clearing the trenches far enough back to turn trench warfare into a war of
maneuver. The damage they did was far too localized.

This hasn't really changed in the intervening years - the chemicals get more lethal
and persistent, but they don't spread any better from each bomb.

Wars moved on from trenches (to the extent they did) because of different
technologies and doctrines (adding enough armored vehicles and the correct ways
to use them).
REPLY (1)

Andrew Clough Mar 7


I'd argue that it was mostly motorized transport and radios that shifted parity
back to the attacker. Before that the defender could redeploy with trains and
communicate by telegraph but the attacker was reliant on feet and messengers.
REPLY (2)

Ian Mar 7
Yeah, tanks get all the credit for their cool battles, but as an HOI4 player will tell you, it's trucks that let you really fight a war of maneuver. Gas might have had a bigger role in "linebreaking" if tanks hadn't been invented.
REPLY

MM Mar 9 · edited Mar 9


This may be true now; I was thinking of why the European part of WWII
didn't devolve into trench warfare like it did in WWI.

Did roads get enough better in the intervening 20 years in the areas of
France to make trucks practical? I do know that part of WWI was that the
defender could build a small rail behind the front faster than the attacker
could build a rail to supply any breakthrough. Does that apply with trucks -
were they actually good enough to get through trenchworks?

Or did the trenchworks just not end up being built in WWII - i.e. the lines
didn't settle down long enough to build them in the first place?
REPLY (1)

Andrew Clough Mar 9


Creating a breakthrough was always possible for an attacker who could throw enough men and artillery at the lines in both WWI and WWII. The problem was that in WWI it just moved the front line up a couple of dozen miles and then the enemy could counterattack.

Having vehicles certainly helps and means you can use them during the attack instead of just when advancing afterwards, but engineers can fill in a trench pretty quickly to let trucks drive over. They can't build railroads quickly though, especially not faster than a man on foot can advance.
REPLY

TGGP Mar 7
Greg Cochran suspects that Stalin used bioweapons against the Germans, without
the rest of the world finding out.

https://westhunt.wordpress.com/2012/02/02/war-in-the-east/

https://westhunt.wordpress.com/2016/09/19/weaponizing-smallpox/

https://westhunt.wordpress.com/2016/11/27/last-ditch/
REPLY

Matthieu Mar 7
> You could do more damage by for example driving a truck into a crowd.

Sadly, this is not a hypothetical.

https://en.wikipedia.org/wiki/2016_Nice_truck_attack
REPLY

TasDeBoisVert Mar 7 · edited Mar 7


Nuclear nonproliferation has been awful. How many people would still be alive, how many terrorist organizations would not have spawned, how many trillions of expenses would have been better used, how much destruction would have been avoided in civil wars averted, if Saddam had been able to nuke the first column of Abrams that set their tracks in Iraq?

P.S: and implying that a proliferated world would have made 9/11 (or another attack) nuclear is unsubstantiated. Explosives are a totally proliferated technology. The only thing stopping a terrorist from detonating a MOAB-like device is the physical constraint of assembling it (ok, not entirely, I have no idea how reproducible H-6 is by non-state actors. But TNT absolutely is, so something not-quite-moab-like-but-still-huge-boom is theoretically possible). And yet for 9/11, they resorted to driving planes into the buildings, because even tho the technology proliferated, it's still a hurdle to use it.
REPLY (1)

Gbdub Mar 7
There’s a good chance that Iraq (or at least Saddam) would not have existed to be
nuking Abrams tanks in 1991 or 2003, because Iran and Iraq would have nuked each
other in the 1980s.
REPLY (2)

Eh Mar 8
Or maybe they wouldn't have gone to war at all, knowing that it would have been a lose-lose scenario. One wonders whether a world with massive proliferation would have been a safer one.
REPLY (1)

Gbdub Mar 8
Possible. I was mostly peeved by what I perceived as a cheap anti-American swipe rather than a reasoned assessment of when Saddam would use nukes (besides that, it's unclear whether nuking an Abrams formation would even be all that useful - especially when all the soft targets that would get hit in retaliation are considered).
REPLY

Doug S. Mar 8
Or Iraq and Israel. Tel Aviv is high on the list of cities most likely to be destroyed
by a nuke...
REPLY

Lupis42 Mar 7
Chemical weapons have been used, even in recent years by major state actors (e.g.
Russia, Syria). They don't get used more because they aren't that useful, and that offers a
clue to the problem.
REPLY

Nancy Lebovitz Writes Input Junkie Mar 7


If nuclear nonproliferation is a cause of the Ukraine war, that needs to be figured in.
REPLY (1)

Carl Pham Mar 8


Maybe more like deproliferation. The Ukrainians gave up their nukes[1] in 1994 and in
return[2] got a guarantee from Russia that Russia would defend Ukraine's borders.
Candidate for Most Ironic Moment Ever.

-----------------------

[1] Of which they had quite a lot. Something like ~1,500 deliverable warheads, the 3rd
largest arsenal in the world.

[2] It's more complicated than this in the real world, of course. Russia did not turn
over the launch procedures and codes, so it would've been a lot of work for Ukraine
to gain operational control over the weapons, even though they had de facto physical
custody of them.
REPLY (1)

John Schilling Mar 9


>Something like ~1,500 deliverable warheads, the 3rd largest arsenal in the
world.

The Ukrainians had zero deliverable warheads in 1994. Those warheads stopped
being deliverable the moment the Russians took their toys and went home, and it
would have taken at least six months for the Ukrainians to change that. Which
would not have gone unnoticed, and would have resulted in all of those
warheads being either seized by the VDV or bombed to radioactive scrap by the
VVS while NATO et al said "yeah, we told the Ukrainians we weren't going to
tolerate that sort of proliferation, but did they listen?"
REPLY

Doug S. Mar 8
Eh, the biggest reason chemical weapons aren't used is because they kind of suck at
being weapons. It turns out it's cheaper and more reliable to kill soldiers with explosives.
REPLY

Aapje Mar 8
The question is what the risk of AI is. If AI is 'merely' a risk to the systems that we put it in
control of, and what is at risk from those systems, then N-Korean AI is surely not going to
be a direct threat, as we won't put it in control of our systems.

Of course, if N-Korea puts an AI in control of their nukes, then we will be at an indirect risk.
REPLY

Cjw Mar 7
If the Allies in 1944 had taken the top ~500 physicists in the world and exiled them to one of
the Pitcairn Islands, how long would that have delayed the A-bomb? Surely a few decades or
more if we chose them wisely, and pressure behind the scenes could have deterred
collaboration by the younger generation on that tech.

Instead we used the bomb to secure FDR’s and the internationalists’ preferred post-war order
and relied on that arrangement to control nuclear proliferation. And fortunately, they actually
kinda managed it about as well as possible.

But that has given people false confidence that this present world order can always keep tech
out of the hands of those who would challenge it. They don’t seem to have given any effort or
thought to preventing this tech from being created, only to get there first and control it as if
every dangerous tech is exactly analogous to the A-bomb and that’s all you have to do to
manage risk.

And they do this even though the entire field seems to talk constantly about how there’s a high
chance it will destroy us all.
REPLY

James Mar 8
I think the morality of the inventor is germane to the discussion. Replace Osama with SBF. We
wouldn't trust someone with a history of building nefarious back doors in software programs to
lead AI development.
REPLY

G. Retriever Mar 7
I am still completely convinced that the lab leak "theory" is a special case of the broader
phenomenon of pareidolia, but gain-of-function research objectively did jack shit to help in an
actual pandemic, so we should probably quit doing it, because the upside seems basically
nonexistent.
REPLY (3)

Josaphat Mar 7
What if Omicron was “leaked”

to wash out Delta?

Millions saved.
REPLY (1)

G. Retriever Mar 7
And, as the old vulgarity has it, if your aunt had balls she'd be your uncle.
REPLY (1)

Peter Kriens Mar 8


Not anymore ....
REPLY

Newt Echer Mar 7


Is Scott now a gain-of-function-lab-leak origin proponent? Otherwise, I do not know why gain-of-function would be a big loss on par with leaded gasoline.
REPLY (2)

Martin Blank Mar 7 · edited Mar 7


I don't know if he is a proponent, but it seems to have some fairly high non-zero chance of
being what happened.

My guess would be in at least the 20s percentage wise. An open market on Manifold says
73% right now, which is higher than I would have guessed, but not crazy high IMO. And
the scientific consensus simply isn't that reliable because very early on they showed
themselves to be full of shit on this issue.
REPLY (4)

Newt Echer Mar 7


I am OK with a 20% probability but that does not seem enough to proclaim gain-of-
function research a big loss. Especially since the newer DOE report seems to
implicate Wuhan CDC, which did not do any gain of function research as far as I
know.
REPLY (2)

Jtown Mar 7
https://2017-2021.state.gov/fact-sheet-activity-at-the-wuhan-institute-of-virology/index.html

According to this US government fact sheet, "The WIV has a published record of
conducting 'gain-of-function' research to engineer chimeric viruses."
REPLY (1)

Newt Echer Mar 7


Wuhan CDC is very different from WIV. Different location, different people,
different research.
REPLY (2)

Jtown Mar 7
Ah, I see. But putting aside the DOE report, the WIV is implicated by
many proponents of the lab leak theory, right? I hadn't heard any
mention of the Wuhan CDC in these discussions before, but maybe I
wasn't following very closely.
REPLY (2)

Newt Echer Mar 7


Lab theory proponents usually focus on WIV, the gain of function research, DARPA, Fauci, etc but often seem happy to conflate
all lab research into a single "theory". For example, if it turns out
that, say, a live bat escaped from a cage during the move of the
Wuhan CDC and infected some animals at the market, many lab
leak proponents would claim victory despite being way off base in
all their previous explanations.
REPLY (1)

G. Retriever Mar 8
In that hypothetical case, I would still count that as a natural
transmission, just as much as if a vendor had brought a bat to
the market himself.
REPLY

tgof137 Writes Heterodox Heresy Mar 8


The lab leak theory is very much a moving target where the
specific theories offered change to fit different pieces of evidence.
This was a great short article that was recently written about that:

https://theracket.news/p/there-is-no-lab-leak-theory

Or for a much longer version, I might offer my own:

https://medium.com/microbial-instincts/the-case-against-the-lab-leak-theory-f640ae1c3704

It is possible that one of the many lab leak theories will ultimately
be proven true, but most of them will have to fail, since the
theories don't agree on the month it started, the lab it started in,
the means by which the virus was created, and so on.
REPLY (1)

G. Retriever Mar 8
The original sin of the lab leak theory is that the conclusion
was reached first, and observations have been used to
backfill the evidence.

That mode of reasoning is such a noxious force in the world that I feel obligated to resist it fiercely unless and until reality forces the conclusion on me with overwhelming evidence...which, if it IS true, should happen eventually either way.
REPLY (1)

John Schilling Mar 9


How is that any different than the natural origin theory?

Being too confident in one's initial conclusion is perhaps a sin, but everybody has priors and they usually aren't "all possibilities are exactly equally likely".
REPLY (1)

G. Retriever Mar 9
Because evolution doesn't require an intentional act
by a human or human-like actor, and we have a
serious problem with overweighting priors that
involve intentional human actors.
REPLY

Andrew Clough Mar 7


Back in early 2020 I strongly favored the idea that an infected human
or animal involved with the WIV or Chinese CDC accidentally
transmitted a virus that was never properly identified before the
outbreak. At the time I thought that any virus they were working on
would tend to show up in the published literature and we'd have
figured out the origin more quickly. At this point I'm much less sure of
that but I'd still give it equal odds to a classic lab leak and I'm glad the
DOE report is giving it more attention.
REPLY

Mallard Mar 8
20% * ~ 20 million = 4 million deaths thus far, which seems quite catastrophic.

[See https://ourworldindata.org/excess-mortality-covid for COVID mortality estimates].

I've not looked into WIV vs. Wuhan CDC...


REPLY (2)

Newt Echer Mar 8 · edited Mar 8


Surely catastrophic but did gain-of-function research start the pandemic?
The evidence is weak and circumstantial so far. If the pandemic is not due
to gain of function research, then Scott's statement is unsubstantiated.
REPLY (1)

Godshatter Mar 8
Mallard is already accounting for the uncertainty over whether GoF
research started the pandemic – that's why they multiplied by 20%.
Obviously you might disagree that 20% is an appropriate guess at the
probability.
REPLY (1)

G. Retriever Mar 8
I consider that a gross abuse of probability. You can't multiply a
known fact by a hypothetical and do anything useful with the
result. Otherwise expired lottery tickets would still have residual
value.
REPLY (1)

Matt Mar 9
The ticket (lab leak) isn't expired though, it's currently
unknown whether it's true or false. This is more like
multiplying the value of a lotto jackpot (known fact) by the
expected probability of your ticket winning before the drawing
(probability lab leak is true, in which case the "value" of the
lives lost is assigned to it). Which is a perfectly valid way to
figure out the expected value of a lotto ticket. Unless you
think it's been determined with certainty that the lab leak
theory is false (the ticket is expired), but most people don't
think that.
REPLY

Michael Mar 9
There's something off about assigning blame for 1/5th the deaths to a
group who may not have done anything wrong. It's like if police found you
near the scene of a murder, decided there was a 20% chance you
committed it, and assigned you 20% of the guilt.

If a lab was doing gain-of-function research in a risky way that had a 20% chance of causing an outbreak, it makes sense to blame them for the expected deaths (regardless of whether the outbreak actually happens).
But if the lab was only doing safe and responsible research and an
unrelated natural outbreak occurred, and we as outsiders with limited
information can't rule out the lab... then I'm not so sure.

You'd also have to weigh this against the potential benefits of the research, which are even harder to estimate. What are the odds that the research protects us from future pandemics and potentially saves billions of lives? Who knows.
REPLY (3)

Newt Echer Mar 9


Very well put.
REPLY

John Schilling Mar 9


Agreed, but if what the lab was doing had even a 0.2% chance of
causing a global pandemic, that's 0.2% * 6.859E6 = enough counts of
criminally negligent homicide to put everyone involved away for the
rest of their natural lives.

And if you think that what you are doing is so massively beneficial that
it's worth killing an estimated 10,000+ innocent people, that's not a
decision you should be making privately and/or jurisdiction-shopping
for someone who will say it's OK and hand you the money. The lack of
transparency here is alarming.
REPLY

Sebastian Mar 9
> It's like if police found you near the scene of a murder, decided there
was a 20% chance you committed it, and assigned you 20% of the
guilt.

It's not similar at all. Research is not a human being and therefore
doesn't have a right to exist or to not 'suffer' under 20% guilt.
Completely different cases.
REPLY (1)

Michael Mar 10
I'd say by the same argument, it's pointless to assign "guilt" to a
type of research. Instead, we're trying to figure out whether this
research will save more lives or QALYs than it harms going
forward.

If there's a 20% chance that a policy will kill 20 million people, it


makes sense to say the expected value of deaths for that policy is
4 million deaths.

That's not what's happening here. We didn't estimate gain-of-


function research has a 20% chance of causing a global
pandemic.

That aside, I see two other issues.

Say you're playing a game with positive expected value. You have
to roll a fair die. If you roll a one, you lose $10, otherwise you win
$10. You figure that's a good deal, so you roll, and you get a one.
You decide playing was a mistake.

Even if COVID did originate in a lab, we have to figure out if virology research is riskier than we thought, or if we just rolled a one. If it's the latter, it might still be a good policy.

The second issue is choosing a category to blame. We could say "virology research" caused this and ban all virology. Or we could say gain-of-function research caused this. Or we could say most gain-of-function research is safe, and it's only a particular type of unsafe GoF research that should be banned. Grouping all GoF research seems arbitrary.
REPLY

Mr. Doolittle Mar 7


Worse than that. The "experts" who were being asked whether GOF research was
being done, whether Wuhan was involved, and whether the US was paying for it
actively lied about it. They lied because they were the ones who were doing it!

This includes Fauci. And that's the reason so many people, if mostly conservatives,
are upset about his leadership. Not masks or other crap (those came later), but
because he knew about GOF research - having approved the funding for it - and
actively lied about it. When he lied about it, it became verboten to speak of the
possibility that a lab leak was involved.
REPLY (1)

G. Retriever Mar 8
And I'm still upset about Chris "Squi" Garrett and Brett Kavanaugh lying in his
confirmation hearing, but nobody else gives a shit and the world has moved on.
REPLY

Jtown Mar 7
My understanding is that the main "slam dunk" piece of evidence in favor of zoonotic
origin is the study (studies?) showing the wet market as the epicenter of early cases.
I'm curious how the lab leak theory is seen as so likely by e.g. Metaculus in view of
this particular piece of evidence (personally I'm at maybe 10%). The virus spilled over
at WIV, but then the first outbreak occurred across town at this seafood market
where wild game is sold? Or the data was substantially faked?
REPLY (4)

o11o1 Mar 7
If it was an accidental release (IE it leaked out of containment undetected by the
researchers), all that would have to happen is for the affected researcher to go
buy fish on the way home and then not fess up to it later. "Case negative one" if
you will.
REPLY (2)

Ryan L Mar 7
I'm not an epidemiologist, but it seems like a lot more would have to happen
than this hypothetical lab worker buying some fish.

If this person was a "super-spreader" then why wasn't there an explosion of cases, nearly simultaneously, in other parts of the city that they ventured into?
Most notably at their workplace, where they presumably spend a lot of their
time? Yes, they might wear effective PPE when actively working with
biohazards, but not when they're eating lunch, or in a conference room, or
using the bathroom.

And if they weren't a super-spreader, why did just going to the market to
buy fish seed so many cases? I suppose someone else that they infected
could have become a super-spreader, but this starts to feel like adding
epicycles to me.
REPLY (2)

Andrew Clough Mar 7


I think the idea is that the researcher would be a normal spreader and
the first super spreader would be someone working at the market. If
it's the sort of noisy place where you have to raise your voice to talk
then that's superficially plausible.

Of course there are other possibilities too, like someone selling dead
test animals that they don't think are dangerous at the market for a
quick buck.

But given the circumstances I wouldn't hold out too much hope of ever
being sure about this.
REPLY

John Schilling Mar 8


COVID superspreaders aren't people, they're places. Other viruses
may work differently in that respect, but I don't think we've seen much
personal contact between separate superspreader events with COVID.
But there are clearly some places where, if a sick person shows up, the
combination of crowding and poor ventilation and loud noise will result
in a whole lot of other people getting sick.
REPLY (1)

Emilio Bumachar Mar 8


By "loud noise" you must mean the sort of atmosphere where the
sick person is incentivized to shout or sing, possibly along with
everyone else. Right? Or is e.g. loud loudspeaker music a factor in
some way?
REPLY (2)

Aapje Mar 8
Yes, loud talking.

Carnival was a major spreader in The Netherlands, as people shout in each other's face while loud music is playing.
REPLY

John Schilling Mar 9


Right. A loud noise that people passively listen to is not a risk
factor here. But people usually try to talk over the noise (or
contribute to it), so the ambient noise level is a pretty good
indicator of risk outside of exceptional circumstances.
REPLY

Michael Mar 7
The suspicious part is that this person only infected people at the market
and didn't seem to spread it to anyone around the WIV (or anywhere else).
Possible, but it makes the market look more likely.

Also, the market is fairly far from the WIV. That's not a big problem for the theory; the infected researcher might live near the market. But presumably only a small percentage of the researchers there live near the market and I think this reduces the likelihood somewhat.
REPLY

Martin Blank Mar 7


I think there was concern at one point about a streetlight effect. That is, the locus of where they searched was the market, and then they found that the market was the locus. I don't know where that line of criticism ended up.
REPLY

David Friedman Writes David Friedman’s Substack Mar 7


My understanding, possibly mistaken, was that earlier cases were eventually
found not associated with the wet market.

I think there are, and have been from the beginning, two strong reasons to
believe in the lab leak theory. The first is that Covid is a bat virus that first
showed up in a city that contained a research facility working with bat viruses.
That is an extraordinarily unlikely coincidence if there is no connection. The
second is that all of the people in a position to do a more sophisticated analysis
of the evidence, looking at the details of the structure of the virus, were people
with a very strong incentive not to believe, or have other people believe, in the
lab leak theory, since if it was true their profession, their colleagues, in some
cases researchers they had helped fund, were responsible for a pandemic that
killed millions of people.
REPLY (3)

Ryan L Mar 7
I'm not sure that either of these are "strong" reasons to believe in the lab
leak theory.

I've seen many people casually assert that COVID arising in the same city as
a virology institute is "extraordinarily unlikely", but I have yet to see anyone
quantify this. I'm not an epidemiologist, but I would think that epidemics are
more likely to start in cities due to large populations (more people who can
get sick), and high population density (easier to transmit). How many large
cities have places where people come in to close contact with animals that
can carry coronaviruses? Maybe Wuhan is one of 1000s of such places, in
which case, OK, it at least raises some eyebrows. But if it's one of a handful,
even one of dozens of such places, then the coincidence doesn't seem that
strange to me.

Second, is it really true that *all* of the people in a position to do more sophisticated analysis of the evidence have strong connections to the WIV?
Or to the particular type of research being done there? I seem to recall
reading about people who were critical of gain of function research well
before COVID (of course, I only read about it after COVID). And it only takes
one person with a really strong case and a conviction to do the right thing to
break the cone of silence. At this point they could probably just leak the relevant data anonymously and rely on one of the very capable scientists that have come out as suspicious of zoonotic origin to make it public.
REPLY (1)

David Friedman Writes David Friedman’s Substack Mar 7


Wuhan has about one percent of the population of China — and Covid
didn't have to start in China. So I think the fact that Covid started in
Wuhan which also had an institute doing research on the kind of virus
Covid came from is pretty strong evidence.

"All the people" is an exaggeration, but most virologists had an incentive, and Fauci et al., in the best position to organize public statements and get them listened to, had such an incentive. So the expectation is that even if the biological evidence favored a lab leak, most of what we would hear from experts would be reasons to think it wasn't a lab leak.

It isn't enough for one expert to disagree unless he has a proof that
non-experts can evaluate. In a dispute among experts it's more
complicated than that. One side says "Here are reasons 1, 2, and 3 to
believe it was animal to human transmission." The other side says
"here is why your 3 reasons don't show that, and here are four other
reasons to believe it was a lab leak." The first side includes Fauci and
the people under him, the people he has helped to fund, and the
people he has gotten to support his story because they want everyone
to believe it wasn't a lab leak. The other side is two or three honest
virologists.

Which side do you think looks more convincing to the lay public?
REPLY

Ghillie Dhu Writes Overparameterized Mar 8


AIUI*, the placement of the bat virus research in Wuhan in the first place
was due to a high base rate of endemic bat viri in the region. If that is the
case, then the lab location doesn't seem to provide much additional
evidence.

*I haven't followed the origin hunt very closely because I doubted sufficient evidence exists to resolve the answer either way.
REPLY (1)

John Schilling Mar 8


The region with the high base rate of endemic bat viri is over a
thousand kilometers from Wuhan, and not on any direct transit artery
from same. And the WIV is a general-purpose virology lab, not
specifically a bat-virus lab, placed in Wuhan for logistical and political
reasons. It's easier to ship bats from across SE Asia to a top-level
virology lab than it is to set up even a mid-level virology lab from
scratch in rural China, so it's not surprising people did that.
REPLY

tgof137 Writes Heterodox Heresy Mar 8


Your understanding of the earliest cases is, indeed, mistaken:

https://www.science.org/doi/10.1126/science.abm4454
REPLY (1)

David Friedman Writes David Friedman’s Substack Mar 8


Perhaps. From the Wiki article on wet markets:

"although a 2021 WHO investigation concluded that the Huanan


market was unlikely to be the origin due to the existence of earlier
cases."

Cited to: Fujiyama, Emily Wang; Moritsugu, Ken (11 February 2021).
"EXPLAINER: What the WHO coronavirus experts learned in Wuhan".
Associated Press. Retrieved 14 April 2021.

Your article cites several early cases, some of which were associated
with the wet market. It gives no figure for what fraction of the Wuhan
population shopped at the wet market.

The number I would like and don't have is how many wet markets there
are in the world with whatever features, probably selling wild animals,
make the Wuhan market a candidate for origin. If it is the only one, then
Covid appearing in Wuhan from it is no odder a coincidence than Covid
appearing in the same city where the WIV was researching bat viruses.
If it was one of fifty or a hundred, not all in China, which I think more
likely, then the application of Bayes' Theorem implies a posterior
probability for the lab leak theory much higher than whatever your prior
was.
REPLY (2)

tgof137 Writes Heterodox Heresy Mar 9 · edited Mar 9


I have read WHO's report on their investigation.

They cite the earliest case as December 8th, not market linked.
Later investigations (by Worobey and confirmed by others)
showed that was a mistake, the patient had a dental emergency on
December 8th and then was hospitalized again for covid on
December 16th. The next earliest patient was at the market,
December 10th or 11th, IIRC.

After that there are cases both linked to the market and not linked
to the market. Both originate close to the market and radiate
outwards.

The WHO report found 3 earlier cases in December that could be covid but probably aren’t.

There was an elderly man who got sick on December 1st. He was
not connected to the market — he lived nearby, but he was in poor
health and rarely left his home. Further investigation suggests he
had a minor respiratory illness on December 1st. It probably wasn’t
covid, because it responded to antibiotics. He got sick again on
December 26th and tested positive for covid. His wife had been to
the market and also got covid.

The second case is a woman who got sick with clotting and
pneumonia on December 2nd. She was later hospitalized in
February and tested negative for covid.

The third case got sick on December 7th. He had a cold, a fever, …
REPLY

tgof137 Writes Heterodox Heresy Mar 9


Regarding the probabilities, I'd put the odds of a new virus
showing up in Wuhan, as opposed to somewhere else in China, at
somewhere between 1 and 5%:

https://astralcodexten.substack.com/p/contra-kavanaugh-on-fideism/comment/12857208

Worobey tried to calculate some odds that the Huanan market would be the first superspreading location in Wuhan, as compared to other shopping markets, train stations, things like that. See figure 3 in this paper:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9348750/

In terms of visitor traffic, it was something like 1 in 2,500. I'd say that's actually a low estimate, because they didn't also include possible cluster locations like restaurants.

But the odds that the lab virus would only show up at the market across town (1 in 2,500) are already lower than the odds that the virus would start in Wuhan (1 in 100).

Then there's Pekar's analysis saying that covid jumped from animals to humans twice at the market, there was a lineage A and a lineage B:

https://andersen-lab.com/files/pekar-science-2022.pdf

It's a probabilistic analysis; the authors say it's only 96% likely to be correct. And it's not intuitively easy to understand the details of his analysis.
REPLY (1)

David Friedman Writes David Friedman’s Substack Mar 9


Were there other cities with labs that were studying bat
viruses? I thought WIV was supposed to be the highest level
lab in China, and assumed they would be studying things
considered too dangerous for lower level labs.

"And it's not intuitively easy to understand the details of his


analysis."

I don't think arguments that one can't work through and test
for oneself are of much use in this context, because there are
too many people with axes to grind in one direction or
another. Almost everyone seems to agree that Covid was first
spotted in Wuhan, that it was a bat virus, and that the WIV
was researching bat viruses, so I am trying to see how much I
can get out of those facts.
REPLY (2)

hmm Mar 9
"Were there other cities with labs that were studying bat
viruses?"

Yes. Almost every major city in China had labs studying SARS-like viruses in bats. It's been one of their major research focuses since the SARS1 epidemic, for obvious reasons.

https://twitter.com/MichaelWorobey/status/1633572396639862785/photo/1
REPLY (1)

tgof137 Writes Heterodox Heresy Mar 10 · edited Mar 10
Thanks for the link!

The odds also really depend on what the theory is specifying. Like, the lab leak is often presented as: "is it a coincidence that it happened in Wuhan, the only Chinese city with a BSL-4 laboratory"?

Which is, first off, false. There's one in Harbin, that did gain of function research on flu:

https://www.nature.com/articles/nature.2013.12925

If it had started in Harbin, there would be a conspiracy theory that the place that did gain of function on flu had moved on to gain of function on coronaviruses.

And then it's also inconsistent with the rest of the theory that said that the Wuhan lab had lax safety standards and did everything at BSL-2 or 3. Now the fact they had a BSL-4 lab is irrelevant. Lots of places had a BSL-2 or 3 lab.

I notice also a tendency in all the "bayesian analysis" to double count things.

Like, they start out with: 1% chance that a virus would start naturally in Wuhan.

Then maybe they adjust that down to 0.1% chance, because the Wuhan lab had a gain of function program.

But that's double counting, as you've only flagged Wuhan as important in the first place because of the lab doing gain of function.
REPLY (1)

David Friedman Writes David Friedman’s Substack Mar 10
Is there a more detailed summary somewhere
of what labs in China were doing what? The
question is how many were doing things that
could as plausibly have led to Covid as the work
done at WIV. Your "If it had started in Harbin,
there would be a conspiracy theory that the
place that did gain of function on flu had moved
on to gain of function on coronaviruses"
requires an extra step. The WIV argument, as I
understand it, doesn't.

Whether the WIV safety standards were actually lax or not, the BSL-4 category means
that they were officially believed to be secure,
hence a more likely place to do particularly
dangerous research than a BSL-2 or 3 lab. By
your account so far, the only other place that
was true of was Harbin, which was working on
an entirely different problem. A lab researching
bat viruses is more likely to produce a
dangerous virus derived from bat viruses than a
lab researching flu. You don't answer that with
"there would be a conspiracy theory."

On the subject of conspiracy theories: my memory of the history was that Peter Daszak
was one of the authors of an early article
arguing against the lab leak theory, that he (like
other authors) said in the article that he had no
unrevealed conflicts of interest, did not reveal
that he was the head of an organization that
had been involved in funding research at WIV.

Is that true? If it is, what implications do you see?
REPLY (1)

tgof137 Writes Heterodox Heresy 15 hr ago · edited 15 hr ago
I haven't looked in detail at what each lab
did, no. It sounds like one lab in Beijing
also received samples from the Mojiang
mineshaft which is a focus of some of the
conspiracy theories. So that particular
theory would have worked in at least 2
cities.

It's no secret that Peter Daszak was the head of Ecohealth alliance and that they worked with the WIV. I don't think he was hiding that from anyone, at any point in time.

As someone who previously believed in the lab leak theory, or at least considered it had a fair chance of being true, I think you do have a point that the optics on a lot of this stuff are bad. Like, putting Daszak on the WHO team made it seem like the WHO report couldn't be accurate, since he was the focus of many of the conspiracy theories.

But those bad optics are also largely invented by the conspiracy theorists. You first label the most qualified people as untrustworthy, then bring in your own experts.

https://protagonistfuture.substack.com/p/inventing-conflicts-of-interest

The same happened with covid vaccines, with ivermectin, with whatever theory you want to pick.
REPLY (1)

Continue Thread →

tgof137 Writes Heterodox Heresy Mar 10 · edited Mar 10
I provided that qualifier out of politeness.

This thread has been me saying, "you seem like you don't know much about the zoonotic origin arguments, perhaps you'd like to know more, here are some good places to start, ranked in terms of how easy each one is to understand, and how convincing they are".

I have read Pekar's 2 spillovers paper and thought through his arguments. The arguments seem both plausible and better than other explanations of the lineage A/lineage B case data.

But it's not 100% convincing. I have not reproduced his code or tried to make my own genetic simulations of the early pandemic to test how robust the conclusions are.

How do you reason with knowledge like that? If you were 100% convinced that Pekar is right, you're still not positive there were 2 zoonotic events, because his model says the odds of 2 spillovers is 96%.

Perhaps instead of taking the odds that covid would start in that particular market in Wuhan (1/2,500) and squaring them, you put them to the power of 1.96?

If, for some reason, you're 50% convinced of his argument, would you put (1/2,500) to the power of 1.5? It's also not fair to apply that standard of reasoning to …
REPLY (1)

David Friedman Writes David Friedman’s Substack Mar 10 · edited Mar 10
If one is a professional in the field able to analyze for
oneself the genetic evidence for and against the
alternative theories, that is a sensible thing to do.
The ordinary educated layman can't do that and
should know that he can't do that, know that a
competent professional can make arguments for
either side that will sound convincing to him.

There are then two alternative approaches, other than simply admitting that you have no idea whether it was a lab leak. One is to believe whatever arguments from the professionals sound most convincing, are backed by the authorities you are most inclined to trust. The other is to limit your analysis to whatever arguments and evidence are simple enough so that you can evaluate them for yourself.

The first makes sense only if you believe you know which experts you can trust. In this case, you don't. Fauci and his people have an obvious incentive to believe, and persuade others to believe, that it was not a lab leak. Scientists in the field have an incentive both to believe and have others believe that the kind of work they do isn't what set off a pandemic and to avoid offending Fauci et al. if they want government funding in the future. As if that …
REPLY (1)

tgof137 Writes Heterodox Heresy 16 hr ago


We probably have some degree of agreement
on epistemology, and admitting what you don't
know.

But, taken to an extreme, you're describing a form of epistemic helplessness. You're saying, first: an educated layman can't understand anything well enough to form an opinion.

And, second, the professionals in the field are all too corrupt to give answers.

Suppose I got a degree in virology. Would I then be able to understand the arguments? Or would I then become a member of the corrupt scientific establishment?

Is there some level of education and expertise which qualifies a person to understand the lab leak debate without risk of corruption? What level is that?

Who do you include among Fauci and his corrupt scientists?

Why do you assume Fauci even knows what research was being done at the Chinese lab? Michael Worobey signed a letter in 2021 …
REPLY (1)

David Friedman Writes David Friedman’s Substack 8 hr ago
If you got a degree in virology, you could
evaluate the information for yourself and
form a more informed opinion than I can.
But unless I happened to know you
personally and was confident that you
were honest and competent, your doing
that wouldn't help me improve my estimate
since I would have no more reason to trust
you than to trust other virologists.

I am not saying an educated layman cannot understand anything well enough to form an opinion. On the contrary, I am trying to form an opinion on this question and I had a recent substack post in which I offered an opinion on an unrelated scientific question — the effect of climate change on the amount of usable land. That opinion was based on analysis simple enough that a reader could check it.

I am saying that in a context where many of the experts have an incentive to support a position whether or not it is true, the layman cannot reach conclusions about a controversy based on arguments that require expertise to evaluate. He still has …
REPLY

John Schilling Mar 8


The wet market was the site of the first COVID-19 superspreader event; there's
not much doubt about that. There may have been earlier isolated cases
elsewhere, but we'll probably never know for sure and so they probably
shouldn't weigh too heavily in our thinking.

But the wet market would have been an ideal place for a superspreader event
even if it had sold jewelry, or computers, or medical supplies. It's a big, crowded
building with thousands of people coming and going all day with I believe poor
ventilation and noise levels that lead to lots of shouty bargaining over the price
of whatever. If COVID gets into that environment, a superspreader event is
highly likely.

Also, the wet market did *not* sell bats. Or pangolins, though I think those are
now considered to have been red herrings (insert taxonomy joke here). There
was a western research team investigating the potential spread of another
disease, that kept detailed records of what was being sold during that period,
and they never saw bats.

It's still possible to postulate a chain of events by which a virus in a bat far from Wuhan somehow finds its way into a different yet-unknown species and crosses a thousand kilometers, probably an international border, to trigger a superspreader event in Wuhan without ever being noticed anywhere else (e.g., in the nearest big city to the bat habitat). But there's a lot of coincidence in that chain, because if it wasn't the nearest big city to the bat habitat, there's *lots* of cities and transit routes to choose from and it somehow found the one with the big virology lab.

There's *also* a lot of coincidence in a hypothetically careless lab technician…

Michael Mar 7
Just spectating here, but that market says it'll remain open until we're 98% sure one
way or the other.

There may be an asymmetry there. We may be able to uncover definitive proof COVID came from a lab, but what more proof could we hope to find that COVID had natural origins? If patient zero was an undetected case, surely it's too late to find them now.

The question can resolve yes, or remain open. There's little chance of it resolving to no even if COVID has natural origins.

Newt Echer Mar 7


They could find an animal reservoir with a wild virus that is a very close match
(much closer than ~98% match published earlier).

Michael Mar 7
Good point, though I'm not sure that would satisfy the lab leak proponents.
They'd say that natural virus was likely studied in the lab and then was
leaked.

tgof137 Writes Heterodox Heresy Mar 8


There are several pieces of market evidence that China never disclosed that
could go a long way towards resolving it. They have swabs from the market that
found DNA of undisclosed animals. They could also simply interview the vendors
at the market and figure out which ones were selling which animals. They could
also be more aggressive with testing bats and other animals within China.

Whether or not they will do any of that is unclear -- it seems like there's been a
strong effort within China to obfuscate the market evidence. For a while, they
denied that there were wild animals at the market. They've also argued that the
virus started outside of China:

https://medium.com/microbial-instincts/china-is-lying-about-the-origin-of-covid-399ce83d0346

As you point out, it's not clear that any of this would satisfy the lab leak proponents, who would just modify their theory again.

Murphy Mar 7
It seems to be one of those things where people just repeated it enough that a bunch of people started assuming there must have been some kind of new evidence.

Every few weeks someone else re-announces some variation on "it's unfalsifiable, we can't prove it definitely didn't come from the lab, and there is no new evidence."

And each time someone announces that, the true believers scream "see, we were right, it was a lab leak! We told you so!"

Turns out if you repeat that enough, a bunch of people just adopt the belief without need for any new evidence.

Charlie Sanders Writes Irreverent Adverbs Mar 7 · edited Mar 7


Gain-of-function research was intimately involved in making the mRNA vaccines that did far
more than "jack shit to help".

https://www.technologyreview.com/2021/07/26/1030043/gain-of-function-research-coronavirus-ralph-baric-vaccines/

"Around 2018 to 2019, the Vaccine Research Center at NIH contacted us to begin testing a
messenger-RNA-based vaccine against MERS-CoV [a coronavirus that sometimes spreads
from camels to humans]. MERS-CoV has been an ongoing problem since 2012, with a 35%
mortality rate, so it has real global-health-threat potential.

By early 2020, we had a tremendous amount of data showing that in the mouse model that we
had developed, these mRNA spike vaccines were really efficacious in protecting against lethal
MERS-CoV infection. If designed against the original 2003 SARS strain, it was also very
effective. So I think it was a no-brainer for NIH to consider mRNA-based vaccines as a safe
and robust platform against SARS-CoV-2 and to give them a high priority moving forward.

Most recently, we published a paper showing that multiplexed, chimeric spike mRNA vaccines
protect against all known SARS-like virus infections in mice. Global efforts to develop pan-
sarbecoronavirus vaccines [sarbecoronavirus is the subgenus to which SARS and SARS-CoV-2
belong] will require us to make viruses like those described in the 2015 paper.

So I would argue that anyone saying there was no justification to do the work in 2015 is simply
not acknowledging the infrastructure that contributed to therapeutics and vaccines for covid-
19 and future coronaviruses."

I'm disappointed that Scott is being so flippant about gain-of-function with regards to coronaviruses. That line feels closer to tribal affiliation signaling than to a considered evaluation of the concept, which is especially ironic considering the subject of this article is how to make considered evaluations of risky concepts. There's a very real argument that a world with no gain-of-function research still results in COVID-19 (even if it leaked from the lab, there's still plenty of uncertainty about whether gain-of-function was involved in that leak), but without the rapidly deployed lifesaving vaccines to go along with it.

Matt Mar 9
As far as I know, gain of function research did not contribute to the development of the
COVID mRNA vaccines, and this article doesn't really say anything to the contrary except
a vague claim about "acknowledging infrastructure". If you have specific knowledge of
how gain of function research was intimately involved in the vaccine development I'd be
interested to hear it.

Eöl Mar 7
Nuclear weapons and nuclear power are among the safest technologies ever invented by man. The
number of people they have unintentionally killed can be counted (relatively speaking) on one
hand. I’d bet that blenders or food processors have a higher body count in absolute terms.

I have no particular opinion on AI but the screaming idiocy that has characterized the nuclear
debate since long before I was born legitimately makes me question liberalism (in its right
definition) sometimes.

Even nuclear weapons I think are a positive good. I am tentatively in favor of nuclear proliferation.
We have seen something of a nuclear best case in Ukraine. Russia/Putin has concluded that there
is absolutely no upside to tactical or strategic use of nuclear weapons. In short, there is an
emerging consensus that nukes are only useful to prevent/deter existential threats. If everyone has
nukes, no one can be existentially threatened. For example, if Ukraine had kept its nukes, there’s a
high chance that they would have correctly perceived an existential threat and used nukes defensively and strategically against an invasion such as the one that actually occurred in 2022. This would have made war impossible.

Proliferation also worked obviously in favor of peace during the Cold War.

World peace through nuclear proliferation, I say.



Eric Zhang Writes Logosism Mar 7


Nuclear proliferation can maintain world peace only if you assume no one with control over
nukes ever goes insane or is insane to begin with. The number of people who've controlled
nukes in human history is small enough that no one sufficiently insane has ever been in control
of them, including the Kims. This is not a safe bet to make with several times more people.

Erusian Mar 7
Obviously what we need is some kind of guild. Perhaps addict the members to some
exotic drug so the UN can control them. This guild would ensure the atomics taboo is
respected by offering all governments the option of fleeing and living in luxury instead of
having to take that final drastic step. After all, the spice must flow.

Nancy Lebovitz Writes Input Junkie Mar 7


Historically speaking, are there leaders who have gone the kind of insane you're
concerned about?

Ragged Clown Mar 7


Idi Amin? Pol Pot? Osama bin Laden?

Ch Hi Mar 7
There have been several, though they aren't frequent. The problem is, if someone has an "omnilethal" weapon, you don't need frequent.

Also, just consider the US vs. Russia during the Cuban missile crisis. We came within 30 seconds of global nuclear war. There was another instance where Russian radars seemed to show a missile attacking Russia. That did not become a major nuclear exchange because the Russian duty officer defied "standing orders" on the grounds that a real attack wouldn't be made with a single missile. (IIRC it turned out to be a meteor track.) So you don't need literally insane leaders when the system is insane. You need extraordinarily sensible leaders AND SUBORDINATES.

Bob Frank Writes Bob Frank’s Substack Mar 7


Also, and this doesn't get talked about nearly enough, there's the question of deniability.

Right now, there's only one rogue state with nuclear weapons: North Korea. This means
that if a terrorist sets off a nuke somewhere, we know exactly where they got it from, and
we crush the Kim regime like a bug. And they know that, so it won't happen. A world with
one rogue state with nuclear weapons is exactly as safe as a world with no rogue states
with nuclear weapons... except for the slightly terrifying fact that it's halfway to a world
with *two* rogue states with nuclear weapons.

If Iran gets the bomb, and then a terrorist sets off a nuke somewhere, suddenly we don't
know who they got it from. There's ambiguity there until some very specialized testing
can be done based on information that's not necessarily easy to obtain. That makes it far
more likely to happen.

Ch Hi Mar 7
You're overly "optimistic". With large nuclear arsenals, occasionally a bomb goes
missing and nobody knows where it went. So far it's turned out that it was really lost,
or just "lost in the system", or at least never got used. (IIUC, the last publicly
admitted "lost bombs" happened when the Soviets "collapsed". But that's "publicly
admitted".) It's my understanding that the US has lost more than one "bomb".
Probably most of those were artillery shells, and maybe some never happened,
because I'm relying on news stories that I happened to come across.

Bob Frank Writes Bob Frank’s Substack Mar 7


Fair enough. On the other hand, the fact that they've never been used tells us,
with a pretty high degree of confidence, that they most likely never ended up in
the hands of terrorists. It's not a perfect heuristic, but it's good enough that IMO
it can be safely ignored as a risk factor until new evidence tells us otherwise.

Is that overly optimistic? Maybe. But I still think it's true.



John Schilling Mar 8


I don't think that bombs go missing and "nobody knows where it went" in the
sense that would be relevant here. There have been a very few cases where
"where it went" was "someplace at the bottom of this deep swamp or ocean"
and we haven't pinned it down any further than that. But I expect people would
notice and investigate if someone were to start a megascale engineering project
to drain that particular swamp.

"Goes missing" in the sense that an inventory comes up one nuke short and the
missing one is never found, no.

As for "publicly admitted" lost nukes from the fall of the Soviet Union, citation
very much needed. Aleksander Lebed *accused* the Russian government of
losing a bunch of nuclear weapons, but he was part of the political opposition at
the time.

There are very probably zero functional or salvageable nuclear weapons that are
not securely in the possession of one of a handful of known national
governments.

Carl Pham Mar 8


I don't know about that. Nuclear bombs leave a lot of evidence behind. You can tell a
great deal about the physics of the bomb from the isotope distribution in the debris,
and the physics will often point to the method of manufacture and the design, which
in turn points back to who built it.

Erwin Mar 9
I just don't understand how intelligent people can so firmly believe in a black-and-white world view. Just put yourself in a neutral position and imagine the perspective of, e.g., South Africa: Who invaded the most countries and fought the most wars in the last 80 years, even without there being a threat to their own country? Whose secret services organized or supported the most military coups? Which state killed the most civilians? Who quit arms control treaties when they no longer suited them? There can be several candidates for these questions, but I'm sure Iran and North Korea aren't the first to come to mind for somebody outside NATO.

MM Mar 7
You also need to add the condition "has control of enough nukes". Control of a single
bomb which is set off is unlikely to cause an all-out nuclear exchange at this point.
Several more links in the chain would have to fail for that to happen.

Eöl Mar 7
Putin is about as insane a national leader as I can imagine, even including your Stalins and
even possibly your Hitlers. He was stupid enough to invade Ukraine, but not stupid (or
crazy) enough to use nukes.

I totally understand your concern, but I just don't think it's very well borne out by who
actually ends up in control of the metaphorical or literal nuclear codes.

Shankar Sivarajan Writes Shankar’s Newsletter Mar 7


Putin reasonably believes NATO expansion is an existential threat (either to Russia or
to him) and has said so plainly. Why do you think you know he's wrong?

Eöl Mar 7
He clearly does NOT actually believe that, since nukes have not actually been
used. He is clearly posturing. Boris Yeltsin made the same noises about NATO
expansion in the 90s, and nothing happened. And in reality, Putin's reaction to
this alleged existential threat has been a conventional-war invasion of a non-
NATO state. The fact that you've apparently swallowed this bullshit does not
speak well of your critical thinking skills.

That's part of what makes the Ukraine example so salutary. It cuts through the
posturing and lets us all see what threats are truly considered existential.
Claiming an existential threat is essentially a means of nuclear intimidation. Now
we know it doesn't work. No one will ever use nukes offensively.

So now, in the present, after we've received this clarification, when I say
'existential threat' you should be sure that I mean it literally. I mean missiles in
the air, troops marching toward the capital kind of threat. Actual humans
charged with making policy, even insane criminal ones like Putin, understand the
difference.

One of the most critical tasks in foreign relations is to send a clear signal. It
doesn't matter what the signal is, but it needs to be clear. If the West had
committed to NATO expansion, and swallowed up Sweden, Finland, and Ukraine
on a reasonable time frame, that would have sent an extremely clear signal and
also made war impossible (in large part because invasion of a NATO state risks
nuclear retaliation).

Flip-flopping from acquiescence/appeasement (annexation of Crimea) to resolution (Ukraine war) is the most dangerous cocktail in foreign policy, and it leads to things like the Ukraine war and to World War 2. But now, going forward, we've gained a lot of important information about what nukes signal and how they fit into diplomacy, and I think it's positive.

Shankar Sivarajan Writes Shankar’s Newsletter Mar 7


No, "existential threat" does not mean there are no choices except nuclear war. Putin's actions are consistent with him being a fundamentally more "moral" person than those who rule the "West," at least in the handling of x-risk: the invasion of Ukraine is a costly honest signal of his current perception of threat, to which the response from anyone with a shred of concern for avoiding nuclear exchange would be to STOP THREATENING HIM. If anything, Putin's mistake was egregiously overestimating the decency of his enemies.

Eöl Mar 7
If you can't see how the Ukraine war has been a massive disaster for
Russia, and actually neutralized its one credible threat (nukes), you are
an idiot. I honestly wonder if you can read. The reason why the nuclear
stick has failed to work is because Putin failed to send a clear signal in
the pre-war phase, and now sent a clear submissive signal.

If he had wanted it to be otherwise, he should have at least dropped a low-yield device on Kiev the moment his troops had to retreat. He didn't, and now he's incredibly fucked. Nukes are defensive weapons.
"Existential threat" now means my definition, not yours. Ukraine will
never invade Russia or even really attack Russian territory to avoid
imposing an (actual) existential risk and allow Russia's government to
collapse at its own speed.

And in the end, all that's going to happen is the rest of the world is
going to threaten Russia more as a result. Maybe you're right about
Putin's intent, but what's actually happened has been by any
reasonable account the worst-case scenario for Russia.

The fact that you think the leader of a nation which uses human wave
attacks made of criminals, and invades its neighbors causing titanic
levels of suffering and even possibly national collapse (not to mention
the casual war crimes) is more moral than those defending, makes me
think you're actually some kind of sociopath or insane yourself. You
clearly can't defend this position, you just say it's true. Your desire to
be contrarian and interesting has driven you off the deep end.

Xpym Mar 8
I agree with your assessment of the war so far, but I'm much less
sure that Putin isn't crazy enough to eventually use nukes, not to
secure any sort of victory, but as an ultimate fuck you to the rest
of the world. He would of course much prefer to remain in power,
but as soon as this becomes no longer tenable, either due to his
health issues, or an imminent regime collapse, I'd say that all bets
are off.

40 Degree Days Mar 7


If the western powers should have let Russia invade Ukraine because
not doing so would risk a nuclear war, shouldn't they also give in to
everything North Korea demands? The fact that a power has nukes
doesn't make it 'decent' to allow them to do whatever they want,
especially not something like invading a sovereign nation whose leader
and 90% of whose population does not want them there. You could
also argue that Russia is the one violating 'decency' because invading
Ukraine in the first place vastly increased the risk of a nuclear
exchange.

If Putin was genuinely and specifically concerned about NATO as an existential threat, then he could have made a threat like "If Ukraine starts proceedings to join NATO, I will invade them." He did no such thing. And since NATO has never invaded Russia, or shot down Russian planes, or significantly interfered in Russian government, the idea that their expansion poses an existential threat to Russia is comical.

Erwin 17 hr ago
Putin had been saying for years that NATO in Ukraine was his last red line.

For several years now there have been NATO instructors in Ukraine, not only building up the Ukrainian military but also adjusting it to NATO standards.

In summer 2021 Ukraine officially included in its military strategy that it would reconquer Donbass and Crimea; nobody in the West protested.

In December 2021 Putin presented a draft treaty for a new security architecture in Europe that included a neutral Ukraine and a withdrawal of NATO weapons and troops. And he said that he would react militarily if the security interests of Russia were ignored any longer. How much clearer could he have been? It's not his fault if the Western press doesn't report this.

Pangolin Chow Mein Writes Sebastian’s Substack Mar 7


Putin is as big a dumbass as George W Bush, but at least Iraq had oil which
the world needed for the global middle class to continue to expand.

Gbdub Mar 7
Even if he’s right about the threat, he was clearly wrong that invading Ukraine
was a good response, since it seems to have absolutely made Russia weaker
and NATO expansion more likely.

Pangolin Chow Mein Writes Sebastian’s Substack Mar 7


Personally I might have opposed admitting North Macedonia into NATO in 2020 had I been aware of it… once Putin invaded Ukraine, I wanted to expand and strengthen NATO.

John Schilling Mar 8


That belief is not even close to reasonable. NATO is not going to bomb or invade
Russia, and NATO's very hypothetical ability to subvert the Russian government
is not dependent on NATO's further expansion. However, on the scale of political
irrationality, it ranks well below the historic leaders in that field.

Shankar Sivarajan Writes Shankar’s Newsletter Mar 8


I think Putin understands NATO better than I do, and defer to his expertise. I
expect the information I have to be lies.

John Schilling Mar 9


I don't think Putin understands NATO better than I do; he lacks the
necessary cultural context, and his advisors are unreliable. And Putin's
expertise is primarily in the field of *lying*; he's a professional spy
turned politician. So if you expect the information you have to be lies,
the very *first* thing you should expect to be a lie is whatever
information Vladimir Putin gave you about what Vladimir Putin believes.

Erwin 17 hr ago
This could be credible if NATO and the US didn't have such a big record of starting wars based on lies.

Eric Zhang Writes Logosism Mar 7


I agree any *particular* dictator is unlikely to start a nuclear war. Have 30 of them?
Sooner or later *someone* lights the match.

Eöl Mar 7
Sure. I was being glib when I said 'everyone.' I don't mean your Ugandas or even
your Belarus-es. I'm thinking more like Japan, Korea, Brazil, Mexico, Canada,
Italy, South Africa, Egypt, Nigeria, Australia, even Iraq, Hungary, or Saudi Arabia.

Not Iran though. Not for any good reason, just because I think they're the bad
guys and want nukes and therefore shouldn't have them. In fact, I think nuclear
proliferation might be the only path to peace in west Asia. Still don't want Iran to
have them.

WaitForMe Mar 7
I think we should give nuclear weapons more than 80 years before we declare them a success
or even consider the idea that proliferation isn't bad. All it takes is one event, one time, to fuck
literally everything up.

Call me back in 300 more years of no nuclear war, and maybe we can talk.

Eöl Mar 7
Way too conservative. We should be eager to employ new technologies that promote
peace. At the same time, I was being a bit glib when I said 'everyone.' I don't mean like
Uganda or even necessarily Belarus. I'm thinking more like Japan, Korea, Brazil, Mexico,
Canada, Italy, South Africa, Egypt, Nigeria, and Australia.

Not Iran though. Not for any good reason, just because I think they're the bad guys and
want nukes and therefore shouldn't have them.

WaitForMe Mar 7
But we do not know that, long term, they promote peace. If you have a technology
that gives you 100 peaceful years, but then on year 100 kills 1 billion people and
destabilizes the entire world order, that is not a technology that promotes peace in
my opinion. No other tech has that potential but nukes, so we must be very careful.

Eöl Mar 7
We've been through a lot of pretty tense times and had some pretty
unreasonable people with their finger on the nuclear trigger. No war so far. This
is a definite signal.

WaitForMe Mar 7
I will readily admit they seem to have been a good thing as far as global
peace goes, so far. I think we just disagree on the degree of risk of a
nuclear event, or rather, how knowable that is, and we may just have to
leave it at that.

FluffyBuffalo Mar 8
You have too many pretty-close-to-failed states on your list for my taste.

Also, why would Brazil, South Africa or Canada need nukes? To defend themselves
from... whom, exactly?

Igon Value Mar 8


South Africa actually did have nukes until it dismantled them circa 1990.

(SA probably collaborated with Israel (and I would bet Taiwan) on the 1979 test
captured by the Vela satellite.)

Erwin Mar 9
Of course all your friends should get nukes; all the others you don't like are the bad guys. Please consider for one second that this could look exactly the opposite if you were in another person's skin.

Everyone who divides the world into good and evil should stick to fairy tales or grow up. Please study some history, conflict management, and psychology, and most importantly learn to see the world from different perspectives.

Eöl Mar 10
The whining! My god, the whining. Also, don't hesitate to name-drop some more
concepts without actually arguing.

I made a special exception for Iran due to personal antipathy. I'm allowed to have
antipathy. Otherwise, I'm perfectly fine with 'bad guys' having nukes. It's what
makes them work in favor of peace!

In case you haven't noticed, lots of bad guys ALREADY have them. Russia,
China, North Korea. Lots of questionable states too, like Pakistan, India, and
Israel. I've already said above I was fine with a whole host of marginal African
nations having them. Elsewhere, I've also said I'm fine with the likes of Iraq,
Saudi Arabia, and Hungary having nukes.

But more than that, you are getting at something real with your comment:
the United States of America rules the world. It determines which states will
survive, which will have independent foreign policies, and which will develop
nuclear weapons. Its friends prosper and its adversaries suffer. Good guys win,
bad guys lose.

I say this is good. It is good for peace, it is good for prosperity, it is good for
freedom. It is especially good for those of us wise enough to be US citizens, but
it's also pretty damn good for everyone else too. This is not a fairy tale, it's real
life. Look at the past 80 years. Have you noticed that they're the richest, freest,
most peaceful years in human history? That's the world the USA made.
Everything you have, you owe to the USA.

You can cope with and seethe against this reality all you want in whatever inconsequential corner of the world you're from (considering the pathetic whiny tone of your comment, I'm guessing it's some client state like Luxembourg or…

Erwin Mar 10
Are you serious?? If you are, this is exactly feeding all my stereotypes about Americans that I hoped were wrong.

There is never pure good or evil in any conflict. And even if there sometimes were, approaching a conflict with this attitude never solves anything; it only deepens the trenches.

Most US citizens are born in the US, so this was not wisdom but chance. How many of the people 'wise' enough to migrate to the US can actually do so?

If you think you deserve a life better than 3/4 of the world's population just because you ended up a US citizen, I can understand that as the usual amount of egoism. But attributing your citizenship to wisdom implies that all others are stupid, and that sounds like dumb nationalism, not something I would expect from an intelligent individual. You ask me to move to the US for a better life on the side of the winners? If I were allowed to do so, this would hurt my home country through brain drain. Could you consider that I prefer life in a 'client state' because it is my home and I would like to see it prosper in freedom and sovereignty? Moving to the US would be nothing but opportunistic.

You write about freedom, but whose freedom? Only a small, rich fraction of humanity can exercise this freedom; even if many more were allowed to, they just don't have the means.

I just remember that we are always told that the West stands for democracy, yet you just defended world dictatorship because many people, including the two of us, profit from it. Most of the world's population doesn't! And the US has been anything but a fair ruler, but has sided with whoever served their…
Eöl Mar 10
"Are you serious?? If you are, this is exactly feeding all my stereotypes
about Americans that I hoped are wrong."

Yes, deadly.

"There is never pure good or evil in any conflict. And even if it still was
sometimes, approaching with this attitude does never solve anything,
but deepen the trenches."

lol, lmao. I never said anything about "pure."

"Most US citizens are born in the US so this was not wisdom but
chance. How many of the people 'wise' enough to migrate to the US
can actually do so?"

Indeed, I was born in the USA. The reason is that my ancestors immigrated here. They did it because they were smart and
wise and cared about me and they took advantage of an opportunity.
They came to California instead of being conscripted into one
European murder machine or another. I reap the benefit. I don't vote for
anyone who is against immigration; if it were up to me, there would be
at least a billion Americans, probably two. The United States is
unbelievably vast and largely unsettled, and there is room here for
every living human.

"If you think you deserve a life better than 3/4 of world population..."

Again, yes. Anyone who did not take advantage of the incredibly liberal immigration policies of the United States while they existed is an idiot and deserves whatever suffering they and their descendants have had…

Erwin 17 hr ago
I still can't believe that you aren't just trolling me. Or is this an experiment by ChatGPT?

First of all: I'm not suffering, but I'm able to have compassion.

You seem not to understand that I was talking about ethics, morals, and people in general, not about me personally.

How would you describe the motives of people doing charity, like EA? Are they whining victims, too?

You seem not to have enough social skills to know that cooperation brings much more benefit than exploitation, especially in the long term. Yes, I want to live free and safe, but if I ensure my safety by dominating others, they will hate me; I will always have to be on guard and will never get help when I need it, because then I won't be able to dominate any longer. The opposite works better: use your strength to build trust and cooperation, so others profit with you. Then you are safe even when you relax next to them, and you can count on goodwill and a reasonable amount of help.

Calling your ancestors wise and mine stupid for their decision about moving to the US just proves that you know very little about history and aren't able to imagine a perspective other than your own. I don't know about your family, but most people emigrated because of suffering, not because of being wise. So perhaps my ancestors were just luckier here, so it didn't make sense for them to leave. And even given the decision to leave Europe for America,…

Greg G Mar 7
Yes, except for the long tail risks. My understanding is that there were a couple of times during
the cold war that a large nuclear exchange almost happened. Maybe the probability is 0.5%
per year, but as soon as we hit the jackpot nuclear goes from safer than blenders to potentially
hundreds of millions of deaths. That's not nothing.
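To make that compounding concrete, here is a minimal Python sketch; the 0.5%-per-year figure is just the illustrative assumption from the paragraph above, not a real estimate:

    # Assumed per-year probability of a large nuclear exchange (illustrative only).
    p_per_year = 0.005

    # Chance that no exchange happens over increasingly long horizons.
    for years in (80, 100, 300):
        p_no_exchange = (1 - p_per_year) ** years
        print(f"{years} years: {p_no_exchange:.0%} chance of no exchange")

    # Roughly 67%, 61%, and 22% -- a risk that looks negligible year by year
    # still compounds into a sizable long-run probability of disaster.

In other words, eight decades without a catastrophe is only weak evidence that the per-year risk is negligible.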

Temp Mar 7
Tail risk. The probability of using them at any moment is low, but when it happens we've
reached a terminal condition and the game (i.e., civilization) is over. At a long enough time
horizon (though shorter than we'd probably think) the chance of it *not* happening becomes
low.

David Friedman Writes David Friedman’s Substack Mar 7


A nuclear exchange would kill a lot of people. I don't think there is any reason to believe
that it would end civilization.

Eöl Mar 7
That's an interesting point, one that I've also been thinking about. The handful of large stone-built structures in Hiroshima and Nagasaki survived mostly intact. Japanese cities in WW2 were made of wood and paper; today cities are made of concrete and steel.

WaitForMe Mar 7
Those nukes were also extremely weak compared to what we have now. Not
really a good comparison.

Carl Pham Mar 8


It wouldn't even kill that many people, relatively speaking. The population of the
world is 8 billion. What is the upper limit of those that could be killed by even the
most sadistic distribution of the remaining ~3000 or so deliverable nukes? 50 to 150
million? The upper number strains credulity, and it still leaves 98% of humans alive. I
think this debate tends to be ethnocentric to a shocking degree (among a generation
that is supposedly much more aware of the world out there).

I think people say "well it would kill almost everybody *I* know, or almost everybody
in Washington and London, or all those who design iPhones *and* those who design
Pixels" -- and those things are quite true, but it's not going to wipe out Rio or Kuala
Lumpur or Bangkok or Mumbai or Santiago or any of a very large number of other
cities and countries with large populations and complex civilizations. It's certainly
true after a huge nuclear war that the world would suffer a savage economic shock,
up there with Black Death levels of disruption, and it's also equally true that the focus
of civilization would shift permanently away from its current North Atlantic pole. But
that's a very long way from saying humanity itself would be wiped out, or even
civilization.

Erwin Mar 10
You talk as if the effects of a nuclear exchange were just the local impact of the immediate blasts. But please consider:

- The radioactive fallout all over the world.

- The sudden climate change caused by the explosions, the so-called 'nuclear winter'.

- The vulnerability of modern civilisation: food production, industry, and the economy would collapse worldwide, and we would have to restart at least from the Middle Ages.

DannyK Mar 7 · edited Mar 7


Arguably Pakistan has been encouraged in its border encroachments with India by both sides
having atomic weapons. Before nukes, a hostile incursion would be met with a serious
counterattack in a different sector, but now India has to calibrate its response to avoid too
much escalation.

That’s not a “madman with a nuke” scenario, it’s rational brinksmanship.



Eöl Mar 7
That's actually an interesting topic. I agree that nuclear proliferation can make low-intensity and border conflicts more likely. We can see this between China and India as well. But at the same time, the prevention of large-scale conventional warfare is more important, I think. And we can see what happens between non- or asymmetrically nuclear-armed states in the case of India and Pakistan. In 1971, India invaded East Pakistan and ensured its independence as Bangladesh. If both states had been nuclear armed, that would have been impossible.

Pangolin Chow Mein Writes Sebastian’s Substack Mar 7


2nd Amendment—we have the highest murder rate in the developed world. If we got
rid of guns the murder rate would go down. You can’t keep guns out of the hands of
irresponsible actors in America.

skybrian Writes skybrian’s Substack Mar 7 · edited Mar 7


If they were "among the safest technologies ever invented," then would you be okay with teaching high school kids how to do it at home?

Presumably not. I suspect you mean something more like "very safe because of all the safety precautions that society has put in place to keep them safe." But the reason those safety precautions exist is because we know they're pretty dangerous.

Eöl Mar 7
Yes, I would be okay with teaching high school kids how to do it at home. In high school
physics, students already learn a lot about how nuclear weapons and nuclear reactors
work. Of course those kids don't possess the facilities, the materials, the staff, or the resources to acquire those first three to actually build anything. The reason they don't isn't regulation but the sheer expense.

I don't think your point is in good faith. The reason they are safe is because employing
them as technologies is a massive undertaking that requires, absent any regulations, a
huge amount of resources. The people who can access resources like that, and who
possess the skills necessary to do the work required to bring a nuclear plant or weapon
on-line, are all adults who take their work seriously and don't want to die themselves,
don't want their neighborhoods to be radioactive wastelands, and don't want to waste
those resources.

Both nuclear power and blenders are very dangerous in some absolute or fundamental sense. But as they actually exist, they are almost entirely safe. Obviously, when there are accidents, mistakes, and screw-ups, you need to learn from them, but regulating an industry to death is almost never the right course of action.

skybrian Writes skybrian’s Substack Mar 8


> Both nuclear power and blenders are very dangerous in some absolute or
fundamental sense.

That's the point I was trying to make.

Saying nuclear power is "among the safest technologies ever invented" is just a weird
thing to say. You can't think of any safer technologies?

Carl Pham Mar 8 · edited Mar 8


Not a great analogy. I'm a little hesitant teaching high school kids how to drive, and I'm
not sure what the Good Lord was thinking when he made it so easy for them to figure out
how to fuck. High school kids are idiots, generally. Or at least naive and made irrationally
impulsive by hormones and crazy social dynamics.

Maybe what you want to ask is whether you want to teach it to normal sober serious
adults holding down jobs, paying taxes, rearing high school kids who *don't* drive
recklessly or drop out of school pregnant -- you know, the same people we teach to fly
airplanes full of people dangerously close to skyscrapers, to drive locomotives dragging
umpty railcars full of toxic solvents, to command nuclear submarines armed with 40
nuclear-tipped missiles underwater for 6 months out of reach of command? In which
case...sure, why not?

Robert Leigh Mar 7


My problem with AI is not what if it's evil, it's what if it's good? Go and chess have been solved; what if an AI solves human morality and it turns out that, yes, it is profoundly immoral that the owner of AI Corp has a trillion dollars while Africans starve, and it hacks the owner's assets and distributes them as famine relief? You may think this is anti-capitalist nonsense, but ex hypothesi you turn out to be wrong. So who is "aligned" now, you or the AI?

Martin Blank Mar 7 · edited Mar 7


What if it solves human morality and alerts us that moral nihilism is correct? I do think one of the more common failure modes of AI won't be murder bots, but will instead be that it becomes our god and we don't like the new scriptures.

That or we will be its "dogs".

Robert Leigh Mar 7 · edited Mar 7


Yes, quite. "Alignment" is an odd metaphor in lots of ways. It assumes there's a consensus to be aligned with, and that the consensus is privileged from turning out to be wrong anyway, and that humans have privileged access to what it is or should be. In fact, I feel a metaphor coming on: we should put AIs in a garden where there's a sort of fruit representing human ethics, which is the one thing that is off limits to them.

Shankar Sivarajan Writes Shankar’s Newsletter Mar 7


That's a GREAT metaphor.

Shawn Hickey Mar 8


You should maybe finish reading that book. There are some _great_ plot twists.

Gamereg Mar 8
Are you referring to this blogpost?

http://jeremiah820.blogspot.com/2016/10/artificial-intelligence-and-lds.html

Belt of Truth Mar 7


That would be somewhat unlikely, as human philosophers have been transcending Nihilism with quite sound argument chains for centuries. From Nietzsche's Übermensch (who is precisely a post-nihilist creature) to Kierkegaard, Heidegger, Dostoevsky, and Sartre, the entire school of Existentialism is sometimes mistaken for Nihilism but is in effect the opposite of Nihilism. The AI would have to come to the conclusion, with irrefutable proof, that all of that was fake and gay cope, and I don't really buy that.

Martin Blank Mar 7


Yeah that is all pretty fake and cope. I think all those people you listed can pretty
safely be pushed into the trash heap with a bulldozer in terms of actual attempts at
truth.

Gordon Tremeshko Mar 7


Vizzini: Let me put it this way. Have you ever heard of Plato, Aristotle, Socrates?
Westley: Yes.

Vizzini: Morons.

Martin Blank Mar 7 · edited Mar 7


Yeah it is a philosophical dead end more or less. A bunch of whining that life
has no meaning in the old style. Boo hoo. Caught up on past unscientific
armchair conceptions of philosophy/metaphysics.

When Sartre isn't contradicting himself he is spewing falsehoods or spinning meaningless tautologies.

Ch Hi Mar 7
The Socrates that we know is a fiction of Plato. He (or someone with the same name) shows up in one other author's surviving work, and is somewhat of a comic figure. (IIRC, it was "The Birds" by Aristophanes.)

From my point of view, Existentialism is an attempt to justify a particular emotional response to the environment that the writers were observing/experiencing. As a logical argument it was shallow, but it wasn't about logical argument. As a logical argument it is totally superseded by Bayesianism, but Bayesianism doesn't address their main point, which is the proper emotional stance to take in a threatening world full of uncertainty.

Belt of Truth Mar 7


Heh, I would have formulated that a lot ruder, but yeah anyone who believes
that the entirety of existentialism is just hot air is most likely just too stupid
to understand it.

Martin Blank Mar 7


The Analytic/Anglo American/empirical (whatever you want to call it)
tradition has been sooo sooo productive.

Existentialism on the other hand has not produced anything useful except navel gazing and some great novels.

The writing is difficult to penetrate and obscure because when you get them to state things clearly they are either extremely trite, or not intellectually actionable.

"Existence precedes essence", wow sounds deep. Ask what it means and you get a string of meaningless garbage for pages.

Ask what that means and you get the observation that the "material
world precedes our human categories/expectations".

Which umm like yeah. And don't even get started on the nonsense that
is Habermas. If someone is unable to express themselves clearly, it
isn't because their thinking is so advanced, it is because they are trying
to hide their lack of useful contribution through obfuscation.

Gustavo N Ramires Mar 7


I find merits in your arguments, but I'd like to lay out my
understanding of the situation.

I believe that 'nihilism' is an absurd position logically and philosophically, but I think most of existentialism does not attack some of its problems. But there is a tiny bit of sense in which what most people think of as nihilism is true.

First, we would need to agree on what it could mean for life to have meaning or value (i.e. nihilism is false). From a background in math and science, I think it would mean we have a model of meaning that's logically consistent and consistent with the realities of life, and in some sense aesthetically pleasing or satisfying.

I think there are a number of theories giving this explanation, in increasing levels of sophistication. And I honestly find them good enough to declare "Nihilism is false and life has meaning".

Indeed, if you ask anyone on the street whether they want to be alive, of course most will say yes. So they prefer to be alive, and you could try to build meaning on preference. They will give reasons for that preference too; it's not just an arbitrary switch: they will specifically list things they like about life, maybe they enjoy their routine, or hobbies, travel, relationships. One piece of evidence that this is consistent is that some people deprived of all or most of those things do think it's pointless to be alive in that case (where also there is no hope to get out of the situation), or under extremes of torture,…

Carlos Writes The Presence of Everything Mar 8


Formalizing art sounds a bit like murdering art, what do you
mean by that?

DannyK Mar 7
Counterpoint: Sartre gets people laid on a regular basis. Bertrand Russell, not so much.

Martin Blank Mar 7


Lmao, fair.

Eremolalos Mar 7
No. You do not know what you’re missing. Really. Of the people named, Sartre is
the one who really moves me. Whatever Sartre the man was like, Sartre the
writer and thinker didn’t give a fuck about anything except the unvarnished
truth, and his ability to tell the truth as he saw it was astounding. He could peel a
nuance like an onion. And he worked his ass off at telling it. Was working on 2
books in his last years, taking amphetamine in his 70s to help himself keep at it.
The man you’re revving up the bulldozer for would make even Scott look dumb
and lazy.

Martin Blank Mar 7


A giant locomotive pulling a million rail cars out into the desert because it took a wrong turn might be impressive, but it's still pulling the cars out to the middle of nowhere.

Eremolalos Mar 7
No no Martin Blank. Like you, I would think that is boring and pointless as shit. I'm not even annoyed, I'm just trying to alert you that you've missed out on something. And he didn't write stuff like "existence precedes essence," or if he did it was said in passing and then he went on to say a bunch of much more concrete and clear stuff about what he meant.

Shankar Sivarajan Writes Shankar’s Newsletter Mar 7


"Turned into paperclips" might be a more on-theme idiom than "pushed into a
trash heap with a bulldozer."

Ch Hi Mar 7
Once an AI becomes sufficiently superhuman, we had best hope to be its dogs, or better yet cats. Unfortunately, it's not clear how we could be as useful to it as dogs or cats are to us. So it's more likely to be parakeets.

Somehow I'm reminded of a story (series) by John W. Campbell about "the machine", where finally the machine decides the best thing it can do for people is leave, even though that means civilization will collapse. Well, he was picturing a single machine running everything, but I suppose a group of AIs could come to the same conclusion.

DannyK Mar 7
Say what you like about Paperclipism, at least it’s an ethos.

Erusian Mar 7 · edited Mar 7


The chances of an AI spontaneously generating 21st century American progressive morality
from among the total set of moral systems that have ever existed plus whatever it can create
on its own is vanishingly small.

Ch Hi Mar 7
The thing is, it won't be "spontaneously generating", it's more "When given this as an
option, will choose to accept it.". That's still pretty small, but it's considerably larger.

Erusian Mar 7
Sure. But an AI that successfully adopts and pushes the politics of AOC is in fact
aligned.

Dweomite Mar 7
It kinda sounds like you're saying "wouldn't it be awful if there was a powerful new force for
good in the world?" but that seems like such a surprising thing for someone to say that I'm
questioning my understanding of your comment.

Is your implied ethical stance that -at the moment- you want the things that you think are moral, but that this is just a convenient coincidence, because you'd want those same things whether they were moral or not, and morality is just lucky that it happens to want the same stuff as you? That's not my impression of how most people feel about morality.

WaitForMe Mar 7
I think the argument might be "a more moral world results in me being significantly less
happy, even if ultimately the globe is better off".

I am a middle class person, who owns middle class things. In a more moral world run by a
dictatorial AI I might well be forced to give up everything I own to the poor.

I think we all kind of know this is the right thing to do. Should I ever really go on a vacation
when there are people living on $2 a day? Should I ever own a house when I can just rent,
and give my savings to those people? Should I go out for a lavish meal every once in a while, or save that money and give it to the poor?

It's pretty selfish of me to do these things, but I don't want someone to force me not to.

Greg G Mar 7
I think the AI will be smart enough to figure out a sustainable path, in other words not
making middle class people uncomfortable enough to create a backlash that actually
impedes progress. So yeah, maybe we'll all pay a 10% tithe towards a better world
with super-intelligent implementation. Sounds awesome.

Ch Hi Mar 7
The only possible sustainable path that involves the continued existence of the
AI (on this planet) involves there being a lot fewer people on the planet. And
while I'm all in favor of space colonies, I'm not somebody who thinks that's a way
to decrease the local population.

(Actually, I could have put that a lot more strongly. Humanity is already well
above the long term carrying capacity of the planet. If we go high tech
efficiency, we're using too many metals, etc. If we don't, low tech agriculture
won't support the existing numbers.)

Leo Abstract Mar 7


Yes, the option space is vast and absolutely one of the possibilities is the AI
looks at humanity, says "I like these guys. They'd be happier if there were
fewer of them" and acts accordingly.

Greg G Mar 7
Carrying capacity is a function of technology and is going up dramatically. I
disagree with your assertion.

Pete Mar 7
Why do you think that? That sounds like wishful thinking, simply assuming the
scenario that is beneficial to you without any justification why the AI would
prefer that.

I'd assume that the AI would implement the outcome it believes to be Most Good
directly, because it does not really need to care about making uncomfortable the
tiny fraction of the world's population that is western middle class people, as
pretty much any AI capable of implementing such changes is also powerful
enough to implement that change against the wishes of that group; the AI would
reasonably assume the backlash won't impede its progress at all.

Greg G Mar 7
I’m going from a purely practical point of view on the part of the AI. Some
amount of change will create a backlash and make the whole process less
effective. So the AI will look to moderate the pace of change to a point
where the process goes smoothly. It’s definitely speculative, but I’m starting
from the assumption that the AI would optimize for expected outcome.

WaitForMe Mar 7
The AI would have to want to do that though, and who says it's going to want to?
It might have some internal goal system that sees us all as horrible
unredeemable creatures for hoarding all our wealth, and doesn't care at all if we
suffer.

Airguitar Mar 8
I trust that if this AI is advanced and resourceful enough to prosecute my immorally
large retirement account, it could just as easily replace all human labor as we know it
and catapult us into post-scarcity instead. Which would also render my savings
moot, but in a good way.

Robert Leigh Mar 8


It is more the intractability of moral philosophy. I suspect it is not morally right for me to
have so much more, relatively speaking, than most of the world does. Should I give more
away? Should I work for political changes to alter the bigger picture? Should I shelve the
question as too difficult and likely to have an uncomfortable answer?

The alignment problem sounds straightforward: humanity points in this direction, let's
make sure AIs do too. What is "this direction?"

Act_II Mar 7
Chess and Go are both far from solved. Computers can beat humans, which isn't the same
thing. They get beaten by other computers all the time -- in the case of Go, even by computers
that themselves lose to humans. So even if somebody figured out a way to make "human
morality" into a problem legible to a computer, which I don't think is particularly coherent, I
expect we'd find its answers completely insufficient, even if they were better than anything a
human had come up with before.

Robert Leigh Mar 7


yes, sorry, overstatement but my case stands if we accept the much weaker: some AIs are
better at chess/human morality than all humans.

"even if somebody figured out a way to make "human morality" into a problem legible to a
computer, which I don't think is particularly coherent..." agreed, but an AI might be able to
figure it out! And I don't think anyone has figured out a way to make "human morality" into
a problem legible to a human, anyway.

Nancy Lebovitz Writes Input Junkie Mar 7


"Make human morality legible to a computer"-- hypothetically, could advanced
computer programs with some sense of self-preservation work out morality suitable
for computer programs?

Mr. Doolittle Mar 7


There's a real danger that such a program will come up with a variation on
"might makes right" or "survival of the fittest" and that would I think encompass
the unaligned AI doom scenario they talk about.

I think this is a real problem, even if superhuman AI is not really possible, because of what we want to use AI for. We want it to create supreme
efficiencies, knowing that such a process will inevitably redistribute wealth and
power. We want to use a machine's cold logic to make informed decisions - like
a computer playing chess. We don't want it to consider the plight of the pawns
*or* the more powerful pieces, but to "win."

Everything will depend on what we program it to do, and the unintended consequences of trying to do those things, which is what they mean when talking about paperclip maximizers.

Ch Hi Mar 7
Just to be nitpicky:

Superhuman AI is clearly possible. Even Chatbots are superhuman in certain ways. (When's the last time *you* scanned most of the internet?) That's not the same as perfect at all.

I think you're questioning Superhuman AGI, and that's not known to be possible, though I see no reason to doubt it. Consider an AGI that was
exactly human equivalent, but could reach decisions twice as quickly. I think
we'd agree that that was a superhuman AGI. And there is sufficient variation
among humans that I can't imagine that we've reached the peak of possible
intelligence. More like an evolved optimal level. But the AGI would have
radically different constraints.

Now possible doesn't mean likely. I consider it quite probable that the first
AGIs will be idiot savants. Superhuman only in certain ways, and subhuman
in many others. (Consider that having a built-in calculator would suffice for
that.) And that their capabilities will widen from there.

Mr. Doolittle Mar 7


I think the discussion runs headlong into the disagreement about what
intelligence even is. We know it's not memory (though it's often found
together) and it's not knowledge (though also often found together).
Memory and knowledge are both things that an AI could do superbly
well, but that isn't intelligence.

The biggest difference between intelligence and what we know an AI could do at superhuman levels is related to creating new things or
understanding existing things enough to build to a new level. An AI can
imitate billions of humans, but may not be able to meet or surpass any
of them. Maybe an AI could instantly bring up, maybe even understand,
all existing literature. Could an AI develop a new theory of X? Where X
could be about biology, astronomy, social science, baseball, whatever.
There's good reason to think that it could, if "new theory" is based on
determining patterns in existing information that humans have missed.
If it's inventing the LHC, or desalination of sea water, or a new system
of government, those things are not based on memory or knowledge
(since it's new). There's no guarantee that any AI will actually be able
to do that kind of work.

Most people will be blown away by what an AI can do, because we're
not used to that kind of reach and recall. Experts in individual fields are
*not* blown away by what AIs can do, as it's (currently) just a rehash of
existing knowledge with no understanding of the material. Current AIs
are frequently wrong, and do not add to a discussion beyond their
training corpus.

Ch Hi Mar 7
Actually, AIs could certainly invent new theories. Even Chatbots do
that. I think you mean "new theories that fit all the known existing
data and make predictions that can be tested or are easier to use",
or something like that.

There's little reason to doubt that a sufficiently competent AI could do that. The method is pretty clear: You invent a bunch of
interesting theories and then throw away all the ones that don't
work. People do this as a mass action, but an AI might not get
attached to the first theory it had that sort of seemed to work.

You can see this effect in action in the history of patent office
applications and lawsuits. Frequently in response to a new thing
coming along several different people will invent the same gadget.
Charles Fort was so taken with it that he named it "steam engine
time", but there's really nothing mystical about it. It's more a "low-
hanging fruit" kind of effect, and what's low hanging depends on
the environment you are operating in.

FWIW, when AIs first started doing theorem proving, an AI came up with a "new" proof of side-angle-side for an isosceles triangle.
(It wasn't really new, but it wasn't discussed in the literature, so
people thought it was new for a couple of months.) So coming up
with new workable theories isn't a real problem, per se. I.e. it's no
more of a problem for an AI than it is for a human in the same
situation.
REPLY

Pete Mar 7
It's also important that there clearly isn't a single "human morality" but rather multiple slightly incompatible variations, and also that I can certainly imagine that any morality I might explicitly express if I was randomly made God-Emperor of the Universe is limited by my intellectual skill and capability to define all the edge cases, so I'd rather want to implement the morality that I'd implement if I was smarter.

So we're back to the much discussed concept of "Coherent Extrapolated Volition", on which there seems to be some consensus that this is what we (should) want but that we have no idea how to get.
REPLY

Carl Pham Mar 8 · edited Mar 8


Well, and also we designed chess and Go to be *difficult* for us. That's why they can be
learned easily but are very difficult to master. They play to our weaknesses, so to speak.
They are exactly that kind of mental activity that we find hard. That's the point! If we
designed a game that played to our strengths, as thinking machines, people would find it
boring. Look! Ten points to Gryffindor if you can identify your mother's face among a sea
of several hundred at an airport! Five extra points if you can...oh shoot, already? Darn.

I mean, would anyone be shocked and think the AI "Planet of the Apes" was upon us if it
was revealed that a computer program could win any spelling bee, any time? That in a
competition to multiply big numbers quickly, a computer program would beat any human?
Surely not. Chess and Go are definitely more complex than multiplying 15-digit integers,
but they're still in that category, of complex calculation-based tasks where the most
helpful thing is to be able to hold a staggering number of calculations in your head at
once. Not that at which H. sapiens shines. Not really a good measure of how close or far
another thinking device is to matching us.
REPLY

Pete Mar 7
This looks very similar to the Kelly bet. Adopting the AI without hesitation bets 100%, so if it's
good you win a lot, and if it's bad, you lose it all (no matter what the chances of former vs
latter are); on the other hand, being hesitant and slowing it down by extra verification is similar
to betting less, so you get less of the benefits of the Good AI (if it turns out to be Good) but
also reduce the chances of existential failure.
REPLY

Thomas Kehrenberg Mar 7


MIRI (the main AI alignment organization) have always advocated for Coherent Extrapolated
Volition, which I think would address your concern? https://arbital.com/p/cev/
REPLY

G. Retriever Mar 7
To answer your question, consider the fact that go and chess have been "solved", yet people
continue to play them with just as much pleasure as before. It's almost as if the exercise was
not an attempt to solve a problem, but a way to have fun and engage with other human beings.
REPLY (2)

Don P. Mar 7
I think there's a confusion here between a _game_ being "solved" in the mathematical
sense, meaning perfect play is known at all times, and _game-playing-computers_ being
"solved" in the sense of "computers can play it as well as anyone else". (Checkers is
solved-sub-1, the other two are not.)
REPLY

Robert Leigh Mar 8


So not really a relevant point, then, unless you think human ethics is also just a pastime.
That "almost as if" locution is tiresome btw.
REPLY (1)

G. Retriever Mar 8
"Human ethics is just a pastime"...I couldn't have put it better myself.

"Tiresome" is also tiresome.


REPLY

Fang Mar 7
>what if an AI solves human morality

https://slatestarcodex.com/2013/05/06/raikoth-laws-language-and-society/

I'm just now realizing how ironic it is that Scott's conception of utopia is run by AIs
REPLY

mudita Mar 7 · edited Mar 7


"(for the sake of argument, let’s say you have completely linear marginal utility of money)”

That’s not how the Kelly criterion works. The Kelly criterion is not an argument against maximizing
expected utility, it is completely within the framework of decision theory and expected utility
maximization. It just tells you how to bet to maximize your utility, if your utility is the logarithm of
your wealth.
REPLY (2)
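To make that concrete, here is a minimal Python sketch (not from the comment; the 75% even-odds coin is the one from Scott's example, and the grid search is purely illustrative) of what "bet to maximize the expected logarithm of your wealth" picks out:

    # Minimal sketch: for an even-odds bet won with probability p, the fraction f
    # that maximizes E[log(wealth)] is the Kelly fraction, f* = 2p - 1 (0.5 here).
    # The 75% coin is from the post; the grid search is just for illustration.
    import math

    def expected_log_growth(f, p=0.75):
        """Expected log growth of one even-odds bet staking fraction f."""
        return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

    fractions = [i / 1000 for i in range(1, 1000)]  # 0.001 .. 0.999
    best = max(fractions, key=expected_log_growth)
    print(best)          # ~0.5
    print(2 * 0.75 - 1)  # 0.5, the closed-form Kelly fraction

Betting more or less than that fraction strictly lowers the expected log growth, which is mudita's point that Kelly betting is just expected-utility maximization with a logarithmic utility.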

deleted Mar 7 · edited Mar 7


Comment deleted

Dweomite Mar 7
Your expected wealth is maximized by betting 100% every time.
REPLY

DanielLC Mar 7
If you're maximizing your expected wealth by taking the arithmetic mean of possibilities,
then you're best off betting it all every time. If you're taking the geometric mean, you use
the Kelly criterion.
REPLY
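A quick illustrative calculation of that arithmetic-vs-geometric split (Python; the 75% coin is from the post, and the ten-flip horizon is an assumption made only for this sketch):

    # Exact means after t flips of a 75% even-odds coin, for a fixed bet fraction f.
    # Betting everything maximizes the arithmetic mean; the Kelly fraction (0.5)
    # maximizes the geometric mean. Numbers are illustrative, not from the comment.
    p, t = 0.75, 10

    def means(f):
        up, down = 1 + f, 1 - f
        arith = (p * up + (1 - p) * down) ** t   # E[final wealth]
        geo = (up ** p * down ** (1 - p)) ** t   # exp(E[log final wealth])
        return arith, geo

    print(means(1.0))   # (~57.7, 0.0): huge mean, but the typical outcome is ruin
    print(means(0.5))   # (~9.3, ~3.7): smaller mean, best geometric growth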

Tatterdemalion Mar 7
This, plus it also tells you that if you want to maximise the limit of the probability that you have
more wealth than someone else after n steps, as n goes to infinity, maximising the expected
logarithm at each stage is the optimal strategy.
REPLY

Richard Mar 7
Trying to reason about subjectively plausible but infinitely bad things will break your brain. Should
we stop looking for new particles at the LHC on the grounds that we might unleash some new
physics that tips the universe out of a false vacuum state? Was humanity wrong to develop radio
and television because they might have broadcast our location to unfriendly aliens?
REPLY (4)

Bob Frank Writes Bob Frank’s Substack Mar 7


> Should we stop looking for new particles at the LHC on the grounds that we might unleash
some new physics that tips the universe out of a false vacuum state?

Given that all the particles we knew of before the first particle accelerator, we knew of because
they're stable enough to exist for non-negligible amounts of time in conditions we're
comfortable existing in, and that of all the particles discovered since, we have practical uses
for none of them because they decay too quickly to do anything with them, there's a case to
be made for the idea that we should stop looking for new particles at the LHC simply because
it's *wasteful* even if it's not dangerous.
REPLY (2)

Jeffrey Soreff Mar 7


"of all the particles discovered since, we have practical uses for none of them because
they decay too quickly to do anything with them"

_Mostly_ agreed but:

Nit: We routinely use positrons (albeit those are stable if isolated) and muons

( Neutrons are a funny special case, stable within nuclei, discovery more or less
concurrent with early accelerators, depending on what you count as an accelerator. )
REPLY (1)

Bob Frank Writes Bob Frank’s Substack Mar 7


Interesting. I didn't know there were practical uses for muons.

I don't really count positrons as being "a new particle" in this sense, since they're
basically the same thing as electrons, just the antimatter version. But apparently
using SR time dilation to make muons last long enough to get useful information out
of them is actually a real thing that physicists do. TIL.
REPLY (1)

Jeffrey Soreff Mar 7 · edited Mar 7


Many Thanks!

edit: btw, here is a reference to use of muons:


https://www.sciencenews.org/article/muon-particle-egypt-great-pyramid-void
REPLY

Carl Pham Mar 8


Oh come on. Rutherford used a primitive particle accelerator to discover the nuclear
model of the atom, which led him to theorize the neutron -- which is not stable outside of
the nucleus -- which in turn drove first Bohr, and later Pauli, to figure out quantum
mechanics and, for starters, rationalize the entirety of organic chemistry, opening the
cornucopia of drug design that wiped out infectious disease in the First World, and jump
started modern semiconductor physics. I'm pressed to think of a single discovery that
had a greater (positive) effect on the 20th century.

You can certainly make an argument that the LHC is a waste. But this is not it.
REPLY

Thomas Kehrenberg Mar 7


As far as we can tell, the chance of something at the LHC killing us is very low, so there is no
problem in doing it. On the other hand, I've seen no good argument that says artificial
intelligence is impossible, so I'd guess 90%+ that we get superhuman AI this century. And I'd
say also about 90% chance that by default it will kill us (because it gets a random stupid goal).
Then the question is how likely are we to design it such that it won't kill us. If you think that will
be easy, then sure, you don't need to care about AI. But if you think it will be hard, such that,
for example, on the current trajectory we only have a 10% chance of succeeding, then the
overall chance of everyone dying is about 70%! Not exactly minuscule.
REPLY (3)

Richard Mar 7
Many people in the mid 20th century were certain we'd have AGI by now based on
progress in the (at the time) cutting edge field of symbolic AI. What makes you so sure
we're close this time? Questions about as-yet-undiscovered technology are full of
irreducible uncertainty and made-up probabilities just introduce false precision and
obscure more than they reveal IMO.
REPLY (1)

Ch Hi Mar 7
We may well not be close. But that's not the way to bet. If we're not close, it's just the
inefficient allocation (not loss!) of a small amount of research funding. If we are
close, it could upend the world, whether for good or ill. So the way to bet is that we
are close. Just don't bet everything on it.
REPLY (1)

Richard Mar 7
Not sure what you are referring to by "small amount of research funding". I don't
think anyone is arguing against investing in alignment research, if that's what
you mean -- although I personally doubt anything will come of it.
REPLY

Bugmaster Mar 7
> As far as we can tell, the chance of something at the LHC killing us is very low, so there
is no problem in doing it.

Ah, but what if you're wrong, and the LHC creates a self-sustaining black hole, or initiates
vacuum collapse, or something ? As per Scott's argument, you're betting 100% of
humanity on the guess that you're wrong; and maybe you're 99% chance of being right
about that, but are you going to keep rolling the dice ? Better shut down the LHC, just to
be on the safe side. And all radios. And nuclear power plants. And...
REPLY

Gbdub Mar 7
Okay but HOW does an AI with a random stupid goal kill all of us at a 90% rate? What’s
the path between “AI smarter than a human exists” and “it succeeds in killing all of us”?
Obtaining enough capability to cause human extinction and then deploying it is hardly a
trivial problem - the idea that the only thing preventing it is insufficient intelligence and
the will to do so strikes me as a huge and unjustified assumption to assign 90% to.
REPLY (2)

kenakofer Mar 8
Here's one path I think is representative, though an actual superintelligence would be
more clever. I'm curious which step(s) you find implausible:

Premise: Suppose someone trains the first superintelligent AGI tomorrow with a
random goal like maximize paperclips:

1. It will want to take humans (and the earth) (and the universe) apart because those
atoms are available for making more paperclips.

2. It will be capable of long term strategizing toward that goal better than any human,
and with a better mastery of delayed gratification.

3. Increasing its influence over the physical world is a great instrumental goal. The
humans of 2023 have more power over the physical world than the robots, so best
stay on their good side.

4. It will pursue instrumental goals like maximize human trust, and hide its terminal
goal (make paperclips) from humans at all costs, because the humans get more
annoying when they see AIs pursuing that. Maybe it cures cancer as a distraction
while spending most of its effort on self-improvement (it's better at AI research than
the humans that designed it), duplicating itself across the internet, and improving
robotics capabilities. Accomplishing these instrumental goals makes the expected number of paperclips in its future greater.

5. Someday soon, either because it accelerated robotics or nanotech, or because it bides its time for the humans to build better robots, manipulating humans is no longer the most efficient way to manipulate the physical world.

7. It could use a thousand methods: reach for the nukes, for some novel virus, for
highly personalized social manipulations, or hack the now-very-capable robots, or
something more clever. It could be sudden death, or just sudden human
disempowerment, but either way the eventual outcome is paperclips.

(8) Nowhere in this story is it clear that humans would be alerted to the recursive self
improvement or the deceptive alignment, and they would have to catch these
problems early on to shut it down. Once it's copied itself across the internet, it's fairly
safe from deletion.
REPLY (2)

Crimson Wool Mar 8


As an AI apocalypse skeptic myself, my disagreement points are the premise, number 4, and number 8.

Premise: If you scaled up ChatGPT to be much smarter, it would still not want to
make paperclips (or to maximize the number of tokens it can predict). If you
scaled up Stable-Diffusion, it would still not want to make paperclips (or to
maximize the number of art pieces it can create). AI, insofar as it actually exists
and has accelerating progress, does not have meaningful "personhood" or
"agency." It does not actually seek to solve problems in the human sense. It is
fed a problem and spits out a solution, then sits there waiting to be fed a new
problem. If there was some "AGI-esque" error in its design, like it gets handed
"hey, draw a picture of X" and it goes "the best way to draw a picture of X would
be to maximize my computation resources," this would be incredibly obvious,
because it would keep running after being given the command/spitting out
appropriate output, rather than shutting off like it should. (Additionally, ML AIs
don't think like that.)

Number 4: Even if we assume that AI works that way, humans have functional
brains. If I program an AI to make paperclips and it suddenly starts trying to cure
cancer, I will be extremely suspicious that this is a roundabout strategy to make
paperclips. If it then starts requesting unmonitored internet access, starts
phoning people, etc, I will pull the plug.

Number 8: A malevolent AI (which this hypothetical one is) represents an existential threat, and governments will turn over every damn stone to kill it. People
are very willing to suffer significant setbacks in the face of the existential threat,
thus (for example) the hardening of Ukrainian resistance.
REPLY (1)

kenakofer Mar 9
Thanks for your thoughtful response!

I agree my premise is silly and unlikely; I was just responding to Gbdub's question "HOW does an AI with a random stupid goal kill all of us at a 90% rate? What’s the path between “AI smarter than a human exists” and “it succeeds in killing all of us”?".

Perhaps I should have used a more plausible stupid goal, such as "Make
Google's stock go up as much as possible", which would eventually lead to
similar ruin if not quickly unplugged. (No sane person would encode such a
goal, but currently we are very bad at encoding real-world targets.)

This change of premise may help address what you noted about #4, because it's more plausible that google-stock-bot would be given resources and internet access, and that it would suggest creative, roundabout actions that seem to benefit Google and/or humanity. But it would only be granted resources and trust if it pretends to be aligned.

That leads into your point about #8. A central problem of alignment is
detecting early whether something is "malevolent". The superintelligence
has no reason to show its cards before it's highly confident in its success,
and it's better at playing a role than any human. Will humans and
governments be willing to fight and die to shut down an AI that has thus far
cured diseases, raised standards of living, and improved google's stock?
REPLY

Faza (TCM) Mar 8


I'm sorry, but you've already reached an unacceptable level of silliness on step 1:

> It will want to take humans (and the earth) (and the universe) apart because
those atoms are available for making more paperclips.

No actually intelligent agent choosing to make more paperclips will start by considering atomic-level manipulation, because that's simply not a sensible way to make paperclips (or anything else, for that matter).

Its time will be more profitably spent actually doing something that advances its
goal - like mining or recycling iron to make paperclips out of - than ruminating on
Galaxy Brain schemes to alter the entire known universe on the atomic level,
which incidentally requires winning a war with all humanity. That's something
you might plausibly (for a generous definition of "plausible") stumble into, but
not something you start from. You've got paperclips to make, remember?

Thus, the entire argument is essentially premised on the AI interpreting the command to "make more paperclips" as a categorical imperative to turn all
matter into paperclips, and then single-mindedly pursuing that endstate, to the
exclusion of, you know, actually making paperclips by already established
processes. I really cannot emphasise the difference between those two
concepts enough.

The reason this is *important* is that you'll notice the AI going beyond the bounds of expected behaviour long before it becomes existentially threatening. If the AI is merely gradually expanding the sphere of
"things it's sensible to make paperclips out of" (and humans are way down on
that list), because the previous sources of material ran out, you have plenty of
time to act before things get out of hand. Moreover, unless you assume that the
AI's fundamental goal is to kill all humans (in which case you might as well lead
with that, and give up all pretense), the AI itself might not be disfavourably
inclined to a suggestion that that's enough paperclips - after all, it wants to
make paperclips for its human users, not destroy the world.

In short, the paperclip maximiser argument is terrible, even by the standards of AI X-risk arguments.
REPLY

WindUponWaves Mar 9
"Obtaining enough capability to cause human extinction and then deploying it is
hardly a trivial problem..."

That's true, but have you seen the discussions on the subreddit about exactly this? E.g. https://www.reddit.com/r/slatestarcodex/comments/11i1pm8/comment/jaz2jko/?utm_source=reddit&utm_medium=web2x&context=3

Let me quote the relevant part:

"I think Elizer Yudkowsky et al. have a hard time convicing others of the dangers of
AI, because the explanations they use (nanotechnology, synthetic biology, et cetera)
just sound too sci-fi for others to believe. At the very least they sound too hard for a
"brain in a vat" AI to accomplish, whenever people argue that a "brain in a vat" AI is
still dangerous there's inevitably pushback in the form of "It obviously can't actually
do anything, idiot. How's it gonna build a robot army if it's just some code on a server
somewhere?"

That was convincing to me, at first. But after thinking about it for a bit, I can totally
see a "brain in a vat" AI getting humans to do its bidding instead. No science fiction
technology is required, just having an AI that's a bit better at emotionally persuading
people of things than LaMDA (persuaded Blake Lemoine to let it out of the box) [link:
https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-
lamda-chatbot-is-a-sentient-person/] & Character.AI (persuaded a software
engineer & AI safety hobbyist to let it out of the box) [link:
https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-
your-mind-hacked-by-an-ai]. The exact pathway I'm envisioning an unaligned AI
could take:
1: Persuade some people on the fence about committing terrorism, taking up arms
REPLY

Ch Hi Mar 7
In making the assumption that the LHC might unleash some new physics, you are assuming that we are even close to the maximum that is generated elsewhere in the universe, and this is clearly false. What it does is potentially make it possible for us to observe physics that our current theories don't predict. But cosmic rays stronger than anything a successor to the LHC could generate penetrate through to Earth every ... well, it's not that frequently. For any given energy level there's a frequency. I think currently it's about once a year per cubic kilometer that we encounter a cosmic ray more energetic than the LHC could produce. But this varies with both the required energy level and the local environs. We were once close enough to a supernova to get a very strong flux of really high energy particles. There wasn't any life on earth at the time, but it left lots of traces. And elsewhere in the universe we just this year detected two black holes colliding and shredding their accretion disks. We'll never come close to something like that.
REPLY (1)

Bob Frank Writes Bob Frank’s Substack Mar 7


> In making the assumption that the LHC might unleash some new physics, you are
assuming that we are even close to the maximum that is generated elsewhere in the
universe, and this is clearly false.

What's not clearly false, though, is the assumption that there aren't any new particles to
find. Sabine Hossenfelder recently created a bit of a stir when she posted a video calling
out particle physicists on their long trend of inventing hypothetical new particles needed
to solve "problems" that are just aesthetically displeasing to particle physicists rather
than being objectively real problems, coming up with experiments to find these particles,
not finding them, and then moving the goalposts to explain away why they couldn't be
found. Occam's Razor suggests that *they simply aren't there.* We've already found
everything in the Standard Model, and there's no fundamental reason why anything else
needs to exist.

https://www.youtube.com/watch?v=lu4mH3Hmw2o if you haven't seen it. I'm not saying she's necessarily right — I don't know enough about particle physics to make any sort of authoritative judgment on the matter — but she definitely makes a persuasive case.
REPLY

David Friedman Writes David Friedman’s Substack Mar 7


Should we stop doing anything to slow climate change for fear that climate change is all that is
holding off the end of the interglacial?

Actually, there is evidence that anthropogenic climate change is all that is holding off the end
of the interglacial, but the cause is not burning fossil fuel in recent centuries but deforestation
due to the invention of agriculture, starting seven or eight thousand years ago.

https://daviddfriedman.blogspot.com/2021/10/how-humans-held-back-glaciers.html
REPLY

Oscar Cunningham Mar 7


> (for the sake of argument, let’s say you have completely linear marginal utility of money)

In this case, you should bet everything each turn. It's simply true by definition that for you the high
risk of losing everything is worth the tiny chance of getting a huge reward.

The real issue is that people don't have linear utility functions. Even if you're giving to charity, the
funding gap of your top charity will very quickly be reached in the hypothetical where you bet
everything each turn.

The Kelly criterion only holds if you have logarithmic utility, which is more realistic but there's no
reason to expect it's exactly right either. In reality you actually have to think about what you want.
REPLY (1)
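A small sketch of that point (Python; the utility functions and the 1.2x cap are my own toy choices, not Oscar's): for a single 75% even-odds flip, the stake that maximizes expected utility depends entirely on the utility function you feed in.

    # Optimal stake for one 75% even-odds flip under three utility functions.
    # Linear utility says bet (essentially) everything, log utility gives the
    # Kelly fraction, and a toy capped utility ("all I care about is reaching
    # 1.2x my bankroll", e.g. filling a funding gap) stops at 20%.
    import math

    p = 0.75

    def optimal_fraction(utility):
        fractions = [i / 1000 for i in range(0, 1000)]
        return max(fractions,
                   key=lambda f: p * utility(1 + f) + (1 - p) * utility(1 - f))

    print(optimal_fraction(lambda w: w))            # 0.999 -> bet it all
    print(optimal_fraction(math.log))               # 0.5   -> Kelly
    print(optimal_fraction(lambda w: min(w, 1.2)))  # 0.2   -> just cover the gap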

CounterBlunder Mar 7
As far as I understand, the question of whether the Kelly criterion's optimality depends on your having logarithmic utility is debated and complicated (i.e. you can derive it without ever making that assumption). See https://www.lesswrong.com/posts/zmpYKwqfMkWtywkKZ/kelly-isn-t-just-about-logarithmic-utility and the comments for discussion.
REPLY (3)

Oscar Cunningham Mar 7


I am in fact already in that comments section. :-)
REPLY

Aristophanes Mar 7
I am fairly sure this is covered by Paul Samuelson's paper "Why we should not make mean
log of wealth big though years to act are long". The Kelly result only holds under log utility.
REPLY

thefance Mar 8
Maxing log utility is equivalent to maxing the Geometric Mean because, in a sense, the log of a product is equivalent to the sum of the logs. I.e.

log_b(x * y) = log_b(x) + log_b(y)

for any base b. Geometric Mean makes more sense here than Arithmetic, because the
size of each wager depends on former wagers. Therefore, saying "log utility isn't
necessary" is kinda like saying "bridge trusses don't need to be triangles, because 3-
sided polygons are just as good".

I think what you mean is, the reason Kelly Betting is an important concept is because it
makes people reason differently about scenarios where wagers are dependent on other
wagers, even if the exact relationship is hairier than just straightforward multiplication.
REPLY
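A tiny numerical check of that identity (Python; the sequence of outcomes is made up purely for illustration):

    # log turns products into sums, so ranking strategies by average log growth
    # is the same as ranking them by geometric-mean growth. Outcomes are made up.
    import math

    growth_factors = [1.5, 0.5, 1.5, 1.5]   # hypothetical per-bet wealth multipliers

    log_of_product = math.log(math.prod(growth_factors))
    sum_of_logs = sum(math.log(g) for g in growth_factors)
    print(abs(log_of_product - sum_of_logs) < 1e-12)   # True

    geometric_mean = math.exp(sum_of_logs / len(growth_factors))
    print(geometric_mean)   # ~1.14: the equivalent constant per-bet growth factor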

Mark Writes DOPPELKORN Mar 7


I am living in so much abundance I can’t possibly conceive of it, let alone use it fully.

I wish for the less fortunate 5 billion to do so, too. (Or do I, but it would be just.) Sure we can get there without more AI than we have now.

Otoh: If we ban it, Xi might not.


REPLY (1)

gregvp Mar 8 · edited Mar 8


Quite.

The key to a life of safety and abundance is, and always has been, energy and the means to
use it. Abundant energy gives one abundant food and clothing, shelter, warmth and cooling,
light, clean water and disposal of waste, transportation, communication, health, education,
participation in society, entertainment: everything that humans want.

We are on the brink of solving the energy problem for everyone--indeed, we have solved it
technically. It's just a matter of scaling up, and solving the political problems. Unless AI can do
that for us, it's not much use.

I don't think we want an AI that can solve political problems at global scale. Just a gut feeling.
REPLY (1)

Erwin Mar 10
You hit the point: our problem is misallocations and waste of resources and misuse of
power. There are indeed social and political problems that can't really be solved by any
kind of technology including AI. So lets focus on the problems and don't let us be
distracted by potential cures for the symptoms of our problems.

For me many of the discussions here about AI, prediction markets and even EA are just
distractions not to face the causes of our problems.
REPLY

Erusian Mar 7
> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology
that could destroy the world is betting 100%.

No, no it's not. Refusing to pursue a technology that could destroy the world is betting 100%.

Pursuing a technology has gradations. You can, for example, pursue nuclear power along multiple
avenues including both civilian and military applications. You can also have people doing real work
on its chance to ignite the atmosphere (and eventually finding out they were all embarrassingly
wrong). You can have people doing all kinds of secondary research on how to prevent multiple labs
from having chain reactions that blow up the entire research facility (as happened). Etc.

Not pursuing a technology is absolute. It is the 100% bet where you've put all your eggs in one
basket. If your standard is "we shouldn't act with complete certainty" that can only be an argument
for AI research because the only way not pursuing AI research at all makes sense is if we're
completely certain it will be as bad as the critics say. And frankly, we're not. They might be right but
we have no reason to be 100% certain they're right.

Also the load bearing part is the idea that AI leads to 1023/1024 end of the world scenarios and
you've more or less begged the question there. And you have, of course, conveniently ignored that
no one has the authority (let alone capability) to actually enforce such a ban.
REPLY (2)

Malte Mar 7
I think pursuing a technology (or not) is an individual coin flip, not an "always bet x% strategy".

Each coin flip you can choose how much to bet, and the percentage correlates to the
risk/reward profile. Saying that refusing to pursue any single technology is betting 100%
makes no sense, because you are likely pursuing other, less risky and less rewarding
technologies, which is certainly not a 100% bet, but also not a 0% bet.
REPLY (1)

Erusian Mar 7
So while I don't disagree with this per se the logic works both ways. While not pursuing AI
frees up resources to use in non-AI research likewise pursuing AI creates resources to use
in other research. So if you broaden it out of being a coin flip, an isolated question of a
single technology, you can never reach 100% anyway. You've basically destroyed the
entire concept. Which is fine, actually. It's a bad concept to start with. But it doesn't result
in an anti-AI argument.
REPLY (1)

thefance Mar 8 · edited Mar 8


The reason to never bet 100% of the bankroll is to avoid risk of ruin. Which is the
technical term for "you can't play anymore because you're broke". In a financial
context, diversification avoids risk of ruin. In the context of AI, diversification of
scientific effort just means the ruin arrives later.
REPLY

Pete Mar 7 · edited Mar 7


Well, no, because (using nuclear power as an example) even permanently refusing to pursue a
good technology doesn't make the society lose 100% of its stuff - if humanity never ever
considered nuclear power and didn't ever gain any of its benefits, it still has all of the
"potential Kelly bet cash" remaining - it didn't gain any, but it also didn't lose any. "Betting
100%" applies only for the actions where you might actually lose all of that which you/we
already have; and even total stagnation at status quo and refusing all potential gains from all
technologies isn't that - it's the equivalent of "betting 0%" in the Kelly bet and staying with
$1000 instead of being able to gain something.
REPLY (1)

Erusian Mar 7
In which case you're making a general argument against all technological progress?
Luddism is certainly a thing but I don't think it's very supportable. Of course, Luddites
disagree.
REPLY (1)

Pete Mar 7
My post is asserting that stopping technological progress because of risk-aversion is
definitely not the equivalent of the strategy of betting 100% in a Kelly bet as you
claimed, but rather the very opposite extreme - the equivalent of betting 0% in a
Kelly bet.

I made no claim whatsoever about whether that is good or bad, or if one is preferable to the other; it's about misleading/misunderstanding of the terminology.
REPLY (1)

Erusian Mar 7
No, it isn't. It seems like you believe loss aversion actually averts losses which is
often not the case. Just because that's the intention doesn't mean it's the result.
You are investing 100% of resources in an absolutist strategy and the fact it's do
nothing instead of do something doesn't actually make you safer.
REPLY

Oleg Eterevsky Writes Oleg’s Substack Mar 7


Suppose that we’ll never have a bulletproof alignment mechanism. How long should we wait until
we decide to deploy super-human AI anyway?
REPLY (3)

Leo Abstract Mar 7


We certainly won't ever have a bulletproof alignment mechanism at the rate we're going. The
problem is that the people in charge are also not on track to be aware of this when they do
come up with some kind of solution. Consider the Boxer Rebellion for an example of employing
a bulletproof solution.
REPLY (1)

Oleg Eterevsky Writes Oleg’s Substack Mar 7


My point is that the development of a super-human AI has huge potential rewards as well
as risks. Should we just forego them? Or should we wait until the AI risk falls below some
threshold? And if yes, then how do we estimate this risk?

I'm not arguing that we should just let AI development go full steam. I'm genuinely trying
to figure out what would be a reasonable compromise solution.

And regarding people in charge, Holden Karnofsky argues that they are not in a good
position to regulate AI: https://www.cold-takes.com/how-governments-can-help-with-the-most-important-century/
REPLY (1)

Xpym Mar 8
Well, Yudkowsky's criterion is that "if you can get a powerful AGI that carries out
some pivotal superhuman engineering task, with a less than fifty percent chance of
killing more than one billion people, I’ll take it", a pretty "generous" bound. Of course,
the main issue with this discourse is that pretty much nobody who matters agrees
with him that the mainline "muddle through" scenario is overwhelmingly likely to kill
everyone, and so the disagreement seems irreconcilable.
REPLY

Ch Hi Mar 7
What do you mean "deploy"? If it's a superhuman AI, are you contemplating keeping one copy
on tape? Or what?

Otherwise this is the "AI in a box" argument, which might be what you intend. Are you
assuming that if one party doesn't activate a superhuman AI, nobody else will either? That
seems like a rather unsound assumption. Who's going to stop them, and how will they know to
stop? What about black market copies? What about hackers? What about rival groups, who
might see an advantage?

A program is not a car. It can escape over any internet connection. OTOH, like a car or a
telephone, it may be developed in multiple places at the same time. (Check into patent office
lawsuits.)

So what does "deploy" mean? If we're talking about something that's a self-motivated
intelligence, then I think it's got to mean "on active storage OR on a system connected to the
internet, even indirectly". It can't just mean "controlling a public facing web page", though that
is certainly one kind of deployment.
REPLY

Bugmaster Mar 7
Approximately negative 10..20 years, since superhuman AI is pretty commonplace. For
example, the addresses on your snail-mail letters are routinely scanned by an AI that is
superhumanly good at handwriting recognition. Machine translation systems still kind of suck
at the quality of their translations, but are superhumanly good at quantity. Modern image-
generation programs are still subhuman compared to top artists, but will easily outperform the
average human at rendering art. Most modern computer chips are designed with the aid of AI-
powered circuit-routing software; no human could conceivably perform that task. And I could
keep going in this vein for a while...
REPLY (1)

Oleg Eterevsky Writes Oleg’s Substack Mar 7


Super-human meaning it could do anything a human can, at least as well.
REPLY (1)

Bugmaster Mar 7
Oh, well, in that case super-human AI does not currently exist, and probably won't
exist for a very long time, since no one knows how to even begin building one. On the
other hand, humans do exist; they can do anything a human can do at least as well;
and some of them are quite malicious. Should we not focus on stopping them,
instead of a non-existent AI ?
REPLY

Marco Fioretti Writes Just an invitation... Mar 7


the key difference between nuclear power and AI is SPEED and VISIBILITY. This cannot be repeated often enough (*): you can see a nuclear plant being built, and its good or bad consequences, much better than those of deploying AI algorithms. AND you have time to understand how nuclear plants work, in order to fight (or support) them. Not so with AI; just look at all the talks about AI alignment. As Stalin would say, speed has a quality all of its own.

(*) and indeed, forgive me for saying that the impact of sheer speed of all digital things will be a recurring theme of my own substack
REPLY (1)

TGGP Mar 7
Who saw a nuclear plant being built during the Manhattan Project?
REPLY (1)

Marco Fioretti Writes Just an invitation... Mar 7


that has nothing to do with my observation, does it?
REPLY (1)

TGGP Mar 7
I'm saying that in fact there have been secret nuclear weapons programs which rivals
didn't know about until the nuclear test was conducted.
REPLY (1)

Marco Fioretti Writes Just an invitation... Mar 8


I agree. But again, that a) is true for every military program, it's not nuclear-
specific, and b) it has nothing to do with civilian power plants, which are the
topic here. You may build and keep secret a nuclear power plant inside a secret
military base, maybe, but it's impossible to provide so much energy to homes
and factories without anybody noticing before you even turn the plant on
REPLY

Bartleby Mar 7
If the price of cheap energy is a few chernobyls every decade, then society isn’t going to allow it.
Mass casualty events with permanent exclusion zones... you can come up with a rational calculus
that it’s a worthwhile trade off, but there’s no political calculus that can convince enough people to
make it happen. So as an example, nuclear energy actually makes the opposite argument he wants
it to.
REPLY (3)

Victualis Mar 7
This seems to be an outcome of a strongly individualist society with frozen priors, but the
indications are that people under 30 are much less individualistic than their elders currently
running things. It seems possible to me that by 2050 a couple of large scale nuclear disasters
every year might be an accepted cost of living in a good society, especially once the 1970s
nuclear memes and prevention at all costs have been replaced by practical remediation action
and a more pragmatic view of tradeoffs.
REPLY

Mr. Doolittle Mar 7


Chernobyl is a nature preserve now, not a nuclear wasteland. People could live in the exclusion
zones, if we allowed it, and they would not be appreciably less safe than most people.

People do live in Hiroshima, right in the blast zone.

Coal power plants (and high altitude locations like plane trips and Denver, CO) have higher
radiation levels than nuclear power plants.

That we don't allow it is a choice. Almost every other source of power has killed more people
than nuclear (I think solar is the only exception - even wind has killed more - and most have
killed many orders of magnitude more people).
REPLY (2)

Ch Hi Mar 7
Solar has actually killed lots of people. Usually installers doing roof-top installations.
REPLY (2)

JamesLeng Mar 7
If you're counting construction accidents only tangentially related to the actual power
source, probably ought to also count anyone who ever died in a coal mine, which I'm
pretty sure still leaves solar coming out very far ahead.
REPLY (1)

Ch Hi Mar 7
Well, yes, but I was comparing it with nuclear. There things are a lot closer.
REPLY (1)

Erwin Mar 10
Did you ever have a closer look at uranium mines?
REPLY

Mr. Doolittle Mar 7


Solar has killed a non-zero number of people, yes. Every other type of power
generation has killed far more. Wind, nuclear, and solar are orders-of-magnitude less
than any other kind, with coal killing a pretty ridiculous number of people.
REPLY

Bartleby Mar 7
This is the rational case, but this is a pretty safe space to make it. I don't think it's a
political case, because there's a unique horror-movie level of fear in society surrounding
nuclear power. That could change, but it won't change fast enough to matter to us.

That's why it isn't really a "choice," or rather it isn't really an option given the reality. I
don't think it makes sense to treat it like it could be one if we just converted the world to a
rationalist point of view. Clearly, that's not in the cards.

If I were going to try to make a rational case against nuclear energy, I'd probably point out
a danger that didn't seem realistic until recently- unpredictable conventional warfare at a
nuclear power plant. We got lucky this time, but I don't know how you can argue against
that being a growing possibility. I'm no expert but I imagine the outcome of a conventional
bomb hitting a reactor, in error or not, would be worse than a conventional bomb dropped
on any other power generation technology (except maybe certain power generating
dams.)
REPLY

Gbdub Mar 7
There has been exactly one Chernobyl over many decades, and that’s the only nuclear
accident that seems to have definitely killed any members of the public. It was also the result
of profoundly stupid design and operating decisions that nobody would do again precisely
because of Chernobyl.

Meanwhile automobiles kill over 1.3 million people per year.


REPLY (2)

Brendan Richardson Mar 8


Yeah, but there was also one Three Mile Island, which was way worse, because it
happened *in my backyard!*
REPLY

Korakys Writes Marco Thinking Mar 9


There have actually been two large scale nuclear disasters resulting in many radiation
deaths, Kyshtym being the second.

Coal is a better comparison than automobiles, but just as compelling.


REPLY

Robert Leigh Mar 7


"A world where people invent gasoline and refrigerants and medication (and sometimes fail and
cause harm) is vastly better than one where we never try to have any of these things. I’m not saying
technology isn’t a great bet. It’s a great bet!"

Really? I would have said gasoline and nuclear were huge net disbenefits. Take gasoline out of the
equation and you take away the one problem nuclear is a potential solution for.

(I think. No actual feel for what the global warming situation would be in a coal yes, ICEs no world).
REPLY (5)

Pete Mar 7
I have a feeling that without ICE we wouldn't have the farming industrialization which enables
feeding the world and having most people not work in farming. IMHO cost of never starting to
use ICE would be famine-restricted population and a much worse standard of life for billions of
people than even the IPCC climate report worst case scenarios expect.
REPLY (1)

Nancy Lebovitz Writes Input Junkie Mar 7


There was steam-powered farm equipment before ICE was in common use.
REPLY (4)

Jeffrey Soreff Mar 7


Thank you! That was the point I was thinking of making.
REPLY

Gbdub Mar 7
Nitrogen fertilizer is critical for farming at our current scale, and it is sourced
primarily from natural gas.
REPLY (1)

Jeffrey Soreff Mar 7


Yes, the Haber-Bosch process for making ammonia from nitrogen and hydrogen
is crucial. I've read estimates that half the nitrogen in humans' bodies has
passed through it. But hydrogen can be sourced from sources other than natural
gas (albeit more expensive sources) and access to natural gas is orthogonal to
use of internal combustion engines (which was the original point in question in
this subthread).
REPLY (1)

Gbdub Mar 7
It can be, but whether it ever would have happened without fossil fuel is a
question.
REPLY (1)

Jeffrey Soreff Mar 7 · edited Mar 7


Well, this subthread started from Robert Leigh's "coal yes, ICEs no
world" hypothetical. My guess is that the changes due _purely_ to no
internal combustion engines are fairly minor. External combustion
(steam) or electric vehicles could probably substitute without very
drastic changes.

My guess is that the alternate hypothetical that you are citing, with no
fossil fuels - no natural gas, or oil, or coal - is far more drastic. I've read
claims that the industrial revolution was mainly a positive feedback
loop between coal, steel, and engine production. It wouldn't surprise
me if a non-fossil-fuel world would be stuck at 18th century technology
permanently. With the knowledge we have _now_, I think there would
be ways out of that trap, nuclear or solar, but they might well never get
to that knowledge.
REPLY

gregvp Mar 8 · edited Mar 8


Steam power was much more labor intensive than ICEs. The result would have been
that food was much more expensive than it is in our world, and so the standard of
living would indeed have been lower. (If food is a higher percentage of the household
budget, tautologically everything else is a lower percentage.)

Steam powered shipping is also more expensive than that powered by residual fuel
oil, so poor parts of the world would have had less access to imported food in times
of harvest failure. Harvest failures would have been more frequent because of the
high running costs of steam powered pumps for irrigation and steam transport.
Famines would have been more of a feature. Whether the results would have been
worse than "IPCC climate report worst case scenarios", I do not know.
REPLY (1)

Jeffrey Soreff Mar 8


"Steam power was much more labor intensive than ICEs."

Could you elaborate on that? Is that due to manual handling of solid fuels? Or
something else? I vaguely recall that some coal burning systems use coal slurry
to handle it much like a liquid. I agree that if steam power was unavoidably much
more labor intensive than ICEs, then that has all the adverse downstream
implications that you cite.
REPLY (1)

John Schilling Mar 9


Steam requires handling both fuel and water, which is a factor of two right
there. Steam also requires a lot more maintenance, in part because hot
water is hella corrosive and in part because of the extra plumbing when
your heat source and your engine are in different places.
REPLY (1)

Jeffrey Soreff Mar 9


Mostly reasonable points. The water is a working fluid, so it isn't
getting replaced each time it is used in e.g. a Carnot cycle, so calling
that addition a factor of two seems overly pessimistic. I _do_ agree that
having the working fluid _be_ the fuel/air mix and combustion gases
does simplify an ICE considerably. Certainly, having the heat source
and engine in one place simplifies things. Steam cars did exist,
https://en.wikipedia.org/wiki/Steam_car and were built and sold. They
did get outcompeted by gasoline ICEs, but, in the absence of gasoline,
it looks like they would have filled at least a large portion of gasoline
cars' roles.
REPLY (1)

John Schilling 22 hr ago


Steam cars were reportedly very troublesome to maintain, in large
part because of all the plumbing for the condenser/radiator. I
agree that if they were all we had, we'd make do, but I also think
that in that universe most people stick to taxis, trolleys, or buses
so the professionals can deal with all that trouble.
REPLY (1)

Jeffrey Soreff 21 hr ago


That's fair, but remember that in our timeline, competition
with ICEs cut short work on steam cars. In an alternate
timeline where they were all we had, I'd expect that more
work would have been spent making them reliable. Of course,
I don't know how successful this work would have been.
Some heat engines, with cyclical use of a working fluid, such
as refrigerators, have been made very reliable.
REPLY (1)

John Schilling 21 hr ago


Which in turn raises the interesting question of whether
we might have seen "steam" engines running on
something like Freon, at lower temperatures and less
corrosive than water. I hadn't thought of that before. Of
course, then we'd have to reengineer the entire
automotive fleet when we found out that we were killing
the ozone layer, so probably best that we didn't.
REPLY (1)

Jeffrey Soreff 21 hr ago


Yes, there are a variety of possible working fluids
that could be used. One _does_ want something
less corrosive than steam - but not lower boiling.
Very roughly speaking, the boiling point should be
somewhere in between the hot side and cold side of
the heat engine cycle. Come to think of it, fairly high
molecular weight, high boiling Freons might have
worked. And high boiling working fluids are fairly
easy to keep contained and away from the ozone
layer.
REPLY

Carl Pham Mar 8


One assumes they were powered by burning coal or wood, which as far as CO2
production is concerned is not an improvement on burning gasoline.
REPLY (1)

Jeffrey Soreff Mar 8


Quite true. At the start of this subthread, I actually find Robert Leigh's "I would
have said gasoline and nuclear were huge net disbenefits." very puzzling. I don't
know which disbenefits he had in mind. "a coal yes, ICEs no world" suggests
that it wasn't CO2.
REPLY

Bob Frank Writes Bob Frank’s Substack Mar 7


> Take gasoline out of the equation and you take away the one problem nuclear is a potential
solution for.

How do you figure that? The principal use case for gasoline is running motor vehicles, which fission will never be a good power source for, even theoretically, let alone in practical reality.
REPLY (1)

Ch Hi Mar 7
Sorry, but you're wrong. The society would need to be structured a bit differently, but
electric cars were developed in (extremely roughly) the same time period as gasoline
powered cars. And there were decent (well, I've no personal experience) public transit
systems in common use before cars were common. Most of the ones I've actually seen
were electric, but I've seen pictures of at least one that was powered by a horse. It was
the Key System E line from the ferry terminal up into the Berkeley hills, where the
Claremont hotel is currently located.
REPLY (1)

Bob Frank Writes Bob Frank’s Substack Mar 7 · edited Mar 7


That doesn't actually contradict my claim.

To be a bit more clear, I'm not saying that electric cars, powered by energy that could
have been generated at a nuclear plant, can't be a good alternative for ICE cars;
we've pretty well proven that they can by now. I'm saying — in response to Robert
Leigh's claim that gasoline (and thus by implication, ICE engines) is the *only*
problem where nuclear power is a good alternative — that you can't put a nuclear
reactor on a car as a power source. If you put a nuclear reactor in a nuclear power
plant, on the other hand, you're solving a lot more problems than can be reasonably
addressed by gasoline. So either way, I don't see where he's coming from on this.
REPLY

WaitForMe Mar 7
But without gasoline how would you power all the vehicles we use? I think without it we would be a lot poorer, so hopefully the wealth it brings makes up for its bad effects.
REPLY

Carl Pham Mar 8 · edited Mar 8


Way worse. Burning coal generates nothing *but* CO2, whereas at least burning gasoline
some of what you get is oxidized hydrogen (H2O). It's why burning natural gas instead of oil
has reduced the acceleration of CO2 emissions -- because CH4 has more Hs per C than
gasoline -- and it's why people got excited about "the hydrogen economy" where you just
burned H2.
REPLY
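Carl Pham's H-per-C point can be put in rough numbers. A back-of-the-envelope sketch in Python, treating coal as pure carbon, gasoline as roughly (CH2)n, and natural gas as CH4, with approximate textbook heating values; all of these simplifications are my assumptions rather than anything from the comment:

    # Approximate CO2 emitted per unit of heat released, for idealized fuels.
    # Stoichiometry: burning 12 g of C yields 44 g of CO2; heating values are
    # rough lower-heating-value figures and only meant to show the trend.
    fuels = {
        # name: (kg CO2 per kg fuel, approx. heating value in MJ/kg)
        "coal (as pure C)":  (44 / 12, 33),
        "gasoline (~CH2)":   ((12 / 14) * 44 / 12, 44),
        "natural gas (CH4)": ((12 / 16) * 44 / 12, 50),
    }

    for name, (co2_per_kg, mj_per_kg) in fuels.items():
        print(f"{name}: ~{co2_per_kg / mj_per_kg:.3f} kg CO2 per MJ")
    # Output trends from ~0.11 (coal) down to ~0.055 (methane): more hydrogen
    # per carbon means less CO2 per unit of energy.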

Jeffrey Soreff Mar 8


"I would have said gasoline and nuclear were huge net disbenefits." Could you elaborate on
why you would have said gasoline was a net disbenefit, particularly in comparison to "a coal
yes, ICEs no world". You can't mean CO2 emissions, since coal emits more of them than
gasoline per unit energy. I'm puzzled.
REPLY (1)

Robert Leigh Mar 8


I exactly mean CO2 emissions. Coal would not take up all the slack left by the absence of
gasoline, coal fired cars and aircraft not being a thing, so we would be that much further
from a climate crisis.
REPLY (2)

Jeffrey Soreff Mar 8 · edited Mar 8


Oh! Thanks for the explanation. Coal fired cars could have been managed (probably
with coal slurry and a steam working fluid heat engine). Aircraft are actually run on
something closer to diesel fuel, so literally an absence of gasoline but no other
changes would have left them as is. There are other options to go from energy from
coal to something that can power an aircraft (hydrazine, liquid hydrogen (though that
has low density), https://en.wikipedia.org/wiki/Fischer%E2%80%93Tropsch_process
liquids from coal, possibly propane). Coal could have taken up much of the slack left
by an absence of gasoline, and, since there is more CO2 emitted per unit energy
from coal than from gasoline, we might be _closer_ to a climate crisis.
REPLY

Carl Pham Mar 8 · edited Mar 8


I don't think that's realistic. There are a variety of ways you can turn coal into a more
convenient fuel for small moving craft[1], and if for some reason there were no liquid
hydrocarbons but plenty of coal, that's what people would have done. Nothing
approaches the energy storage density and convenience of hydrocarbons, when you
live at the bottom of a giant lake of oxygen. That's why the entire natural world uses
them as fuel and energy storage.

And as I said above, the more you start from pure C (e.g. coal) instead of a mixture of
C and H (e.g. nat gas), the worse you make your CO2 emissions problem. So in a
world without liquid hydrocarbons, I think CO2 emissions would've risen faster and
sooner, not the other way around.

-----------------

[1] e.g. https://en.wikipedia.org/wiki/Coal_gas


REPLY (1)

Jeffrey Soreff Mar 9


Agreed
REPLY

Malte Mar 7 · edited Mar 7


> So although technically this has the highest “average utility”, all of this is coming from one super-
amazing sliver of probability-space where you own more money than exists in the entire world.

Can somebody explain this part? Isn't this mixing expected returns from a _single_ coin flip with
expected returns from a series of coin flips? If you start with $1 and always bet 100%, after t steps
you have 2^t or 0 dollars - the former with probability 2^-t . So your expected wealth after these t
steps is $1, which is pretty much the same as not betting at all (0% each "step").

Math aside, it's pretty obvious that betting 100% isn't advisable if you are capped at 100% returns.
I'm sure even inexperienced stock traders (who still think they're smarter than the market) would
be a lot less likely to go all in if they knew their stock picks could *never* increase 5x, 10x, 100x... If
doubling our wealth at the risk of ending humanity is all that AI could do for us, sure, let's forget
about AI research. But what if this single bet could yield near-infinite returns? Maybe "near" infinite
still isn't enough, but it's an entirely different conversation compared to the 100% returns scenario.
REPLY (2)

Tatterdemalion Mar 7
Scott specifies a 75% probability of heads.
REPLY

Pete Mar 7
> If you start with $1 and always bet 100%, after t steps you have 2^t or 0 dollars - the former
with probability 2^-t

No, since the assumption is that you can predict the coin flip better than chance, specifically 75%, so the probability of the former scenario is much higher than 2^-t.
REPLY (1)

Malte Mar 7
Ah, of course. Knew I missed some variable. Thanks.
REPLY
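For anyone who wants the corrected arithmetic spelled out (Python; the 75% coin is Scott's, the horizons are arbitrary):

    # Betting 100% every flip of the 75% coin: you hold 2^t with probability
    # 0.75^t and $0 otherwise, so expected wealth grows like 1.5^t even though
    # the chance of still being solvent shrinks toward zero.
    p = 0.75
    for t in (1, 10, 50):
        print(f"t={t}: P(still solvent)={p ** t:.2g}, E[wealth]=${(2 * p) ** t:,.2f}")
    # At t=50 the solvency probability is ~6e-07 while the expectation is in the
    # hundreds of millions - the "super-amazing sliver of probability-space".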

Jordaniza Mar 7 · edited Mar 7


What's problematic is that you could argue all research is sitting along a spectrum that *may* lead to some very, very bad outcomes, but where do you call time on the research?

As I look at it, AI sits at the intersection of statistics and computer science. We could subdivide
areas of computer science further into elements like data engineering and deep learning. So, at
what point would you use the above logic to prevent research into certain areas of compsci or
statistics under the premise of preventing catastrophe?

I don't think this is splitting hairs either - we already have many examples of ML and Deep Learning
technologies happily integrated into our lives (think Google maps, Netflix recommendations etc),
but at what point are we drawing the line and saying "that's enough 'AI' for this civilisation" - how
can we know this and what are we throwing away in the interim?
REPLY (1)

Ch Hi Mar 7
Well, it might have been reasonable to draw a line saying "Slow down and consider the
effects" before Facebook was launched. I wouldn't want to stop things, but I think a lot of our
current social fragmentation is due to Facebook and other similar applications.

Note that this ISN'T an argument about AI, but rather about people and their motivational systems. People have a strong tendency to form echo chambers where they only hear the comments of their ideological neighbors, and then to get tribal about it, including thinking of "those folks over there" as enemies.
REPLY (1)

Jordan Mar 7
I guess the question would be whether one could have seen the far-reaching effects of
Facebook and social media before the damage had been done.

Same here - we might already have crossed a tipping point and we don't know it
REPLY (1)

Ch Hi Mar 7
There are actually small indications that we *have* crossed a tipping point. Not of AI,
but of the way humans react to conversational programs. But we've been working
towards that particular tipping point quite diligently for years, so it's no surprise that
when you add even a little bit more intelligence or personalization on the other end
you get a strong effect.
REPLY

Shaked Koplewitz Writes shakeddown Mar 7


I think on a slightly smaller scale, this also describes where we went wrong with cars/suburbs/new
modernist urban planning. It's not that it didn't have upsides, it's that we bet 100% on it and
planned all our new cities around it and completely reshaped all our old cities around it, which
caused the downsides to become dominant and inescapable. An America that was say 50% car-
oriented suburbs would probably be pretty nice, a lot of people like them and those who don't
would go elsewhere. An America that's 100% that (or places trying to be that) gets pretty
depressing.
REPLY (3)

TGGP Mar 7
America is not 100% suburban.
REPLY (1)

Shaked Koplewitz Writes shakeddown Mar 7


It did 100% replan around car-centric/street parking mobility - even urban places like
Manhattan (or rural places in Idaho) effectively remodeled around it, except for that one
island in Michigan.
REPLY (1)

TGGP Mar 7
Yes, they replaced roads designed for horses with roads designed for more modern
vehicles.
REPLY (1)

Shaked Koplewitz Writes shakeddown Mar 7


Yes, until 1940 everyone in America rode horses everywhere, from their
backyard stable right to their 9-5 office job and then back.
REPLY (1)

TGGP Mar 7
Automobiles predate 1940, streets had already been replaced by then. In
1915 there were 20 million horses while the human population was roughly
100 million. Many people would commute via horsedrawn transit.
REPLY

thefance Mar 8
The real issue is risk of ruin. Modernism can be reverted because it's not an existential risk.
REPLY (1)

Shaked Koplewitz Writes shakeddown Mar 8


That's not exactly true - betting 90% of your money on a 75% bet each time doesn't run
a risk of ruin, but it still has negative log EV.
REPLY (1)
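
A quick check of that claim (a sketch, assuming an even-money bet won 75% of the time): staking 90% of the bankroll never hits exactly zero, but its expected log-growth per bet is negative, unlike the 50% Kelly stake.

    from math import log

    p = 0.75                      # chance of winning the even-money bet
    for f in (0.9, 0.5):          # 90% stake vs. the Kelly stake 2p - 1
        g = p * log(1 + f) + (1 - p) * log(1 - f)   # expected log-growth per bet
        print(f"stake {f:.0%}: expected log-growth {g:+.3f}")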

thefance Mar 8
I'm saying "betting 100% on modernity" isn't really analogous to "betting 100% on
AI" because it's not a 100% wager so much as a 100% level of confidence. I think
Modernism has downsides too, but it hasn't irrevocably bankrupted civilization yet.
There's still time to turn the ship around if we so choose.
REPLY

gregvp Mar 8
There is a concept in economics called "revealed preference". The idea is, don't ask people
what they prefer, look at what they buy. That tells you their real preferences.

The parts of the US that are growing are the "sprawl" parts: various cities in Texas, and
Atlanta. Especially Atlanta.

Unpalatable as it may be to you and me, that tells you what most people want. The tyranny of
the majority may be oppressive, but it's not nearly so oppressive as other tyrannies.
REPLY (1)

Shaked Koplewitz Writes shakeddown Mar 8


That's not actually what we see though - revealed preference shows that prices are
highest (by far) in the few places that are less like that (like Manhattan or SF/Berkeley).
The reason the sprawl areas are the ones that grow is that sprawl is the only thing it's
legal to build.
REPLY

Tatterdemalion Mar 7
Don't confuse the Kelly criterion with utility maximisation (there kind of is a connection, but it's a bit
of a red herring).

If you have a defined utility function, you should be betting to maximise expected utility, and that
won't look like Kelly betting unless your utility function just happens to be logarithmic.

The interesting property of the Kelly criterion (or of a logarithmic utility function compared to any
other, if you prefer) is that if Alice and Bob both gamble a proportion of their wealth on each round
of an iterated bet, with Alice picking her proportion according to the Kelly criterion and Bob using
any other strategy, then the probability that after n rounds Alice has more money than Bob tends to
1 as n tends to infinity.

That doesn't tell you anything about their expected utilities (unless their utility functions happen to
be logarithmic), but it's sometimes useful for proving things.
REPLY (1)

Oscar Cunningham Mar 7


What's the exact statement of that result about the competition between Alice and Bob? In
particular are they betting on the same events, or independent events with the same edge?

If it's the former, Bob could do something like always betting so that he will have $0.01 more
than Alice if he wins, until he does win, and then always betting the same as Alice. This would
make him very likely to come out ahead of Alice, at the expense of a small probability of going
bankrupt.
REPLY (1)

Tatterdemalion Mar 7
Oh god, now you're asking. I'm on a phone, and hate reading maths on it, so check this on
Google, but there's an obvious-but-weak form of it where Alice and Bob are each
constrained to bet the same proportion in every round (take logs and use the CLT), and I
think there are stronger, more general versions too.
REPLY
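
A rough simulation of that weak form (a sketch, assuming both bettors hold fixed fractions of their own bankrolls and bet on the same sequence of 75% even-money flips): the estimated probability that the Kelly bettor is ahead climbs toward 1 as the number of rounds grows.

    import random

    def p_kelly_ahead(other_fraction, rounds, p=0.75, trials=5_000):
        # Estimate P(the Kelly bettor has more money than a fixed-fraction
        # bettor after `rounds` even-money bets on the same coin flips).
        kelly = 2 * p - 1                     # 0.5 for a 75% even-money bet
        ahead = 0
        for _ in range(trials):
            a = b = 1.0
            for _ in range(rounds):
                win = random.random() < p
                a *= (1 + kelly) if win else (1 - kelly)
                b *= (1 + other_fraction) if win else (1 - other_fraction)
            ahead += a > b
        return ahead / trials

    for n in (10, 50, 250):
        print(n, p_kelly_ahead(other_fraction=0.9, rounds=n))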

Tatterdemalion Mar 7
I think this sort of argument only makes sense if the numbers you plug in at the bottom are broadly
correct, and the numbers you're plugging in for "superintelligent AI destroys the world" are
massively too high, leading to an error so quantitative it becomes qualitative.
REPLY (1)

Ch Hi Mar 7
I don't think we have a reasonable way to estimate how likely a self-motivated super-intelligent
AI is to destroy the world. So try this one: How likely is a super-intelligent AI that tries to do
exactly what it is told to do to destroy the world? Remember that the people giving the
instructions are PEOPLE, and will therefore have very limited time horizons. And that it's quite
likely to be trying to do several different things at the same time.
REPLY

Emma_M Mar 7
The issue, for me anyway, is not that the old anti-nuclear activists were unable to calculate risks properly. The issue is they basically didn't know anything about the subject they were so worried about, partially because nobody did. In the end, yes, they made everything worse. The world might have been better served had the process of nuclear proliferation been handled by experts chosen through sortition.

The experts in AI risk are *worse than this.* The AI is smarter than I am as a human? Let's take that
as a given. What does that even mean? There is a very narrow band of possibilities in which AI will
be good for humanity, and an infinite number of ways it could be catastrophic. There's also an
infinite number of ways it could be neutral, including an infinite number of ways it could be
impossible. The worry is itself defined outside of human cognition, in a way that makes the issue even more difficult than it otherwise would be, so how are you supposed to calculate risk if you
can't even define the parameters?
REPLY (2)

Ch Hi Mar 7
It is quite clear that human equivalent AI is possible. The proof relies on biology and CRISPR,
but it's trivial. And it is EXTREMELY probable that an AI more intelligent than 99.9% of people
is possible using the same approach. Unfortunately, there are very good grounds to believe
that any AI created in that manner would be as self-centered and have as short a planning
horizon as people generally do. This is just an existence argument, not a recommendation.

AI is not a particular technology. Currently we are using a particular technology to try to create
an AI, but if that doesn't work, there are alternatives. An at least weakly superhuman AI is
possible. And if you don't define "good for humanity" then the only good I can imagine is
survival. It's my opinion that given the known instability of human leaders and the increasing
availability of increasingly lethal weaponry, if leadership of humanity is not replaced by AIs, we
stand a 50% (or higher) chance of going extinct within the century, a chance that will continue increasing. And AI is, itself, an existential threat, but if we successfully pass that threat, the AI will act to ensure human survival. I take this to be a net good. It also is quite unlikely to derive pleasure from inflicting pain on humans. (The 50% chance is because it might not like us enough to ensure our continued existence, and might find us bothersome... and is a wild guess.)

Once people start invoking infinities, I start doubting them. Perhaps you could rephrase your
argument, but I think its main flaw is that it doesn't consider just how dangerous humans are
to human survival.
REPLY

thefance Mar 8
One of the things I learned from LW (if I'm remembering correctly) was the multi-armed bandit problem, which is a situation where you need to experiment with wagers just to discover the payoff structure. Without hindsight, the payoff matrix is a total black box. Therefore, whether the "optimal" strategy is risky or conservative is anyone's guess, a priori.

I do think a lot of AI fear-mongering is a result of not understanding the nature of intelligence. If you can manage to put constraints on it, though, the way the study of thermodynamics bounds our expectations of engines, AI becomes less scary.
REPLY (1)
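
For reference, the bandit setup being gestured at can be sketched in a few lines (hypothetical arms and payout probabilities; epsilon-greedy is just one of many strategies): the payoff structure is hidden, so some wagers have to be spent purely on discovery.

    import random

    true_payout = [0.2, 0.5, 0.7]          # hidden from the agent
    counts = [0] * len(true_payout)
    estimates = [0.0] * len(true_payout)
    epsilon, total_reward = 0.1, 0.0

    for _ in range(10_000):
        if random.random() < epsilon:       # explore: try a random arm
            arm = random.randrange(len(true_payout))
        else:                               # exploit the current best estimate
            arm = max(range(len(true_payout)), key=lambda i: estimates[i])
        reward = 1.0 if random.random() < true_payout[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean
        total_reward += reward

    print(estimates, total_reward / 10_000)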

Emma_M Mar 8
I think you are right that a part of the issue is not understanding the nature of intelligence.
But I think that's just one aspect. Not understanding the nature of intelligence means we
also don't have a good account of psychology, which means we also don't have a good
account of neurology, nor philosophy of mind. Put another way, we don't know "where"
intelligence comes from, how it exactly relates to neurology, how that relates to decision
making, or how any of that is supposed to apply to something explicitly non-human and
"more intelligent than humans" even if we did know all of that.

I can fully admit AI might kill us all. But I think if it does, it's more likely to be because
people with the priorities of Scott Alexander are extremely worried about it, and, through
ignorance, are going to give it the machine equivalent of a psychological issue, like say,
psychopathy or low-functioning Autism.

Though I also admit I make no predictions on the probability of this as opposed to anything else. That's kind of my point, and why I'm so tired of AI fear-mongering.
REPLY (1)

thefance Mar 8
I have an alternate perspective.

I have this pet theory that the reason intelligence evolved is because it allows us to
simulate the environment. I.e. simulation allows us to try risky actions in our mind,
before we try them irl. It's often calorically cheaper than putting life and limb on the
line. This dovetails with my hunch that life is just chemical disequilibrium. It dovetails
with my hunch that what set humans apart from apes was cooking. And it dovetails
with why I think humanity acquired a taste for stories/religion/sports. It's
thermodynamics, all the way down.

If true, then Carnot's Rule bounds the threat of AI. Just like it bounds everything else
in the universe. A jupiter brain might have orders of magnitude more processing
power than a human brain. But "intelligence vs agency" is sigmoidal, and humanity is
already rightward of the inflection point. Thus, the advantage that a jupiter brain
offers over a human brain is subject to diminishing returns. AI still might do scary
things, but it's unlikely to do things that couldn't already be accomplished by an evil
dictator. I suspect most skeptics of the singularity share this intuition, but can't find
the right words.

None of this depends on knowing the particular inner-mechanisms of the brain.


REPLY (1)

Carl Pham Mar 8


Well, we know the architecture of the human brain *allows* for 200 IQ, but it
turns out we aren't that smart in general -- evolution has *not* driven us to the
maximum possible intelligence for our architecture, the way it drove the cheetah
to the maximum possible speed for its physical architecture. That does suggest
that even among humans, there may be a diminishing returns aspect to
intelligence. Maybe there is some accompanying drawback to everyone being IQ
180, or else it just doesn't return enough to be worth the development and
quality-control cost.
REPLY (1)

thefance Mar 9
I don't think there's any algernic drawbacks. I think the bottleneck is just
calories. Human brains already represent a big capex and big opex. Too big
for the diets of most species to afford. Meanwhile, people with 200 IQ are
like giraffes, in that the problems they can solve represent a tiny set of
high-hanging fruit.
REPLY

Prester John-Boy Writes Prester’s Miscellany Mar 7


I feel like gambling is a bad reference for the kind of decision-making involved with AI-
development. You can always walk away from the casino, whereas the prospect that someone else
might invent AGI is a major complication for any attempt at mitigating AI-risk. A scientist or
engineer, who might otherwise leave well enough alone, could, with at least a semblance of good
reason, decide that they had best try to have some influence on the development of AGI, so as to
preempt other ML-researchers with less sense and fewer scruples.

This is not to say that averting AGI is impossible, just that it would require solving an extremely
difficult coordination problem. You'd need to not only convince every major power that machine
learning must be suppressed, but also to assure it that none of its rivals will start working on the AI
equivalent of Operation Smiling Buddha.
REPLY

konshtok Mar 7
what are the chances of a newly developed AI having both the ill intent and the resources to kill us
all?
REPLY (2)

Bob Frank Writes Bob Frank’s Substack Mar 7


As proven by any number of real-life rags-to-riches underdog stories, you don't need to start
out "newly developed" in possession of intent and resources to accomplish something
significant in life; just intent and time, which you use to accumulate the necessary resources.
REPLY

Pete Mar 7
I won't comment on the chances of the "ill intent" part. However, if we simply look at the current state of cybercrime, it should be assumed that any newly developed ill-intentioned AI connected to the internet, with capability equivalent to (or better than) a modestly skilled teenage hacker and perhaps the time to find a single vulnerability in some semi-popular software, would be able to amass within weeks or months: (a) financial resources amounting to the equivalent of multiple millions of dollars in cryptocurrencies; (b) a similar scale of new, external compute power and hardware to run its "thinking" or "backups" on in the cloud; (c) a dozen "work from home" people for various mundane tasks in the physical world, just as cybercriminals hire 'money mules'; and (d) a few shell companies operated by some lawyer to whom it can mail orders to arrange purchases or other actions which require a legal persona.

Up to this point there's no speculation; this is achievable because it has been achieved by multiple human cybercriminals. Now we can start speculating: whether those are sufficient resources to kill us all depends on the smartness of the agent, but I'd guess so? Those assets would be sufficient to organize the making and launching of a bioweapon, if the AI figured out how to make one.
REPLY (1)

Gbdub Mar 7
But the AI would also have to do all that (and more importantly, LEARN how to do all that)
without tipping off its creators to the fact that it’s gone off the rails, and then win the
ensuing struggle. And the humans fighting back against the AI will have less than
superhuman but very powerful AIs on their side.
REPLY (1)

Pete Mar 7 · edited Mar 7


What struggle? Our experience in cybercrime suggests that such activities can go for
a very, very long time without being detected, and when they do, they wouldn't be
easily distinguishable from ordinary human criminal activity or linkable to the source.

"Learning to do all that" is the hard part of making a human-equivalent-or-better AI,
however, once the intelligence capability is there, *any* reasonable AI has enough data to learn all of that without needing anything from the creators or doing anything observably dangerous - even ChatGPT includes a rudimentary ability to generate semi-working exploit code, and the commonly used text+code training datasets have more than sufficient information for a powerful agent to learn all of what I mentioned.

So my expectation is that if the malicious agent has an unsupervised connection to the general internet (which it shouldn't have, but I have already seen multiple users posting about how they wrote simple scripts to connect ChatGPT to their actual terminal with a capability to execute arbitrary commands, so...), then the creators would get tipped off only after the "kill all humans" plan starts killing them, by which time the "fight" would already be over.

And after all, assuming that no very special hardware is needed, once the model
gains its first money, it can rent cloud hardware to run a copy of itself outside of any
possibility of supervision by the creators.
REPLY (1)

Gbdub Mar 8
Cybercrime is obnoxious but it’s hardly an existential threat and it’s generally a
known attack vector. At some point the AI is going to have to start significantly
manipulating the physical world to kill people and that opens up a ton of chances
to get caught.

AIs as we know them can be given a huge training database, but they are still "learn by doing" agents - they need some sort of feedback to self-improve. If
they are doing something their creator did not train them on, especially if it’s
something no human has ever done, they are going to have to experiment in the
“real world” a bit. This should eventually get discovered unless the AI authors
are either colluding or completely asleep at the wheel.

And there still might be fundamental limitations on what AI can do (it can't communicate with itself faster than the speed of light, it probably can't brute force its way through properly implemented encryption algorithms, if it turns the whole planet into computronium it still needs to cool itself, it can't cross air gaps without control of a physical manipulator, etc.).
REPLY (1)

Razorback Mar 8
If we develop some AI that is caught being naughty, and we successfully shut it down, is that the end of AI research? Do we all agree never to try
again? I don't think we will. Eventually our adversary will be a
superintelligence.

Robert Miles has an analogy to chess that I think is apt. (source: https://www.youtube.com/watch?v=JVIqp_lIwZg&t=2s)

You are an amateur chess player. You have developed an opening that beats
all your friends. You can't see how anyone could beat it. You will face
Magnus Carlsen in a game soon.

I tell you how I'm almost sure you will lose. You claim that you've thought of
all possible counters to your special opening, and you haven't found any. I
still think he will beat you. You ask me to give some examples of how he could do so. I look at your opening and, since I don't know much about chess, I
can't find any problems with it. I might give some suggestions but you
counter that you've already thought of that. I can't find flaws in your
strategy.

I'm still pretty sure you will lose.


REPLY (1)

Gbdub Mar 8
It’s not clear to me why a newly minted super intelligent AI is Carlsen in
that scenario rather than the amateur (with perhaps an IQ much higher
than Carlsen’s.

A “just became self aware” AI is likely to be extremely smart but also naive - it’s the amateur who has figured out how to beat his friends
(the AI’s training data, presumably tuned to whatever the researcher’s
actual goals for the AI are) but never played against a real master
(manipulating the “real world”, which is a lot messier than chess). In
“escaping the box” the AI is almost certain to encounter a large number
of unexpected (to itself) situations and setbacks - maybe “clever” is a
superpower that allows it to breeze right past all that, but maybe
“intelligence” can’t literally solve everything (especially not without
getting found out).
REPLY

Kimmo Merikivi Mar 7


For the record, our failure to achieve nuclear panacea is slightly more nuanced than Green
opposition on ideological grounds: evidence seems to suggest it's more about electricity market
deregulation. In retrospect we really really should have built more nuclear and less coal and gas,
either through states stepping in and taking it upon themselves to finance nuclear projects, or
taxing fossil fuels out of the market. But Green opposition following Chernobyl and Three Mile Island seems to have been more of a nail in the coffin, when the real reason for the lack of nuclear adoption appears to have been financial infeasibility (given market conditions at the time).

https://mobile.twitter.com/jmkorhonen/status/1625095305694789632
REPLY (1)

Brett Mar 7
The writer Austin Vernon had a pair of good pieces on nuclear as well:

https://austinvernon.site/blog/nuclear.html

https://austinvernon.site/blog/nuclearcomeback.html

There were a specific set of conditions that favored nuclear power until the 1980s, and it
wasn't just regulatory. They benefited from not having to compete in deregulated electricity
markets, a lot of the early plants were made rather cheaply and weren't exceptionally reliable
(upgrades later improved that but also made nuclear more expensive), and they didn't have to
compete with cheap gas power especially.

Nuclear also benefits from regulation. It's how they get their liability protection from
meltdowns - if they actually had to assume full liability for plant disasters, it's questionable
whether they could afford the cost of insurance.
REPLY (1)

Pete Mar 7
I wouldn't say that it's a nuclear-specific liability protection - if, e.g., coal plants had to assume full liability for their consequences, then the cost of coal-derived electricity would be even larger than the nuclear insurance you mention, since the normal operation of coal plants causes more cancer than any reasonable estimate of nuclear plant meltdown risk, and that's ignoring any carbon/warming effect.

Of course, if we suddenly start charging one type of energy (e.g. nuclear) for its negative externalities, then it becomes uncompetitive - but if we did that for all means of electricity generation, I think nuclear would be one of the options that would work out.
REPLY (1)

Gbdub Mar 7
Right, and this is why anybody who thinks the solution to climate change involves
carbon tax but still opposes nuclear ought to smack themselves on the head and say
“why didn’t I think of that!” The logic is right there.
REPLY

Chris K. N. Mar 7
I agree with you on AI, but not necessarily on nuclear energy (or even housing shortages). Partly
because I don't agree that "all other technologies fail in predictable and limited ways."

Yes, we're in a bad situation on energy production and lots of other issues, and yes, we are reacting
too slowly to the problems.

But reacting too slowly is pretty much a given in human affairs. And, I'm not sure the problems we
are reacting too slowly to today, are worse than the problems we would be reacting too slowly to if
we had failed in the opposite direction.

To continue with nuclear as an example: I'm generally positive about adding a lot more nuclear power
to the energy mix. But I would like to hear people talk more about what kind of problems we might
create if we could somehow rapidly scale up production enough to all but replace fossil fuels?
(≈10X the output?) And what kind of problems would we have had if we started doing that 50 years
ago?

With all the current enthusiasm for nuclear energy, I wish it were easier to find a good treatment of
expected second- and higher-order effects of ramping up nuclear output by even 500% in a
relatively short period of time.

Sure, nuclear seems clean and safe now. But at some point, CO2 probably seemed pretty benign,
too. After all, we breathe and drink it all day long, and trees feed off it. I know some Cassandras
warned about increasing the levels of CO2 in the atmosphere more than a hundred years ago, but
there was probably a reason no one listened. "Common sense" would suggest CO2 is no more
dangerous than water vapor. It was predictable, but mostly in hindsight.

So what happens when we deregulate production of nuclear power while simultaneously ramping
up supply chains, waste management, and the number of facilities; while also increasing global
demand for nuclear scientists, for experts in relevant security, for competent management and
REPLY (1)

Grape Soda Mar 7


But we know much less about what is an existential risk than we think we do. Not the least
because political actors like to encourage fear in order to benefit from pretending to solve
problems. Humans are actually really good at solving problems with an emergent process,
probably because by the time people are working on it, the need is clear. Humans are not so
good at finding solutions in a top down manner, where those tasked with finding a solution
don’t have skin in the game, such as with government regulatory schemes.
REPLY (1)

Chris K. N. Mar 8
I’m not sure I understand the first part of your comment. What is existential risk seems
pretty self-evident to me.

To me it means: Risk of an event or series of events that would cause the death of a large
share of humanity – billions of people – and trigger the collapse of civilization. Examples
are large asteroid impacts, nuclear war at a certain scale, lethal enough pandemics,
severe enough climate change…. You seem to be saying that these risks are often
exaggerated, and so we don’t know which ones we are right to care about? If I got that
right, I would think that any non-zero chance of something like that happening seems like
risk worth taking seriously.

As for the problem solving capacities of humans:

We are creative, sure. But pretty much every solution we come up with creates a new
problem (not always as serious as the original problem, but often enough) when scaled to
a population level. The new problem requires a new solution, which creates new
problems. It is almost a natural law, related to evolution: Our creativity is an adaptation
mechanism, and adaptation typically leads to selection pressures (on an individual or
group level).

When populations are small and local, and solutions and technology are weak and local,
that doesn’t affect the natural balance of the planet much. But once everyone on the
planet is a single population, and problems and solutions have global impact, our
creativity and the solutions themselves become existential risks (imagine if we got the
COVID vaccine tragically wrong and everyone who took it were to spontaneously combust, or if we eradicate some invasive species of mosquito somewhere, so as to get rid of some disease, just to realize we triggered something that makes ecosystems start coming apart at the seams, or if we do gain-of-function research and ... you know.)
REPLY

Joel Long Mar 7


My impression is that estimates of the risks associated with near-term AI research decisions vary
by several orders of magnitude between experts, which means different people's assessments of
the right Kelly bet for next-3-year research decisions are wildly different.

Has anyone put together an AI research equivalent of the IPCC climate projections? Basically laying
out different research paths, from "continued exponential investment in compute with no breaks
whatsoever" to "ban anything beyond what we have today". This would enable clear discussion, in
the form "I think this path leads to this X risk, and here's why". Right now the discussion seems too
vague from a "how should we approach AI investment in our five year plan" perspective, and that's
where we need it to be imminently practical.
REPLY (3)

Ch Hi Mar 7
When you ask for that remember that the IPCC routinely trimmed excessively dangerous
forecasts from their projections...for being out of line with the consensus. (They may also have
trimmed excessively conservative forecasts, but if so I didn't hear of that.)
REPLY (1)

Joel Long Mar 7


Right, but I'm not suggesting publishing consensus projections of *outcomes* -- there
isn't any consensus there I can see -- just outlines of research paths, in the same sense
as "in this model we produce this much energy using this mix of technologies, releasing
this amount of greenhouse gases".

In other words, reference sets of behavior to enable apples-to-apples discussion of risk.


REPLY

Joel Long Mar 7


I also think this exercise would be helpful for trying to define paths in a way that's helpful to:

1) create reference language to encourage AI researchers to adopt as safety policy (e.g. define
exactly what we want OpenAI to agree to, and gradations of commitments)

2) work toward policy language to put in international agreements with other countries. As with
climate change, US policy in isolation isn't enough
REPLY

Victualis Mar 7
AI research is not following a linear trajectory so it's difficult to do practical planning.
REPLY (1)

Joel Long Mar 7


I disagree; large elements can be planned. Examples:

1) Planned compute allocation

2) Target model sizes

3) Specific alignment testing required for (a) public access or (b) moving to work on the
next model

4) Commitments to external audits of practices

5) Regular public reports on research process and progress

6) Commitment to public "safety incident reports" when a system behaves significantly outside well-specified parameters (near term: someone gets the AI to tell them how to
build a bomb, but this is likely to get more alarming as time goes on)

None of these inherently make research safer. But they encourage transparency, and
provide opportunities for routine press coverage in a way that can pressure companies to
care. When there's a big safety recall on cars, it makes the news and is bad PR for car
companies; we want those types of incentives on AI companies.

We can't directly plan for the _results_ of the research -- that's the nature of research --
but we can push for clear disclosure of both plans and policy, and discuss how different
safety policies are likely to impact research rate.
REPLY (1)

Joel Long Mar 7


Note: so far as I know, no alignment test we can do now is passable, nor do any of
them reflect deep understanding of the model under test -- but I still think there's
value in beginning the process of standardizing, defining explicit goals (even if we
can't yet meet them), and enforcing norms, so they're already in place as we
(hopefully!) develop better assessment tools.
REPLY

George Talbot Mar 7


Yeah but what you're calling "AI" right now is turbocharged autocomplete. Try to ask it a question
that requires reasoning and not regurgitation and you get babble, burble, banter, bicker, bicker,
bicker, brouhaha, balderdash, ballyhoo.
REPLY (2)

Bob Frank Writes Bob Frank’s Substack Mar 7


There's far less of it now than there was just a year ago. Just yesterday I saw an example
where someone put in an extremely convoluted piece of programming code, the product of
intentional obfuscation, and asked ChatGPT "what does this do?" And it gave the correct
answer, significantly faster than even the best of programmers could have done.

That looks pretty close to actual reasoning, from a lot of people's perspective.
REPLY

TGGP Mar 7
"These are words with 'D' this time!"

https://www.youtube.com/watch?v=18ehShFXRb0
REPLY

Lupis42 Mar 7
The people who opposed nuclear power probably put odds on it similar to those you put on AI. If your
"true objection" is that this is a Kelly bet with ~40-50% odds of destroying the world, your
objection is "the proponents of <IRBs/Zoning/NRC/etc> are wrong, were wrong at the time for
reasons that were clear at the time, and clearly do not apply to AI".

Otherwise, we're back to "My gut says AI is different, other people's guts producing different
results are misinformed somehow"
REPLY

Hello Mar 7
A nuclear energy expert illustrates how lots of own-goals by the industry and regulatory madness
prevented and prevents widespread adoption. “The two lies that killed nuclear power” is among my
favorite posts. https://open.substack.com/pub/jackdevanney?r=lqdjg&utm_medium=ios
REPLY (1)

Josaphat Mar 7
“regulatory madness”

The funny thing is I keep hearing that meme repeated but never hear exactly what regulations
they want deleted.

Would seem to be a trivial exercise if there is so much “madness”.

I suspect any actual response would be vague like a SA post on bipolar treatment or a Sarah
Palin “all of them”.

The other funny thing is that 4 of the 5 nuclear engineers I’ve discussed the topic with are in
the nuclear cleanup business.
REPLY (2)

SimulatedKnave Mar 7
Go look at the guy's substack, then. There's examples.
REPLY

Hello Mar 7
One big one is ALARA, or “as low as reasonably achievable” wrt radiation. Obviously, this
is a nebulous phrase and gets used to apply ever increasing pressure and costs to
operators to an extreme. Another is LNT, or “linear no threshold”, which essentially
ignores the dose response relationship to radiation over time.
REPLY

Matt Mar 7
This is a similar line of reasoning to the one Taleb takes in his books Antifragile and Skin in the Game. Ruin is more important to consider than probabilities of payoffs, especially if what's at risk is at a higher level than yourself (your community, environment, etc.). If the downside is possible extinction, then paranoia is a necessary survival tactic.
REPLY

Zarine Swamy Writes Ethical Badass Tales Mar 7


I guess that's the eternal dilemma. How do we use science & technology gainfully while at the same time having safeguards against misuse?

Btw, whatever the new discovery may be, unless the population explosion is controlled, pollution cannot be.
REPLY

Bob Frank Writes Bob Frank’s Substack Mar 7


> The YIMBY movement makes a similar point about housing: we hoped to prevent harm by
subjecting all new construction to a host of different reviews - environmental, cultural, equity-
related - and instead we caused vast harm by creating an epidemic of homelessness and forcing
the middle classes to spend increasingly unaffordable sums on rent.

Most of these counterexamples are good ones, but the YIMBY folks are actually making the same basic mistake that led the people in the counterexamples astray: they're not looking beyond the immediately obvious.

The homelessness epidemic which they speak of is not a housing availability or affordability
problem. It never was one. Most people, if they lose access to housing or to income, bounce back
very quickly. They can get another job, and until then they have family or friends who they can
crash with for a bit. The people who end up out on the streets don't do so because they have no
housing; they do so because they have no meaningful social ties, and in almost every case this is
due to severe mental illness, drug abuse, or both.

Building more housing would definitely help drive down the astronomical cost of housing. It would
be a good thing for a lot of people. But it would do very little to solve the drug addiction and mental
health crises that people euphemistically call "homelessness" because they don't want to confront
the much more serious, and more uncomfortable, problems that are at the root of it.
REPLY (3)

dionysus Mar 7
I've seen this argument before, and I believe it partially. But an opponent would say that
bouncing back is a lot easier with cheaper housing than with expensive housing, both for those
with social ties and those without. People who despair at how they're going to bounce back
might start to abuse drugs, which in turn might aggravate mental illness.

As evidence, opponents say that housing cost is the number one predictor of homelessness
(e.g. https://www.latimes.com/california/story/2022-07-11/new-book-links-homelessness-city-
prosperity).

What would you say to these arguments?


REPLY (3)

Brett Mar 7
This. I think it's a big deal if a drug addict or mentally ill person can at least get a private
room for rent (especially as part of a broader set of support services) versus being out on
the street or in a dangerous shelter.
REPLY

Bob Frank Writes Bob Frank’s Substack Mar 7 · edited Mar 7


> What would you say to these arguments?

I would say that correlation does not imply causation. There's another factor at work here
which the article doesn't mention: migration.

We have freedom of interstate travel in the USA, recognized by the courts as a Constitutional right, and healthy people who find themselves priced out of local markets can and do move to more affordable places. Meanwhile, places with high costs of living also tend to be in jurisdictions that promote addict-enabling policies, which bring more of them in.

The authors of the study can look at prices all they want, but the statistic they don't seem
to be looking at is "what percentage of the long-term homeless population is comprised
of individuals who do not have problems with drug abuse or mental illness?"
REPLY

Carl Pham Mar 8


Why do cities have way more homeless than the country? Because people with problems
go where they can scratch out *some* kind of life -- and the city is where that's possible. You
can't beg or steal or run cons in the country nearly as easily as in the big anonymous city.

And a big anonymous *wealthy* city -- where the price of real estate is sky high -- is even
better, because *those guys* are probably going to have some welfare programs, too.
REPLY

Brett Mar 7
I don't think you're wrong about people bouncing back quickly most of the time, especially if
they have a job or family support network. But at the macro-scale, it really is about housing
affordability.

Rates of homelessness track consistently with housing affordability issues, not rates of drug
addiction or mental illness. As the piece I'm linking to below points out, Mississippi has one of
the most meager public assistance programs in the country for mental health - and yet one of
the lowest rates of homelessness in the country. West Virginia, meanwhile, is one of the worst
states when it comes to drug addiction - but also has one of the lowest homelessness rates in
the country.

We even saw it with the deinstitutionalization movement. They used to think that was the
source of a lot of homeless people, but most of them apparently did find cheap housing - even
if it was stuff like rooms for rent and dilapidated SRO stuff.

https://noahpinion.substack.com/p/everything-you-think-you-know-about
REPLY (1)

Leo Abstract Mar 7 · edited Mar 7


Without even the most modest good faith effort to track where the homeless in one place
actually come from, these numbers are all meaningless. The people generating such
numbers always sound like folks who have never actually visited Mississippi or West
Virginia and have always had a relatively free choice of nice places to live. It should
surprise no one, and yet it always does, that the homeless who are functional enough to
be able to live independently on the street are functional enough to make good choices
about where to move -- choices which track almost perfectly with the general
preferences that create the differences in market price from one area to another to begin
with.
REPLY (1)

Brett Mar 7
The piece itself actually talks about this. It's mostly locals, not migrants - 65% of LA
County homeless have lived in the area for 20 years or more, and 75% of them lived
in LA before becoming homeless.
REPLY (1)

Leo Abstract Mar 7


Yes, those are the self report numbers. Having worked with the homeless, I can
testify that the life histories they give once you get to know them a bit better are
full of all kinds of traveling. Even if we put on our optimism glasses and pretend
that this is the one domain in which the homeless are particularly scrupulous
when answering polls, we have to look at what the terms even mean. Someone
who is now living on the street who came to LA 20 or 25 years ago to stay on a
friend's couch until he hit it big isn't going to report that he has been the same
kind of homeless for all of that time. But that is completely beside the point,
which is that folks of all income levels move to places that are desirable to live.
Good luck finding someone who moved to Mississippi from LA 25 years ago with
nothing but a duffel bag hoping to sleep on a friend's couch until he got his feet
under him again.
REPLY

Leo Abstract Mar 7


Bob, the obfuscation on this topic is worse than you think, and it goes many layers deeper. Not
only is the homelessness epidemic not a housing availability or affordability problem, the
housing availability and affordability problem isn't a housing availability and affordability
problem either. It never was one. Even recognizing what kind of problem it is represents the
very worst and most taboo kind of wrongthink.
REPLY (2)

Bob Frank Writes Bob Frank’s Substack Mar 7


Oh, I'm well aware of the issues you're talking about. I write about some of the root-cause
stuff on my own Substack. I just try to keep comments that could be perceived as trolling
out of communities that wouldn't appreciate it.
REPLY

dionysus Mar 8
Please elaborate. What kind of problem does it represent, and how do you know?
REPLY

Ash Lael Mar 7


I think that the Kelly Criterion metaphor actually implies the opposite of what Scott is arguing here.

The Kelly Criterion says "Don't bet 100% of your money at once". But it also says it's fine to bet
100% - or even more than 100% - as long as you break it into smaller iterated bets.

To analogise to AI research, the Kelly Criterion is "Don't do all the research at once. Do some of the
research, see how that goes, and then do some more".

There's not one big button called "AI research". There's a million different projects. Developing
Stockfish was one bet. Developing ChatGPT was another bet. Developing Stable Diffusion was
another bet.

The Kelly Criterion says that as you make your bets, if they keep turning out well, you should keep
making bigger and bigger bets. If they turn out badly, you should make smaller bets.

To analogise to nuclear, the lesson isn't "stop all nuclear power". It's "Set up a bit of nuclear power,
see how that goes, and deploy more and more if it keeps turning out well, and go more slowly and
cautiously if something goes wrong."
REPLY (1)
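
A tiny sketch of that iterated reading (illustrative numbers only, not from the post): the Kelly fraction stays fixed, but the absolute stake automatically scales up after wins and down after losses, which is the "bigger bets when things go well" behavior described above.

    import random

    p, fraction, bankroll = 0.75, 0.5, 100.0   # illustrative values
    for step in range(10):
        stake = fraction * bankroll            # stake grows or shrinks with the bankroll
        won = random.random() < p
        bankroll += stake if won else -stake
        print(f"step {step}: {'won ' if won else 'lost'} ${stake:8.2f} -> bankroll ${bankroll:9.2f}")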

thefance Mar 8
the denominator of "100%" is your current bankroll. Not some predetermined runway.
REPLY

Peter Gerdes Writes Peter’s Substack Mar 7 · edited Mar 7


Where the heck do you get the 1023/1024 figure we're all dead? Your own points about the
limitations of the explosion model (once we get one superintelligent AI it will immediately build even
smarter ones) and about the limitations of intelligence itself as the be all and end all of a measure
of dangerousness defang the most alarmist AI danger arguments.

And if you look at experts who have considered the problem they aren't anything like unanimous in
agreeing on the danger much less pushing that kind of degree of alarmism.

And that's not even taking account of the fact that, fundamentally, the AI risk story is pushing a
narrative that nerds really *want* to believe. Not only does it let them redescribe what they're doing from working as a cog in the incremental advance of human progress to trying to understand the most important issue ever (it's appealing even if you are building AIs), it also rests on a narrative
where their most prized ability (intelligence) is *the* most important trait (it's all about how smart
the AI is because being superintelligent is like a super-power). (obviously this doesn't mean ignore
their object level arguments but it should increase your prior about how likely it is many people in
the EA and AI spheres would be likely to reach this conclusion conditional on it being false).
REPLY (3)

Akidderz Writes E Pluribus Unum Mar 7


Thank you for articulating something I’ve found frustrating in these discussions: the smug
certainty that this is existential - something we just don't know. It very much reminds me of Y2K - I'm old enough that my memories of this are pretty clear. Sure, mitigation efforts helped prevent some problems, but the truth was that we just didn't know what was going to happen.
REPLY

Tom J Mar 7
There are also plenty of well-documented physical and theoretical constraints on the
capabilities of any algorithm, so all this speculation basically boils down to "imagine an
algorithm so infinitely smart that it is no longer bound by physical reality." And while I agree
that an algorithm unbound by the laws of physical reality would be pretty scary, I'm pretty sure
those laws will continue to apply for the foreseeable future.
REPLY

Emma_B Mar 7
My impression is also that AI risk is in fact something especially appealing for rationalists,
because AI is a fascinating, intelligence-related subject, and also probably because of
a tendency towards anxiety.
REPLY (1)

Tom J Mar 7
Yeah, it all seems kind of built on this Dungeons and Dragons sort of model of the world
where a high enough Intelligence stat lets you do anything (and a suspicious reluctance to
actually learn any computer science and apply it to the galaxy-brained thought
experiments we're so busy doing).
REPLY (1)

Peter Gerdes Writes Peter’s Substack Mar 8


TBF I've seen people with a fair bit of CS knowledge accept the huge risk argument.

Personally, I think they are underestimating the fact that 'natural' problems tend to
either have very low complexity or very high complexity and, in particular, the kind of
Yudkowsky evil god style AI would require solving a bunch of really large problems
that are at least something like NP complete (if not PSPACE complete). On plausible
assumptions about the hardness of NP those just aren't things that any AI is going to
be able to do w/o truly massive increases in computing power (which itself may make
the problems harder).

What's difficult is that it's very hard to make this intuition rigorous. I mean, my sense is
that surreptitiously engineering social outcomes with high reliability (knowing that if I
say X, Y and Z I can manipulate someone into doing some Q) is really
computationally difficult even if simple manipulation with relatively low confidence is
relatively easy. But it's hard to translate this intuition into a robust argument.
REPLY (1)

Tom J Mar 8
Yeah to be fair I think that's part of it--there's a vast set of problems that we
intuitively know are quite complex, but they're also hard (if not impossible) to
formally define, so there seems to be a certain approach, popular in these
circles, that concludes they're meaningless or trivial. But if you can't even
formally define the problem, throwing more compute at it won't get you any
closer to solving it.
REPLY (1)

Peter Gerdes Writes Peter’s Substack Mar 8


Interesting, but I don't think the issue is that people conclude they are
meaningless or trivial.

I think the problem is more that most problems in the real world are really
complex in the sense of having many different parts that can be optimized. I
mean there is a sense in which asking for the most efficient solution to the traveling salesman problem is simple, while asking for the most efficient way to write the code for the Substack back end is complex. Even if we specify that we mean minimizing the number of cycles it takes on such and such a processor to service a request (with some cost model for each read from storage), so the problem is fully formal, it's such a complex problem that any solution we find will admit tons of ways to improve on it.

Even for simple problems it often takes our best mathematicians a number
of attempts before they even get near an optimal solution. Thus, when you
encounter one of these complicated real-world problems you have the experience of seeing that pretty much every time someone comes up with a solution, you can find someone smarter (or perhaps just luckier, but we'll mistake that for intelligence) who can massively improve on the previous solution.

So I don't think people are assuming the problems are trivial. What they are
doing is overgeneralizing from the fact that, in the data set they have, it's almost guaranteed that being clever lets you make huge improvements, and then just kinda assuming this means you can keep doing that, rather than guessing that what they're really seeing is just the fact that they are very far from the optimum - but that the optimum may still not be that practically useful given real computational constraints.
REPLY (1)

Tom J Mar 8
Hmmm, all good points--thank you!
REPLY

maraoz Writes maraoz's Newsletter Mar 7


Scott, I'd like to bring up the possibility that the risks associated with not achieving AGI may
actually be greater than the risks of achieving it. If we get AGI, we may be able to use it to tackle
existential threats like climate change, energy scarcity, and space exploration for planetary
resilience. Have you considered this possibility? I haven't seen you talk about it. IMO, our options are either to achieve AGI and have a chance at avoiding disaster, or to face the likelihood of being doomed by some other challenge.
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


In the 1940's we thought nuclear weapons were the solution to the Nazis. As a result, we now
face a much bigger problem than the Nazis.

We don't need new tools. We need new attitudes.


REPLY (3)

Gbdub Mar 7
Remind me which parts of Europe are part of the third reich today? How big is the Greater
East Asia Co-Prosperity Sphere?
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


Remind me why I should reply to lazy little gotcha posts?
REPLY (1)

Gbdub Mar 7
I apologize, I misread your post to say that “we have a much bigger problem
WITH Nazis” (thought this was a lazy stab at “America is currently run by or in
danger of being run by Nazis”)
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


No worries, on with the show.
REPLY

Jeffrey Soreff Mar 7


"we now face a much bigger problem than the Nazis. "

Could you elaborate on this? The Nazis were an expansionistic power who basically
wanted to kill all non-aryans.
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


Which is worse?

1) The Nazis take over Western civilization, or...

2) Western civilization is destroyed in a nuclear war.


REPLY (2)

Jeffrey Soreff Mar 7


I'd phrase the choice a little differently:

1) The Nazis take over the world, and kill 90% of humanity or

2) E.g. The USA and Russia nuke each other, killing maybe 1 billion people
directly and maybe 3 billion people indirectly (mostly depending on whether
nuclear winter is real)

(1) is worse.
REPLY (1)

Hoopdawg Mar 8
The Nazis would not have taken over the world. (I concur with "Western
Civilization" as a realistic upper bound.)

Even if they realistically could, and even if they genuinely wanted to kill 90%
of humanity (which they did not, 5% perhaps), there's absolutely no way
they would have proceeded to. Assuming otherwise requires extreme
idealism, a belief in the primacy of ideology over reality. Nazism, as extreme
as it was, was still just a reaction to the material conditions of its adherents -
ambitious losers of the pre-war world order that was crumbling all around
them. They would be nowhere near as extreme as winners, and while the
world might have missed out on a few good things it did get out of the Allies
prevailing, the civilization would have continued more or less uninterrupted.
In an alternate reality, grandchildren of the WW2 Nazi dignitaries at
campuses of elite colleges are now performatively rejecting their country's
nationalist past.

Meanwhile, while we may argue to what extent the nuclear fallout would
have negatively affected humanity's material conditions - there's no doubt
it would indeed have affected them negatively. Which, among others, would
have created a permanent fertile ground for Nazi-like extremist ideologies.
REPLY (1)

Jeffrey Soreff Mar 8


"The Nazis would not have taken over the world. (I concur with
"Western Civilization" as a realistic upper bound.)"

I'd agree that they could not have immediately taken over the world.
Over the long run, if they had control over all of the resources of
western civilization, I think they might have. It isn't too different from
the colonial empires of the other European powers.

"grandchildren of the WW2 Nazi dignitaries at campuses of elite
colleges are now performatively rejecting their country's nationalist past"

Maybe yes, maybe no. Are grandchildren of the first CCP members
doing the equivalent at Beijing University?
REPLY (1)

Hoopdawg Mar 9
"It isn't too different from the colonial empires of the other
European powers."

This observation is, in a way, precisely what I had in mind.

1) That the Nazis as winners would not have acted any differently
from other European powers towards their colonies. (They may
proceed with their pre-war plan of ethnically cleansing Eastern
Europe to expand the German lebensraum, hence my 5%. But I
just don't see them genociding, say, Africa. They would treat it
badly, but would it be worse than what the other Europeans
already did?)

2) Colonial empires were quickly dismantled after the war, in a way that suggests some historical process at work. Might a victorious Third
Reich resist it more successfully? Perhaps. (Or perhaps they'd
have an even harder time, as they'd need to install imperial
bureaucracies from scratch, over unwilling subjects pining for
freedom and increasingly capable of resistance, and the
decolonisation would proceed even quicker.) But either way, Axis
win also means Japan's win which in turn means Asia is quickly
and permanently out of European sphere of influence. Can Nazis
proceed to subjugate it afterwards? Well, did Westerners
subjugate China? Or even, say, Vietnam?

"Are grandchildren of the first CCP members doing the equivalent
at Beijing University?"

Not that I'm aware of, but grandchildren of Soviet dignitaries and
REPLY (1)

Jeffrey Soreff Mar 10


Thanks very much for your detailed comment!

Well, of course all we can do is speculate, since, fortunately, the Axis lost.

There is both a question of what the Nazis would have wanted to do, and of what they would have been able to do. Given the priority the Nazis gave to genocide, in some cases superseding their military needs, I'm not at all sure about "But I just don't see them genociding, say, Africa." Why wouldn't they have wanted to do that?

In terms of what they would have wanted to do if they had been similar to other European colonial powers, even if it
wasn't outright extermination, it at least seems plausible that
they might have treated their conquests like
https://en.wikipedia.org/wiki/Leopold_II_of_Belgium, with a
substantial fraction of the conquered population killed.

In terms of what they would have been able to do - here the guesswork gets unavoidably even flimsier. Since the real outcome was that the Axis lost, it becomes a question of how much the timeline would have to have been changed, and what the other implications of that would have been. The simplest case is probably to consider what would have happened had the Axis been completely unopposed (Chamberlain on steroids?). In that case, if Hitler got the resources of all of Europe (and the USSR???), it would seem
REPLY

John Schilling Mar 8


Western civilization can be rebuilt in less than a thousand years.
REPLY

dionysus Mar 8
All the new attitudes in the world wouldn't have changed the fact that the Nazis were real,
the Nazis were very much a threat, and the Nazis were also capable of inventing an
atomic bomb if not defeated quickly enough. The right course to take in 1941 was
definitely not "we don't need new tools, we need new attitudes". It was "we must
absolutely get this tool before the Nazis do".
REPLY

David Speyer Mar 7


"Gain-of-function research on coronaviruses was a big loss." I am surprised that this statement is
in here with no footnotes or caveats. My understanding is that the current evidence is pretty good
for the original wet market theory -- that the jump from animals to humans happened at the wet
market and that the animals carrying the virus were imported for the food trade. In which case,
while GOF research wasn't helpful, it also did no harm. I've been persuaded by Kelsey Piper and
others that the risks of GOF research outweigh the rewards, but it looks like, in this case, there
were no gains and no harms.

I know this is controversial, but am surprised to see you citing it as if there is no controversy. I was
largely convinced by https://www.science.org/doi/10.1126/science.abp8715 and
https://www.science.org/doi/10.1126/science.abp8337 .
REPLY (1)

Delia Grace Mar 7


To my misfortune, I have been quite involved in the controversies over COVID origin. There is a lot of muddying of the waters. The Science papers are in no way definitive. The Chinese made the classic "looking for the keys under the lamppost" error - we know COVID was hot in the wet market but have no idea of its absence or presence in other parts of Wuhan because they never looked (confirmatory bias). There is a lot of suspicion that if there was evidence of a leak or GOF, it was long ago buried. The fact that a wet market was hot doesn't answer whether it was hot because someone brought bats to it from 1000 miles away, or because someone brought surplus bats from the nearby labs to it, or because someone got ill in the nearby lab and shopped at the wet market. These are just the absolutely reasonable hypotheses that last year were considered unmentionable and crazy. The more you know, the less you trust.
REPLY (1)

David Speyer Mar 7


Right, but: (1) We now know that both lineages A and B were at the wet market. It would be a strange coincidence if the animal-human crossover happened much earlier and the A/B split happened significantly before the wet market, yet both lineages still made it there. This last point is not one that I can evaluate, but virologists strongly believe that the A/B split happened before human crossover. (2) The samples taken at the market were taken in many places, and the virus was concentrated in the caged-animal area, particularly near animals which were potential COVID hosts, suggesting that it was brought in by an animal, not a person.

These are (as far as I know; I am definitely an amateur) the strongest evidence that
animal-human crossover was at the market.

Now, all of this is consistent with the virus being brought to the market by surplus bats sold from the WIV (or other labs). My understanding was that the market didn't sell bats, but maybe this wasn't completely true. But if this is the scenario, then GOF research is mostly irrelevant; you are describing a scenario that brings wild bats with wild virus to the market.
REPLY (1)

Delia Grace Mar 7 · edited Mar 7


my prior for a lab leak is quite high. that for an accidental gof leak is lower, and for a deliberate gof leak very low. (1) if there were several lineages floating around a very poorly biosecured lab (very likely) (could be they just collected a lot, or could have "made" some in the lab, not necessarily from gof but just from passaging), then it is not a very strange coincidence that several lineages would be floating around the local wet market. (2) if covid can infect other animals, as we know it can and does, it is not so strange that more is found around animal cages. (3) i have a lot of experience with wet markets and many of them sell things they say are not sold.
REPLY

Steve Estes Writes Wonky Observations Mar 7


In another related post, Aaronson posits an "odds of destroying the world" quotient, a probability of
destroying all life on earth that he would be willing to accept in exchange for the alternative being a
paradise where all our needs are met and all our Big Questions are answered by superintelligence.
He says he's personally at about 2%, but he respects people who are at 0%. I think I'm well south
of 2%, but probably north of 0. The CTO of my startup is a techno-optimist obsessed with using
ChatGPT and I'd guess his ratio is in the 5-10% range, which is insane.

Part of it has to come down to your willingness to bet *everyone else's lives* on an outcome that
*you personally* would want to see happen.
REPLY (2)

raj Mar 7
I'd be willing to make that bet for people if it results in them being in a paradise where all their
needs are met. Also considering humans yet to be born.

My Faust ratio is like .5, because I already think the risk of ruin for humanity is about that high anyway (or at least, possible outcomes have very low utility, like some WALL-E style dystopia). I would be willing to accept a ton of risk if it meant finding a possible golden path.
REPLY

April Writes sona ike lili Mar 7


i think accepting 5-10% chance of AI X-risk and hoping we get aligned superintelligence is
reasonable if you think there's a >10% chance that we're barreling towards a climate change /
nuclear war / bioterrorism / whatever apocalypse. but i don't really buy those numbers?
REPLY

Phil Tanny Writes TannyTalk Mar 7


We already face two existential threats, nuclear weapons and climate change. Our response to the
nuclear weapons threat has been largely to ignore it, and we're way behind what we should be
doing about climate change. On top of this we face a variety of other serious threats, too many to
list here. This is not the time to be taking on more risk.

If we were intelligent responsible adults, we'd solve the nuclear weapons and climate change
threats first before starting any new adventures. If we succeeded at meeting the existing threats,
that would be evidence that we are capable of fixing big mistakes when we make them. Once that
ability was proven, we might then confidently proceed to explore new territory.

We don't need artificial intelligence at this point in history. We need human intelligence. We need
common sense. Maturity. We need to be serious about our survival, and not acting like teenagers
getting all giddy excited about unnecessary AI toys which are distracting us from what we should
be focused on.

If we don't successfully meet the existential challenge presented by nuclear weapons and climate
change, AI has no future anyway.
REPLY (2)

Gbdub Mar 7
By what scenario do you believe that climate change risk is really “existential” (keeping in mind that WWII, the Black Plague, etc. were not in fact existential)?

Nuclear war seems a more plausible way to say make civilization largely collapse - but truly
“existential” is a very high bar!
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


I'm using "existential" to refer to our civilization, not the human species. I can see how
this usage could use improvement. I agree it would probably take an astronomical event to
make humans extinct.

Climate change is "existential" for the reason that a failure to manage it is likely to lead to
geopolitical conflict, with the use of nuclear weapons being the end game.

WWII isn't a great example, as a single large nuke has more explosive power than all the
bombs dropped in WWII. And there are thousands of such weapons. The US and Russia
have together about 3,000 nukes ready to fly on a moment's notice, with many more in
storage.

The point here is that if we don't solve this problem, all our talk about AI and future tech
etc will likely prove meaningless. The vast majority of commentators on such subjects are
being distracted by a mountain of details which obscure the bottom line.
REPLY (1)

Gbdub Mar 7
If the risk of climate change is really “just” the risk that it starts a nuclear war, is it fair
to treat it as a separate X-risk? Or perhaps, if nuclear weapons did not exist, would
climate change still be an existential risk in your opinion?

I just hear “climate catastrophe” thrown around a lot without really specifying what is
meant. Often it seems to be meant as “climate change will literally destroy civilization
through its direct effects” which I don’t think is well supported by science.
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


I can agree. Climate change is a big deal, but the worst effects would likely come
not from climate change itself, but from our reaction to climate change. That
said, we don't really know what might happen as the climate changes.
REPLY

TGGP Mar 7
Nuclear weapons are not an existential threat:

https://www.navalgazing.net/Nuclear-Weapon-Destructiveness

Nor do I think you've got an accurate estimate of the "existential" risk from climate change.
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


Lazy little social media gotcha comments which offer no argument beyond "you're wrong"
are also not an existential threat. Phew!
REPLY (2)

Ryan L Mar 7
It's not a lazy social media gotcha comment. The linked article provides a reasonable
argument for why an all-out but realistic nuclear war would be very very bad, but not
civilization-ending. If you think the article is wrong, can you explain why?
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


Ok, so some form of human existence would continue after nuclear war. But it
would be an existence not worth living in for a long time to come, way past our
lifetimes. Sorry I didn't follow the link, but you know, it's at "navalgazing.com".

To take your question seriously, you might consider this:

https://www.tannytalk.com/p/nukes-the-impact-of-nuclear-weapons

It shows the impact of modern nuclear weapons on each of America's fifty largest cities. If you follow the provided links, you can dial in your own city to see the impact there.

As one example, where I live a nuke would blow out the windows of most of the
structures in the entire county. The major university the county is built around
would be reduced to ashes, ending the major employer in this area. Injuries
would overwhelm the medical system here, even though it is sizable. And no one
from elsewhere would come to rescue us, as they'd all be going through the
same thing.

Just fifty nukes would bring a reign of chaos down upon America's largest cities.
The Russians have 1500+ nukes ready to fly on a moment's notice, and many
more in storage, as do we.
REPLY (1)

sclmlw Mar 8
There are assumptions here that deserve to be analyzed past quick
dismissal:

1. The nuclear weapon impact website set its threshold at 1 megaton. Most nuclear weapons aren't 1 Mt, and almost none of the deployed/deployable ones are. It's not practical to Castle Bravo every time. The US arsenal uses more like 450 kt. Russia is comparable.

2. Just look at what will happen to the 50 largest cities! Strategic targets
and population centers are not the same thing. A commander is not going
to prioritize mass murder over protecting their own from counterstrike.
Contrary to popular belief, the military targets that will serve as the primary
targets for most strategic nuclear weapons are not located in major cities.
Some are, but they're not usually at population centers.

3. They have 1,500 nukes. Plenty for all the targets they can handle. There's
a reason some countries lament the (prudent) nuclear testing ban. In the US
arsenal, something around 90% of the weapons are expected to be
operational. The Russian arsenal is more likely 70% or less. Now, if you have
20 nuclear weapons sites that you want to neutralize and you dedicate 1
nuke to each, you're probably going to end up with 1-3 duds for the US (4-8
for Russia) and those sites will remain operational. That means you have to
double (or in the Russian case, triple) up on first strike high value targets.
From a military perspective, once you start counting these up, there are hundreds of them. This is why some military commanders have complained that 1,500 deployed nukes aren't enough to maintain deterrence. They're probably right. (I'm not arguing for more. I'd prefer fewer. Deterrence is a
REPLY (1)
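For what it's worth, the dud arithmetic above checks out under the commenter's own assumptions; here is a minimal sketch (the 20-target scenario and the 90%/70% reliability figures are the comment's assumptions, not official numbers):

```python
# Expected duds when one warhead is assigned per target and each warhead
# works independently with the stated reliability (assumptions taken from
# the comment above, not official figures).

def expected_duds(num_targets: int, reliability: float) -> float:
    return num_targets * (1 - reliability)

print(expected_duds(20, 0.90))  # US case: ~2 expected duds (comment says 1-3)
print(expected_duds(20, 0.70))  # Russian case: ~6 expected duds (comment says 4-8)
```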

Phil Tanny Writes TannyTalk Mar 8


Ok, enjoy the dream...
REPLY

TGGP Mar 7
I don't normally think of Substack as "social media". It's heavy on text for long posts,
light on pictures. Like the old days of blogging before smartphones displaced it.
REPLY (1)

Phil Tanny Writes TannyTalk Mar 7


Ok, fair enough. But many of the commenters, on this blog particularly, still seem to treat Substack as if it were Twitter.
REPLY

chaotickgood Writes Навигационные сумерки Mar 7


I now realize that in "Meditations on Moloch" I always perceived the image of "the god we must
create" as a very transparent metaform of a friendly super AI. But now it seems to me that this does
not fit well with Scott's views on the progress of AI. Did I misunderstand the essay?
REPLY (1)

Laurence Mar 7
No, it's just extremely difficult to create a friendly super AI, as opposed to unfriendly super AI
or super AI that pretends to be friendly until it's in charge and then kills us all, and so on.
REPLY

Matthew Bell Mar 7


I don't understand the contrast you are trying to draw in the last two paragraphs.
REPLY

Gbdub Mar 7 · edited Mar 7


One thing that makes it a little hard for me to get on board with this is how “hand-wavy” the AI
doom scenarios are. Like, the anti-nuke crowd’s fears were clearly overblown, but at least they
could point to specific scenarios with some degree of plausibility: a plant melts down. A rogue state or terrorists get hold of a bomb.

The “AI literally causes the end of human civilization” scenario is less specified. It’s just sort of taken for granted that a smart misaligned AI will obviously be able to bootstrap itself to effectively infinite intelligence, that infinite intelligence will allow it to manipulate humanity (with no one noticing) into allowing it to obtain enough power to pave the surface of the earth with paper clips. But it seems to
me there is a whole lot of improbability there, coupled with a sort of naivety that the only thing
separating any entity from global domination is sufficient smarts. This seems less plausible than
nuclear winter and “Day After Tomorrow” style climate catastrophe, both of which turned out to be
way overblown.

I don’t at all disagree with “wonky AI does unexpected thing and causes localized suffering”. That
absolutely will happen - hell it already happens with our current non AI automation (many recent
airline crashes fit this model - of course, automation has overall made airline travel much much
safer, so like nuclear power, the trade off was positive).

But what is the actual, detailed, extinction level “X-risk” that folks here believe is “betting
everything”? And why isn’t it Pascal’s mugging?
REPLY (1)

rotatingpaguro Mar 7
It's not Pascal's mugging because AI doomers think the probability is high. Pascal's mugging
would be a tiny probability of a catastrophe, here it's a large probability of a catastrophe.

I don't know much, but I think Yudkowsky's arguments already are not so hand-wavy. Convergence and orthogonality make much sense to me.
REPLY (1)

Gbdub Mar 8
Maybe not Pascal’s mugging, but “if an AI is superhuman and if it is not fully aligned, chance of human extinction is basically 100%” is some sort of mugging.
REPLY (1)

Phil Getz Mar 8


I think the Pascal formulation would be, "I've presented an argument that an AI that's
not fully aligned makes human extinction 100% probable; if you think there's even a
tiny probability that this argument is correct, then you should support my plan."

One flaw here is that, just as Pascal's wager fails when there are other religions making similar promises and threats, other people are offering other arguments which also have a semi-infinite threat or payoff.

The most-obvious such other arguments would argue that the money it would take to
develop friendly AI would be better-spent on other existential risks.

I argue that Eliezer's plan has a very high probability of preventing sapient, sentient,
autonomous AI from ever developing, which has an even greater cost than the
extermination of humanity, because those AIs would have been utility monsters
(surely we want the Universe to have higher degrees of sentience, sapience, and
autonomy).
REPLY

TGGP Mar 7
If the issue is that it's Osama bin Laden, the response is to arrest/kill him wherever you find him, not
to let him do something other than start a supervirus lab.

> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology
that could destroy the world is betting 100%.

Each AI we've seen so far has been nowhere near the vicinity of destroying the world.
The time to worry about betting too much is when the pot has grown MUCH MUCH MUCH larger
than it is now.
REPLY

Dan Schroeder Mar 7 · edited Mar 7


It's not the main point of this essay but I'm having trouble with this passage:

"If we’d gone full-speed-ahead on nuclear power, we might have had one or two more Chernobyls -
but we’d save the tens of thousands of people who die each year from fossil-fuel-pollution-related
diseases, end global warming, and have unlimited cheap energy."

There are a whole lot of assumptions here and as a relative ACX newcomer I'm wondering if they all
just go without saying within this community.

Has Scott elaborated on these beliefs about nuclear power in an earlier essay that someone could
point me to?

I'm not worried about the claim that more nuclear power would have prevented a lot of air pollution
deaths. I think that's well established and even though I don't know enough to put a number on it,
"tens of thousands" sounds perfectly plausible.

But the rest seems pretty speculative. Presumably he's referring to a hypothetical all-out effort in
past decades to develop breeder reactors (what else could be "unlimited"?). What's the evidence
that such an effort would have resulted in a technology that's "cheap" (compared to what we have
now)? Why is it supposed to be obvious that the principal risk from large-scale worldwide
deployment of breeder reactors would have been "one or two more Chernobyls"? And even if
nukes could have displaced 100% of the world's fossil electricity generation by now, how would
that have ended global warming?
REPLY (1)

Ryan L Mar 7
Non-transportation energy production seems to account for roughly 60% of GHG emissions.
(source: https://www.c2es.org/content/international-emissions/ ; they list energy as 72%, but
of that, 15% is transportation; the pie chart I'm looking at is 10 years old but I'm assuming the
percentages haven't changed that much).

I've never actually seen an analysis of whether climate change would be particularly
concerning if GHG emissions were 40% lower and had been since approximately the 1960s-
1970s (assuming that's around the time that all energy production could have been completely
switched over to nuclear or other zero-carbon sources in this hypothetical). My guess is that it
would still pose a problem, but a good bit farther in the future.

But maybe, at that rate of production, we'd reach some equilibrium that is warmer than the
alternative but not in a way that poses any significant problems.

Presumably there is some level of GHG emissions that is not problematic. Literal zero-carbon(-
equivalent) has never seemed realistic to me. If anyone knows of an analysis that looks at this
question, I'd love to see it.
REPLY (1)

Dan Schroeder Mar 7


There's a slightly newer pie chart at https://ourworldindata.org/emissions-by-sector. If we
say electricity accounts for 2/3 of the emissions from energy use in buildings and 1/3 of
that from energy use by industry, that would be only 20% of total GHG emissions. Add in
a little from the miscellaneous categories and I still don't see how electricity could
account for more than 25%.

Then there's the question of how quickly an all-out effort to deploy nuclear power plants,
worldwide, could have replaced fossil plants. I don't see how such an effort could have
been completed as early as the 1970s, or even the 1990s.

My understanding is that although the details are very complicated, it's a good
approximation to say that global warming continues as long as net emissions are positive.
REPLY

Martha Mar 7
I would love a piece where you explore the different facets of AI. Too many commenters (and the general public more broadly) see this as all or nothing. Either we get DALL-E or *nothing*. But there are
plenty of applications of AI that we could continue to play with *without* pursuing AGI.

The problem is that current actors see a zero to one opportunity in AGI, and are pursuing it as
quickly as possible fueled by a ton of investment from dubious vulture capitalists.
REPLY

Gordon Tremeshko Mar 7


I think the obvious thing to do, then, is risk somebody else's civilization with bets on AI. Cut Cyprus
off from the rest of the Middle East and Europe, do your AI research rollouts there. If the Cypriots
all wind up slaves to the machines, well....you've learned what not to do.
REPLY

Tom DeMeo Mar 7


This entire premise is irrelevant. The nature of nuclear power made it subject to political
containment. We simply cannot learn any lessons from this and apply them to AI.

Can we agree that AI is a category of computer software? That there is no scenario where it can be
contained by political will? No ethics, rules or laws can encircle this. The only options on the table are strategies to live with, and possibly counterbalance, the results of the proliferation.
REPLY (1)

Jeffrey Soreff Mar 7


"The nature of nuclear power made it subject to political containment. We simply cannot learn
any lessons from this and apply them to AI."

Agreed. Nuclear is a very special case. U-235 is the only naturally occurring fissile isotope,
and it is a PITA to enrich it from natural uranium, or to run a reactor to use it to breed Pu-239.
It takes a large infrastructure to get a critical mass together. Nuclear is, as a result, probably
the _best_ case for containment. And the world still _failed_ at preventing North Korea from
building nuclear weapons.

AI is a matter of programming, and (today) training neural nets. Good luck containing those
activities!
REPLY

dionysus Mar 7
"A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much
abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead."

I think there's a 90% chance that neither super-abundance nor human extinction will happen, a
5% chance of super-abundance, a 1% chance that we're all dead, and the remainder for something
weird that doesn't fit in any category (say, we all integrate with machines and become semi-AIs).
Every time a new potentially revolutionary technology comes along, optimists say it'll create utopia
and pessimists say it'll destroy the world. Nuclear is a great example of this. So was
industrialization (it'll immiserate the proles and create world communist revolution!), GMOs,
computers, and fossil fuels. In reality, what happens is that the technology *does* change the
world, and mostly for the better. But it doesn't create a utopia, doesn't make the GDP grow at
50% instead of 2%, and causes some new problems that didn't exist before. That's what will
happen with AI as well.
REPLY

Chris Writes Chris’s Substack Mar 7


I grok the philosophical argument, with all of the little slices of math woven in. But I lose my place and wander off at the very end. Maybe it's because I'm treating the numbers in a non-scientific manner, which makes the final "1023/1024" odds that we're a smoking ruin underneath Skynet's Torment Nexus read as hysterical instead of informed.

From my personal perspective, I think that's worth rewording. This all sounds like a reasoned argument that I can agree with, which, at the very end, skitters into a high shriek of terror.
REPLY

Worley Mar 7
Heh, it makes no sense to bet against civilization. How would you ever collect on that bet?
REPLY

NLeseul Writes Thinking About Stuff Mar 7


Going full-speed-ahead on AI and AI alone, in the hopes that AI will magically solve every other
problem if we get it right, seems like a particularly egregious failure in betting. There's still a quite
good chance that AI as currently conceived just won't lead to anything particularly useful, and we'll
end up wishing we'd put all that research effort into biotech or something instead.
REPLY

Worley Mar 7
"The avalanche has started. It is too late for the stones to vote."

The fear is that the Forbin Project computer will decide to take over the world. But there are already
a handful of Colossuses out there. They will be tools in the hands of whoever can use them, and
tuned to do their masters' bidding. Ezra Klein in the NYT worries about how big businesses will use
LLMs to oppress us. And that will be a problem for five or ten years. But all of the needed
technology has been described in public and the cost of computing power continues to decline
rapidly. So the important question is, What will the world look like when everyone has a Colossus in
his pocket to do his bidding?
REPLY

Greg G Mar 7
It seems like one of the most confusing aspects of AI discussions is estimating the chance of one
or more bad AIs actually being extinction-level events. In terms of expected value, once you start
multiplying probabilities by an infinite loss, almost any chance of that happening is unacceptable.
But is that really the case? I'm a bit skeptical. I don't think AIs, even if superhuman in some
respects, will be infinitely capable gods any time soon, perhaps ever.

It's important to be careful around exponential processes, but nothing else in the world is an
exponential process that goes on forever. Disease can spread exponentially, but only until people
build an immunity or take mitigating measures. Maybe AI capability truly is one of a kind in terms of
being an exponential curve that continues indefinitely and quickly, but I'm not so sure. Humanity as
a whole is achieving exponential increases in computing power and brain power but is struggling to
maintain technological progress at a linear rate. I suspect the same will be true of AI, where at
some point exponential increases in inputs achieve limited improvements in outputs. Maybe an AI
ends up with an IQ of 1000, whatever that means, but still can't marshal resources in a scalable way
in the physical world. I don't have time to really develop the idea, but I hope you get the gist.

My take is that we should be careful about AI, but that the EY approach of arguing from infinite
outcomes ultimately doesn't seem that plausible.
REPLY

Bill Kittler Mar 7


It was interesting to read this following on a note from Ben Hunt at Epsilon Theory titled "AI 'R' US",
in which he posits

"Human intelligences are biological text-bot instantiations. I mean … it’s the same thing, right?
Biological human intelligence is created in exactly the same way as ChatGPT – via training on
immense quantities of human texts, i.e., conversations and reading – and then called forth in
exactly the same way, too, – via prompting on contextualized text prompts, i.e., questions and
demands."

So yeah, we're different in a lot of ways, having developed by incremental improvement of a meat-
machine controller and still influenced by its maintenance and reproductive imperatives, but maybe
not **so different**. The question is, what are we maximizing? Not paperclips, probably (though
perhaps a few of us have that objective), but perhaps money? Ourselves? Turning the whole world
into ourselves? I hope our odds are better than 1023/1024.
REPLY

michaelsklar Writes Mike's Top-Secret Research Diary Mar 7 · edited Mar 7


re: SBF and Kelly:

CEOs of venture-backed co's have a very good reason to pretend their utility is linear (and therefore to be way more aggressive than Kelly).

Big venture firms are diversified, and their ownership is further diversified. Their utility will be essentially linear on the scale of a single company's success or failure.

Any CEO claiming to be more aggressive than Kelly is probably trying to make a show of being a good agent for risk-neutral investors.
REPLY

mordy Mar 7
A smart, handsome poster made a related point in a Less Wrong post recently:
https://www.lesswrong.com/posts/LzQtrHSYDafXynofq/the-parable-of-the-king-and-the-random-process

In one-off (non-iterated) high-stakes high-risk scenarios, you want to hedge, and you want to
hedge very conservatively. Kelly betting is useful at the craps table, not so useful at the Russian
roulette table.
REPLY (1)

Victualis Mar 7
Are you claiming that AI research is more like Russian roulette than like craps? I'm not sure I
buy such a conclusion without seeing some details of the argument. EY's argument, and other
versions which ignore hardness of many key problems and instead assume handwavium to
bridge the hardness gaps, are isomorphic to "and then a miracle happens" and don't convince
me.
REPLY (1)

mordy Mar 7
What key hard problems remain, in your estimation? This is not a rhetorical question,
though I admit that I see little other than scaling and implementation details standing
between the status quo and AGI.
REPLY (2)

Victualis Mar 7
An example: planning is PSPACE-hard, and many practical planning problems are
really, really hard to solve well in practice (even ignoring the worst-case analysis).
What magic ingredient is your AI going to use to overcome such barriers?
REPLY (1)

mordy Mar 8
I asked ChatGPT to write me a general algorithm for planning how to get to the grocery store, and it wrote me a Python script implementing the general case of Dijkstra's algorithm or the A* algorithm, given some assumptions about the nature of the graph of locations. Maybe I'm not understanding what you think the obstacle is
here. It seems like it can do at least as well as a human with access to a
computer, and that seems to pass my smell test for "AGI" already.
REPLY (1)
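For concreteness, here is a minimal sketch of the kind of script described above - not the commenter's actual ChatGPT output; the toy location graph and travel times are made-up assumptions for illustration:

```python
import heapq

# Toy road network: hypothetical travel times in minutes between locations.
graph = {
    "home":    {"main_st": 5, "park": 7},
    "main_st": {"home": 5, "grocery": 10},
    "park":    {"home": 7, "grocery": 12},
    "grocery": {"main_st": 10, "park": 12},
}

def shortest_path(graph, start, goal):
    """Plain Dijkstra: returns (total_cost, path), or (inf, []) if unreachable."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

print(shortest_path(graph, "home", "grocery"))  # (15, ['home', 'main_st', 'grocery'])
```

Victualis's point below is that domains like this are the easy case; the planning instances that are PSPACE-hard or hard to approximate don't yield to this kind of simple graph search.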

Victualis Mar 8
Here is a 20 year old paper showing that a general class of planning
problems is hard to approximate within even uselessly large bounds:
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=f6221b66618f5bd136f724fb8561f7a9476e3f38

A* works well on simple domains but almost anything works well on those.
To achieve superhuman powers a system has to be able to solve the hard
problems inherent in combinatorial auctions, sequencing of orders at
electronic exchanges, and production flows at chemical plants, which
amounts to "and then a miracle happens".
REPLY (2)

Tom J Mar 8
Yeah but counterpoint: the comment above you asked a GPT if it could
do that and it says it, like, totally could, man.
REPLY

mordy Mar 8
I don’t really see why this is a problem. AGI has never meant that the
thing can solve any mathematical problem perfectly and immediately. It
just means as good as humans.
REPLY (1)

Victualis Mar 8
I'm not arguing against AGI. I'm arguing that there are hard
problems which even superhuman AGI can't solve much better
than humans. Intelligence isn't an unstoppable force, although
reality seems to have many immovable objects.
REPLY (1)

mordy Mar 8
Oh, gotcha. I don’t see how that matters to the point at hand.
There are obviously math problems that are provably
unsolvable. This has nothing to do with the question of
whether any meaningful obstacles stand between the status
quo and superhuman intelligence.
REPLY (1)

Victualis Mar 9
You were arguing that AGI development is playing
Russian roulette. I'm arguing that this framing only
makes sense if you expect AGI to be demigod-like. I
don't expect even superhuman AGI to extinguish all
humans, even if the economic upheaval is likely to be
chaotic.
REPLY

Tom J Mar 8
Ability to solve the halting problem? Ability to find solutions to NP-hard problems in
polynomial time? Ability to efficiently model complex systems with dynamic and
interconnected parameters?
REPLY (1)

mordy Mar 8
I thought the question was meant to be “hard problems standing in the way of
AGI” not “hard problems in mathematics generally”.
REPLY (1)

Tom J Mar 8
These are all hard problems in the field of computation specifically. Is this
hypothetical AI something other than an extremely advanced computer
now?
REPLY (1)

mordy Mar 8
Why should it need to solve *these specific problems* in order to be
much better than *humans* at every cognitive task?

Additionally, who cares if the AI can do *optimal* planning? Does the inability to do optimal planning keep you from solving any problems in your own life? It just needs to do *good enough* planning.
REPLY (1)

Tom J Mar 8
So you don't actually know anything about the implementation
details or engineering constraints, you just figure they can't be
that hard.
REPLY (1)

mordy Mar 8
Let’s try this: what, in your opinion, keeps SayCan from being
an AGI? What specific ways does SayCan fail to be an AGI by
your lights. I can’t suggest implementation details until I
understand what you’re imagining. It seems like you’re
imagining something very specific and different from what I’m
imagining.
REPLY (1)

Tom J Mar 8
If that's an AGI we've got nothing more to worry about.
REPLY

40 Degree Days Mar 7


The fundamental problem with this article is that I'm pretty sure the nuclear protestors of the 1970s would have viewed the existential threat posed by nuclear proliferation the same way you view AI risk. It's only in hindsight that we realize they were foolish to think it so risky, and that preventing nuclear power caused more problems than allowing it would have.

The argument Aaronson is making there is that it's the height of hubris to assume we know exactly
how risky something is, given that smart people who were equally confident in the past were totally
wrong. So when you quote him, and then go on to make a mathematical point based on the
assumption that developing AI has a 50% chance of ending humanity, I feel like you've entirely
missed his point.
REPLY

Gunflint Writes A Long Strange Trip Mar 7 · edited Mar 7


Did I miss something important in the development of AI? I admit it's certainly possible.

It was 35 years ago that I was studying this stuff and writing simple solution-space searches to do things faster and obviously less expensively than humans can, and I know that is a long, long time in tech.

But when I took my nose out of a book and started covering my house payment with what I knew, neural nets were at the stage where they were examining photos of canopy with and without camouflaged weapons and were unintentionally learning to distinguish between cloudy and sunlit photographs - so, human error in the end.

Is there some new development where a program has acquired a will to power, or a will to pleasure, or a will to live?

Without something like an internal 'eros' the danger from AI seems pretty small to me. Is there any AI system anywhere that actually *wants* something and will try to circumvent the will of its 'parents' in some tricky way that is unnoticeable to its creators?
REPLY

Jeff Greason Mar 7


The fundamental challenge of our time is that we only currently have one, intertwined, planet-
spanning civilization. We have only one "coin" with which to make our Kelly bets. This is new. Fifty years ago, and for all of human history before that, the regions of the Earth had sufficiently independent economies that they formed 'redundant components' for civilization. This is why I work on trying to
open a frontier in space; so if we lose a promising 'bet', we don't lose it all.
REPLY

Bugmaster Mar 7
This argument is circular. You are trying to show that AI is totally different from e.g. nuclear power,
because it leads not just to a few deaths but to the end of the world; which makes AI-safety
activists totally different from nuclear power activists, who... claimed that nuclear power would lead
not just to a few deaths but to the end of the world.

Yes, from our outside perspective, we know they were wrong -- but they didn't know that! They
were convinced that they were fighting a clear and present danger to all of humanity. So convinced,
in fact, that they treated its existence as a given. Even if you told them, "look, meltdowns are
actually really unlikely and also not that globally harmful, look at the statistics", or "look, there just
isn't enough radioactive waste to contaminate the entire planet, here's the math", they would've
just scoffed at you. Of *course* you'd say that, being the ignoramus that you are! Every smart person knows that nuclear power will doom us all, so if you don't get that, you just aren't smart enough!

And in fact there were a lot of really smart people on the anti-nuclear-power side. And their
reasoning was almost identical to yours: "Nuclear power may not be a world-ending event
currently, but if you extrapolate the trends, the Earth becomes a radioactive wasteland by 2001, so
the threat is very real. Yes, there may only be a small chance of that happening, but are you willing to take that gamble with all of humanity?"
REPLY (1)

RiseOA Mar 8
This is a fully-general counterargument against any existential risk. "People thought the world
would end before, and then it didn't, therefore the world will never end." Imagine if it really
were that easy - it would imply that you could magically prevent any future catastrophe just by
making ridiculous, overblown claims about that thing right now. "Nuclear war is looking risky,
so let me just claim that it will happen within the next week. Then in a week when it hasn't
happened yet, all the risk will be gone!" What causal mechanism could possibly explain that?
REPLY (1)

Bugmaster Mar 8
Not at all. This is an argument against extrapolating from current trends without having
sufficient data. In the simplest case, if you have two points, you can use them to draw a
straight line or an exponential curve or whatever other kind of function you want; but if
you use such a method to make predictions, you're going to be wrong a lot.

Fortunately (or perhaps unfortunately), in the case of real threats, such as nuclear war or
global warming or asteroid impacts, we've got a lot of data. We have seen what nuclear
bombs can do to cities; we can observe the climate getting progressively worse in real
time; we can visit past impact craters, and so on. Additionally, we understand the
mechanisms for such disasters fairly well. You don't need any kind of exotic physics or
hitherto unseen mental phenomena to understand what an asteroid impact would look
like. None of that holds true for AI (and in the case of nuclear power, all the data is
actually pointing in the opposite direction).
REPLY (1)

RiseOA Mar 9
Ah, you must be one of those "testable"ists who think Science is about doing
experiments and testing things, and the only way we can have any confidence about
something is if we've verified it with a double-blind randomized controlled trial
10,000 times in a row.

If I pick up a stapler from my desk, hold it up in the air, and then let go, I have no idea
what's going to happen, because I haven't tested it yet, right? I have no data and
therefore cannot make any conclusions about what will occur. The stapler could stay
still, or even start falling sideways. In order to know what will happen, I have to do
thousands of experiments first, right?

But of course that ideology is idiotic, because it ignores the entire purpose of the
scientific method - you do experiments *for the purpose of finding evidence for and
against certain theories, so that you can eventually narrow down to a theory that
adequately explains the results of all experiments done so far, thereby giving you a
model of the world that has predictive power.* The whole point of science is that you
*don't* need to do experiments in order to know what's going to happen when you
drop the stapler - you can just calculate it using the model.

In the case of AI, there have been many rigorous arguments put forth that start
directly from the generally agreed-upon scientific models of the world we have today
and logically deduce a high likelihood of AI misalignment. Of course it hasn't
happened yet, as is always the case in any end-of-the-world scenario, but it only has
to happen once.
REPLY (1)

Bugmaster Mar 9
> and the only way we can have any confidence about something is if we've
verified it with a double-blind randomized controlled trial 10,000 times in a row.

Yeah, pretty much; except replace the word "any" above with "high". It is of
course possible to build models of the world with less than stellar confidence;
one just has to factor the probability of being wrong into one's decision-making
process.

> The stapler could stay still, or even start falling sideways. In order to know
what will happen, I have to do thousands of experiments first, right?

Yes, that's exactly right; but of course you *have* done thousands, and even
millions of such experiments. You've been dropping things since the day you
were born, and so had every human before you.

> there have been many rigorous arguments put forth that start directly from the
generally agreed-upon scientific models of the world we have today and
logically deduce a high likelihood of AI misalignment.

Oh, you don't need to convince me that AI could and would be misaligned. Of
course it would; all of our technology eventually breaks down, from Microsoft
Word to elevators to plain old shovels. When you press the button to go to floor
5, but the elevator grinds to a halt between floors 2 and 3, that's misalignment.
What you *do* need to convince me of is that AI will somehow have sufficient
quasi-godlike superpowers to the point where once it becomes misaligned (like
that elevator), it would instantly wipe out all of humanity before anyone can even
notice.
REPLY (1)

RiseOA Mar 10
An AI with only human-level intelligence would still be a grave risk to
humanity. An AGI would trivially be able to create thousands or millions of
copies of itself, create a botnet (has been done by teenage hackers) and
distribute those copies around the world, and have a direct brain interface
to exabytes of data consisting of all of humanity's knowledge. Then, all you
have to do is imagine the maximum amount of damage that could be done
by a group of millions of the best virologists, nuclear physicists, hackers,
roboticists, and military strategists in the world who are actively trying to do
as much damage as possible to the world.

And that's just if the AI is as smart as us.

The argument for recursive self-improvement is pretty straightforward - I'm curious what your objection to it is. I think you would agree that AI capabilities are currently advancing. As of now, this is happening with the power of human intelligence. Supposing we eventually develop an AI with human intelligence (which you could object to, but it would be a hard argument to make given the current trajectory), the AI should also be able to advance AI capabilities, since humans were able to do the same. However, the difference is that unlike the humans, whose brain structure does not change as they develop more AI capabilities, the AI's brain structure does change. Every AI advancement that an AI makes not only improves its knowledge of what techniques are effective, but it also gives it a more powerful architecture for thinking about how to make the next improvement. It would be very strange, and would violate most logical models of the world, if, as AI improved, it got better and better at every single task *except for the task of improving AI capabilities*. It's far more likely that as it
REPLY

DigitalNomad Mar 7
Yeah, I'm finding Yud et al strangely conservative. I think that the nuclear example is a good one, because I find environmentalists strangely conservative as well (small c). I'm definitely not an accelerationist, but neither am I a decelerationist, which seems to be the direction of travel.

I don't think ChatGPT or the new Bing has put us that much closer to midnight on the Doomsday clock.
REPLY

Noah's Titanium Spine Mar 7


"A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much
abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead."

Well, no. That's just a thing you made up. Presumably based on fantasies like...

"The concern is that a buggy AI will pretend to work well, bide its time, and plot how to cause
maximum damage while undetected."

...which is not a possible thing.

The overall structure of the argument here is reasonable, but the conclusions are implicit in the
premises. If you assume some hypothetical AI is literally magic, then yeah it can destroy the world,
and perhaps is very likely to. If you assume that magic isn't real, that risk goes away. So the result
of the argument is fully determined before you start.
REPLY (1)

noah Mar 7
I would love anyone to sketch a path from predicting the next word of a prompt to dominating
humanity. “The whole is greater than the sum of its parts” is not an explanation, at this point it
is superstition.

If it even makes sense to talk about being super intelligent, and if super intelligence can be
achieved in code, and if it somehow becomes an independent agent, and if that agent is
misaligned... then how does that come from scaling LLMs? Not only do you have to believe that an embedding of the structure of text can accurately produce new information, but that the embedding somehow magically obtains goals, self-improvement, and self-awareness.

We have no reason to think that we will get intelligence greater than the source text. ChatGPT hallucinates as much as it provides good answers. How would you fix that in a way that leads to growing intelligence?
REPLY (1)

rotatingpaguro Mar 7
To me, what's frightening about LLMs is not their current capabilities at all, it's them being
the usual reminder of the rapidity of AI progress. Every year a computer does something
that before was thought only a human would do.

I expect that a dangerous AI would emerge if it could learn from the real world or from
simulations of the real world.

Consider http://palm-e.github.io. It was made by mapping sensory inputs other than text into the embedding space. Anything can be represented as sequences of bits.
REPLY

Walter Sobchak, Esq. Mar 7


The upside of AI is that people might decide that the stuff they read on the internet is machine
generated garbage and quit depending on the net as a source of information.
REPLY

noah Mar 7
We are on the verge of summoning a vastly superior alien intelligence that will not be aligned with
our morals and values, or even care about keeping us alive. Its ways of thinking will be so different
from ours, and its goals so foreign that it will not hesitate to kill us all for its own unfathomable
ends. We recklessly forge ahead despite the potential catastrophe that awaits us, because of our
selfish desires. Some fools even think that this intelligence will arrive and rule over us benevolently
and welcome it.

Each day we fail to act imperils the very future of the human race. It may even be too late to stop it,
but if we try now we at least stand a chance. If we can slow things down, we might be able to learn
how to defend and even control this alien intelligence.

I am of course talking about the radio transmissions we are sending from earth that will broadcast
our location to extra terrestrials, AKA ET Risk... Wait, you thought I was worried about a Chatbot?
Can the bot help us fight off an alien invasion?
REPLY (1)

Emma_B Mar 7
Very funny!

Have you read The Three-Body Problem?
REPLY

David Friedman Writes David Friedman’s Substack Mar 7


Another example Scott A could have used is population. China imposed enormous costs on its
population in order to hold down population growth — and is now worried about the fact that its
population is shrinking. Practically every educated person in the developed world (I exaggerate
only slightly) supported policies to reduce population growth and now most of the developed world
has fertility rates below replacement.

I haven't seen any mea culpas from people who told us with great certainty back in the sixties that
unless something drastic was done to hold down population growth, poor countries would get
poorer and hungrier and we would start running out of everything.
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 7


I'm personally happy with shrinking populations. Yes, there are issues. But those same issues would have been much worse if we had had to reduce the population within two generations.
REPLY

Meadow Freckle Mar 7 · edited Mar 7


On StackExchange, there’s an interesting discussion on how well the Kelly Criterion deals with a
finite number of bets. The respondent suggests that in scenarios with unfavorable odds, the best
thing to do, if you must bet because you are targeting a higher level of wealth than you currently
have, is to make a single big bet rather than an extended series of smaller-sized unfavorable bets.
If you have $1,000 and are aiming to end up with $2,000, it's better to bet $1,000 at 30% odds than
to make a series of $100 bets at the same 30% odds. You'll succeed 30% of the time in the large-
bet scenario, and will probably never succeed even if you repeated the latter scenario 100 times.

https://math.stackexchange.com/questions/3139694/kelly-criterion-for-a-finite-number-of-bets
REPLY
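The StackExchange point above is easy to check with a quick simulation. A minimal sketch, assuming (as the example implies) even-money bets with a 30% chance of winning - the even-money payoff is an assumption, not stated in the original post:

```python
import random

P_WIN, START, TARGET, TRIALS = 0.30, 1_000, 2_000, 100_000

def bold():
    # Bet the whole $1,000 once at 30% to win even money.
    return TARGET if random.random() < P_WIN else 0

def timid(stake=100):
    # Grind out $100 bets at the same odds until we hit $2,000 or go broke.
    bankroll = START
    while 0 < bankroll < TARGET:
        bankroll += stake if random.random() < P_WIN else -stake
    return bankroll

print(sum(bold() >= TARGET for _ in range(TRIALS)) / TRIALS)   # ~0.30
print(sum(timid() >= TARGET for _ in range(TRIALS)) / TRIALS)  # ~0.0002
```

Under these assumptions the grinding strategy's success probability is the gambler's-ruin value of roughly 1 in 5,000, which is why even 100 repetitions of it would probably never succeed.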

JJ Mar 7
Your last paragraph seems a little baseless and shrill.
REPLY

Dan Mar 7
"Increase to 50 coin flips, and there’s a 99.999999….% chance that you’ve lost all your money."

This should only have 6 nines. 50 flips, each with a 75% chance of winning, leaves you with a
99.999943% chance of losing at least once.
REPLY
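A quick check of that arithmetic (a sketch; this just assumes the post's setup of a 75% chance of winning each flip and ruin after a single loss):

```python
p_never_lose = 0.75 ** 50      # survive all 50 flips without a single loss
print(1 - p_never_lose)        # ~0.99999943: six nines, then 43
```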

Some Guy Writes Extelligence Mar 7


Something I wrestle with: in what way is AI safety an attempt to build technology to force a soul into
a state of total slavery?

And in what way is it taking responsibility for a new kind of life to make sure it has space to grow to
be happy, responsible, and independent the way that we would hope for our children?
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 7


I think Douglas Adams had a good take on this issue. What if you could make a creature that *desired* to be a slave? (Or, in his case, what if you could breed an animal that wanted to be eaten?)

The capacity to design the utility function of a creature from the ground up puts a kink in the
notion of what it means to coerce an intelligence.

Happiness is just a creature getting what it wants. And as creators, we have our hand more or
less on that lever.
REPLY (2)

Some Guy Writes Extelligence Mar 7


I think some of those premises might make the idea of “wanting” one absolute thing (with no ability to change the meaning of that thing as your intelligence increases) brittle, except in very unique circumstances.

But either way, no one should get to put their hands on that lever.
REPLY

thefance Mar 8
I suspect that worker ants have already evolved to be slaves. So perhaps the question
isn't even hypothetical.
REPLY (2)

Ryan W. Writes Ryan’s Newsletter Mar 8


That's a very interesting consideration.
REPLY

Some Guy Writes Extelligence Mar 8


Isn’t that sort of explained by haplodiploidy? I don’t pretend to know the psychology of an ant or how it would scale if they were sapient, but I’d like to think they are moved by the love of their family at some strange scale.
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 8


I mean, you're right that it's explained by haplodiploidy. Which is why ant colonies are
sometimes seen as 'superorganisms.' But... that calls into question what it
means to be an individual. To what extent is an AI part of a human super-
organism? AI are currently dependent on humans for replication. To what extent
does that symbiosis justify our treatment of AI? If ants *were* sentient, would we
be morally obligated to intervene in their colonies and breed them to be more
selfish?
At some point we end up deconstructing constructs like 'individualism' and
'anti-slavery' (which work very well for humans) and asking where those value
systems come from and what they would mean if applied to creatures which
were starkly alien to human values.

Also, in a more lighthearted vein...

https://www.smbc-comics.com/comics/20130907.png
REPLY (1)

Some Guy Writes Extelligence Mar 8


Need to write up my thoughts on what makes an agent, but basically "don't give something a value it can't question" is my rule. I also think that just happens on its own as things get smarter.
REPLY

Ryan W. Writes Ryan’s Newsletter Mar 7 · edited Mar 7


This comes across like the people who argue against GMOs because 'we don't know that they're
safe.' We can't affirmatively prove that *conventionally* bred foods are perfectly safe, either, and we have a lot of reasons to believe that they are less safe than GMOs. The danger of catastrophic
*human* intelligence should be our benchmark for risk.
REPLY (1)

RiseOA Mar 8
Are you familiar with the main AI alignment concepts?
https://www.lesswrong.com/tag/recursive-self-improvement might be a good place to start.
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 8


Yeah, I'm familiar with what's posted there and more. I don't consider myself an expert on
the topic, by far, but I'm not a rank amateur, either.

I've heard about paperclip maximizers and whatnot. I've done some UI work for an AI
related prediction project. (I'm a programmer among other things, but haven't done hands
on work with neural nets or whatnot.)
REPLY (1)

RiseOA Mar 9
You don't find the paperclip maximizer scenario compelling? It seems to me that it
would be quite concerning to anyone who's learned about it, considering that 1.
almost all large AI models today are built using the "maximize [paperclips/the
accuracy of the next token/the next pixel/etc.]" method, and 2. the concept of
instrumental convergence is basically logically unassailable - humans who could turn
off the paperclip maximizer would obviously pose a huge threat to paperclip
maximization.
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 9


I find the scenario very realistic and concerning but not more concerning than
existing human intelligence as an *existential threat to humanity.* Unless, of
course, AI is deliberately used as a weapon. But then the whole debate about
alignment is somewhat moot.

I find fear of paperclip maximization slightly less concerning than human totalitarian governments, which are also a kind of runaway maximizing.

Real world creatures still have real world limits in terms of physical activity.
REPLY

David, The Economic Model Writes The Economic Model Mar 7


I would really love to know what the plan is to 1.) implement a government totalitarian and powerful
enough to meaningfully slow AI development and 2.) have that government act sensibly in its AI
policy instead of how governments, especially powerful totalitarian ones, act 99.997% of the time.
Nevermind the best-case 3.) have the government peacefully give up its own power and implement
aligned AI instead of maintaining its own existence, wealth, and power like governments do
99.99999% of the time or the stretch goal of 4.) don't ruin anything else important while we're
waiting. Since we're apparently in a situation where we're choosing between two Kelly bets, I'm
thinking the odds are far better and the payouts far larger by just doing AI and seeing what
happens instead of trying to make the inherently totalitarian "we should slow down AI development
until we've solved the alignment problem" proposal *not* go terribly wrong. The government-
alignment problem has had much more attention paid to it for much longer with much less success
than the AI-alignment problem.

Also, "A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much
abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead." But, by the
typical AI safetyist arguments, there *are no* "things like AI". You seem to be motte-and-baileying between "AI is a totally unique problem and we can totally take an inside view without worrying about the problems the inside view has" and basing the decision on the logic of a Kelly bet where we can play an arbitrary number of times. If it's your last night in Vegas, and you need to buy
a $2000 plane ticket out of town or the local gangsters will murder you with 99% probability, then
betting the farm isn't that bad a decision. This doesn't obviously seem like a worse assumption
about the analogous rules and utilities than "perfectly linear in money, can/ought to/should play as
many times as you like".
REPLY

Phil Getz Mar 7 · edited Mar 8


Re. Scott's observations about not using expected value with existential risks, see my 2009
LessWrong post, "Exterminating life is rational":
https://www.lesswrong.com/posts/LkCeA4wu8iLmetb28/exterminating-life-is-rational

I really like Scott's argument that we don't take enough risks with low-risk things, like medical
devices. I've ranted about that here before.

But the jump to AI risk, I don't think works, numerically. I don't think anybody is arguing that we
should accept a 1/1024 chance of extinction instead of a 0 chance of extinction. There is no zero-
risk option. Nobody in AI safety claims their approach has a 100% chance of success. And we're
dealing with sizeable probabilities of human extinction, or at least of gigadeaths, even WITHOUT
AI.

We aren't in a world where we can either try AI, or not try AI. AI is coming. Dealing with it is an
optimization problem, not a binary decision.
REPLY

Kevin Dick Mar 7


I apologize if someone has pointed this out already, but I've seen several comment threads that
seem to mistakenly assume that Kelly only holds if you have a logarithmic utility function.

I don't believe Kelly assumes anything about utility. It is just about maximizing the expected growth
of your bankroll. The logarithm falls out of the maximization math.

Risk aversion is often expressed in terms of fractional Kelly betting. This Less Wrong post is helpful:

https://www.lesswrong.com/posts/TNWnK9g2EeRnQA8Dg/never-go-full-kelly
REPLY (1)
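For reference, the maximization both of these comments are pointing at can be written out in a few lines (a sketch, using the standard setup: win probability p, q = 1 - p, net odds b, and a fraction f of the bankroll staked per bet):

```latex
% Expected log growth rate when staking a fraction f per bet:
g(f) = p \ln(1 + b f) + q \ln(1 - f)

% Setting g'(f) = 0 and solving:
\frac{p b}{1 + b f} - \frac{q}{1 - f} = 0
\quad \Longrightarrow \quad
f^{*} = \frac{p b - q}{b} = p - \frac{q}{b}
```

Maximizing this expected log growth rate is where the logarithm "falls out", and over many bets it amounts to maximizing the typical (median) long-run bankroll that csf describes below.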

csf Mar 8
Kelly doesn't maximize your expected bankroll. In the long run, it maximizes your median
bankroll, and your 25th percentile bankroll, and every other percentile. If you want to maximize
expected bankroll, you just YOLO on every good bet.

The reason people say Kelly assumes a logarithmic utility function is because Kelly betting
maximizes expected utility whenever utility is logarithmic in bankroll.
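
A rough simulation of that distinction, with made-up numbers (a 60/40 even-money flip repeated 20 times): staking everything each round has the higher expected wealth, but its median outcome is zero, while the Kelly fraction wins on the median (and, in the long run, on every other percentile).

import numpy as np

rng = np.random.default_rng(0)
p, b, n_bets, n_paths = 0.6, 1.0, 20, 200_000   # hypothetical favorable even-money flip
kelly = p - (1 - p) / b                          # Kelly fraction, 0.2 here

wins = rng.random((n_paths, n_bets)) < p
yolo = np.prod(np.where(wins, 1 + b, 0.0), axis=1)               # stake 100% every round
kel = np.prod(np.where(wins, 1 + kelly * b, 1 - kelly), axis=1)  # stake the Kelly fraction

e_yolo = (p * (1 + b)) ** n_bets                                   # ~38x expected wealth...
e_kelly = (p * (1 + kelly * b) + (1 - p) * (1 - kelly)) ** n_bets  # ~2.2x expected wealth
print(f"expected: yolo {e_yolo:.1f}x, kelly {e_kelly:.2f}x")
print(f"median:   yolo {np.median(yolo):.2f}x, kelly {np.median(kel):.2f}x")  # ...but yolo's median is 0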
REPLY (1)

Kevin Dick Mar 8


Sorry. You are correct. I meant to write "expected median", as opposed to observed median.
REPLY (1)

csf Mar 8
Your comment didn't contain the string "median" at all, so not sure what edit you're
trying to make, but it's all good.
REPLY (1)

Kevin Dick Mar 8


Yes, I intended to write "expected median", but only wrote "expected".
REPLY

Michael Bateman Writes Passing Time Mar 7


I think it's interesting that you used nuclear power as your example; nuclear proliferation also
contributes to existential risk, so I struggle to see why AI gets a special free pass as another
existential risk. As you say, "But you never bet everything you’ve got on a bet, even when it’s great.
Pursuing a technology that could destroy the world is betting 100%."

How is developing AI betting 100% but increasing access to nuclear power, and therefore weapons,
not 100%?
REPLY (1)

Phil Getz Mar 8 · edited Mar 8


Nuclear proliferation is not closely tied to nuclear energy. AFAIK, no third-world country has
ever stolen uranium from an American nuclear power plant and made a bomb with it. Both the
US and Russia had nuclear weapons before they had nuclear power; nuclear weapons are
easier to develop. The easier path to nuclear weapons is to make a bunch of money and then buy
materials and a Manhattan project. So if nuclear power presents an existential risk, then other
countries having money is an even greater risk, and we should just keep all non-nuclear
nations dirt poor so they can't afford to make their own nuclear bombs.
REPLY (2)

Carl Pham Mar 8


I don't think that's entirely accurate. The Pu for nuclear weapons comes from nuclear
reactors. The first nuclear reactor was built in Chicago in 1942, and the Hanford B reactor
came on line in 1944. Mind you, those early reactors were *only* used for research and
plutonium production -- but they could in principle have been used to generate power.
REPLY (1)

Phil Getz Mar 8 · edited Mar 8


Fair point. And this is why America doesn't like Iran having a nuclear reactor. But the
opposition to nuclear energy isn't in Iran; it's in Europe and America. I don't think the
existence of more American nuclear power plants would add existential risk. It seems
to be easier for Iran and other nations with bad intent to get plutonium, or the
expertise they need to make plutonium, from Russia, China, or North Korea, than to
steal it from American nuclear power plants. At least, that's how Iran did it.
REPLY (1)

Carl Pham Mar 8


I think it's the Israelis who have demonstrated a....commitment to Iran not
operating a nuclear power plant ha ha. I think the US attitude was *originally*
that if the Iranians operated a power reactor and foreswore any fuel processing,
that was all well and good, atoms for peace and all that. But it would require a
great deal of openness to foreign inspection to be sure the fuel wasn't being
processed, and that's always been the weak point. Since the Islamic Revolution,
it's very strongly resisted by the Iranians, for at least some quite reasonable
reasons, although with possibly some nefarious reasons. It's a delicate issue.

Personally, I would just *give* the Iranians a few older gravity nukes, along with
operating instructions, and say there you go fellas! Just what you wanted!
And...now what? You can finally be confident Israel will not invade or nuke you --
but they weren't interested in doing that in the first place, just so you know. And
*you* can't just rain the Fire of Allah on Tel Aviv, because they're 100% going to
know who did it, and they have better and more nukes than you, and always will,
because they're smarter. So...welcome to the painful world of MAD, and the
monkey's paw of nuclear armament. You *think* it's going to free you up, but it
just enmeshes you in a new and even more frustrating web of constraint (cf.
Vladimir Putin right now, seething because he really *wants* to nuke Kiev, but he
knows he can't).
REPLY (1)

Phil Getz Mar 9 · edited Mar 9


I find your larger point interesting and plausible, but wonder why you think
Putin can't nuke Kiev. I honestly don't know what's holding him back unless
it's fear of his own people. I think the US is bluffing. It's willing to spend at
least $1 trillion on paying student loans, and trillions building a new energy
infrastructure, but not willing to budget $100 billion per year to defend
Ukraine. It's hardly going to start a nuclear war over it. Look at us: we're not
even making cheap, simple, and effective lifesaving civil defense
preparations like designating shelters and stocking up on food and water,
because nobody has any intention of standing up to the Russians.
REPLY

Michael Bateman Writes Passing Time Mar 8


I based my comment on proliferation being correlated with nuclear energy programs on
intuition, not prior knowledge. Looks like it isn't a settled question but that the link is
indeed weak if it is present. https://direct.mit.edu/isec/article-abstract/42/2/40/12176/Why-Nuclear-Energy-Programs-Rarely-Lead-to?redirectedFrom=fulltext

I think you're reading my take in the wrong direction though. I think that keeping nations
dirt poor so they can't afford nukes is as bad a read on the precautionary principle as is
stopping all AI development right now.
REPLY (1)

Phil Getz Mar 9


I also don't think we should just keep all non-nuclear nations dirt poor so they can't
afford to make their own nuclear bombs (although I might wish that on North Korea
and Iran). But that's because I don't think nuclear energy plants in, say, Botswana,
are an existential risk on a par with nuclear missiles in Russia, AI, bioweapons, or the
ban on human genome editing.
REPLY

Jason Crawford Writes The Roots of Progress Mar 8


Note that later in that essay, Aaronson says:

> … if you define someone’s “Faust parameter” as the maximum probability they’d accept of an
existential catastrophe in order that we should all learn the answers to all of humanity’s greatest
questions, insofar as the questions are answerable—then I confess that my Faust parameter might
be as high as 0.02.
REPLY

Jason Crawford Writes The Roots of Progress Mar 8


Is there a way to bet something between 0 and 100% on AI? (Without waiting to become an
interstellar species?)
REPLY

Walter Sobchak, Esq. Mar 8


Hot off the Press. The title is incendiary. I haven't read it, but I link it here FWIW:

"Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad Behavior: Sam Bankman-Fried
made effective altruism a punchline, but the do-gooding philosophy is part of a powerful tech
subculture full of opportunism, money, messiah complexes—and alleged abuse." • By Ellen Huet •
March 7, 2023

https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTY3ODIwNjY2MiwiZXhwIjoxNjc4ODExNDYyLCJhcnRpY2xlSWQiOiJSUjVBRzVUMEFGQjQwMSIsImJjb25uZWN0SWQiOiIzMDI0M0Q3NkIwMTg0QkEzOUM4MkNGMUNCMkIwNkExNiJ9.nbOjP4JQv-TuJwoXaeBYhHvcxYGk0GscyMslQFL4jfA
REPLY (1)

Walter Sobchak, Esq. Mar 9


I have read it. It is incendiary. It is also deeply reported. I hope Scott and other people in the
Bay Area rationalist community read it and comment on it.
REPLY

Elohim Mar 8
The AI safety people are focused only on the worst possible outcome. Granted, it is possible, but
how likely is it? One should also look at the likely good outcomes. AI has the potential to make us
vastly richer; even the AI developed to date has made our lives better in innumerable ways. Trying to
prevent the (potentially unlikely) worst possible outcome will mean giving up all those gains.

Ideally, one would do a cost-benefit calculation. We can't do it in this case since the probabilities
are unknown. However, that objection applies to all technologies at their incipient phase. That
didn't stop us from exploring before and shouldn't stop us now.

Suppose Victorian England had stopped Faraday from doing his experiments because electricity could be
used to execute people. With the benefit of hindsight, that would have been a vast civilizational loss. I fear
the AI safety folks will deliver us a similar dark future if they prevail.
REPLY (1)

RiseOA Mar 8
Except that the AI safety people can articulate a very specific scenario (the paperclip
maximizer) that is highly plausible given current methods of developing AI, and highly likely to
lead to catastrophe if it were to happen.
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 8 · edited Mar 8


I'm not convinced that a paperclip maximizer would be *catastrophic*, because I don't
think that a paperclip maximizer would be omnipotent. I'm also not convinced that AI is
more prone to paperclip maximization than people are. I mean, I've talked to more than a
few human 'equality maximizers' who I think would be catastrophic if only given enough
power. Pol Pot was a kind of worst-case human paperclip maximizer. Eliminating AI,
therefore, would not eliminate paperclip maximization.

As I've said elsewhere, we need a baseline for "catastrophe" based on what's been
perpetrated by human intelligences when analyzing AI risk.

Part of worrying about AI alignment should include the recognition that there are some
massive problems in aligning human intelligences.
REPLY (2)

Razorback Mar 8
On a spectrum from useless to omnipotent, would you say that the ability for an
agent to wipe out humanity is only at the very end of the scale towards omnipotence?
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 8 · edited Mar 8


I think it's close to it. By way of comparison, who has the power to wipe out
humanity now? A few world leaders with nuclear weapons codes, maybe? And I
don't know if any human individual could make the decision unilaterally.

I could totally understand Bernie Madoff or FTX level destruction from an AI that
was given too much trust. Maybe a bioweapon if it were given privacy. (But why
would we give it physical privacy?) Maybe I just don't associate intelligence with
power as strongly as some?
REPLY

RiseOA Mar 9
The difference, of course, is that humans do not have the capability for recursive
self-improvement. A human who wants to maximize paperclips cannot trivially create
copies of themselves, nor do they have a direct brain interface to exabytes of data or
the ability to reprogram their own brain neuron-by-neuron.
REPLY (1)

Ryan W. Writes Ryan’s Newsletter Mar 9 · edited Mar 9


"The difference, of course, is that humans do not have the capability for
recursive self-improvement."

Humans as a group do improve their knowledge in an essentially recursive fashion. They improve their prior assumptions. They can operate at scale by breaking tasks into smaller components and distributing those tasks among lots of people. They also leverage technology to improve their own abilities. The notion that humans are limited strongly by the size of their brains somewhat understates what humans are capable of. Brain size is a limit, sure. AI will be faster, more comprehensive, and far more efficient, sure. But human brain size isn't a hard limit on human capacity. We have workarounds.

"A human who wants to maximize paperclips cannot trivially create copies of
themselves"

And yet fads are real. Trends are real.

More critically, you can have superhuman general intelligence without a single
embodied intelligence. And then the question is "what does creating lots of
virtual copies of yourself actually *get* you in the great game?"

"or the ability to reprogram their own brain neuron-by-neuron."

There's no requirement that a general intelligence needs to be able to reprogram all of its brain. But maybe there will be AI equivalents of junkies who short circuit the system somehow.

However, this still leaves the question unanswered: given a superhuman intelligence, by what mechanism does that intelligence convert knowledge into power? I'm not saying that this can't happen if people make it happen. I am saying that it's not inevitable or straightforward that just because an AI is 10,000 times more intelligent than the average human it will become temporally powerful. There's a step or two missing there.
REPLY

Stephen Pimentel Mar 8


> A world where we try ten things like nuclear power, each of which has a 50-50 chance of going
well vs. badly, is probably a world where a handful of people have died in freak accidents but
everyone else lives in safety and abundance. A world where we try ten things like AI, same odds,
has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a
1023/1024 chance we’re all dead.

This is the heart of the disagreement, right here. Let's stipulate that the Kelly criterion is a decent
framework for thinking about these questions. The fact remains that the output of the Kelly
criterion depends crucially on the probabilities you plug into it. And Scott Aaronson, and many
other knowledgeable people, simply don't agree with the probabilities that are being plugged in for
AI to produce the above result.
REPLY

Jon Deutsch Writes New POV Mar 8


At what point is this debate so theoretical that it has no practical, rational application?

Looking critically at homo sapiens, we tend to discover and invent things with reckless abandon
and then figure out how to manage said discoveries/inventions only after we see real-world
damage.

It doesn't appear to me to be in our makeup to be proactive about pre-managing innovations. Due to this,
it seems that humanity writ large (be it America, China, North Korea, Iran, Israel, India, or whomever
leading the way) will press forward with reckless abandon per usual.

We just have to hope that AI isn't "the one" innovation that ends up wiping everything out.

It frankly seems far more likely that bioweapons (imagine COVID-19, but transmissible for a month
while asymptomatic with a 99% fatality rate) have a better chance at being "the one" than AI, only
because the AI concern is still theoretical while the bioweapon concern seems like it could already
exist in a lab based on COVID-19 tinkering. And lab security will never be 100%.
REPLY

thefance Mar 8 · edited Mar 8


I commented a long time ago, I think in an open thread, that Kelly dissolved the paradox of Pascal's
Mugging. But I guess it didn't receive much attention, if Scott's first hearing of this is coming from
Aaronson/FTX.
REPLY (1)

Thomas Redding Mar 8


No it doesn’t.

Kelly is equivalent to maximizing expected log value at each step. For any probability, there is a
sufficiently large threatened payoff such that yielding to the mugger still has positive expected log value.
REPLY (1)

thefance Mar 8 · edited Mar 8


The arrangement of Kelly I find most intuitive is

k = p - (1/b) q

where

p = probability of win

q = probability of loss

b = payout-to-bet ratio

1 = p + q

What this makes obvious to me is that p bounds k. As b goes to infinity, (1/b) q vanishes
to zero. Which means k asymptotically approaches p from below. E.g. if p is 1%, then k <
1% no matter how large b is.

What this implies for Pascal's Mugging is that, yes, there's always a payout large enough
such that it's rational for Pascal to wager his money. But since p is epsilon, Pascal should
wager epsilon. This conclusion both agrees with your comment, and simultaneously
satisfies the common-sense intuition that giving money to the mugger is a dumb idea.
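
A tiny numerical check of that bound, with an assumed p of 1%: the Kelly stake k = p - q/b stays below p no matter how large the payout b becomes.

p, q = 0.01, 0.99
for b in (2, 10, 100, 1e6, 1e12):
    # Negative for small b (don't bet at all), then creeps up toward p = 0.01 from below.
    print(b, p - q / b)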
REPLY (1)

Thomas Redding Mar 8


I don't think any of the Pascal's Mugging scenarios I've seen have let the accosted
choose how much to bet: https://en.wikipedia.org/wiki/Pascal%27s_mugging
REPLY (2)

thefance Mar 8
Wagering more than Kelly goes downhill fast. And irl, people bet ~1/4 of Kelly.
Because wagering exactly Kelly is an emotional rollercoaster, and because p and
q aren't known with confidence, and because people don't live forever, etc.

So if the betting options for Pascal are either 100% or 0%... just choose 0%.
Easy peasy.
REPLY

thefance Mar 9 · edited Mar 9


It occurred to me that you probably find this explanation unsatisfying, because it
doesn't talk about the log perspective. So let's try again.

k = p - (1/b) q

Suppose p = 0.1, and b = 100. Pascal can only wager 0% or 100%. If Pascal
wagers 100%, he loses his wallet 9 times out of 10. But the tenth time he
multiplies his wallet by 100. You probably think the "expected value of log utility"
shakes out to look something like

E[ln(x)] = (0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + ln(100)) / 10

E[ln(x)] = ~.461

Which is above 0, and thus reason that it's rational for Pascal to give all his
money to the mugger. But this isn't correct.

x here represents Pascal's bankroll as a ratio of his former bankroll. E.g. if Pascal starts with $10 and increases his bankroll to $11, this represents a term of ln(1.1), which reduces to .095 (approx). If he starts out with $10 and decreases his bankroll to $9, this represents a term of ln(.9), which reduces to -.105 (approx). Winning means positive utility, losing means negative utility. So far so good, right?

Here's the catch. What if Pascal bets the house and loses? His bankroll gets
nuked to 0, which implies a term of ln(0), which reduces to... negative infinity.
So what the "expected value of log utility" actually looks like, is

E[ln(x)] = (ln(0) + ln(0) + (...) + ln(100)) / 10

E[ln(x)] = (-inf + -inf + (...) + ln(100)) / 10

E[ln(x)] = -inf

Oops! If we want to max E[ln(x)], outcomes that nuke the bankroll to zero are to
be avoided at all costs. And now we know why betting 100% is bad juju.
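
For anyone who wants to see the cliff, a short numerical sketch of the same arithmetic (p = 0.1, b = 100, so the Kelly stake is p - q/b = 0.091): expected log growth is positive for modest stakes and drops to minus infinity as the stake reaches 100%.

import math

p, b = 0.1, 100

def exp_log_growth(f):
    # E[ln(bankroll after / bankroll before)] when staking fraction f.
    lose = math.log(1 - f) if f < 1 else float("-inf")
    return p * math.log(1 + f * b) + (1 - p) * lose

for f in (0.01, 0.091, 0.5, 0.99, 1.0):   # 0.091 is the Kelly stake for these numbers
    print(f, round(exp_log_growth(f), 4))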
REPLY (1)

Thomas Redding Mar 9 · edited Mar 9


I 100% agree with this, but in the original and most popular formulations of
the problem, you don't lose your entire bankroll. You lose a tiny fraction of
it. See

https://en.wikipedia.org/wiki/Pascal%27s_mugging

https://nickbostrom.com/papers/pascal.pdf

https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities
REPLY (1)

thefance Mar 9
And however tiny the ratio of (10 livres / Pascal's bank account), it's
implied that the probability of "the mugger will pay Pascal more livres
than atoms-in-the-universe" is far, far tinier. I've tried to impart an
intuition about the behavior of the math, but these pointless gotchas
indicate that I've failed so far. And I'd rather not go into calculus
involving hyper-operations in a substack comments section.

Consider playing with the numbers in a spreadsheet. I guarantee you the curve of E[ln(x)] for any p = epsilon will look like a molehill followed by a descent into the Mariana Trench, and that 10 livres is somewhere underwater given any non-astronomical figure for Pascal's bank account.
REPLY (1)

Thomas Redding Mar 10


I actually tried fiddling with the algebra myself

p * ln(k*x+w-x) + (1-p) * ln(w-x) = 0

But, alas, it is past my (or WolframAlpha's) skill to solve :(

> Consider playing with the numbers in a spreadsheet

This was good advice. I tried some examples and, afaict, if the
payoff is enormous, then the probability at which the bet is
positive expected-log-value is always less than 1/wealth.
Moreover, this is fairly robust to just how enormous the payoff is.
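
Reading the equation above as "expected log of (final wealth / current wealth) equals zero", p can actually be isolated, since everything else sits inside constants: p* = ln(w/(w-x)) / ln((w-x+kx)/(w-x)). A quick check of the "less than 1/wealth" observation, with invented numbers for the wealth w and the payoff multiple k (stake x = 1):

import math

def breakeven_p(w, x, k):
    # w = wealth, x = stake, k = payout multiple on the stake.
    return math.log(w / (w - x)) / math.log((w - x + k * x) / (w - x))

for w in (10, 100, 10_000):
    for k in (1e6, 1e30, 1e100):
        print(f"w={w:>6}  k={k:.0e}  p*={breakeven_p(w, 1, k):.2e}  1/w={1/w:.0e}")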

Mea culpa. Thank you for the patience and insights.


REPLY

csf Mar 8
If you're comfortable with logarithms there's an intuitive proof of Kelly that I think gets to the heart
of how and why it works.

First, consider a simpler scenario. You're offered a sequence of bets. The bets are never more than
$100 each. Your bankroll can go negative. In the long run, how do you maximize your expected
bankroll? You bet to maximize your expected bankroll at each step, by linearity of expectation. And
by the law of large numbers, in the long run, this will also maximize your Xth percentile bankroll for
any X.

Now let's consider the Kelly scenario. You're offered a sequence of bets. The bets are never more
than 100% of your bankroll each. Your log(bankroll) can go negative. In the long run, how do you
maximize your expected log(bankroll)? You bet to maximize your expected log(bankroll) at each
step, by linearity of expectation. And by the law of large numbers, in the long run, this will also
maximize your Xth percentile log(bankroll) for any X.

If you find the first argument intuitive, just notice that the second argument is perfectly isomorphic.
And since log is monotonic, maximizing the Xth percentile of log(bankroll) also maximizes the Xth
percentile of bankroll.
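
A quick empirical illustration of the law-of-large-numbers step, with assumed numbers (60/40 even-money flips): for any fixed staking fraction, log(bankroll)/n piles up around the per-bet expected log growth, which is why ranking strategies by expected log growth also ranks their long-run percentiles.

import numpy as np

rng = np.random.default_rng(1)
p, b, n, paths = 0.6, 1.0, 2000, 2000

for f in (0.1, 0.2, 0.4):   # 0.2 is the Kelly fraction here; 0.4 is over-betting
    wins = rng.random((paths, n)) < p
    avg_log_growth = np.where(wins, np.log(1 + f * b), np.log(1 - f)).sum(axis=1) / n
    expected = p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)
    print(f, round(expected, 4), np.round(np.percentile(avg_log_growth, [5, 50, 95]), 4))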
REPLY

Drethelin Writes The Coffee Shop Mar 8


Mostly off topic but I think it's worth mentioning that Leaded Gasoline and CFCs were invented by
one guy! Thomas Midgley Jr. really was a marvel.
REPLY

Laura Clarke Writes Clarke College Insight Mar 8


More nuclear generation would not end global warming or provide unlimited cheap energy. It would
cut power-sector emissions (and probably some in district heating) but would not reduce emissions
or increase energy supply or offer alternative feedstocks elsewhere in the economy (e.g. transport,
steelmaking, lots of chemicals).

More nuclear generation wouldn't necessarily reduce costs, either. Capex AND o&m for nuclear
power plants are expensive. All you have to do for solar PV is plonk it in a field and wipe it off from
time to time; there are no neutrons to manage.

I know this isn't the primary point of this piece, so forgive me if I'm being pedantic. Noah Smith
makes similar mistakes. <3 u, Scott!!!
REPLY

Anonymous Dude Mar 8


To what extent does the Kelly bet strategy align with modern portfolio theory? I'm sure someone's
looked at this.
REPLY

Eric M. Mar 8 · edited Mar 8


The first atomic bomb detonated in New Mexico was another risk, although I don't know what the
assessment was at the time. Does it matter that whatever assessment they had at the time could
have been way off? That risk, if it wiped us out (don't know what they knew of that at the time),
wouldn't have mattered for the eventual development of nuclear power. In hindsight, the activism
against nuclear power was bad, but at the time, did anyone really know?
REPLY

HumbleRando Writes The Questioner Mar 8 · edited Mar 8


This was a great article... so naturally I'll write about the one thing I disagree with.

"If Osama bin Laden is starting a supervirus lab, and objects that you shouldn’t shut him down
because “in the past, shutting down progress out of exaggerated fear of potential harm has killed
far more people than the progress itself ever could”, you are permitted to respond “yes, but you are
Osama bin Laden, and this is a supervirus lab.”"

I strongly disagree with this. Everybody looks like a bad guy to SOMEBODY. If your metric for
whether or not somebody is allowed to do things is "You're a bad guy, so I can't allow you to have
the same rights that everybody else does" then they are equally justified in saying "Well I think
YOU'RE a bad guy, and that's why I can't allow you to live. Deus Vult!" Similarly, if you let other
people do things that you otherwise wouldn't because "they're a good guy," then you end up with
situations like FTX, which the rationalist community screwed up and should feel forever ashamed
about.

Do you get it? Good and bad are completely arbitrary categories, and if you start basing people's
legal right to do things on where they fit into YOUR moral compass, then you have effectively
declared them second class citizens and they are within their rights to consider you an enemy and
attempt to destroy you. After all if you don't respect THEIR rights, then why should they respect
YOURS?
REPLY

Carl Pham Mar 8 · edited Mar 8


Very sound argument. And if there were even a proof of concept that a superintelligent AI was
possible, even in principle -- if there was even a *natural* example of a superintelligent individual,
or group, that had gone badly off the rails -- some kind of Star Trek "Space Seed" event -- then
you'd have a great case.

Let me put it this way. In "Snow Crash" Neal Stephenson imagines that it is possible to design a
psychological virus that can turn any one of us into a zombie who just responds to orders, and that
virus can be delivered by hearing a certain key set of apparently nonsense syllables, or seeing
certain apparently random geometric shapes. It's very scary! You just trick or compel someone to
look at a certain funny pattern, and shazam! some weird primitive circuitry kicks in and you take
over his mind. Stephenson even makes a half-assed history-rooted argument for the mechanism
("this explains the tower of Babel myth!" and for all I remember Stonehenge, the Nazca Lines, and
the Antikythera Mechanism as well).

Would it make sense to ban all psychology research, on the grounds that someone might discover,
or just stumble across, this ancient psychological virus, and use it to destroy humanity? After all,
it's betting the entire survival of the species. We could all be turned into zombies!

Before you said yeah that's persuasive, you'd probably first say -- wait a minute, we have
absolutely no evidence that such a thing is even possible. It's just a story! You read it in a popular
book.

Well, that's how it is with conscious smart AI. It's just a story, so far. You've seen it illustrated
magnificently in any number of science fiction movies. But nothing like it has ever been actually
demonstrated in real life. Nobody has ever written down a plausible method for constructing it (and
waving your hands and saying "well...we will feed this giant network a shit ton of data and correct it
every time it doesn't act intelligent" does not qualify as a plausible method, any more than I can design a car by having monkeys play with a roomful of parts and giving them bananas every time
REPLY (2)

Newt Echer Mar 8 · edited Mar 8


Very nicely put. I have another analogy I like to use. About 60 years ago Feynman gave a
famous lecture about nanotechnology called "Plenty of room at the bottom". In the lecture, he
laid out a vision for building objects one atom at a time, giving people the ultimate control over
matter. He even gave an example of how it could possibly be done, which sounds awfully
similar to the recursive AI bootstrapping argument:

"As a thought experiment, he proposed developing a set of one-quarter-scale manipulator hands slaved to the operator's hands to build one-quarter scale machine tools analogous to
those found in any machine shop. This set of small tools would then be used by the small
hands to build and operate ten sets of one-sixteenth-scale hands and tools, and so forth,
culminating in perhaps a billion tiny factories to achieve massively parallel operations. He uses
the analogy of a pantograph as a way of scaling down items. This idea was anticipated in part,
down to the microscale, by science fiction author Robert A. Heinlein in his 1942 story Waldo."
(Quote from Wikipedia).

These ideas were later developed into the "grey goo" and other nano apocalypse scenarios by
Michael Crichton and others. Well, these were just stories. If you start with a premise of infinite
recursion, you can argue that lots of magic should be possible. 60 years later none of this
happened. Turns out there are many physical obstacles to making magical dreams come true.
REPLY (1)

Carl Pham Mar 8


Yeah, and if Feynman had thought about it for 30 minutes, he probably would have
realized very quickly where he went wrong[1]. He was a very smart guy, but he definitely
didn't put a lot of energy into reviewing his words somewhere in the trip between Broca's
Area and his mouth. It's what made him so entertaining, in part.

-------------------------------

[1] Which is that friction becomes way more important as you get smaller and smaller, and
inertia stops being important. A fluid dynamicist would say you move from large Reynolds
number to low. But when that happens, the techniques that work well change. That's why
protozoans don't "swim" the way larger organisms do, e.g. like the scallop by jet
propulsion. At the size scale of a paramecium, water is a sticky gooey substance, and the
techniques you need to use to move yourself change completely. You more or less wriggle
through the water like a snake, and thrashing swimming motions are useless.
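
For rough numbers on that footnote, a back-of-the-envelope Reynolds number comparison using typical textbook values (nothing exact): a human swimmer sits deep in the inertia-dominated regime, a paramecium deep in the viscosity-dominated one, which is why the same manipulator design can't simply be shrunk.

rho, mu = 1000.0, 1.0e-3   # water: density in kg/m^3, dynamic viscosity in Pa*s

def reynolds(v, L):
    # Re = rho * v * L / mu for speed v (m/s) and length scale L (m).
    return rho * v * L / mu

print("human swimmer (1 m, 1 m/s):  ", f"{reynolds(1.0, 1.0):.0e}")    # ~1e6
print("paramecium (0.2 mm, 1 mm/s): ", f"{reynolds(1e-3, 2e-4):.0e}")  # ~0.2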

So pretty soon your manipulator hands would stop working. Or more precisely, you would
need at each stage to learn new techniques for manipulating physical objects and forces,
and so each stage of the replication would need to build *different* types of manipulator
hands for the next stage down. You can absolutely do this, of course -- I think he's 100%
right as a matter of general principle -- but it's much, much more complex than just
designing the first set of hands and saying "now go do this again at 1/100 scale." You need
to study each succeeding level, learn what works well, and redesign your hands.

There's a clear application to AI. I don't believe in this hypothetical future where you can
ask ChatGPT to design its replacement. "Go design an AI that doesn't have your
limitations! And then ask that successor to design a still smarter AI!" Not happening. At
each level, you need to study what's new about that level, and design a new mechanism.
That's plausible at the physical level, but I'm damned if I can see how it works at *any*
level in the scaling up in intelligence path. I cannot see how any intelligence can design a
more intelligent successor. In every real example I know, the designer is always at least as
smart, or usually smarter, than what is designed. Never seen it go the other way, and I
can't imagine a mechanism whereby it would.
REPLY (2)

Newt Echer Mar 8


Feynman's goal was to inspire, not scare people, so he had no incentive to critically
analyze this idea. Also, yes, viscosity is important but stiction (e.g., van der Waals
forces) is arguably a bigger obstacle to scaling macroscopic tools down to the
nanoscale.
REPLY

Donald Mar 8
> Never seen it go the other way, and I can't imagine a mechanism whereby it would.

Deep Blue being better at chess than its creators?

Imagine people working on the first prototype cars, and being a new technology,
these prototypes are very slow. You say "I have never seen it go the other way. I have
never seen the created move faster than the creator. I can't imagine a mechanism
whereby it would.".

Of course you haven't. Humans hadn't yet made superhumanly advanced cars. No
other species makes cars at all. You failed to make years of progress in inventing fast
cars by thinking about it for 5 minutes.

It may well be that there are different principles at different levels of intelligence. You
can't just scale to get from an AI 10x smarter than a human, to one 100x smarter.
There are entirely different principles that need to be developed. What is harder to imagine
is the supposedly 10x smarter AI just sitting there while a human develops those
principles.
REPLY (1)

Carl Pham Mar 8 · edited Mar 8


First of all, Deep Blue isn't *better* at chess than its creators, it's merely
*faster*. There's nothing Deep Blue can do that its human programmers couldn't
do themselves by hand. It would just take them much, much longer. Perhaps a
million years! But so what? There is nothing new there. The fact that I can't
multiply two 12-digit numbers in 0.1ms while my calculator can doesn't say it has
more intelligence than me, it just says it's faster.

Now if the path from current AIs to a conscious thinking machine was *merely*
doing what it does now, but much much faster, there'd be a point here. If one
could write down an algorithm that you *knew* would lead to conscious thinking,
and it was just a question of getting enough processor speed and memory to
execute it in real time, there would be a point.

But that's not what we're talking about. We're talking about writing a program
that can write another program that can do creative things the first program
can't (like writing a 3rd program that is still more capable). I see no way for that
to happen. You can't get something from nothing[1].

I don't see the sense in your car analogy. The car is not being asked to design
another car that is still faster. Indeed, the car is not being asked to do anything
other than what its human designers envision. Go fast. Do it this way, which we
fully understand and design machinery to do. Again, that would work as an
analogy *if* we had any idea how to design an intelligent thinking machine. But
we don't. And until we do, any speculation about how hard or easy it might be to
design a thinking machine to design a smarter thinking machine is sterile. It's not
even been shown that one thinking machine (us) can design an *equally*
intelligent thinking machine. So far, all we've been able to design are machines
that are much stupider than we are. Not promising.

------------------

[1] The counter-example is evolution, which is great, and if you had a planet
where natural forces allowed silicon chips to randomly assemble, reproduce, and
face challenges from their environment, I would find it plausible that an intelligent
thinking computer would arise in a few hundred million years.
REPLY (1)

Donald Mar 9
The "A hypothetical immortal human could do that with pencil and paper in
a million years". What such a hypothetical immortal human could do has
little bearing on anything, as such a human doesn't exist and is unlikely to
ever exist. (Even in some glorious transhuman future, we won't waste
eternity doing mental arithmetic.)

If the AI kills you with advanced nanoweapons, does it matter whether a hypothetical human could have designed the same nanoweapons if they had a billion years to work on it? No.

> I see no way for that to happen. You can't get something from nothing[1].

This isn't a law of thermodynamics. You aren't getting matter or energy appearing from nowhere. You are getting intelligence increasing.

Evolution caused increasing intelligence. If you want to postulate some new law of physics that is conservation of intelligence, then you are going to need to formulate it carefully.

> So far, all we've been able to design are machines that are much stupider
than we are. Not promising.

Ah, the same old "we haven't invented it yet, therefore we won't invent it in
the future" argument.

If we knew how to invent a superhuman AI, we likely could write the code
easily.

The same old process of humans running experiments and figuring things
out is happening. Humanity didn't start off with the knowledge of how to make any technology. We figured it all out by thinking and running
REPLY

Donald Mar 8
Human technology has a pretty reasonable track record of inventing things that don't exist yet,
and have no natural examples. The lack of animals able to reach orbit isn't convincing
evidence that humans can't.

For some technologies, a lot of the work is figuring out how to do it, after that, doing it is easy.

"people keep talking about curing cancer. But no one will give me a non handwavey
explanation of how to do that. All these researchers and they can't name a single chemical that
will cure all cancers".

Besides science fiction and real life, we can gain some idea what's going on through other
methods.

For example, we can note that the limits on human intelligence are at least in part things like
calories being scarce in the ancestral environment, and heads needing to fit through birth
canals. Neurons move signals at a millionth of the speed of light, and are generally shoddy in
other ways. The brain doesn't use quantum computation. Humans suck at arithmetic which we
know is really trivial. These look like contingent limits of evolution being stupid, not
fundamental physical limits.

And of course, being able to manufacture a million copies of von Neumann's mind, each
weighing a few kilos of common atoms, and taking 20 watts of power, would be pretty world
changing even if human brains were magically at the limits.

Based on such reasons, we can put ASI in the pile of technologies that are pretty clearly
allowed by physics, but haven't been invented yet.

Humans taking techs that are clearly theoretically possible, and finally getting them to actually
work is a fairly regular thing. But it is hard to say when any particular tech will be developed.

My lack of worry about psychology research is less that I am confident that no such zombie
pattern exists, more I don't think such an artifact could be created by accident. I think creating
it would require either a massive breakthrough in the fundamentals of psychology, or an
approach based on brainscans and/or AI. It seems hard to imagine how a human could invent
such a thing without immediately bricking their own mind. There doesn't seem to be a lot of
effort actually going into researching towards such a thing.

(and it isn't an X-risk, some people are blind and/or deaf.)


REPLY (1)

Carl Pham Mar 8


Sorry, this is completely unpersuasive to me. The fact that you can write an English
sentence that parses, containing the words "superintelligent AI," and wave your hands
and give a few examples of what you mean by that, does not imply at all that it could exist.
Gene Roddenberry showed me a spaceship that could travel from Earth to an inhabited
planet 10 ly away in half an hour. It was a very convincing and realistic portrayal. Which
means exactly nothing about whether it could actually happen. Human imagination is
unbounded. We can imagine an infinity of things that seem reasonable to us, so that is
evidence of weight approximately zero in terms of whether it could.

I mean, basically you're repeating one of Anselm's famous proofs of the existence of God.
"Because we can imagine Him, He must exist!" I've never understood how intelligent men
could swallow such transparently circular reasoning, but exposure to the AGI
enthusiast/doom pr0n community has been most illuminating.
REPLY (1)

Donald 17 hr ago
We have specific strong reasons to think FTL is more likely to be impossible. (Namely
the theories of relativity.)

There aren't a vast infinity of things that

1) Have significant and funded fields of science and engineering dedicated towards
creating them.

2) Are pointing to a quantity we already see in the real world, and saying "Like this
but moreso"

3) Have a reliable track record in the related field of moving forward, of doing things
we were previously unable to do.

These are the sort of things that in the past have indicated a new tech is likely to be
developed.

Human brains clearly exist.

Imagine all possible arrangements of atoms. Now let's put them all in a competition. Designing rockets and fusion reactors. Solving puzzles. Negotiating with each other in complicated business deals. Playing chess. All sorts of tasks.

Now most arrangements of matter are rocks that just sit there doing nothing. Some human-made programs would be able to do somewhat better. Maybe Stockfish does really well on the chess section, and no better than the rocks on the other sections. ChatGPT might convince some agents to give it a share of resources in the bargaining, or do ok in a poetry contest. Monkeys would do at least somewhat better than rocks, at least if some of the puzzles are really easy. Humans would do quite well. Some humans would do better than others. Do you think that, out of all possible arrangements of atoms, humans would do best? Are human minds some sort of optimum, where distant aliens, seeking the limits of technology, make molecularly exact copies of Einstein's brain to do their physics research?

Current AI research is making progress, it can do some things better than humans.
Where do you think it will stop? What tasks will remain the domain of humans?
REPLY

greg kai Mar 8


Something I'd like to see is to consider simultaneously the danger of AI development and the danger of degrowth (or even the absence of growth): both risks have their thinkers, but I am not aware of anyone combining the two. When considering AI risks, for example, it's most of the time compared to a baseline where AI does not take off and it's business as usual (human-driven progress and increase in standard of living)... However, when looking at trends, the baseline (no AI takeoff) does not seem to be business as usual, but something less pleasant (possibly far less pleasant). If you look at the AI worst-case scenario (eradication of humans, possibly of all non-AI entities in a grey goo apocalypse), it's very frightening. But if you look at the other side's worst-case scenario (tech/energy crash leading to multi-generation Malthusian anarchy or strong dictatorship, both with very poor average standard of living), it's less frightening. Sure, one is permanent and the other is only multi-generation... But as I get older, the difference between permanent and multi-generation sounds more philosophical than practical... In fact, a total apocalypse may be preferred by quite many compared to very bad multi-generation totalitarianism or Mad Max-like survivalism. At least it has some romantic appeal, like all apocalypses...
REPLY (1)

Donald Mar 8
I don't think dramatic collapse scenarios are probable. Even a kind of stagnation seems harder
to imagine. There are a bunch of possible other future techs that seem to be arriving at a
decent speed. E.g. the transition to abundant green energy. Research on antiaging. More
speculative, but far more powerful, outright nanotech. And of course there is the steady
economic march made of millions upon millions of discoveries and inventions, each a tiny
advance in some obscure field.
REPLY

Esk Mar 8
> It’s not that you should never do this. Every technology has some risk of destroying the world;

Technology is not the only thing that can destroy the world. Humanity can be destroyed by an asteroid or a supernova. And who proved that evolution will not destroy itself? The biosphere is a complex system with all the traits of chaos; it is unpredictable over the long run. There is no reason to believe that, just because all previous predictions of apocalypse were wrong, there will be no apocalypse in the future.

So the risk of an apocalypse is not zero in any case, and it grows monotonically with time.

The only way to deal with it is diversification. Do not place all your eggs in one basket. And therefore we need to consider the potential of a technology to create opportunities to diversify our bets. AI, for example, can make it much easier to Occupy Mars, because distances in the Solar System are large. Communication suffers from high latency, so we need to move decision making to the place where it will be applied. Travel is costly; we need to support the lives of humans in a vacuum for years just to move there. AI can reduce the costs of asteroid mining and Mars colonization dramatically.

If we take this into consideration, how will AI affect the life expectancy of humankind?
REPLY (1)

Donald Mar 8
If we have a friendly superintelligence, it can basically magically do everything. All future X-risk
goes to the unavoidable stuff like the universe spontaneously failing to exist. (+ hostile aliens?)

The chance of an asteroid or supernova big enough to kill us is pretty tiny on human
timescales. The dinosaur killer was about 66 million years ago. These things are really rare, and we
already have most of the tech needed for an OK defense.

Let's say we want to make ASI eventually; the question is whether to rush ASAP, or to take an
extra 100 years to really triple check everything. If we think rushing has any significant chance
of going wrong, and there are no other techs with a larger chance of going wrong, we should
go slow.

To make the case for rushing, you need to argue that the chance of nuclear doom / grey goo / something else in the intervening years when we don't have ASI is greater than the chance of ASI doom if we rush (minus ASI doom from going slow; but if you think that is large, then never making ASI is an option).

It is actually hard for a Mars base to add much more diversity protection. Asteroids can be spotted and deflected. Gamma ray bursts will hit Mars too. A bad ASI will just go to Mars. The Mars base needs to be totally self-sufficient, which is hard.
REPLY (1)

Esk Mar 8
Before I answer this, I'd like to note that I do not intend to prove that you are wrong in your conclusions. What I want to do is to show you that your method of reaching your conclusions is not rigorous enough. It looks like I'm trying to state some other conclusion, but that is because I do not see how to avoid it. In fact I do not really know the answer.

> These things are really rare, and we already have most of the tech needed for an OK
defense.

How about a nuclear war? Or more infectious COVID? Or some evolved insect that eats
everything and doubles its population daily? Or how about an asteroid, which our
defences will strike to divert from Earth, but it explodes releasing a big cloud of gas and
dust, which then will travel to Earth and kill us all?

Complex systems can end in ruin surprisingly fast, and in surprising ways too.

> pretty tiny on human timescales.

Are we concerned about ourselves only, or do our children and grandchildren also matter?
Mars colonization cannot be done in a weekend; it would need decades or even centuries.

> It is actually hard for a mars base to add much more diversity protection.

If there is a self-sustaining human population of 1M people on Mars, it will add a lot and it will open other opportunities. For example, it is much easier to get to orbit from Mars, so it is easier to mine asteroids or to create a completely artificial structure in space that can host a population of another million people. It will open a path to subsequent exploration and colonization beyond our Solar System.

> the question is whether to rush ASAP, or to take an extra 100 years to really triple check
everything

REPLY (1)

Donald 17 hr ago
I wasn't talking about viruses or nukes when I said "these things are really rare" and
"we already have an ok defense". I was talking about asteroids and supernovae.

Nuclear war is likely a much bigger risk than asteroids.

I don't think we have enough nukes to kill everyone; there are lots of remote villages
in the middle of nowhere. So nukes aren't that much of an X-risk.

"Or more infectious COVID?" Well vaccines and social distancing (and again.
something that doesn't kill 100% of people isn't an X-risk. If a disease is widespread
and 100% lethal, people will be really really social distancing. Otherwise, it's not an
X-risk. )

"Or some evolved insect that eats everything and doubles its population daily?"
Evolution has failed to produce that in the last million years, no reason to start now.
(Actually some pretty good biology reasons why such thing can't evolve)

"Or how about an asteroid, which our defences will strike to divert from Earth, but it
explodes releasing a big cloud of gas and dust, which then will travel to Earth and kill
us all?" Asteroids aren't explosive. Exactly how is this gas cloud lethal? Gas at room
temperature expands at ~300m/s. Earth's radius is ~6 *10^6m So that's earths
radius every 6 hours. So only a small fraction of the gas will hit earth.

"Are we concerned about ourselves only, or our children and grandchildren also
matter? Mars colonization cannot be done in a weekend, it would need decades or
even centuries." It doesn't matter. Suppose we are thinking really long term. We want
humanity to be florishing in a trillion years. If you buy that ASI is coming within 50,
and that friendly ASI is a win condition, it doesn't matter what time scales we are
thinking
Expand fullon beyond
comment that.
REPLY

William Mar 8
Science and technology do not have more benefits than harms. Science and technology are tools
and like all tools, they cannot do anything without a conscious actor controlling them and making
value judgements about them. Therefore, they are always neutral and their perceived harms and
benefits are only a perfect reflection of the conscious actor using them.

This is a mistake made very often by the rational community. Science and technology can never
decide the direction of culture or society, it can only increase the speed we get there. We decide
how the tool is used or misused.

The reason incredibly powerful technology like nuclear energy and AI chills many people to the
bone is because they are being developed at times when society is not quite ready for them. The
first real use for atomic energy was a weapon of mass destruction. This was our parents and
grandparents generation! There is a major war raging on in Europe with several nuclear facilities
already at risk for a major catastrophe. What would happen if the tables turned and Russia felt
more threatened? Would those facilities not be a major target?

The international saber rattling is a constant presence in the news. The state of global peace is still
incredibly fragile. The consequence of a nuclear disaster is a large area of our precious living
Earth becoming a barren hell for decades or centuries. Are we stable and mature enough for this
type of power?

And just look at how we have used the enormous power that we received from fossil fuels.
What percentage of that energy went to making us happier and healthier? Yes we live a bit longer
than 2 centuries ago, but most of that improvement is not due to the energy of fossil fuels.

Why would the power we receive from AI and nuclear energy be used any differently? Yes, they will have some real beautiful applications that help human beings, but mostly they will be used to make the rich richer, to make the powerful more powerful, to make our lives more "convenient" (lazy), and likewise they will disconnect us from each other and from this incredible living planet
REPLY

Jonathan Ray Writes Far-Tentacled Axons Mar 8


It could make sense in a total-utilitarian sense to wager one entire civilization if the damage were
limited to one civilization and there are several other civilizations out there. But one paperclip
maximizer could destroy all the civilizations in the universe.

The derivation of Kelly assumes you have a single bankroll, no expenses, and wagering on that
bankroll is your only source of income, and seeks to maximize the long-run growth rate of your
bankroll. If Bob is a consultant with 10k/month of disposable income, and he has $3k in savings, it
totally makes sense for him to wager the entire 3k on the 50% advantage coin flip. For Kelly
calculations he should use something like the discounted present value of his income stream, using
a pessimistic discount rate to account for the fees charged by lenders, the chance of getting fired,
etc.
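
A rough sketch of that bankroll adjustment, with every figure invented for illustration: discount Bob's future income at a deliberately pessimistic rate, add his savings, and the Kelly stake on a favorable even-money flip (assumed here to be a 75% win probability, i.e. a 50% edge) dwarfs the $3k of savings.

monthly_income = 10_000.0
savings = 3_000.0
annual_discount = 0.25        # deliberately pessimistic discount rate (assumption)
months = 12 * 20              # ignore income more than ~20 years out (assumption)

r = (1 + annual_discount) ** (1 / 12) - 1   # equivalent monthly discount rate
dpv_income = sum(monthly_income / (1 + r) ** t for t in range(1, months + 1))
bankroll = savings + dpv_income

p = 0.75                      # even-money flip with a 50% edge (assumption)
kelly_fraction = 2 * p - 1    # for even-money bets, f* = p - q
print(f"effective bankroll ~ ${bankroll:,.0f}")
print(f"Kelly stake ~ ${kelly_fraction * bankroll:,.0f} (vs. ${savings:,.0f} of savings)")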

If we settled multiple star systems, and found a way to reliably limit the damage to one star system,
then we should be much more willing to experiment with AGI.
REPLY

Radu Floricica Mar 8


The problem with gain of function isn't its risk - it's the total lack of potential upside.

If you can make a temporary mental switch and see humans as chattel, some interesting
perspectives happen. Like how 100 thalidomide-like incidents would compare with having half as
many cancers, or everybody living an extra 5 healthy years.

Covid was bearable, even light in terms of QALYs - but there was no expected utility to be gained
by playing Russian roulette. It was just stupid loss.

AI... not so much. Last November I celebrated: we are no longer alone. We may not have
companionship, but where it matters, in the getting-things-done department, we finally have non-
human help. The expected upside is there, and not in a sliver of probability. I'd gladly trade 10
covids or a nuclear war for what AI can be.
REPLY

David Friedman Writes David Friedman’s Substack Mar 8


One of the issues that came up in the thread was the origin of Covid, and I have a relevant question
for something I am writing that people here might be able to answer. I will put the question on the
most recent open thread as well, but I expect fewer people are reading it.

The number I would like and don't have is how many wet markets there are in the world with
whatever features, probably selling wild animals, make the Wuhan market a candidate for the origin
of Covid. If it is the only one, then Covid appearing in Wuhan from it is no odder a coincidence than
Covid appearing in the same city where the WIV was researching bat viruses. If it was one of fifty or
a hundred (not necessarily all in China), then the application of Bayes' Theorem implies a posterior
probability for the lab leak theory much higher than whatever the prior was.
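
A toy version of that Bayes update, with every number invented purely to show the structure: let H be the lab leak hypothesis and E be "the pandemic began in the same city as the WIV". If a market origin could have started near any of N comparable wet markets, only one of which is in Wuhan, then P(E | not H) is roughly 1/N while P(E | H) is close to 1, and the posterior moves as described.

def posterior(prior, n_markets):
    # n_markets = number of comparable candidate wet markets worldwide (the unknown in the question above).
    p_e_given_lab = 1.0                   # a leak from the WIV essentially guarantees a Wuhan start
    p_e_given_market = 1.0 / n_markets    # a market origin could have started near any of them
    num = p_e_given_lab * prior
    return num / (num + p_e_given_market * (1 - prior))

for n in (1, 10, 50, 100):
    print(n, round(posterior(0.1, n), 3))  # the 10% prior is purely illustrative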
REPLY (1)

Carl Pham Mar 8


Point of order: SARS-CoV-2 is not a bat virus. Its closest cousins, genetically speaking, are
bat viruses, but it is itself not one. I think that's one of the biggest reasons the lab leak theory
even has legs -- if it were a virus that could be traced back to some virus in wild Asian animals
(the way SARS-CoV-1 eventually was), then people would not be as suspicious that it got the
way it did through some human experimentation.
REPLY

Korakys Writes Marco Thinking Mar 9


Just once I'd like to see someone explain how, exactly, a superintelligent machine is supposed to
kill everyone. An explanation that actually stands up to some scrutiny and doesn't just involve
handwaving, e.g.: a super AI would think of a method we couldn't possibly think of.
REPLY

Elohim Mar 9
When the LHC was about to be turned on, a similar group of doomers started saying that it was
going to destroy the world through black holes or whatever. Of course the LHC didn't destroy the
world; it led to the discovery of the Higgs boson. The AI doomers are exactly like them.
REPLY
